title | content | commands | url
---|---|---|---|
9.2. Permissions | 9.2. Permissions Access to a CacheManager or a Cache is controlled using a set of required permissions. Permissions control the type of action that is performed on the CacheManager or Cache, rather than the type of data being manipulated. Some of these permissions can apply to specifically named entities, such as a named cache. Different types of permissions are available depending on the entity. Table 9.1. CacheManager Permissions Permission Function Description CONFIGURATION defineConfiguration Whether a new cache configuration can be defined. LISTEN addListener Whether listeners can be registered against a cache manager. LIFECYCLE stop, start Whether the cache manager can be stopped or started. ALL A convenience permission which includes all of the above. Table 9.2. Cache Permissions Permission Function Description READ get, contains Whether entries can be retrieved from the cache. WRITE put, putIfAbsent, replace, remove, evict Whether data can be written/replaced/removed/evicted from the cache. EXEC distexec, mapreduce Whether code execution can be run against the cache. LISTEN addListener Whether listeners can be registered against a cache. BULK_READ keySet, values, entrySet, query Whether bulk retrieval operations can be executed. BULK_WRITE clear, putAll Whether bulk write operations can be executed. LIFECYCLE start, stop Whether a cache can be started or stopped. ADMIN getVersion, addInterceptor*, removeInterceptor, getInterceptorChain, getEvictionManager, getComponentRegistry, getDistributionManager, getAuthorizationManager, evict, getRpcManager, getCacheConfiguration, getCacheManager, getInvocationContextContainer, setAvailability, getDataContainer, getStats, getXAResource Whether access to the underlying components/internal structures is allowed. ALL A convenience permission which includes all of the above. ALL_READ Combines READ and BULK_READ. ALL_WRITE Combines WRITE and BULK_WRITE. Note Some permissions may need to be combined with others in order to be useful. For example, EXEC with READ or with WRITE. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/permissions5 |
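The permission model above is typically wired up through the cache container's declarative security configuration. The following shell heredoc is a minimal, hedged sketch of what such a configuration fragment might look like; the element names follow the upstream Infinispan declarative security schema of that era and the output path is purely illustrative, so verify both against the schema shipped with your Data Grid version.

```bash
# Hedged sketch: map roles onto the permissions listed above and restrict a
# named cache to those roles. Element and attribute names are assumptions
# based on the upstream Infinispan schema; the output path is illustrative.
cat > /tmp/secured-cache-fragment.xml <<'EOF'
<cache-container name="secured-container">
  <security>
    <authorization>
      <identity-role-mapper/>
      <role name="admin"    permissions="ALL"/>
      <role name="reader"   permissions="READ BULK_READ"/>
      <role name="writer"   permissions="READ WRITE"/>
      <role name="executor" permissions="READ WRITE EXEC"/>
    </authorization>
  </security>
  <local-cache name="secured-cache">
    <!-- only principals mapped to these roles may operate on this cache -->
    <security>
      <authorization roles="admin reader writer executor"/>
    </security>
  </local-cache>
</cache-container>
EOF
```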
Chapter 11. SNMP Configuration with the Red Hat High Availability Add-On | Chapter 11. SNMP Configuration with the Red Hat High Availability Add-On As of the Red Hat Enterprise Linux 6.1 release, the Red Hat High Availability Add-On provides support for SNMP traps. This chapter describes how to configure your system for SNMP, followed by a summary of the traps that the Red Hat High Availability Add-On emits for specific cluster events. 11.1. SNMP and the Red Hat High Availability Add-On The Red Hat High Availability Add-On SNMP subagent is foghorn , which emits the SNMP traps. The foghorn subagent talks to the snmpd daemon by means of the AgentX protocol. The foghorn subagent only creates SNMP traps; it does not support other SNMP operations such as get or set . There are currently no configuration options for the foghorn subagent. It cannot be configured to use a specific socket; only the default AgentX socket is currently supported. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/ch-snmp-configuration-ca |
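Because foghorn only speaks AgentX, the one piece of host configuration it depends on is an snmpd instance acting as the AgentX master agent. The following is a hedged sketch of that setup in the RHEL 6 style; the service names and the snmpd.conf path are assumptions to verify against the cluster administration documentation for your release.

```bash
# Hedged sketch: let snmpd act as the AgentX master so the foghorn subagent
# can connect over the default AgentX socket. Service names are assumptions.
echo "master agentx" >> /etc/snmp/snmpd.conf

service snmpd restart      # snmpd must be running before the subagent connects
service foghorn restart    # foghorn registers with snmpd over AgentX

chkconfig snmpd on         # start both services at boot
chkconfig foghorn on
```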
Chapter 3. Configuring observability dashboards and alerts | Chapter 3. Configuring observability dashboards and alerts Connectivity Link provides a variety of starting points for monitoring your Connectivity Link deployment by using example dashboards and alerts, which are ready-to-use and customizable to fit your environment. The Connectivity Link example dashboards are uploaded to the Grafana dashboards website . You can import the following dashboards into your Grafana deployment on OpenShift: Table 3.1. Connectivity Link example dashboards in Grafana Name Dashboard ID App Developer Dashboard 21538 Platform Engineer Dashboard 20982 Business User Dashboard 20981 This section explains how to enable the example dashboards and alerts and provides links to additional resources for more information. Note You must perform these steps on each OpenShift cluster that you want to use Connectivity Link on. Prerequisites You have configured metrics as described in Chapter 2, Configuring observability metrics . You have installed and set up Grafana on OpenShift. For an example, see Installing Grafana on OpenShift for Kuadrant Observability . 3.1. Configuring example Grafana dashboards You can import dashboards into Grafana by using its user interface, or automatically by using custom resources in OpenShift: Importing dashboards in the Grafana UI JSON file : Use the Import feature in the Grafana UI to upload dashboard JSON files directly. Dashboard ID : Use the Import feature in the Grafana UI to import from Grafana.com by using a dashboard ID. You can download the JSON file or copy the dashboard ID from the relevant dashboard page on the Grafana dashboards website . For more information, see the Grafana documentation on how to import dashboards . Importing dashboards automatically in OpenShift You can automate dashboard provisioning in Grafana by adding JSON files to a ConfigMap , which must be mounted at /etc/grafana/provisioning/dashboards . Tip Alternatively, to avoid adding ConfigMap volume mounts in your Grafana deployment, you can use a GrafanaDashboard resource to reference a ConfigMap . For an example, see Dashboard from ConfigMap in the Grafana documentation . Data sources are configured as template variables, automatically integrating with your existing data sources. Metrics for these dashboards are sourced from Prometheus. For more information, see the Kuadrant documentation on metrics . Note For some example dashboard panels to work correctly, HTTPRoutes in Connectivity Link must include a service and deployment label with a value that matches the name of the service and deployment being routed to, for example, service=my-app and deployment=my-app . This allows low-level Istio and Envoy metrics to be joined with Gateway API state metrics. Additional information Grafana product documentation . Kuadrant example dashboards . 3.2. Configuring example Prometheus alerts You can integrate the Kuadrant example alerts into Prometheus as PrometheusRule resources, and then adjust the alert thresholds to suit your specific operational needs. Service Level Objective (SLO) alerts generated with Sloth are also included. A benefit of these alerts is that you can integrate them with this SLO dashboard, which uses the generated labels to provide a comprehensive overview of your SLOs. For more information on the metrics used for these alerts, see the Kuadrant documentation on metrics . Additional information Prometheus GitHub project . Sloth GitHub project . 
| null | https://docs.redhat.com/en/documentation/red_hat_connectivity_link/1.0/html/connectivity_link_observability_guide/configure_observability_dashboardsconnectivity-link |
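For the automated import path described in Section 3.1, one possible approach is to let a Grafana Operator instance fetch a dashboard from Table 3.1 by its Grafana.com ID. The following is a hedged sketch only: the GrafanaDashboard fields shown (instanceSelector, grafanaCom.id), the API version, and the namespace are assumptions based on the community Grafana Operator v5 CRD, so compare them against the CRDs installed on your cluster before applying.

```bash
# Hedged sketch: import the Platform Engineer Dashboard (ID 20982 from
# Table 3.1) by referencing its Grafana.com ID. Field names and namespace
# are assumptions; adjust the label selector to match your Grafana CR.
oc apply -f - <<'EOF'
apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDashboard
metadata:
  name: platform-engineer-dashboard
  namespace: grafana
spec:
  instanceSelector:
    matchLabels:
      dashboards: grafana   # must match the labels set on your Grafana instance
  grafanaCom:
    id: 20982               # dashboard ID from Table 3.1
EOF
```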
CLI tools | CLI tools OpenShift Dedicated 4 Learning how to use the command-line tools for OpenShift Dedicated Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/cli_tools/index |
Chapter 5. HighAvailability | Chapter 5. HighAvailability The following table lists all the packages in the High Availability add-on. For more information about core packages, see the Scope of Coverage Details document. Package Core Package? License ccs Yes GPLv2 clufter-cli No GPLv2+ clufter-lib-ccs No GPLv2+ clufter-lib-general No GPLv2+ clufter-lib-pcs No GPLv2+ cluster-cim Yes GPLv2 cluster-glue No GPLv2+ and LGPLv2+ cluster-glue-libs No GPLv2+ and LGPLv2+ cluster-glue-libs-devel Yes GPLv2+ and LGPLv2+ cluster-snmp Yes GPLv2 clusterlib No GPLv2+ and LGPLv2+ clusterlib-devel Yes GPLv2+ and LGPLv2+ cman Yes GPLv2+ and LGPLv2+ corosync No BSD corosynclib No BSD corosynclib-devel Yes BSD fence-virt No GPLv2+ fence-virtd-checkpoint Yes GPLv2+ foghorn Yes GPLv2+ libesmtp-devel Yes LGPLv2+ and GPLv2+ libqb No LGPLv2+ libqb-devel No LGPLv2+ luci Yes GPLv2 and MIT modcluster No GPLv2 omping Yes ISC openais No BSD openaislib No BSD openaislib-devel Yes BSD pacemaker Yes GPLv2+ and LGPLv2+ pacemaker-cli No GPLv2+ and LGPLv2+ pacemaker-cluster-libs No GPLv2+ and LGPLv2+ pacemaker-cts No GPLv2+ and LGPLv2+ pacemaker-doc Yes GPLv2+ and LGPLv2+ pacemaker-libs No GPLv2+ and LGPLv2+ pacemaker-libs-devel Yes GPLv2+ and LGPLv2+ pacemaker-remote No GPLv2+ and LGPLv2+ pcs Yes GPLv2 python-clufter No GPLv2+ and GFDL python-repoze-what-plugins-sql No BSD python-repoze-what-quickstart Yes BSD python-repoze-who-friendlyform No BSD python-repoze-who-plugins-sa No BSD python-tw-forms No MIT and LGPLv2+ resource-agents Yes GPLv2+ and LGPLv2+ resource-agents-sap-hana No GPLv2+ rgmanager Yes GPLv2+ and LGPLv2+ ricci No GPLv2 sbd Yes GPLv2+ | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/package_manifest/chap-highavailability |
function::user_int | function::user_int Name function::user_int - Retrieves an int value stored in user space Synopsis Arguments addr the user space address to retrieve the int from Description Returns the int value from a given user space address. Returns zero when user space data is not accessible. | [
"user_int:long(addr:long)"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-user-int |
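As a quick illustration of how user_int is used from a script, the following one-liner reads the first four bytes of the buffer passed to the write system call and prints them as an int. It is only a sketch and assumes the syscall.write tapset probe and its buf_uaddr variable are available on the running kernel.

```bash
# Sketch: dereference a user-space address with user_int(); the function
# returns 0 instead of faulting when the address is not accessible.
stap -e '
probe syscall.write {
  if (pid() == target())
    printf("%s wrote; first int of user buffer: %d\n",
           execname(), user_int(buf_uaddr))
}
' -c "echo hello"
```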
Configuring basic system settings | Configuring basic system settings Red Hat Enterprise Linux 8 Set up the essential functions of your system and customize your system environment Red Hat Customer Content Services | [
"nmcli connection show NAME UUID TYPE DEVICE Wired connection 1 a5eb6490-cc20-3668-81f8-0314a27f3f75 ethernet enp1s0",
"nmcli connection add con-name <connection-name> ifname <device-name> type ethernet",
"nmcli connection modify \"Wired connection 1\" connection.id \"Internal-LAN\"",
"nmcli connection show Internal-LAN connection.interface-name: enp1s0 connection.autoconnect: yes ipv4.method: auto ipv6.method: auto",
"nmcli connection modify Internal-LAN ipv4.method auto",
"nmcli connection modify Internal-LAN ipv4.method manual ipv4.addresses 192.0.2.1/24 ipv4.gateway 192.0.2.254 ipv4.dns 192.0.2.200 ipv4.dns-search example.com",
"nmcli connection modify Internal-LAN ipv6.method auto",
"nmcli connection modify Internal-LAN ipv6.method manual ipv6.addresses 2001:db8:1::fffe/64 ipv6.gateway 2001:db8:1::fffe ipv6.dns 2001:db8:1::ffbb ipv6.dns-search example.com",
"nmcli connection modify <connection-name> <setting> <value>",
"nmcli connection up Internal-LAN",
"ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever",
"ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102",
"ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium",
"cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb",
"ping <host-name-or-IP-address>",
"nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet unavailable --",
"nmtui",
"ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever",
"ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102",
"ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium",
"cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb",
"ping <host-name-or-IP-address>",
"--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 interface_name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes state: up",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"address\": \"192.0.2.1\", \"alias\": \"enp1s0\", \"broadcast\": \"192.0.2.255\", \"gateway\": \"192.0.2.254\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"netmask\": \"255.255.255.0\", \"network\": \"192.0.2.0\", \"prefix\": \"24\", \"type\": \"ether\" }, \"ansible_default_ipv6\": { \"address\": \"2001:db8:1::1\", \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"prefix\": \"64\", \"scope\": \"global\", \"type\": \"ether\" }, \"ansible_dns\": { \"nameservers\": [ \"192.0.2.1\", \"2001:db8:1::ffbb\" ], \"search\": [ \"example.com\" ] },",
"subscription-manager register Registering to: subscription.rhsm.redhat.com:443/subscription Username: <example_username> Password: <example_password> The system has been registered with ID: 37to907c-ece6-49ea-9174-20b87ajk9ee7 The registered system name is: client1.example.com",
"subscription-manager list --available --all",
"subscription-manager attach --pool= <example_pool_id>",
"yum install sos",
"sosreport",
"timedatectl list-timezones Europe/Berlin",
"timedatectl set-timezone <time_zone>",
"timedatectl set-time <YYYY-mm-dd HH:MM-SS>",
"date Mon Mar 30 16:02:59 CEST 2020",
"timedatectl Local time: Mon 2020-03-30 16:04:42 CEST Universal time: Mon 2020-03-30 14:04:42 UTC RTC time: Mon 2020-03-30 14:04:41 Time zone: Europe/Prague (CEST, +0200) System clock synchronized: yes NTP service: active RTC in local TZ: no",
"localectl status System Locale: LANG=en_US.UTF-8 VC Keymap: de-nodeadkeys X11 Layout: de X11 Variant: nodeadkeys",
"localectl list-locales C.UTF-8 en_US.UTF-8 en_ZA.UTF-8 en_ZW.UTF-8",
"localectl set-locale LANG= en_US .UTF-8",
"localectl list-keymaps ANSI-dvorak al al-plisi amiga-de amiga-us",
"localectl status VC Keymap: us",
"localectl set-keymap us",
"cat /etc/vconsole.conf FONT=\"eurlatgr\"",
"ls -1 /usr/lib/kbd/consolefonts/*.psfu.gz /usr/lib/kbd/consolefonts/eurlatgr.psfu.gz /usr/lib/kbd/consolefonts/LatArCyrHeb-08.psfu.gz /usr/lib/kbd/consolefonts/LatArCyrHeb-14.psfu.gz /usr/lib/kbd/consolefonts/LatArCyrHeb-16.psfu.gz /usr/lib/kbd/consolefonts/LatArCyrHeb-16+.psfu.gz /usr/lib/kbd/consolefonts/LatArCyrHeb-19.psfu.gz",
"setfont LatArCyrHeb-16.psfu.gz",
"FONT=LatArCyrHeb-16",
"reboot",
"ssh-keygen -t ecdsa Generating public/private ecdsa key pair. Enter file in which to save the key (/home/ <username> /.ssh/id_ecdsa): Enter passphrase (empty for no passphrase): <password> Enter same passphrase again: <password> Your identification has been saved in /home/ <username> /.ssh/id_ecdsa. Your public key has been saved in /home/ <username> /.ssh/id_ecdsa.pub. The key fingerprint is: SHA256:Q/x+qms4j7PCQ0qFd09iZEFHA+SqwBKRNaU72oZfaCI <username> @ <localhost.example.com> The key's randomart image is: +---[ECDSA 256]---+ |.oo..o=++ | |.. o .oo . | |. .. o. o | |....o.+... | |o.oo.o +S . | |.=.+. .o | |E.*+. . . . | |.=..+ +.. o | | . oo*+o. | +----[SHA256]-----+",
"ssh-copy-id <username> @ <ssh-server-example.com> /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed <username> @ <ssh-server-example.com> 's password: ... Number of key(s) added: 1 Now try logging into the machine, with: \"ssh ' <username> @ <ssh-server-example.com> '\" and check to make sure that only the key(s) you wanted were added.",
"ssh -o PreferredAuthentications=publickey <username> @ <ssh-server-example.com>",
"vi /etc/ssh/sshd_config",
"PasswordAuthentication no",
"setsebool -P use_nfs_home_dirs 1",
"systemctl reload sshd",
"vi ~/.bashrc",
"eval USD(ssh-agent)",
"AddKeysToAgent yes",
"ssh <example.user> @ <[email protected]>",
"ssh-keygen -D pkcs11: > keys.pub",
"ssh-copy-id -f -i keys.pub <[email protected]>",
"ssh -i \"pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so\" <ssh-server-example.com> Enter PIN for 'SSH key': [ssh-server-example.com] USD",
"ssh -i \"pkcs11:id=%01\" <ssh-server-example.com> Enter PIN for 'SSH key': [ssh-server-example.com] USD",
"ssh -i pkcs11: <ssh-server-example.com> Enter PIN for 'SSH key': [ssh-server-example.com] USD",
"cat ~/.ssh/config IdentityFile \"pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so\" ssh <ssh-server-example.com> Enter PIN for 'SSH key': [ssh-server-example.com] USD",
"systemctl status firewalld ● firewalld.service - firewalld - dynamic firewall daemon Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled) Active: inactive (dead)",
"systemctl enable --now firewalld",
"systemctl status firewalld ● firewalld.service - firewalld - dynamic firewall daemon Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled) Active: active (running)",
"yum install rsyslog-doc",
"firefox /usr/share/doc/rsyslog/html/index.html &",
"semanage port -a -t syslogd_port_t -p tcp 30514",
"firewall-cmd --zone= <zone-name> --permanent --add-port=30514/tcp success firewall-cmd --reload",
"Define templates before the rules that use them Per-Host templates for remote systems template(name=\"TmplAuthpriv\" type=\"list\") { constant(value=\"/var/log/remote/auth/\") property(name=\"hostname\") constant(value=\"/\") property(name=\"programname\" SecurePath=\"replace\") constant(value=\".log\") } template(name=\"TmplMsg\" type=\"list\") { constant(value=\"/var/log/remote/msg/\") property(name=\"hostname\") constant(value=\"/\") property(name=\"programname\" SecurePath=\"replace\") constant(value=\".log\") } Provides TCP syslog reception module(load=\"imtcp\") Adding this ruleset to process remote messages ruleset(name=\"remote1\"){ authpriv.* action(type=\"omfile\" DynaFile=\"TmplAuthpriv\") *.info;mail.none;authpriv.none;cron.none action(type=\"omfile\" DynaFile=\"TmplMsg\") } input(type=\"imtcp\" port=\"30514\" ruleset=\"remote1\")",
"rsyslogd -N 1 rsyslogd: version 8.1911.0-2.el8, config validation run rsyslogd: End of config validation run. Bye.",
"systemctl status rsyslog",
"systemctl restart rsyslog",
"systemctl enable rsyslog",
"*.* action(type=\"omfwd\" queue.type=\"linkedlist\" queue.filename=\"example_fwd\" action.resumeRetryCount=\"-1\" queue.saveOnShutdown=\"on\" target=\"example.com\" port=\"30514\" protocol=\"tcp\" )",
"systemctl restart rsyslog",
"logger test",
"cat /var/log/remote/msg/ hostname /root.log Feb 25 03:53:17 hostname root[6064]: test",
"Set certificate files global( DefaultNetstreamDriverCAFile=\"/etc/pki/ca-trust/source/anchors/ca-cert.pem\" DefaultNetstreamDriverCertFile=\"/etc/pki/ca-trust/source/anchors/server-cert.pem\" DefaultNetstreamDriverKeyFile=\"/etc/pki/ca-trust/source/anchors/server-key.pem\" ) TCP listener module( load=\"imtcp\" PermittedPeer=[\"client1.example.com\", \"client2.example.com\"] StreamDriver.AuthMode=\"x509/name\" StreamDriver.Mode=\"1\" StreamDriver.Name=\"ossl\" ) Start up listener at port 514 input( type=\"imtcp\" port=\"514\" )",
"rsyslogd -N 1 rsyslogd: version 8.1911.0-2.el8, config validation run (level 1) rsyslogd: End of config validation run. Bye.",
"systemctl status rsyslog",
"systemctl restart rsyslog",
"systemctl enable rsyslog",
"Set certificate files global( DefaultNetstreamDriverCAFile=\"/etc/pki/ca-trust/source/anchors/ca-cert.pem\" DefaultNetstreamDriverCertFile=\"/etc/pki/ca-trust/source/anchors/client-cert.pem\" DefaultNetstreamDriverKeyFile=\"/etc/pki/ca-trust/source/anchors/client-key.pem\" ) Set up the action for all messages *.* action( type=\"omfwd\" StreamDriver=\"ossl\" StreamDriverMode=\"1\" StreamDriverPermittedPeers=\"server.example.com\" StreamDriverAuthMode=\"x509/name\" target=\"server.example.com\" port=\"514\" protocol=\"tcp\" )",
"rsyslogd -N 1 rsyslogd: version 8.1911.0-2.el8, config validation run (level 1) rsyslogd: End of config validation run. Bye.",
"systemctl status rsyslog",
"systemctl restart rsyslog",
"systemctl enable rsyslog",
"logger test",
"cat /var/log/remote/msg/ <hostname> /root.log Feb 25 03:53:17 <hostname> root[6064]: test",
"semanage port -a -t syslogd_port_t -p udp portno",
"firewall-cmd --zone= zone --permanent --add-port= portno /udp success firewall-cmd --reload",
"firewall-cmd --reload",
"Define templates before the rules that use them Per-Host templates for remote systems template(name=\"TmplAuthpriv\" type=\"list\") { constant(value=\"/var/log/remote/auth/\") property(name=\"hostname\") constant(value=\"/\") property(name=\"programname\" SecurePath=\"replace\") constant(value=\".log\") } template(name=\"TmplMsg\" type=\"list\") { constant(value=\"/var/log/remote/msg/\") property(name=\"hostname\") constant(value=\"/\") property(name=\"programname\" SecurePath=\"replace\") constant(value=\".log\") } Provides UDP syslog reception module(load=\"imudp\") This ruleset processes remote messages ruleset(name=\"remote1\"){ authpriv.* action(type=\"omfile\" DynaFile=\"TmplAuthpriv\") *.info;mail.none;authpriv.none;cron.none action(type=\"omfile\" DynaFile=\"TmplMsg\") } input(type=\"imudp\" port=\"514\" ruleset=\"remote1\")",
"rsyslogd -N 1 rsyslogd: version 8.1911.0-2.el8, config validation run",
"systemctl restart rsyslog",
"systemctl enable rsyslog",
"*.* action(type=\"omfwd\" queue.type=\"linkedlist\" queue.filename=\" example_fwd \" action.resumeRetryCount=\"-1\" queue.saveOnShutdown=\"on\" target=\" example.com \" port=\" portno \" protocol=\"udp\" )",
"systemctl restart rsyslog",
"systemctl enable rsyslog",
"logger test",
"cat /var/log/remote/msg/ hostname /root.log Feb 25 03:53:17 hostname root[6064]: test",
"action(type=\"omfwd\" protocol=\"tcp\" RebindInterval=\"250\" target=\" example.com \" port=\"514\" ...) action(type=\"omfwd\" protocol=\"udp\" RebindInterval=\"250\" target=\" example.com \" port=\"514\" ...) action(type=\"omrelp\" RebindInterval=\"250\" target=\" example.com \" port=\"6514\" ...)",
"module(load=\"omrelp\") *.* action(type=\"omrelp\" target=\"_target_IP_\" port=\"_target_port_\")",
"systemctl restart rsyslog",
"systemctl enable rsyslog",
"ruleset(name=\"relp\"){ *.* action(type=\"omfile\" file=\"_log_path_\") } module(load=\"imrelp\") input(type=\"imrelp\" port=\"_target_port_\" ruleset=\"relp\")",
"systemctl restart rsyslog",
"systemctl enable rsyslog",
"logger test",
"cat /var/log/remote/msg/hostname/root.log Feb 25 03:53:17 hostname root[6064]: test",
"ls /usr/lib64/rsyslog/{i,o}m *",
"yum install netconsole-service",
"SYSLOGADDR= 192.0.2.1",
"systemctl enable --now netconsole",
"--- - name: Deploy the logging solution hosts: managed-node-01.example.com tasks: - name: Filter logs based on a specific value they contain ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: files_input type: basics logging_outputs: - name: files_output0 type: files property: msg property_op: contains property_value: error path: /var/log/errors.log - name: files_output1 type: files property: msg property_op: \"!contains\" property_value: error path: /var/log/others.log logging_flows: - name: flow0 inputs: [files_input] outputs: [files_output0, files_output1]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"rsyslogd -N 1 rsyslogd: version 8.1911.0-6.el8, config validation run rsyslogd: End of config validation run. Bye.",
"logger error",
"cat /var/log/errors.log Aug 5 13:48:31 hostname root[6778]: error",
"--- - name: Deploy the logging solution hosts: managed-node-01.example.com tasks: - name: Configure the server to receive remote input ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: remote_udp_input type: remote udp_ports: [ 601 ] - name: remote_tcp_input type: remote tcp_ports: [ 601 ] logging_outputs: - name: remote_files_output type: remote_files logging_flows: - name: flow_0 inputs: [remote_udp_input, remote_tcp_input] outputs: [remote_files_output] - name: Deploy the logging solution hosts: managed-node-02.example.com tasks: - name: Configure the server to output the logs to local files in directories named by remote host names ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: basic_input type: basics logging_outputs: - name: forward_output0 type: forwards severity: info target: <host1.example.com> udp_port: 601 - name: forward_output1 type: forwards facility: mail target: <host1.example.com> tcp_port: 601 logging_flows: - name: flows0 inputs: [basic_input] outputs: [forward_output0, forward_output1] [basic_input] [forward_output0, forward_output1]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"rsyslogd -N 1 rsyslogd: version 8.1911.0-6.el8, config validation run (level 1), master config /etc/rsyslog.conf rsyslogd: End of config validation run. Bye.",
"logger test",
"cat /var/log/ <host2.example.com> /messages Aug 5 13:48:31 <host2.example.com> root[6778]: test",
"--- - name: Configure remote logging solution using TLS for secure transfer of logs hosts: managed-node-01.example.com tasks: - name: Deploying files input and forwards output with certs ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_certificates: - name: logging_cert dns: ['localhost', 'www.example.com'] ca: ipa logging_pki_files: - ca_cert: /local/path/to/ca_cert.pem cert: /local/path/to/logging_cert.pem private_key: /local/path/to/logging_cert.pem logging_inputs: - name: input_name type: files input_log_path: /var/log/containers/*.log logging_outputs: - name: output_name type: forwards target: your_target_host tcp_port: 514 tls: true pki_authmode: x509/name permitted_server: 'server.example.com' logging_flows: - name: flow_name inputs: [input_name] outputs: [output_name]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Configure remote logging solution using TLS for secure transfer of logs hosts: managed-node-01.example.com tasks: - name: Deploying remote input and remote_files output with certs ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_certificates: - name: logging_cert dns: ['localhost', 'www.example.com'] ca: ipa logging_pki_files: - ca_cert: /local/path/to/ca_cert.pem cert: /local/path/to/logging_cert.pem private_key: /local/path/to/logging_cert.pem logging_inputs: - name: input_name type: remote tcp_ports: 514 tls: true permitted_clients: ['clients.example.com'] logging_outputs: - name: output_name type: remote_files remote_log_path: /var/log/remote/%FROMHOST%/%PROGRAMNAME:::secpath-replace%.log async_writing: true client_count: 20 io_buffer_size: 8192 logging_flows: - name: flow_name inputs: [input_name] outputs: [output_name]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Configure client-side of the remote logging solution using RELP hosts: managed-node-01.example.com tasks: - name: Deploy basic input and RELP output ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: basic_input type: basics logging_outputs: - name: relp_client type: relp target: logging.server.com port: 20514 tls: true ca_cert: /etc/pki/tls/certs/ca.pem cert: /etc/pki/tls/certs/client-cert.pem private_key: /etc/pki/tls/private/client-key.pem pki_authmode: name permitted_servers: - '*.server.example.com' logging_flows: - name: example_flow inputs: [basic_input] outputs: [relp_client]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Configure server-side of the remote logging solution using RELP hosts: managed-node-01.example.com tasks: - name: Deploying remote input and remote_files output ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: relp_server type: relp port: 20514 tls: true ca_cert: /etc/pki/tls/certs/ca.pem cert: /etc/pki/tls/certs/server-cert.pem private_key: /etc/pki/tls/private/server-key.pem pki_authmode: name permitted_clients: - '*example.client.com' logging_outputs: - name: remote_files_output type: remote_files logging_flows: - name: example_flow inputs: relp_server outputs: remote_files_output",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"/usr/share/doc/setup/uidgid",
"Min/max values for automatic uid selection in useradd # UID_MIN 5000",
"Min/max values for automatic gid selection in groupadd # GID_MIN 5000",
"useradd example.user",
"passwd example.user",
"usermod -a -G example.group example.user",
"useradd <options> <username>",
"useradd -u 5000 sarah",
"id sarah",
"uid=5000(sarah) gid=5000(sarah) groups=5000(sarah)",
"groupadd options group-name",
"groupadd -g 5000 sysadmins",
"getent group sysadmin",
"sysadmins:x:5000:",
"usermod --append -G <group_name> <username>",
"groups <username>",
"mkdir <directory-name>",
"groupadd <group-name>",
"usermod --append -G <group_name> <username>",
"chgrp <group_name> <directory>",
"chmod g+rwxs <directory>",
"ls -ld <directory>",
"*drwx__rws__r-x.* 2 root _group-name_ 6 Nov 25 08:45 _directory-name_",
"loginctl terminate-user user-name",
"userdel user-name",
"userdel --remove --selinux-user user-name",
"rm -rf /var/lib/AccountsService/users/ user-name",
"groups user-name",
"groups sarah",
"sarah : sarah wheel developer",
"usermod -g <group-name> <user-name>",
"groups <username>",
"usermod --append -G <group_name> <username>",
"groups <username>",
"gpasswd -d <user-name> <group-name>",
"groups <username>",
"usermod -G <group-names> <username>",
"groups <username>",
"passwd",
"sudo passwd root",
"load_video set gfx_payload=keep insmod gzio linux (USDroot)/vmlinuz-4.18.0-80.e18.x86_64 root=/dev/mapper/rhel-root ro crash kernel=auto resume=/dev/mapper/rhel-swap rd.lvm.lv/swap rhgb quiet initrd (USDroot)/initramfs-4.18.0-80.e18.x86_64.img USDtuned_initrd",
"mount -o remount,rw /sysroot",
"chroot /sysroot",
"passwd",
"touch /.autorelabel",
"exit",
"exit",
"whoami",
"## Next comes the main part: which users can run what software on ## which machines (the sudoers file can be shared between multiple ## systems).",
"<username> <hostname.example.com> =( <run_as_user> : <run_as_group> ) <path/to/command>",
"#includedir /etc/sudoers.d",
"visudo",
"## Allows people in group wheel to run all commands %wheel ALL=(ALL) ALL",
"usermod --append -G wheel <username>",
"sudo whoami root",
"visudo -f /etc/sudoers.d/ <filename>",
"<username> <hostname.example.com> = ( <run_as_user> : <run_as_group> ) <path/to/command>",
"user1 host1.example.com = /bin/dnf, /sbin/reboot",
"Defaults mail_always Defaults mailto=\" <[email protected]> \"",
"su <username> -",
"sudo whoami [sudo] password for <username> :",
"usage: dnf [options] COMMAND",
"<username> is not in the sudoers file. This incident will be reported.",
"<username> is not allowed to run sudo on <host.example.com>.",
"`Sorry, user _<username>_ is not allowed to execute '_<path/to/command>_' as root on _<host.example.com>_.`",
"ls -l -rwxrw----. 1 sysadmins sysadmins 2 Mar 2 08:43 file",
"ls -dl directory drwxr-----. 1 sysadmins sysadmins 2 Mar 2 08:43 directory",
"chmod <level><operation><permission> file-name",
"ls -l file-name",
"ls -dl directory-name",
"ls -l directory-name",
"ls -l my-file.txt -rw-rw-r--. 1 username username 0 Feb 24 17:56 my-file.txt",
"chmod go= my-file.txt",
"ls -l my-file.txt -rw-------. 1 username username 0 Feb 24 17:56 my-file.txt",
"ls -dl my-directory drwxrwx---. 2 username username 4096 Feb 24 18:12 my-directory",
"chmod o+rx my-directory",
"ls -dl my-directory drwxrwxr-x. 2 username username 4096 Feb 24 18:12 my-directory",
"chmod octal_value file-name",
"getfacl file-name",
"setfacl -m u: username : symbolic_value file-name",
"setfacl -m u:andrew:rw- group-project setfacl -m u:susan:--- group-project",
"getfacl group-project",
"file: group-project owner: root group: root user:andrew:rw- user:susan:--- group::r-- mask::rw- other::r--",
"umask -S",
"umask",
"umask -S <level><operation><permission>",
"umask octal_value",
"if [ USDUID -gt 199 ] && [ \"id -gn\" = \"id -un\" ]; then umask 002 else umask 022 fi",
"if [ USDUID -gt 199 ] && [ \"/usr/bin/id -gn\" = \"/usr/bin/id -un\" ]; then umask 002 else umask 022 fi",
"echo 'umask octal_value ' >> /home/ username /.bashrc",
"HOME_MODE is used by useradd(8) and newusers(8) to set the mode for new home directories. If HOME_MODE is not set, the value of UMASK is used to create the mode. HOME_MODE 0700",
"Default initial \"umask\" value used by login(1) on non-PAM enabled systems. Default \"umask\" value for pam_umask(8) on PAM enabled systems. UMASK is also used by useradd(8) and newusers(8) to set the mode for new home directories if HOME_MODE is not set. 022 is the default value, but 027, or even 077, could be considered for increased privacy. There is no One True Answer here: each sysadmin must make up their mind. UMASK 022",
"DefaultTimeoutStartSec= required value",
"systemctl list-units --type service UNIT LOAD ACTIVE SUB DESCRIPTION abrt-ccpp.service loaded active exited Install ABRT coredump hook abrt-oops.service loaded active running ABRT kernel log watcher abrtd.service loaded active running ABRT Automated Bug Reporting Tool systemd-vconsole-setup.service loaded active exited Setup Virtual Console tog-pegasus.service loaded active running OpenPegasus CIM Server LOAD = Reflects whether the unit definition was properly loaded. ACTIVE = The high-level unit activation state, or a generalization of SUB. SUB = The low-level unit activation state, values depend on unit type. 46 loaded units listed. Pass --all to see loaded but inactive units, too. To show all installed unit files use 'systemctl list-unit-files'",
"systemctl list-units --type service --all",
"systemctl list-unit-files --type service UNIT FILE STATE abrt-ccpp.service enabled abrt-oops.service enabled abrtd.service enabled wpa_supplicant.service disabled ypbind.service disabled 208 unit files listed.",
"systemctl status <name> .service",
"systemctl is-active <name> .service",
"systemctl is-enabled <name> .service",
"systemctl list-dependencies --after <name> .service",
"systemctl list-dependencies --after gdm.service gdm.service ├─dbus.socket ├─[email protected] ├─livesys.service ├─plymouth-quit.service ├─system.slice ├─systemd-journald.socket ├─systemd-user-sessions.service └─basic.target [output truncated]",
"systemctl list-dependencies --before <name> .service",
"systemctl list-dependencies --before gdm.service gdm.service ├─dracut-shutdown.service ├─graphical.target │ ├─systemd-readahead-done.service │ ├─systemd-readahead-done.timer │ └─systemd-update-utmp-runlevel.service └─shutdown.target ├─systemd-reboot.service └─final.target └─systemd-reboot.service",
"*systemctl start <systemd_unit> *",
"systemctl stop <name> .service",
"systemctl restart <name> .service",
"systemctl try-restart <name> .service",
"systemctl reload <name> .service",
"systemctl status <systemd_unit>",
"systemctl unmask <systemd_unit>",
"systemctl enable <systemd_unit>",
"systemctl disable <name> .service",
"systemctl mask <name> .service",
"systemctl get-default graphical.target",
"systemctl list-units --type target",
"systemctl set-default <name> .target",
"Example: systemctl set-default multi-user.target Removed /etc/systemd/system/default.target Created symlink /etc/systemd/system/default.target -> /usr/lib/systemd/system/multi-user.target",
"systemctl get-default multi-user.target",
"systemctl isolate default.target",
"systemctl list-units --type target",
"systemctl isolate <name> .target",
"Example: systemctl isolate multi-user.target",
"systemctl rescue Broadcast message from root@localhost on pts/0 (Fri 2023-03-24 18:23:15 CEST): The system is going down to rescue mode NOW!",
"systemctl --no-wall rescue",
"linux (USDroot)/vmlinuz-5.14.0-70.22.1.e19_0.x86_64 root=/dev/mapper/rhel-root ro crash kernel=auto resume=/dev/mapper/rhel-swap rd.lvm.lv/swap rhgb quiet",
"linux (USDroot)/vmlinuz-5.14.0-70.22.1.e19_0.x86_64 root=/dev/mapper/rhel-root ro crash kernel=auto resume=/dev/mapper/rhel-swap rd.lvm.lv/swap rhgb quiet systemd.unit= <name> .target",
"shutdown --poweroff hh:mm",
"shutdown --halt +m",
"shutdown -c",
"systemctl poweroff",
"systemctl halt",
"systemctl reboot",
"systemctl suspend",
"systemctl hibernate",
"systemctl hybrid-sleep",
"systemctl suspend-then-hibernate",
"HandlePowerKey=reboot",
"[org/gnome/settings-daemon/plugins/power] power-button-action=<value>",
"/org/gnome/settings-daemon/plugins/power/power-button-action",
"dconf update",
"chronyc",
"chronyc>",
"chronyc command",
"chronyd -q 'server ntp.example.com iburst' 2018-05-18T12:37:43Z chronyd version 3.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +SECHASH +IPV6 +DEBUG) 2018-05-18T12:37:43Z Initial frequency -2.630 ppm 2018-05-18T12:37:48Z System clock wrong by 0.003159 seconds (step) 2018-05-18T12:37:48Z chronyd exiting",
"python3 /usr/share/doc/chrony/ntp2chrony.py -b -v Reading /etc/ntp.conf Reading /etc/ntp/crypto/pw Reading /etc/ntp/keys Writing /etc/chrony.conf Writing /etc/chrony.keys",
"yum install chrony",
"systemctl status chronyd chronyd.service - NTP client/server Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled) Active: active (running) since Wed 2013-06-12 22:23:16 CEST; 11h ago",
"systemctl start chronyd",
"systemctl enable chronyd",
"systemctl stop chronyd",
"systemctl disable chronyd",
"chronyc tracking Reference ID : CB00710F (ntp-server.example.net) Stratum : 3 Ref time (UTC) : Fri Jan 27 09:49:17 2017 System time : 0.000006523 seconds slow of NTP time Last offset : -0.000006747 seconds RMS offset : 0.000035822 seconds Frequency : 3.225 ppm slow Residual freq : 0.000 ppm Skew : 0.129 ppm Root delay : 0.013639022 seconds Root dispersion : 0.001100737 seconds Update interval : 64.2 seconds Leap status : Normal",
"chronyc sources 210 Number of sources = 3 MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== #* GPS0 0 4 377 11 -479ns[ -621ns] /- 134ns ^? a.b.c 2 6 377 23 -923us[ -924us] +/- 43ms ^ d.e.f 1 6 377 21 -2629us[-2619us] +/- 86ms",
"chronyc sourcestats 210 Number of sources = 1 Name/IP Address NP NR Span Frequency Freq Skew Offset Std Dev =============================================================================== abc.def.ghi 11 5 46m -0.001 0.045 1us 25us",
"chronyc makestep",
"#!/bin/sh exit 0",
"driftfile /var/lib/chrony/drift commandkey 1 keyfile /etc/chrony.keys initstepslew 10 client1 client3 client6 local stratum 8 manual allow <subnet>",
"server <server_fqdn> driftfile /var/lib/chrony/drift logdir /var/log/chrony log measurements statistics tracking keyfile /etc/chrony.keys commandkey 24 local stratum 10 initstepslew 20 ntp1.example.net allow <server_ip_address>",
"bindcmdaddress 0.0.0.0",
"bindcmdaddress ::",
"cmdallow 192.168.1.0/24",
"cmdallow 2001:db8::/64",
"firewall-cmd --permanent --add-port=323/udp",
"firewall-cmd --reload",
"--- - hosts: timesync-test vars: timesync_ntp_servers: - hostname: 2.rhel.pool.ntp.org pool: yes iburst: yes roles: - rhel-system-roles.timesync",
"ethtool -T enp1s0",
"hwtimestamp enp1s0 hwtimestamp eno*",
"server ntp.example.comlocal minpoll 0 maxpoll 0",
"server ntp.example.comlocal minpoll 0 maxpoll 0 xleave",
"clientloglimit 100000000",
"systemctrl restart chronyd",
"chronyd[4081]: Enabled HW timestamping on enp1s0 chronyd[4081]: Enabled HW timestamping on eno1",
"chronyc ntpdata Output: [literal,subs=\"+quotes,verbatim,normal\"] Remote address : 203.0.113.15 (CB00710F) Remote port : 123 Local address : 203.0.113.74 (CB00714A) Leap status : Normal Version : 4 Mode : Server Stratum : 1 Poll interval : 0 (1 seconds) Precision : -24 (0.000000060 seconds) Root delay : 0.000015 seconds Root dispersion : 0.000015 seconds Reference ID : 47505300 (GPS) Reference time : Wed May 03 13:47:45 2017 Offset : -0.000000134 seconds Peer delay : 0.000005396 seconds Peer dispersion : 0.000002329 seconds Response time : 0.000152073 seconds Jitter asymmetry: +0.00 NTP tests : 111 111 1111 Interleaved : Yes Authenticated : No TX timestamping : Hardware RX timestamping : Hardware Total TX : 27 Total RX : 27 Total valid RX : 27",
"chronyc sourcestats Output: [literal,subs=\"+quotes,verbatim,normal\"] . 210 Number of sources = 1 Name/IP Address NP NR Span Frequency Freq Skew Offset Std Dev ntp.local 12 7 11 +0.000 0.019 +0ns 49ns .",
"bindaddress 203.0.113.74 hwtimestamp enp1s0 local stratum 1",
"systemctrl restart chronyd",
"chronyc -n tracking Reference ID : 0A051B0A (10.5.27.10) Stratum : 2 Ref time (UTC) : Thu Mar 08 15:46:20 2018 System time : 0.000000338 seconds slow of NTP time Last offset : +0.000339408 seconds RMS offset : 0.000339408 seconds Frequency : 2.968 ppm slow Residual freq : +0.001 ppm Skew : 3.336 ppm Root delay : 0.157559142 seconds Root dispersion : 0.001339232 seconds Update interval : 64.5 seconds Leap status : Normal",
"ntpstat synchronised to NTP server (10.5.27.10) at stratum 2 time correct to within 80 ms polling server every 64 s",
"For example: server time.example.com iburst nts server nts.netnod.se iburst nts server ptbtime1.ptb.de iburst nts",
"ntsdumpdir /var/lib/chrony",
"PEERNTP=no",
"systemctl restart chronyd",
"chronyc -N authdata Name/IP address Mode KeyID Type KLen Last Atmp NAK Cook CLen ================================================================ time.example.com NTS 1 15 256 33m 0 0 8 100 nts.netnod.se NTS 1 15 256 33m 0 0 8 100 ptbtime1.ptb.de NTS 1 15 256 33m 0 0 8 100",
"chronyc -N sources MS Name/IP address Stratum Poll Reach LastRx Last sample ========================================================= time.example.com 3 6 377 45 +355us[ +375us] +/- 11ms nts.netnod.se 1 6 377 44 +237us[ +237us] +/- 23ms ptbtime1.ptb.de 1 6 377 44 -170us[ -170us] +/- 22ms",
"ntsserverkey /etc/pki/tls/private/ <ntp-server.example.net> .key ntsservercert /etc/pki/tls/certs/ <ntp-server.example.net> .crt",
"chown root:chrony /etc/pki/tls/private/<ntp-server.example.net>.key /etc/pki/tls/certs/<ntp-server.example.net>.crt chmod 644 /etc/pki/tls/private/<ntp-server.example.net>.key /etc/pki/tls/certs/<ntp-server.example.net>.crt",
"firewall-cmd -permannent --add-port={323/udp,4460/tcp} firewall-cmd --reload",
"systemctl restart chronyd",
"chronyd -Q -t 3 'server ntp-server.example.net iburst nts maxsamples 1' 2021-09-15T13:45:26Z chronyd version 4.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) 2021-09-15T13:45:26Z Disabled control of system clock 2021-09-15T13:45:28Z System clock wrong by 0.002205 seconds (ignored) 2021-09-15T13:45:28Z chronyd exiting",
"chronyc serverstats NTP packets received : 7 NTP packets dropped : 0 Command packets received : 22 Command packets dropped : 0 Client log records dropped : 0 NTS-KE connections accepted: 1 NTS-KE connections dropped : 0 Authenticated NTP packets: 7",
"yum list langpacks- *",
"yum list installed langpacks *",
"yum list available langpacks *",
"yum repoquery --whatsupplements langpacks-<locale_code>",
"yum install langpacks-<locale_code>",
"yum remove langpacks-<locale_code>",
"yum install glibc-langpack-<locale_code>",
"sudo grubby --update-kernel ALL --args crashkernel=512M",
"--- - hosts: kdump-test vars: kdump_path: /var/crash roles: - rhel-system-roles.kdump",
"yum install rear",
"vi /etc/rear/local.conf",
"BACKUP=NETFS BACKUP_URL= backup.location",
"NETFS_KEEP_OLD_BACKUP_COPY=y",
"BACKUP_TYPE=incremental",
"rear mkrescue",
"rear mkbackuponly",
"rear mkbackup",
"BACKUP=NETFS BACKUP_URL=nfs:// <nfsserver name> / <share path>",
"rear mkbackup",
"virtualenv -p python3.11 venv3.11 Running virtualenv with interpreter /usr/bin/python3.11 ERROR: Virtual environments created by virtualenv < 20 are not compatible with Python 3.11. ERROR: Use python3.11 -m venv instead.",
"yum install python3",
"yum install python38",
"yum install python39",
"yum install python3.11",
"yum install python3.12",
"python3 --version",
"python3.8 --version",
"python3.9 --version",
"python3.11 --version",
"python3.12 --version",
"yum install python3-requests",
"yum install python38-Cython",
"yum install python39-pip",
"yum install python3.11-pip",
"yum install python3.12-pip",
"subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms",
"yum module enable python39-devel",
"yum install python3-pytest",
"yum install python38-pytest",
"yum install python39-pytest",
"yum install python3.11-pytest",
"yum install python3.12-pytest",
"yum install python2",
"yum install python2-requests",
"yum install python2-Cython",
"python2 --version",
"python3 python3 -m venv --help python3 -m pip install package pip3 install package",
"python3.8 python3.8 -m venv --help python3.8 -m pip install package pip3.8 install package",
"python3.9 python3.9 -m venv --help python3.9 -m pip install package pip3.9 install package",
"python3.11 python3.11 -m venv --help python3.11 -m pip install package pip3.11 install package",
"python3.12 python3.12 -m venv --help python3.12 -m pip install package pip3.12 install package",
"python2 python2 -m pip install package pip2 install package",
"alternatives --set python /usr/bin/python3",
"alternatives --set python /usr/bin/python3.8",
"alternatives --set python /usr/bin/python3.9",
"alternatives --set python /usr/bin/python3.11",
"alternatives --set python /usr/bin/python3.12",
"alternatives --set python /usr/bin/python2",
"alternatives --config python",
"alternatives --auto python",
"%global modname detox 1 Name: python3-detox 2 Version: 0.12 Release: 4%{?dist} Summary: Distributing activities of the tox tool License: MIT URL: https://pypi.io/project/detox Source0: https://pypi.io/packages/source/d/%{modname}/%{modname}-%{version}.tar.gz BuildArch: noarch BuildRequires: python36-devel 3 BuildRequires: python3-setuptools BuildRequires: python36-rpm-macros BuildRequires: python3-six BuildRequires: python3-tox BuildRequires: python3-py BuildRequires: python3-eventlet %?python_enable_dependency_generator 4 %description Detox is the distributed version of the tox python testing tool. It makes efficient use of multiple CPUs by running all possible activities in parallel. Detox has the same options and configuration that tox has, so after installation you can run it in the same way and with the same options that you use for tox. USD detox %prep %autosetup -n %{modname}-%{version} %build %py3_build 5 %install %py3_install %check %{__python3} setup.py test 6 %files -n python3-%{modname} %doc CHANGELOG %license LICENSE %{_bindir}/detox %{python3_sitelib}/%{modname}/ %{python3_sitelib}/%{modname}-%{version}* %changelog",
"#!/usr/bin/python3 #!/usr/bin/python3.6 #!/usr/bin/python3.8 #!/usr/bin/python3.9 #!/usr/bin/python3.11 #!/usr/bin/python3.12 #!/usr/bin/python2",
"#!/usr/bin/python",
"#!/usr/bin/env python",
"pathfix.py -pn -i %{__python3} PATH ...",
"BuildRequires: python36-rpm-macros",
"%undefine __brp_mangle_shebangs",
"yum module install php: stream",
"yum module install php:8.0",
"yum module install php: stream/profile",
"yum module install php:8.0/minimal",
"yum module install httpd:2.4",
"systemctl start httpd",
"systemctl restart httpd",
"systemctl start php-fpm",
"systemctl enable php-fpm httpd",
"echo '<?php phpinfo(); ?>' > /var/www/html/index.php",
"http://<hostname>/",
"mkdir hello",
"<!DOCTYPE html> <html> <head> <title>Hello, World! Page</title> </head> <body> <?php echo 'Hello, World!'; ?> </body> </html>",
"systemctl start httpd",
"http://<hostname>/hello/hello.php",
"yum module install nginx: stream",
"yum module install nginx:1.18",
"systemctl start nginx",
"systemctl restart nginx",
"systemctl start php-fpm",
"systemctl enable php-fpm nginx",
"echo '<?php phpinfo(); ?>' > /usr/share/nginx/html/index.php",
"http://<hostname>/",
"mkdir hello",
"<!DOCTYPE html> <html> <head> <title>Hello, World! Page</title> </head> <body> <?php echo 'Hello, World!'; ?> </body> </html>",
"systemctl start nginx",
"http://<hostname>/hello/hello.php",
"php filename .php",
"<?php echo 'Hello, World!'; ?>",
"php hello.php",
"Tcl_GetErrorLine(interp)",
"include <tcl.h> if !defined(Tcl_GetErrorLine) define Tcl_GetErrorLine(interp) (interp->errorLine) endif",
"tkIconList_Arrange tkIconList_AutoScan tkIconList_Btn1 tkIconList_Config tkIconList_Create tkIconList_CtrlBtn1 tkIconList_Curselection tkIconList_DeleteAll tkIconList_Double1 tkIconList_DrawSelection tkIconList_FocusIn tkIconList_FocusOut tkIconList_Get tkIconList_Goto tkIconList_Index tkIconList_Invoke tkIconList_KeyPress tkIconList_Leave1 tkIconList_LeftRight tkIconList_Motion1 tkIconList_Reset tkIconList_ReturnKey tkIconList_See tkIconList_Select tkIconList_Selection tkIconList_ShiftBtn1 tkIconList_UpDown"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html-single/configuring_basic_system_settings/index |
1.2. Overview of DM-Multipath | 1.2. Overview of DM-Multipath DM-Multipath can be used to provide: Redundancy DM-Multipath can provide failover in an active/passive configuration. In an active/passive configuration, only half the paths are used at any time for I/O. If any element of an I/O path (the cable, switch, or controller) fails, DM-Multipath switches to an alternate path. Improved Performance DM-Multipath can be configured in active/active mode, where I/O is spread over the paths in a round-robin fashion. In some configurations, DM-Multipath can detect loading on the I/O paths and dynamically re-balance the load. Figure 1.1, "Active/Passive Multipath Configuration with One RAID Device" shows an active/passive configuration with two I/O paths from the server to a RAID device. There are 2 HBAs on the server, 2 SAN switches, and 2 RAID controllers. Figure 1.1. Active/Passive Multipath Configuration with One RAID Device In this configuration, there is one I/O path that goes through hba1, SAN1, and controller 1 and a second I/O path that goes through hba2, SAN2, and controller2. There are many points of possible failure in this configuration: HBA failure FC cable failure SAN switch failure Array controller port failure With DM-Multipath configured, a failure at any of these points will cause DM-Multipath to switch to the alternate I/O path. Figure 1.2, "Active/Passive Multipath Configuration with Two RAID Devices" shows a more complex active/passive configuration with 2 HBAs on the server, 2 SAN switches, and 2 RAID devices with 2 RAID controllers each. Figure 1.2. Active/Passive Multipath Configuration with Two RAID Devices In the example shown in Figure 1.2, "Active/Passive Multipath Configuration with Two RAID Devices" , there are two I/O paths to each RAID device (just as there are in the example shown in Figure 1.1, "Active/Passive Multipath Configuration with One RAID Device" ). With DM-Multipath configured, a failure at any of the points of the I/O path to either of the RAID devices will cause DM-Multipath to switch to the alternate I/O path for that device. Figure 1.3, "Active/Active Multipath Configuration with One RAID Device" shows an active/active configuration with 2 HBAs on the server, 1 SAN switch, and 2 RAID controllers. There are four I/O paths from the server to a storage device: hba1 to controller1 hba1 to controller2 hba2 to controller1 hba2 to controller2 In this configuration, I/O can be spread among those four paths. Figure 1.3. Active/Active Multipath Configuration with One RAID Device | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/dm_multipath/mpio_description |
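In practice, the choice between the active/passive and active/active layouts shown above usually comes down to the path_grouping_policy in /etc/multipath.conf. The snippet below is a hedged sketch rather than a complete configuration: vendor-supplied device defaults may already set a policy, and the exact service commands vary by release, so treat the values shown as a starting point to verify on your system.

```bash
# Sketch: enable multipathing, then choose how paths are grouped per device.
mpathconf --enable --with_multipathd y   # creates /etc/multipath.conf and starts multipathd

cat >> /etc/multipath.conf <<'EOF'
defaults {
    # "failover" -> one path per priority group (active/passive, Figures 1.1 and 1.2)
    # "multibus" -> all paths in one group, I/O spread round-robin (active/active, Figure 1.3)
    path_grouping_policy multibus
}
EOF

service multipathd restart   # re-read the configuration (RHEL 6 style)
multipath -ll                # show the resulting path groups for each device
```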
Chapter 9. Clair configuration overview | Chapter 9. Clair configuration overview Clair is configured by a structured YAML file. Each Clair node needs to specify what mode it will run in and a path to a configuration file through CLI flags or environment variables. For example: $ clair -conf ./path/to/config.yaml -mode indexer or $ clair -conf ./path/to/config.yaml -mode matcher The aforementioned commands start two Clair nodes using the same configuration file. One runs the indexing facilities, while the other runs the matching facilities. If you are running Clair in combo mode, you must supply the indexer, matcher, and notifier configuration blocks in the configuration. 9.1. Information about using Clair in a proxy environment Environment variables respected by the Go standard library can be specified if needed, for example: HTTP_PROXY $ export HTTP_PROXY=http://<user_name>:<password>@<proxy_host>:<proxy_port> HTTPS_PROXY $ export HTTPS_PROXY=https://<user_name>:<password>@<proxy_host>:<proxy_port> SSL_CERT_DIR $ export SSL_CERT_DIR=/<path>/<to>/<ssl>/<certificates> NO_PROXY $ export NO_PROXY=<comma_separated_list_of_hosts_and_domains> If you are using a proxy server in your environment with Clair's updater URLs, you must identify which URLs need to be added to the proxy allowlist to ensure that Clair can access them unimpeded. For example, the osv updater requires access to https://osv-vulnerabilities.storage.googleapis.com to fetch ecosystem data dumps. In this scenario, the URL must be added to the proxy allowlist. For a full list of updater URLs, see "Clair updater URLs". You must also ensure that the standard Clair URLs are added to the proxy allowlist: https://search.maven.org/solrsearch/select https://catalog.redhat.com/api/containers/ https://access.redhat.com/security/data/metrics/repository-to-cpe.json https://access.redhat.com/security/data/metrics/container-name-repos-map.json When configuring the proxy server, take into account any authentication requirements or specific proxy settings needed to enable seamless communication between Clair and these URLs. By thoroughly documenting and addressing these considerations, you can ensure that Clair functions effectively while routing its updater traffic through the proxy. 9.2. Clair configuration reference The following YAML shows an example Clair configuration: http_listen_addr: "" introspection_addr: "" log_level: "" tls: {} indexer: connstring: "" scanlock_retry: 0 layer_scan_concurrency: 5 migrations: false scanner: {} airgap: false matcher: connstring: "" indexer_addr: "" migrations: false period: "" disable_updaters: false update_retention: 2 matchers: names: nil config: nil updaters: sets: nil config: nil notifier: connstring: "" migrations: false indexer_addr: "" matcher_addr: "" poll_interval: "" delivery_interval: "" disable_summary: false webhook: null amqp: null stomp: null auth: psk: nil trace: name: "" probability: null jaeger: agent: endpoint: "" collector: endpoint: "" username: null password: null service_name: "" tags: nil buffer_max: 0 metrics: name: "" prometheus: endpoint: null dogstatsd: url: "" Note The above YAML file lists every key for completeness. Using this configuration file as-is will result in some options not having their defaults set normally. 9.3. Clair general fields The following table describes the general configuration fields available for a Clair deployment. Field Type Description http_listen_addr String Configures where the HTTP API is exposed. 
Default: :6060 introspection_addr String Configures where Clair's metrics and health endpoints are exposed. log_level String Sets the logging level. Requires one of the following strings: debug-color , debug , info , warn , error , fatal , panic tls String A map containing the configuration for serving the HTTP API over TLS/SSL and HTTP/2. .cert String The TLS certificate to be used. Must be a full-chain certificate. Example configuration for general Clair fields The following example shows a Clair configuration. Example configuration for general Clair fields # ... http_listen_addr: 0.0.0.0:6060 introspection_addr: 0.0.0.0:8089 log_level: info # ... 9.4. Clair indexer configuration fields The following table describes the configuration fields for Clair's indexer component. Field Type Description indexer Object Provides Clair indexer node configuration. .airgap Boolean Disables HTTP access to the internet for indexers and fetchers. Private IPv4 and IPv6 addresses are allowed. Database connections are unaffected. .connstring String A Postgres connection string. Accepts format as a URL or libpq connection string. .index_report_request_concurrency Integer Rate limits the number of index report creation requests. Setting this to 0 attempts to auto-size this value. Setting a negative value means unlimited. The auto-sizing is a multiple of the number of available cores. The API returns a 429 status code if concurrency is exceeded. .scanlock_retry Integer A positive integer representing seconds. Concurrent indexers lock on manifest scans to avoid clobbering. This value tunes how often a waiting indexer polls for the lock. .layer_scan_concurrency Integer Positive integer limiting the number of concurrent layer scans. Indexers scan a manifest's layers concurrently. This value tunes the number of layers an indexer scans in parallel. .migrations Boolean Whether indexer nodes handle migrations to their database. .scanner String Indexer configuration. Scanner allows for passing configuration options to layer scanners. The scanner will have this configuration passed to it on construction if designed to do so. .scanner.dist String A map with the name of a particular scanner and arbitrary YAML as a value. .scanner.package String A map with the name of a particular scanner and arbitrary YAML as a value. .scanner.repo String A map with the name of a particular scanner and arbitrary YAML as a value. Example indexer configuration The following example shows a hypothetical indexer configuration for Clair. Example indexer configuration # ... indexer: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true # ... 9.5. Clair matcher configuration fields The following table describes the configuration fields for Clair's matcher component. Note Differs from matchers configuration fields. Field Type Description matcher Object Provides Clair matcher node configuration. .cache_age String Controls how long users should be hinted to cache responses for. .connstring String A Postgres connection string. Accepts format as a URL or libpq connection string. .max_conn_pool Integer Limits the database connection pool size. Clair allows for a custom connection pool size. This number directly sets how many active database connections are allowed concurrently. This parameter will be ignored in a future version. Users should configure this through the connection string. 
9.5. Clair matcher configuration fields The following table describes the configuration fields for Clair's matcher component. Note Differs from matchers configuration fields. Field Type Description matcher Object Provides Clair matcher node configuration. .cache_age String Controls how long users should be hinted to cache responses for. .connstring String A Postgres connection string. Accepts format as a URL or libpq connection string. .max_conn_pool Integer Limits the database connection pool size. Clair allows for a custom connection pool size. This number directly sets how many active database connections are allowed concurrently. This parameter will be ignored in a future version. Users should configure this through the connection string. .indexer_addr String A matcher contacts an indexer to create a vulnerability report. The location of this indexer is required. .migrations Boolean Whether matcher nodes handle migrations to their databases. .period String Determines how often updates for new security advisories take place. Defaults to 6h . .disable_updaters Boolean Whether to run background updates or not. Default: False .update_retention Integer Sets the number of update operations to retain between garbage collection cycles. This should be set to a safe MAX value based on database size constraints. Defaults to 10 . If a value of less than 0 is provided, garbage collection is disabled. 2 is the minimum value to ensure updates can be compared to notifications. Example matcher configuration Example matcher configuration # ... matcher: connstring: >- host=<DB_HOST> port=5432 dbname=<matcher> user=<DB_USER> password=<DB_PASS> sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ disable_updaters: false migrations: true period: 6h update_retention: 2 # ... 9.6. Clair matchers configuration fields The following table describes the configuration fields for Clair's matchers component. Note Differs from matcher configuration fields. Table 9.1. Matchers configuration fields Field Type Description matchers Array of strings Provides configuration for the in-tree matchers . .names String A list of string values informing the matcher factory about enabled matchers. If value is set to null , the default list of matchers run. The following strings are accepted: alpine-matcher , aws-matcher , debian-matcher , gobin , java-maven , oracle , photon , python , rhel , rhel-container-matcher , ruby , suse , ubuntu-matcher .config String Provides configuration to a specific matcher. A map keyed by the name of the matcher containing a sub-object which will be provided to the matchers factory constructor. For example: Example matchers configuration The following example shows a hypothetical Clair deployment that requires only the alpine , aws , debian , and oracle matchers. Example matchers configuration # ... matchers: names: - "alpine-matcher" - "aws" - "debian" - "oracle" # ... 9.7. Clair updaters configuration fields The following table describes the configuration fields for Clair's updaters component. Table 9.2. Updaters configuration fields Field Type Description updaters Object Provides configuration for the matcher's update manager. .sets String A list of values informing the update manager which updaters to run. If value is set to null , the default set of updaters runs the following: alpine , aws , clair.cvss , debian , oracle , photon , osv , rhel , rhcc , suse , ubuntu If left blank, zero updaters run. .config String Provides configuration to specific updater sets. A map keyed by the name of the updater set containing a sub-object which will be provided to the updater set's constructor. For a list of the sub-objects for each updater, see "Advanced updater configuration". Example updaters configuration In the following configuration, only the rhel set is configured. The ignore_unpatched variable, which is specific to the rhel updater, is also defined. Example updaters configuration # ... updaters: sets: - rhel config: rhel: ignore_unpatched: false # ...
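Because the matchers and updaters sections are configured independently, it can help to keep them aligned. The following sketch narrows a deployment to Red Hat content only; the names come from the lists documented above, and whether this selection is appropriate depends on the images you scan:

# ...
matchers:
  names:
    - "rhel"
    - "rhel-container-matcher"
updaters:
  sets:
    - rhel
    - rhcc
# ...

Enabling only the updater sets that back the enabled matchers reduces update traffic and database growth, which in turn lowers the amount of data retained under update_retention.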
9.8. Clair notifier configuration fields The general notifier configuration fields for Clair are listed below. Field Type Description notifier Object Provides Clair notifier node configuration. .connstring String Postgres connection string. Accepts format as URL, or libpq connection string. .migrations Boolean Whether notifier nodes handle migrations to their database. .indexer_addr String A notifier contacts an indexer to create or obtain manifests affected by vulnerabilities. The location of this indexer is required. .matcher_addr String A notifier contacts a matcher to list update operations and acquire diffs. The location of this matcher is required. .poll_interval String The frequency at which the notifier will query a matcher for update operations. .delivery_interval String The frequency at which the notifier attempts delivery of created, or previously failed, notifications. .disable_summary Boolean Controls whether notifications should be summarized to one per manifest. Example notifier configuration The following notifier snippet is for a minimal configuration. Example notifier configuration # ... notifier: connstring: >- host=DB_HOST port=5432 dbname=notifier user=DB_USER password=DB_PASS sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ matcher_addr: http://clair-v4/ delivery_interval: 5s migrations: true poll_interval: 15s webhook: target: "http://webhook/" callback: "http://clair-notifier/notifier/api/v1/notifications" headers: "" amqp: null stomp: null # ... 9.8.1. Clair webhook configuration fields The following webhook fields are available for the Clair notifier environment. Table 9.3. Clair webhook fields .webhook Object Configures the notifier for webhook delivery. .webhook.target String URL where the webhook will be delivered. .webhook.callback String The callback URL where notifications can be retrieved. The notification ID will be appended to this URL. This will typically be where the Clair notifier is hosted. .webhook.headers String A map associating a header name to a list of values. Example webhook configuration Example webhook configuration # ... notifier: # ... webhook: target: "http://webhook/" callback: "http://clair-notifier/notifier/api/v1/notifications" # ... 9.8.2. Clair amqp configuration fields The following Advanced Message Queuing Protocol (AMQP) fields are available for the Clair notifier environment. .amqp Object Configures the notifier for AMQP delivery. Note Clair does not declare any AMQP components on its own. All attempts to use an exchange or queue are passive only and will fail. Broker administrators should set up exchanges and queues ahead of time. .amqp.direct Boolean If true , the notifier will deliver individual notifications (not a callback) to the configured AMQP broker. .amqp.rollup Integer When amqp.direct is set to true , this value informs the notifier of how many notifications to send in a direct delivery. For example, if direct is set to true , and amqp.rollup is set to 5 , the notifier delivers no more than 5 notifications in a single JSON payload to the broker. Setting the value to 0 effectively sets it to 1 . .amqp.exchange Object The AMQP exchange to connect to. .amqp.exchange.name String The name of the exchange to connect to. .amqp.exchange.type String The type of the exchange. Typically one of the following: direct , fanout , topic , headers . .amqp.exchange.durability Boolean Whether the configured queue is durable. .amqp.exchange.auto_delete Boolean Whether the configured queue uses an auto_delete_policy .
.amqp.routing_key String The name of the routing key each notification is sent with. .amqp.callback String If amqp.direct is set to false , this URL is provided in the notification callback sent to the broker. This URL should point to Clair's notification API endpoint. .amqp.uris String A list of one or more AMQP brokers to connect to, in priority order. .amqp.tls Object Configures TLS/SSL connection to an AMQP broker. .amqp.tls.root_ca String The filesystem path where a root CA can be read. .amqp.tls.cert String The filesystem path where a TLS/SSL certificate can be read. Note Clair also allows SSL_CERT_DIR , as documented for the Go crypto/x509 package. .amqp.tls.key String The filesystem path where a TLS/SSL private key can be read. Example AMQP configuration The following example shows a hypothetical AMQP configuration for Clair. Example AMQP configuration # ... notifier: # ... amqp: exchange: name: "" type: "direct" durable: true auto_delete: false uris: ["amqp://user:pass@host:10000/vhost"] direct: false routing_key: "notifications" callback: "http://clair-notifier/notifier/api/v1/notifications" tls: root_ca: "optional/path/to/rootca" cert: "mandatory/path/to/cert" key: "mandatory/path/to/key" # ... 9.8.3. Clair STOMP configuration fields The following Simple Text Oriented Message Protocol (STOMP) fields are available for the Clair notifier environment. .stomp Object Configures the notifier for STOMP delivery. .stomp.direct Boolean If true , the notifier delivers individual notifications (not a callback) to the configured STOMP broker. .stomp.rollup Integer If stomp.direct is set to true , this value limits the number of notifications sent in a single direct delivery. For example, if direct is set to true , and rollup is set to 5 , the notifier delivers no more than 5 notifications in a single JSON payload to the broker. Setting the value to 0 effectively sets it to 1 . .stomp.callback String If stomp.direct is set to false , this URL is provided in the notification callback sent to the broker. This URL should point to Clair's notification API endpoint. .stomp.destination String The STOMP destination to deliver notifications to. .stomp.uris String A list of one or more STOMP brokers to connect to in priority order. .stomp.tls Object Configures TLS/SSL connection to the STOMP broker. .stomp.tls.root_ca String The filesystem path where a root CA can be read. Note Clair also respects SSL_CERT_DIR , as documented for the Go crypto/x509 package. .stomp.tls.cert String The filesystem path where a TLS/SSL certificate can be read. .stomp.tls.key String The filesystem path where a TLS/SSL private key can be read. .stomp.user String Configures login details for the STOMP broker. .stomp.user.login String The STOMP login to connect with. .stomp.user.passcode String The STOMP passcode to connect with. Example STOMP configuration The following example shows a hypothetical STOMP configuration for Clair. Example STOMP configuration # ... notifier: # ... stomp: destination: "notifications" direct: false callback: "http://clair-notifier/notifier/api/v1/notifications" login: login: "username" passcode: "passcode" tls: root_ca: "optional/path/to/rootca" cert: "mandatory/path/to/cert" key: "mandatory/path/to/key" # ...
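For the delivery mechanisms above that send a callback rather than the notifications themselves, the receiver is expected to fetch the notification list from the callback URL, to which the notification ID is appended. A minimal sketch, assuming the notifier is reachable at clair-notifier (as in the examples above) and that PSK authentication, described in the next section, is in use so a signed bearer token is required:

curl -s -H "Authorization: Bearer <jwt_signed_with_configured_psk>" \
    "http://clair-notifier/notifier/api/v1/notifications/<notification_id>"

The host name, the token, and the notification ID shown here are placeholders rather than fixed values.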
9.9. Clair authorization configuration fields The following authorization configuration fields are available for Clair. Field Type Description auth Object Defines Clair's external and intra-service JWT based authentication. If multiple auth mechanisms are defined, Clair picks one. Currently, multiple mechanisms are unsupported. .psk String Defines pre-shared key authentication. .psk.key String A shared base64 encoded key distributed between all parties signing and verifying JWTs. .psk.iss String A list of JWT issuers to verify. An empty list accepts any issuer in a JWT claim. Example authorization configuration The following authorization snippet is for a minimal configuration. Example authorization configuration # ... auth: psk: key: MTU5YzA4Y2ZkNzJoMQ== 1 iss: ["quay"] # ... 9.10. Clair trace configuration fields The following trace configuration fields are available for Clair. Field Type Description trace Object Defines distributed tracing configuration based on OpenTelemetry. .name String The name of the application traces will belong to. .probability Integer The probability a trace will occur. .jaeger Object Defines values for Jaeger tracing. .jaeger.agent Object Defines values for configuring delivery to a Jaeger agent. .jaeger.agent.endpoint String An address in the <host>:<port> syntax where traces can be submitted. .jaeger.collector Object Defines values for configuring delivery to a Jaeger collector. .jaeger.collector.endpoint String An address in the <host>:<port> syntax where traces can be submitted. .jaeger.collector.username String A Jaeger username. .jaeger.collector.password String A Jaeger password. .jaeger.service_name String The service name registered in Jaeger. .jaeger.tags String Key-value pairs to provide additional metadata. .jaeger.buffer_max Integer The maximum number of spans that can be buffered in memory before they are sent to the Jaeger backend for storage and analysis. Example trace configuration The following example shows a hypothetical trace configuration for Clair. Example trace configuration # ... trace: name: "jaeger" probability: 1 jaeger: agent: endpoint: "localhost:6831" service_name: "clair" # ... 9.11. Clair metrics configuration fields The following metrics configuration fields are available for Clair. Field Type Description metrics Object Defines how Clair reports its metrics. .name String The name of the metrics in use. .prometheus String Configuration for a Prometheus metrics exporter. .prometheus.endpoint String Defines the path where metrics are served. Example metrics configuration The following example shows a hypothetical metrics configuration for Clair. Example metrics configuration # ... metrics: name: "prometheus" prometheus: endpoint: "/metricsz" # ... | [
"clair -conf ./path/to/config.yaml -mode indexer",
"clair -conf ./path/to/config.yaml -mode matcher",
"export HTTP_PROXY=http://<user_name>:<password>@<proxy_host>:<proxy_port>",
"export HTTPS_PROXY=https://<user_name>:<password>@<proxy_host>:<proxy_port>",
"export SSL_CERT_DIR=/<path>/<to>/<ssl>/<certificates>",
"export NO_PROXY=<comma_separated_list_of_hosts_and_domains>",
"http_listen_addr: \"\" introspection_addr: \"\" log_level: \"\" tls: {} indexer: connstring: \"\" scanlock_retry: 0 layer_scan_concurrency: 5 migrations: false scanner: {} airgap: false matcher: connstring: \"\" indexer_addr: \"\" migrations: false period: \"\" disable_updaters: false update_retention: 2 matchers: names: nil config: nil updaters: sets: nil config: nil notifier: connstring: \"\" migrations: false indexer_addr: \"\" matcher_addr: \"\" poll_interval: \"\" delivery_interval: \"\" disable_summary: false webhook: null amqp: null stomp: null auth: psk: nil trace: name: \"\" probability: null jaeger: agent: endpoint: \"\" collector: endpoint: \"\" username: null password: null service_name: \"\" tags: nil buffer_max: 0 metrics: name: \"\" prometheus: endpoint: null dogstatsd: url: \"\"",
"http_listen_addr: 0.0.0.0:6060 introspection_addr: 0.0.0.0:8089 log_level: info",
"indexer: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true",
"matcher: connstring: >- host=<DB_HOST> port=5432 dbname=<matcher> user=<DB_USER> password=D<B_PASS> sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ disable_updaters: false migrations: true period: 6h update_retention: 2",
"matchers: names: - \"alpine-matcher\" - \"aws\" - \"debian\" - \"oracle\"",
"updaters: sets: - rhel config: rhel: ignore_unpatched: false",
"notifier: connstring: >- host=DB_HOST port=5432 dbname=notifier user=DB_USER password=DB_PASS sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ matcher_addr: http://clair-v4/ delivery_interval: 5s migrations: true poll_interval: 15s webhook: target: \"http://webhook/\" callback: \"http://clair-notifier/notifier/api/v1/notifications\" headers: \"\" amqp: null stomp: null",
"notifier: webhook: target: \"http://webhook/\" callback: \"http://clair-notifier/notifier/api/v1/notifications\"",
"notifier: amqp: exchange: name: \"\" type: \"direct\" durable: true auto_delete: false uris: [\"amqp://user:pass@host:10000/vhost\"] direct: false routing_key: \"notifications\" callback: \"http://clair-notifier/notifier/api/v1/notifications\" tls: root_ca: \"optional/path/to/rootca\" cert: \"madatory/path/to/cert\" key: \"madatory/path/to/key\"",
"notifier: stomp: desitnation: \"notifications\" direct: false callback: \"http://clair-notifier/notifier/api/v1/notifications\" login: login: \"username\" passcode: \"passcode\" tls: root_ca: \"optional/path/to/rootca\" cert: \"madatory/path/to/cert\" key: \"madatory/path/to/key\"",
"auth: psk: key: MTU5YzA4Y2ZkNzJoMQ== 1 iss: [\"quay\"]",
"trace: name: \"jaeger\" probability: 1 jaeger: agent: endpoint: \"localhost:6831\" service_name: \"clair\"",
"metrics: name: \"prometheus\" prometheus: endpoint: \"/metricsz\""
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3/html/vulnerability_reporting_with_clair_on_red_hat_quay/config-fields-overview |
Chapter 4. Get It | Chapter 4. Get It Managed cloud service You have the following options for subscribing to OpenShift AI as a managed service: For OpenShift Dedicated, subscribe through Red Hat. For Red Hat OpenShift Service on Amazon Web Services (ROSA), subscribe through Red Hat or subscribe through the AWS Marketplace. Self-managed software To get Red Hat OpenShift AI as self-managed software, sign up for it with your Red Hat account team. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/introduction_to_red_hat_openshift_ai_cloud_service/get_it |
Chapter 4. Deploying application and view security insights | Chapter 4. Deploying application and view security insights Deploy applications using Argo CD in OpenShift GitOps to enable continuous deployment. Argo CD uses your Git repository as a single source of truth for infrastructure configurations. Updates to the repository trigger deployments across development, staging, and production environments. Note The procedures provide an example deployment workflow. Customize it to fit your organization's requirements. 4.1. Promoting a build to a pre-production or production environment Promote a build by updating the GitOps repository through a pull request (PR). In RHDH, select Catalog . From the Kind dropdown list, select Resource , and then select a GitOps repository. Open the Overview tab and select View Source to access the repository. (Optional) Alternatively, select Catalog , open the Overview tab, and select View TechDocs . In the Home > Repository section, select the GitOps repository. Clone your GitOps repository. Note Ensure that the local clone is up-to-date. Create a new branch. Navigate to the component/<app-name>/overlays directory, which contains subdirectories for development , stage , and prod . Follow these steps to promote the application: To move your application Do this From development to stage environment Open the development/deployment-patch.yaml file and copy the container image URL. For example, quay.io/<username>/imageName:imageHash. Open the stage/deployment-patch.yaml file and replace the container image URL with the one you copied. Note To include additional configuration changes (for example, replicas), copy them from the development/deployment-patch.yaml file to the stage/deployment-patch.yaml file. From stage to production environment Open the stage/deployment-patch.yaml file and copy the container image URL. For example, quay.io/<username>/imageName:imageHash. Open the prod/deployment-patch.yaml file and replace the container image URL with the one you copied. Note To include additional configuration changes (for example, replicas), copy them from the stage/deployment-patch.yaml file to the prod/deployment-patch.yaml file. Commit and push your updates. Create a PR to start a promotion pipeline. The pipeline validates the changes against Red Hat Enterprise Contract (Enterprise Contract) policies. A sample command-line workflow for these steps is shown at the end of this section. Check the pipeline run in the CI tab of RHDH. Merge the PR to trigger Argo CD, which applies the changes and promotes the build to the environment. Verification Use the Topology tab in RHDH to confirm the application distribution across namespaces. Use the CD tab to view deployment details, including the status, updates, commit message (for example, Promote stage to prod), and container image changes.
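The promotion steps above map to a short command-line workflow. The following is a sketch; the repository URL, branch name, and application name are placeholders for your own GitOps repository:

git clone https://<git_host>/<your_org>/<app-name>-gitops.git
cd <app-name>-gitops
git checkout -b promote-stage-to-prod
# Copy the image value from component/<app-name>/overlays/stage/deployment-patch.yaml
# into component/<app-name>/overlays/prod/deployment-patch.yaml, then commit the change.
git add component/<app-name>/overlays/prod/deployment-patch.yaml
git commit -m "Promote stage to prod"
git push origin promote-stage-to-prod

Opening a pull request from this branch starts the promotion pipeline described above; merging it lets Argo CD apply the change to the production environment.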
4.2. Viewing security insights The promotion pipeline includes several tasks to ensure secure and compliant deployments. The pipeline tasks include: git-clone : Clones the repository into the workspace using the git-clone task. gather-deploy-images : Extracts the container images from deployment YAML files for validation. verify-enterprise-contract : Validates the container images using Enterprise Contract (EC) policies and Sigstore's cosign tool. deploy-images : Deploys validated images to the target environment. download-sbom-from-url-in-attestations : Retrieves SBOMs for images by downloading OCI blobs referenced in image attestations. upload-sbom-to-trustification : Uploads SBOMs to Trustification using the BOMbastic API. 4.2.1. Enterprise contract task The Enterprise Contract (EC) is a suite of tools designed to maintain software supply chain security. It helps maintain the integrity of container images by verifying that they meet defined requirements before being promoted to production. If an image does not comply with the set policies, the EC generates a report identifying the issues that must be resolved. The Red Hat Trusted Application Pipeline build process generates a signed in-toto attestation of the build pipeline. These attestations cryptographically verify the build's integrity. The EC then evaluates the build against defined policies, ensuring it complies with the organizational security standards. Interpreting compliance reports EC compliance reports provide detailed insights into application security and adherence to policies. Here's how to understand these reports: Policy compliance overview: Displays the checks performed, their status (success, warning, or failure), and messages explaining warnings or failures. Details provided: Policy reports detail: Successful checks : Lists the policies that passed validation. Warnings and failures : Highlights policies that triggered warnings or failed checks, with explanations. Rule compliance : Shows how the application adheres to individual policy rules, such as source code references or attestation validations. Figure 4.1. The EC report Using compliance insights The insights from EC compliance reports help prioritize security and compliance tasks: Review policy compliance: Ensure your application meets standards such as Supply Chain Levels for Software Artifacts (SLSA). Address any compliance gaps based on the recommendations in the report. Streamline review: Use filters in the reports to focus on critical issues, enabling a faster and more efficient review process. Additional resources For more information on EC policy and configuration, see Managing compliance with Enterprise Contract . | null | https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html/getting_started_with_red_hat_trusted_application_pipeline/deploy-application-and-view-security-insights_default |
Appendix A. Using your subscription | Appendix A. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. A.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. A.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. A.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component. A.4. Registering your system for packages To install RPM packages for this product on Red Hat Enterprise Linux, your system must be registered. If you are using downloaded release files, this step is not required. Procedure Go to access.redhat.com . Navigate to Registration Assistant . Select your OS version and continue to the next page. Use the listed command in your system terminal to complete the registration. For more information about registering your system, see one of the following resources: Red Hat Enterprise Linux 6 - Registering the system and managing subscriptions Red Hat Enterprise Linux 7 - Registering the system and managing subscriptions Red Hat Enterprise Linux 8 - Registering the system and managing subscriptions | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/getting_started_with_amq_broker/using_your_subscription
Chapter 10. Endpoints [v1] | Chapter 10. Endpoints [v1] Description Endpoints is a collection of endpoints that implement the actual service. Example: Type object 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata subsets array The set of all endpoints is the union of all subsets. Addresses are placed into subsets according to the IPs they share. A single address with multiple ports, some of which are ready and some of which are not (because they come from different containers) will result in the address being displayed in different subsets for the different ports. No address will appear in both Addresses and NotReadyAddresses in the same subset. Sets of addresses and ports that comprise a service. subsets[] object EndpointSubset is a group of addresses with a common set of ports. The expanded set of endpoints is the Cartesian product of Addresses x Ports. For example, given: { Addresses: [{"ip": "10.10.1.1"}, {"ip": "10.10.2.2"}], Ports: [{"name": "a", "port": 8675}, {"name": "b", "port": 309}] } The resulting set of endpoints can be viewed as: a: [ 10.10.1.1:8675, 10.10.2.2:8675 ], b: [ 10.10.1.1:309, 10.10.2.2:309 ] 10.1.1. .subsets Description The set of all endpoints is the union of all subsets. Addresses are placed into subsets according to the IPs they share. A single address with multiple ports, some of which are ready and some of which are not (because they come from different containers) will result in the address being displayed in different subsets for the different ports. No address will appear in both Addresses and NotReadyAddresses in the same subset. Sets of addresses and ports that comprise a service. Type array 10.1.2. .subsets[] Description EndpointSubset is a group of addresses with a common set of ports. The expanded set of endpoints is the Cartesian product of Addresses x Ports. For example, given: The resulting set of endpoints can be viewed as: Type object Property Type Description addresses array IP addresses which offer the related ports that are marked as ready. These endpoints should be considered safe for load balancers and clients to utilize. addresses[] object EndpointAddress is a tuple that describes single IP address. notReadyAddresses array IP addresses which offer the related ports but are not currently marked as ready because they have not yet finished starting, have recently failed a readiness check, or have recently failed a liveness check. notReadyAddresses[] object EndpointAddress is a tuple that describes single IP address. ports array Port numbers available on the related IP addresses. ports[] object EndpointPort is a tuple that describes a single port. 10.1.3. .subsets[].addresses Description IP addresses which offer the related ports that are marked as ready. 
These endpoints should be considered safe for load balancers and clients to utilize. Type array 10.1.4. .subsets[].addresses[] Description EndpointAddress is a tuple that describes single IP address. Type object Required ip Property Type Description hostname string The Hostname of this endpoint ip string The IP of this endpoint. May not be loopback (127.0.0.0/8 or ::1), link-local (169.254.0.0/16 or fe80::/10), or link-local multicast (224.0.0.0/24 or ff02::/16). nodeName string Optional: Node hosting this endpoint. This can be used to determine endpoints local to a node. targetRef object ObjectReference contains enough information to let you inspect or modify the referred object. 10.1.5. .subsets[].addresses[].targetRef Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 10.1.6. .subsets[].notReadyAddresses Description IP addresses which offer the related ports but are not currently marked as ready because they have not yet finished starting, have recently failed a readiness check, or have recently failed a liveness check. Type array 10.1.7. .subsets[].notReadyAddresses[] Description EndpointAddress is a tuple that describes single IP address. Type object Required ip Property Type Description hostname string The Hostname of this endpoint ip string The IP of this endpoint. May not be loopback (127.0.0.0/8 or ::1), link-local (169.254.0.0/16 or fe80::/10), or link-local multicast (224.0.0.0/24 or ff02::/16). nodeName string Optional: Node hosting this endpoint. This can be used to determine endpoints local to a node. targetRef object ObjectReference contains enough information to let you inspect or modify the referred object. 10.1.8. .subsets[].notReadyAddresses[].targetRef Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. 
For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 10.1.9. .subsets[].ports Description Port numbers available on the related IP addresses. Type array 10.1.10. .subsets[].ports[] Description EndpointPort is a tuple that describes a single port. Type object Required port Property Type Description appProtocol string The application protocol for this port. This is used as a hint for implementations to offer richer behavior for protocols that they understand. This field follows standard Kubernetes label syntax. Valid values are either: * Un-prefixed protocol names - reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names ). * Kubernetes-defined prefixed names: * 'kubernetes.io/h2c' - HTTP/2 prior knowledge over cleartext as described in https://www.rfc-editor.org/rfc/rfc9113.html#name-starting-http-2-with-prior- * 'kubernetes.io/ws' - WebSocket over cleartext as described in https://www.rfc-editor.org/rfc/rfc6455 * 'kubernetes.io/wss' - WebSocket over TLS as described in https://www.rfc-editor.org/rfc/rfc6455 * Other protocols should use implementation-defined prefixed names such as mycompany.com/my-custom-protocol. name string The name of this port. This must match the 'name' field in the corresponding ServicePort. Must be a DNS_LABEL. Optional only if one port is defined. port integer The port number of the endpoint. protocol string The IP protocol for this port. Must be UDP, TCP, or SCTP. Default is TCP. Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 10.2. API endpoints The following API endpoints are available: /api/v1/endpoints GET : list or watch objects of kind Endpoints /api/v1/watch/endpoints GET : watch individual changes to a list of Endpoints. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/endpoints DELETE : delete collection of Endpoints GET : list or watch objects of kind Endpoints POST : create Endpoints /api/v1/watch/namespaces/{namespace}/endpoints GET : watch individual changes to a list of Endpoints. deprecated: use the 'watch' parameter with a list operation instead. 
/api/v1/namespaces/{namespace}/endpoints/{name} DELETE : delete Endpoints GET : read the specified Endpoints PATCH : partially update the specified Endpoints PUT : replace the specified Endpoints /api/v1/watch/namespaces/{namespace}/endpoints/{name} GET : watch changes to an object of kind Endpoints. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 10.2.1. /api/v1/endpoints HTTP method GET Description list or watch objects of kind Endpoints Table 10.1. HTTP responses HTTP code Reponse body 200 - OK EndpointsList schema 401 - Unauthorized Empty 10.2.2. /api/v1/watch/endpoints HTTP method GET Description watch individual changes to a list of Endpoints. deprecated: use the 'watch' parameter with a list operation instead. Table 10.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 10.2.3. /api/v1/namespaces/{namespace}/endpoints HTTP method DELETE Description delete collection of Endpoints Table 10.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 10.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Endpoints Table 10.5. HTTP responses HTTP code Reponse body 200 - OK EndpointsList schema 401 - Unauthorized Empty HTTP method POST Description create Endpoints Table 10.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.7. Body parameters Parameter Type Description body Endpoints schema Table 10.8. HTTP responses HTTP code Reponse body 200 - OK Endpoints schema 201 - Created Endpoints schema 202 - Accepted Endpoints schema 401 - Unauthorized Empty 10.2.4. /api/v1/watch/namespaces/{namespace}/endpoints HTTP method GET Description watch individual changes to a list of Endpoints. deprecated: use the 'watch' parameter with a list operation instead. Table 10.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 10.2.5. 
/api/v1/namespaces/{namespace}/endpoints/{name} Table 10.10. Global path parameters Parameter Type Description name string name of the Endpoints HTTP method DELETE Description delete Endpoints Table 10.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 10.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Endpoints Table 10.13. HTTP responses HTTP code Reponse body 200 - OK Endpoints schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Endpoints Table 10.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.15. HTTP responses HTTP code Reponse body 200 - OK Endpoints schema 201 - Created Endpoints schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Endpoints Table 10.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.17. Body parameters Parameter Type Description body Endpoints schema Table 10.18. HTTP responses HTTP code Reponse body 200 - OK Endpoints schema 201 - Created Endpoints schema 401 - Unauthorized Empty 10.2.6. /api/v1/watch/namespaces/{namespace}/endpoints/{name} Table 10.19. Global path parameters Parameter Type Description name string name of the Endpoints HTTP method GET Description watch changes to an object of kind Endpoints. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 10.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | [
"Name: \"mysvc\", Subsets: [ { Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}], Ports: [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}] }, { Addresses: [{\"ip\": \"10.10.3.3\"}], Ports: [{\"name\": \"a\", \"port\": 93}, {\"name\": \"b\", \"port\": 76}] }, ]",
"{ Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}], Ports: [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}] }",
"a: [ 10.10.1.1:8675, 10.10.2.2:8675 ], b: [ 10.10.1.1:309, 10.10.2.2:309 ]"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/network_apis/endpoints-v1 |
11.4. Setting Up Mandatory Extensions | 11.4. Setting Up Mandatory Extensions In GNOME Shell, you can provide a set of extensions that the user has to use. To do so, install the extensions in the /usr/share/gnome-shell/extensions directory and then lock down the org.gnome.shell.enabled-extensions and org.gnome.shell.development-tools keys. Locking down the org.gnome.shell.development-tools key ensures that the user cannot use GNOME Shell's integrated debugger and inspector tool ( Looking Glass ) to disable any mandatory extensions. Procedure 11.4. Setting up mandatory extensions Create a local database file for machine-wide settings in /etc/dconf/db/local.d/00-extensions-mandatory : The enabled-extensions key specifies the enabled extensions using the extensions' uuid ( [email protected] and [email protected] ). The development-tools key is set to false to disable access to Looking Glass . Override the user's setting and prevent the user from changing it in /etc/dconf/db/local.d/locks/extensions-mandatory : Update the system databases: Users must log out and back in again before the system-wide settings take effect. | [
"List all mandatory extensions enabled-extensions=[' [email protected] ', ' [email protected] '] Disable access to Looking Glass development-tools=false",
"Lock the list of mandatory extensions and access to Looking Glass /org/gnome/shell/enabled-extensions /org/gnome/shell/development-tools",
"dconf update"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/extensions-mandatory |
Chapter 8. ScalableFileSystem | Chapter 8. ScalableFileSystem The following table lists all the packages in the Scalable FileSystem add-on. For more information about core packages, see the Scope of Coverage Details document. Package Core Package? License xfsdump Yes GPL+ xfsprogs Yes GPL+ and LGPLv2+ xfsprogs-devel Yes GPL+ and LGPLv2+ xfsprogs-qa-devel Yes GPL+ and LGPLv2+ | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/package_manifest/chap-scalablefilesystem |
Chapter 5. Compliance Operator | Chapter 5. Compliance Operator 5.1. Compliance Operator overview The OpenShift Container Platform Compliance Operator assists users by automating the inspection of numerous technical implementations and compares those against certain aspects of industry standards, benchmarks, and baselines; the Compliance Operator is not an auditor. In order to be compliant or certified under these various standards, you need to engage an authorized auditor such as a Qualified Security Assessor (QSA), Joint Authorization Board (JAB), or other industry recognized regulatory authority to assess your environment. The Compliance Operator makes recommendations based on generally available information and practices regarding such standards and may assist with remediations, but actual compliance is your responsibility. You are required to work with an authorized auditor to achieve compliance with a standard. For the latest updates, see the Compliance Operator release notes . For more information on compliance support for all Red Hat products, see Product Compliance . Compliance Operator concepts Understanding the Compliance Operator Understanding the Custom Resource Definitions Compliance Operator management Installing the Compliance Operator Updating the Compliance Operator Managing the Compliance Operator Uninstalling the Compliance Operator Compliance Operator scan management Supported compliance profiles Compliance Operator scans Tailoring the Compliance Operator Retrieving Compliance Operator raw results Managing Compliance Operator remediation Performing advanced Compliance Operator tasks Troubleshooting the Compliance Operator Using the oc-compliance plugin 5.2. Compliance Operator release notes The Compliance Operator lets OpenShift Container Platform administrators describe the required compliance state of a cluster and provides them with an overview of gaps and ways to remediate them. These release notes track the development of the Compliance Operator in the OpenShift Container Platform. For an overview of the Compliance Operator, see Understanding the Compliance Operator . To access the latest release, see Updating the Compliance Operator . For more information on compliance support for all Red Hat products, see Product Compliance . 5.2.1. OpenShift Compliance Operator 1.6.2 The following advisory is available for the OpenShift Compliance Operator 1.6.2: RHBA-2025:2659 - OpenShift Compliance Operator 1.6.2 update CVE-2024-45338 is resolved in the Compliance Operator 1.6.2 release. ( CVE-2024-45338 ) 5.2.2. OpenShift Compliance Operator 1.6.1 The following advisory is available for the OpenShift Compliance Operator 1.6.1: RHBA-2024:10367 - OpenShift Compliance Operator 1.6.1 update This update includes upgraded dependencies in underlying base images. 5.2.3. OpenShift Compliance Operator 1.6.0 The following advisory is available for the OpenShift Compliance Operator 1.6.0: RHBA-2024:6761 - OpenShift Compliance Operator 1.6.0 bug fix and enhancement update 5.2.3.1. New features and enhancements The Compliance Operator now contains supported profiles for Payment Card Industry Data Security Standard (PCI-DSS) version 4. For more information, see Supported compliance profiles . The Compliance Operator now contains supported profiles for Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) V2R1. For more information, see Supported compliance profiles . 
A must-gather extension is now available for the Compliance Operator installed on x86 , ppc64le , and s390x architectures. The must-gather tool provides crucial configuration details to Red Hat Customer Support and engineering. For more information, see Using the must-gather tool for the Compliance Operator . 5.2.3.2. Bug fixes Before this release, a misleading description in the ocp4-route-ip-whitelist rule resulted in misunderstanding, causing potential for misconfigurations. With this update, the rule is now more clearly defined. ( CMP-2485 ) Previously, the reporting of all of the ComplianceCheckResults for a DONE status ComplianceScan was incomplete. With this update, annotation has been added to report the number of total ComplianceCheckResults for a ComplianceScan with a DONE status. ( CMP-2615 ) Previously, the ocp4-cis-scc-limit-container-allowed-capabilities rule description contained ambiguous guidelines, leading to confusion among users. With this update, the rule description and actionable steps are clarified. ( OCPBUGS-17828 ) Before this update, sysctl configurations caused certain auto remediations for RHCOS4 rules to fail scans in affected clusters. With this update, the correct sysctl settings are applied and RHCOS4 rules for FedRAMP High profiles pass scans correctly. ( OCPBUGS-19690 ) Before this update, an issue with a jq filter caused errors with the rhacs-operator-controller-manager deployment during compliance checks. With this update, the jq filter expression is updated and the rhacs-operator-controller-manager deployment is exempt from compliance checks pertaining to container resource limits, eliminating false positive results. ( OCPBUGS-19690 ) Before this update, rhcos4-high and rhcos4-moderate profiles checked values of an incorrectly titled configuration file. As a result, some scan checks could fail. With this update, the rhcos4 profiles now check the correct configuration file and scans pass correctly. ( OCPBUGS-31674 ) Previously, the accessokenInactivityTimeoutSeconds variable used in the oauthclient-inactivity-timeout rule was immutable, leading to a FAIL status when performing DISA STIG scans. With this update, proper enforcement of the accessTokenInactivityTimeoutSeconds variable operates correctly and a PASS status is now possible. ( OCPBUGS-32551 ) Before this update, some annotations for rules were not updated, displaying the incorrect control standards. With this update, annotations for rules are updated correctly, ensuring the correct control standards are displayed. ( OCPBUGS-34982 ) Previously, when upgrading to Compliance Operator 1.5.1, an incorrectly referenced secret in a ServiceMonitor configuration caused integration issues with the Prometheus Operator. With this update, the Compliance Operator will accurately reference the secret containing the token for ServiceMonitor metrics. ( OCPBUGS-39417 ) 5.2.4. OpenShift Compliance Operator 1.5.1 The following advisory is available for the OpenShift Compliance Operator 1.5.1: RHBA-2024:5956 - OpenShift Compliance Operator 1.5.1 bug fix and enhancement update 5.2.5. OpenShift Compliance Operator 1.5.0 The following advisory is available for the OpenShift Compliance Operator 1.5.0: RHBA-2024:3533 - OpenShift Compliance Operator 1.5.0 bug fix and enhancement update 5.2.5.1. New features and enhancements With this update, the Compliance Operator provides a unique profile ID for easier programmatic use. 
( CMP-2450 ) With this release, the Compliance Operator is now tested and supported on the ROSA HCP environment. The Compliance Operator loads only Node profiles when running on ROSA HCP. This is because a Red Hat managed platform restricts access to the control plane, which makes Platform profiles irrelevant to the operator's function.( CMP-2581 ) 5.2.5.2. Bug fixes CVE-2024-2961 is resolved in the Compliance Operator 1.5.0 release. ( CVE-2024-2961 ) Previously, for ROSA HCP systems, profile listings were incorrect. This update allows the Compliance Operator to provide correct profile output. ( OCPBUGS-34535 ) With this release, namespaces can be excluded from the ocp4-configure-network-policies-namespaces check by setting the ocp4-var-network-policies-namespaces-exempt-regex variable in the tailored profile. ( CMP-2543 ) 5.2.6. OpenShift Compliance Operator 1.4.1 The following advisory is available for the OpenShift Compliance Operator 1.4.1: RHBA-2024:1830 - OpenShift Compliance Operator bug fix and enhancement update 5.2.6.1. New features and enhancements As of this release, the Compliance Operator now provides the CIS OpenShift 1.5.0 profile rules. ( CMP-2447 ) With this update, the Compliance Operator now provides OCP4 STIG ID and SRG with the profile rules. ( CMP-2401 ) With this update, obsolete rules being applied to s390x have been removed. ( CMP-2471 ) 5.2.6.2. Bug fixes Previously, for Red Hat Enterprise Linux CoreOS (RHCOS) systems using Red Hat Enterprise Linux (RHEL) 9, application of the ocp4-kubelet-enable-protect-kernel-sysctl-file-exist rule failed. This update replaces the rule with ocp4-kubelet-enable-protect-kernel-sysctl . Now, after auto remediation is applied, RHEL 9-based RHCOS systems will show PASS upon the application of this rule. ( OCPBUGS-13589 ) Previously, after applying compliance remediations using profile rhcos4-e8 , the nodes were no longer accessible using SSH to the core user account. With this update, nodes remain accessible through SSH using the `sshkey1 option. ( OCPBUGS-18331 ) Previously, the STIG profile was missing rules from CaC that fulfill requirements on the published STIG for OpenShift Container Platform. With this update, upon remediation, the cluster satisfies STIG requirements that can be remediated using Compliance Operator. ( OCPBUGS-26193 ) Previously, creating a ScanSettingBinding object with profiles of different types for multiple products bypassed a restriction against multiple products types in a binding. With this update, the product validation now allows multiple products regardless of the of profile types in the ScanSettingBinding object. ( OCPBUGS-26229 ) Previously, running the rhcos4-service-debug-shell-disabled rule showed as FAIL even after auto-remediation was applied. With this update, running the rhcos4-service-debug-shell-disabled rule now shows PASS after auto-remediation is applied. ( OCPBUGS-28242 ) With this update, instructions for the use of the rhcos4-banner-etc-issue rule are enhanced to provide more detail. ( OCPBUGS-28797 ) Previously the api_server_api_priority_flowschema_catch_all rule provided FAIL status on OpenShift Container Platform 4.16 clusters. With this update, the api_server_api_priority_flowschema_catch_all rule provides PASS status on OpenShift Container Platform 4.16 clusters. ( OCPBUGS-28918 ) Previously, when a profile was removed from a completed scan shown in a ScanSettingBinding (SSB) object, the Compliance Operator did not remove the old scan. 
Afterward, when launching a new SSB using the deleted profile, the Compliance Operator failed to update the result. With this release of the Compliance Operator, the new SSB now shows the new compliance check result. ( OCPBUGS-29272 ) Previously, on ppc64le architecture, the metrics service was not created. With this update, when deploying the Compliance Operator v1.4.1 on ppc64le architecture, the metrics service is now created correctly. ( OCPBUGS-32797 ) Previously, on a HyperShift hosted cluster, a scan with the ocp4-pci-dss profile will run into an unrecoverable error due to a filter cannot iterate issue. With this release, the scan for the ocp4-pci-dss profile will reach done status and return either a Compliance or Non-Compliance test result. ( OCPBUGS-33067 ) 5.2.7. OpenShift Compliance Operator 1.4.0 The following advisory is available for the OpenShift Compliance Operator 1.4.0: RHBA-2023:7658 - OpenShift Compliance Operator bug fix and enhancement update 5.2.7.1. New features and enhancements With this update, clusters which use custom node pools outside the default worker and master node pools no longer need to supply additional variables to ensure Compliance Operator aggregates the configuration file for that node pool. Users can now pause scan schedules by setting the ScanSetting.suspend attribute to True . This allows users to suspend a scan schedule and reactivate it without the need to delete and re-create the ScanSettingBinding . This simplifies pausing scan schedules during maintenance periods. ( CMP-2123 ) Compliance Operator now supports an optional version attribute on Profile custom resources. ( CMP-2125 ) Compliance Operator now supports profile names in ComplianceRules . ( CMP-2126 ) Compliance Operator compatibility with improved cronjob API improvements is available in this release. ( CMP-2310 ) 5.2.7.2. Bug fixes Previously, on a cluster with Windows nodes, some rules will FAIL after auto remediation is applied because the Windows nodes were not skipped by the compliance scan. With this release, Windows nodes are correctly skipped when scanning. ( OCPBUGS-7355 ) With this update, rprivate default mount propagation is now handled correctly for root volume mounts of pods that rely on multipathing. ( OCPBUGS-17494 ) Previously, the Compliance Operator would generate a remediation for coreos_vsyscall_kernel_argument without reconciling the rule even while applying the remediation. With release 1.4.0, the coreos_vsyscall_kernel_argument rule properly evaluates kernel arguments and generates an appropriate remediation.( OCPBUGS-8041 ) Before this update, rule rhcos4-audit-rules-login-events-faillock would fail even after auto-remediation has been applied. With this update, rhcos4-audit-rules-login-events-faillock failure locks are now applied correctly after auto-remediation. ( OCPBUGS-24594 ) Previously, upgrades from Compliance Operator 1.3.1 to Compliance Operator 1.4.0 would cause OVS rules scan results to go from PASS to NOT-APPLICABLE . With this update, OVS rules scan results now show PASS ( OCPBUGS-25323 ) 5.2.8. OpenShift Compliance Operator 1.3.1 The following advisory is available for the OpenShift Compliance Operator 1.3.1: RHBA-2023:5669 - OpenShift Compliance Operator bug fix and enhancement update This update addresses a CVE in an underlying dependency. 5.2.8.1. New features and enhancements You can install and use the Compliance Operator in an OpenShift Container Platform cluster running in FIPS mode. 
Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 5.2.8.2. Known issue On a cluster with Windows nodes, some rules will FAIL after auto remediation is applied because the Windows nodes are not skipped by the compliance scan. This differs from the expected results because the Windows nodes must be skipped when scanning. ( OCPBUGS-7355 ) 5.2.9. OpenShift Compliance Operator 1.3.0 The following advisory is available for the OpenShift Compliance Operator 1.3.0: RHBA-2023:5102 - OpenShift Compliance Operator enhancement update 5.2.9.1. New features and enhancements The Defense Information Systems Agency Security Technical Implementation Guide (DISA-STIG) for OpenShift Container Platform is now available from Compliance Operator 1.3.0. See Supported compliance profiles for additional information. Compliance Operator 1.3.0 now supports IBM Power(R) and IBM Z(R) for NIST 800-53 Moderate-Impact Baseline for OpenShift Container Platform platform and node profiles. 5.2.10. OpenShift Compliance Operator 1.2.0 The following advisory is available for the OpenShift Compliance Operator 1.2.0: RHBA-2023:4245 - OpenShift Compliance Operator enhancement update 5.2.10.1. New features and enhancements The CIS OpenShift Container Platform 4 Benchmark v1.4.0 profile is now available for platform and node applications. To locate the CIS OpenShift Container Platform v4 Benchmark, go to CIS Benchmarks and click Download Latest CIS Benchmark , where you can then register to download the benchmark. Important Upgrading to Compliance Operator 1.2.0 will overwrite the CIS OpenShift Container Platform 4 Benchmark 1.1.0 profiles. If your OpenShift Container Platform environment contains existing cis and cis-node remediations, there might be some differences in scan results after upgrading to Compliance Operator 1.2.0. Additional clarity for auditing security context constraints (SCCs) is now available for the scc-limit-container-allowed-capabilities rule. 5.2.11. OpenShift Compliance Operator 1.1.0 The following advisory is available for the OpenShift Compliance Operator 1.1.0: RHBA-2023:3630 - OpenShift Compliance Operator bug fix and enhancement update 5.2.11.1. New features and enhancements A start and end timestamp is now available in the ComplianceScan custom resource definition (CRD) status. The Compliance Operator can now be deployed on hosted control planes using the OperatorHub by creating a Subscription file. For more information, see Installing the Compliance Operator on hosted control planes . 5.2.11.2. Bug fixes Before this update, some Compliance Operator rule instructions were not present. After this update, instructions are improved for the following rules: classification_banner oauth_login_template_set oauth_logout_url_set oauth_provider_selection_set ocp_allowed_registries ocp_allowed_registries_for_import ( OCPBUGS-10473 ) Before this update, check accuracy and rule instructions were unclear. 
After this update, the check accuracy and instructions are improved for the following sysctl rules: kubelet-enable-protect-kernel-sysctl kubelet-enable-protect-kernel-sysctl-kernel-keys-root-maxbytes kubelet-enable-protect-kernel-sysctl-kernel-keys-root-maxkeys kubelet-enable-protect-kernel-sysctl-kernel-panic kubelet-enable-protect-kernel-sysctl-kernel-panic-on-oops kubelet-enable-protect-kernel-sysctl-vm-overcommit-memory kubelet-enable-protect-kernel-sysctl-vm-panic-on-oom ( OCPBUGS-11334 ) Before this update, the ocp4-alert-receiver-configured rule did not include instructions. With this update, the ocp4-alert-receiver-configured rule now includes improved instructions. ( OCPBUGS-7307 ) Before this update, the rhcos4-sshd-set-loglevel-info rule would fail for the rhcos4-e8 profile. With this update, the remediation for the sshd-set-loglevel-info rule was updated to apply the correct configuration changes, allowing subsequent scans to pass after the remediation is applied. ( OCPBUGS-7816 ) Before this update, a new installation of OpenShift Container Platform with the latest Compliance Operator install failed on the scheduler-no-bind-address rule. With this update, the scheduler-no-bind-address rule has been disabled on newer versions of OpenShift Container Platform since the parameter was removed. ( OCPBUGS-8347 ) 5.2.12. OpenShift Compliance Operator 1.0.0 The following advisory is available for the OpenShift Compliance Operator 1.0.0: RHBA-2023:1682 - OpenShift Compliance Operator bug fix update 5.2.12.1. New features and enhancements The Compliance Operator is now stable and the release channel is upgraded to stable . Future releases will follow Semantic Versioning . To access the latest release, see Updating the Compliance Operator . 5.2.12.2. Bug fixes Before this update, the compliance_operator_compliance_scan_error_total metric had an ERROR label with a different value for each error message. With this update, the compliance_operator_compliance_scan_error_total metric does not increase in values. ( OCPBUGS-1803 ) Before this update, the ocp4-api-server-audit-log-maxsize rule would result in a FAIL state. With this update, the error message has been removed from the metric, decreasing the cardinality of the metric in line with best practices. ( OCPBUGS-7520 ) Before this update, the rhcos4-enable-fips-mode rule description was misleading that FIPS could be enabled after installation. With this update, the rhcos4-enable-fips-mode rule description clarifies that FIPS must be enabled at install time. ( OCPBUGS-8358 ) 5.2.13. OpenShift Compliance Operator 0.1.61 The following advisory is available for the OpenShift Compliance Operator 0.1.61: RHBA-2023:0557 - OpenShift Compliance Operator bug fix update 5.2.13.1. New features and enhancements The Compliance Operator now supports timeout configuration for Scanner Pods. The timeout is specified in the ScanSetting object. If the scan is not completed within the timeout, the scan retries until the maximum number of retries is reached. See Configuring ScanSetting timeout for more information. 5.2.13.2. Bug fixes Before this update, Compliance Operator remediations required variables as inputs. Remediations without variables set were applied cluster-wide and resulted in stuck nodes, even though it appeared the remediation applied correctly. With this update, the Compliance Operator validates if a variable needs to be supplied using a TailoredProfile for a remediation. 
( OCPBUGS-3864 ) Before this update, the instructions for ocp4-kubelet-configure-tls-cipher-suites were incomplete, requiring users to refine the query manually. With this update, the query provided in ocp4-kubelet-configure-tls-cipher-suites returns the actual results to perform the audit steps. ( OCPBUGS-3017 ) Before this update, system reserved parameters were not generated in kubelet configuration files, causing the Compliance Operator to fail to unpause the machine config pool. With this update, the Compliance Operator omits system reserved parameters during machine configuration pool evaluation. ( OCPBUGS-4445 ) Before this update, ComplianceCheckResult objects did not have correct descriptions. With this update, the Compliance Operator sources the ComplianceCheckResult information from the rule description. ( OCPBUGS-4615 ) Before this update, the Compliance Operator did not check for empty kubelet configuration files when parsing machine configurations. As a result, the Compliance Operator would panic and crash. With this update, the Compliance Operator implements improved checking of the kubelet configuration data structure and only continues if it is fully rendered. ( OCPBUGS-4621 ) Before this update, the Compliance Operator generated remediations for kubelet evictions based on machine config pool name and a grace period, resulting in multiple remediations for a single eviction rule. With this update, the Compliance Operator applies all remediations for a single rule. ( OCPBUGS-4338 ) Before this update, a regression caused a ScanSettingBinding that used a TailoredProfile with a non-default MachineConfigPool to be marked as Failed . With this update, functionality is restored and a custom ScanSettingBinding using a TailoredProfile performs correctly. ( OCPBUGS-6827 ) Before this update, some kubelet configuration parameters did not have default values. With this update, the following parameters contain default values ( OCPBUGS-6708 ): ocp4-cis-kubelet-enable-streaming-connections ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-available ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-inodesfree ocp4-cis-kubelet-eviction-thresholds-set-hard-memory-available ocp4-cis-kubelet-eviction-thresholds-set-hard-nodefs-available Before this update, the selinux_confinement_of_daemons rule failed running on the kubelet because of the permissions necessary for the kubelet to run. With this update, the selinux_confinement_of_daemons rule is disabled. ( OCPBUGS-6968 )
Now, the CSV owns the rerunner SA in 0.1.59, and upgrades from any version will not result in a missing SA. ( OCPBUGS-3452 ) 5.2.15. OpenShift Compliance Operator 0.1.57 The following advisory is available for the OpenShift Compliance Operator 0.1.57: RHBA-2022:6657 - OpenShift Compliance Operator bug fix update 5.2.15.1. New features and enhancements KubeletConfig checks changed from Node to Platform type. KubeletConfig checks the default configuration of the KubeletConfig . The configuration files are aggregated from all nodes into a single location per node pool. See Evaluating KubeletConfig rules against default configuration values . The ScanSetting Custom Resource now allows users to override the default CPU and memory limits of scanner pods through the scanLimits attribute. For more information, see Increasing Compliance Operator resource limits . A PriorityClass object can now be set through ScanSetting . This ensures the Compliance Operator is prioritized and minimizes the chance that the cluster falls out of compliance. For more information, see Setting PriorityClass for ScanSetting scans . 5.2.15.2. Bug fixes Previously, the Compliance Operator hard-coded notifications to the default openshift-compliance namespace. If the Operator were installed in a non-default namespace, the notifications would not work as expected. Now, notifications work in non-default openshift-compliance namespaces. ( BZ#2060726 ) Previously, the Compliance Operator was unable to evaluate default configurations used by kubelet objects, resulting in inaccurate results and false positives. This new feature evaluates the kubelet configuration and now reports accurately. ( BZ#2075041 ) Previously, the Compliance Operator reported the ocp4-kubelet-configure-event-creation rule in a FAIL state after applying an automatic remediation because the eventRecordQPS value was set higher than the default value. Now, the ocp4-kubelet-configure-event-creation rule remediation sets the default value, and the rule applies correctly. ( BZ#2082416 ) The ocp4-configure-network-policies rule requires manual intervention to perform effectively. New descriptive instructions and rule updates increase applicability of the ocp4-configure-network-policies rule for clusters using Calico CNIs. ( BZ#2091794 ) Previously, the Compliance Operator would not clean up pods used to scan infrastructure when using the debug=true option in the scan settings. This caused pods to be left on the cluster even after deleting the ScanSettingBinding . Now, pods are always deleted when a ScanSettingBinding is deleted.( BZ#2092913 ) Previously, the Compliance Operator used an older version of the operator-sdk command that caused alerts about deprecated functionality. Now, an updated version of the operator-sdk command is included and there are no more alerts for deprecated functionality. ( BZ#2098581 ) Previously, the Compliance Operator would fail to apply remediations if it could not determine the relationship between kubelet and machine configurations. Now, the Compliance Operator has improved handling of the machine configurations and is able to determine if a kubelet configuration is a subset of a machine configuration. ( BZ#2102511 ) Previously, the rule for ocp4-cis-node-master-kubelet-enable-cert-rotation did not properly describe success criteria. As a result, the requirements for RotateKubeletClientCertificate were unclear. 
Now, the rule for ocp4-cis-node-master-kubelet-enable-cert-rotation reports accurately regardless of the configuration present in the kubelet configuration file. ( BZ#2105153 ) Previously, the rule for checking idle streaming timeouts did not consider default values, resulting in inaccurate rule reporting. Now, more robust checks ensure increased accuracy in results based on default configuration values. ( BZ#2105878 ) Previously, the Compliance Operator would fail to fetch API resources when parsing machine configurations without Ignition specifications, which caused the api-check-pods processes to crash loop. Now, the Compliance Operator handles Machine Config Pools that do not have Ignition specifications correctly. ( BZ#2117268 ) Previously, rules evaluating the modprobe configuration would fail even after applying remediations due to a mismatch in values for the modprobe configuration. Now, the same values are used for the modprobe configuration in checks and remediations, ensuring consistent results. ( BZ#2117747 ) 5.2.15.3. Deprecations Specifying Install into all namespaces in the cluster or setting the WATCH_NAMESPACES environment variable to "" no longer affects all namespaces. Any API resources installed in namespaces not specified at the time of Compliance Operator installation are no longer operational. API resources might require creation in the selected namespace, or the openshift-compliance namespace by default. This change improves the Compliance Operator's memory usage.
( BZ#2081952 ) Previously, the Compliance Operator used administrative permissions on namespaces not labeled appropriately for privileged use, resulting in warning messages regarding pod security-level violations. Now, the Compliance Operator has appropriate namespace labels and permission adjustments to access results without violating permissions. ( BZ#2088202 ) Previously, applying auto remediations for rhcos4-high-master-sysctl-kernel-yama-ptrace-scope and rhcos4-sysctl-kernel-core-pattern resulted in subsequent failures of those rules in scan results, even though they were remediated. Now, the rules report PASS accurately, even after remediations are applied.( BZ#2094382 ) Previously, the Compliance Operator would fail in a CrashLoopBackoff state because of out-of-memory exceptions. Now, the Compliance Operator is improved to handle large machine configuration data sets in memory and function correctly. ( BZ#2094854 ) 5.2.16.2. Known issue When "debug":true is set within the ScanSettingBinding object, the pods generated by the ScanSettingBinding object are not removed when that binding is deleted. As a workaround, run the following command to delete the remaining pods: USD oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis ( BZ#2092913 ) 5.2.17. OpenShift Compliance Operator 0.1.52 The following advisory is available for the OpenShift Compliance Operator 0.1.52: RHBA-2022:4657 - OpenShift Compliance Operator bug fix update 5.2.17.1. New features and enhancements The FedRAMP high SCAP profile is now available for use in OpenShift Container Platform environments. For more information, See Supported compliance profiles . 5.2.17.2. Bug fixes Previously, the OpenScap container would crash due to a mount permission issue in a security environment where DAC_OVERRIDE capability is dropped. Now, executable mount permissions are applied to all users. ( BZ#2082151 ) Previously, the compliance rule ocp4-configure-network-policies could be configured as MANUAL . Now, compliance rule ocp4-configure-network-policies is set to AUTOMATIC . ( BZ#2072431 ) Previously, the Cluster Autoscaler would fail to scale down because the Compliance Operator scan pods were never removed after a scan. Now, the pods are removed from each node by default unless explicitly saved for debugging purposes. ( BZ#2075029 ) Previously, applying the Compliance Operator to the KubeletConfig would result in the node going into a NotReady state due to unpausing the Machine Config Pools too early. Now, the Machine Config Pools are unpaused appropriately and the node operates correctly. ( BZ#2071854 ) Previously, the Machine Config Operator used base64 instead of url-encoded code in the latest release, causing Compliance Operator remediation to fail. Now, the Compliance Operator checks encoding to handle both base64 and url-encoded Machine Config code and the remediation applies correctly. ( BZ#2082431 ) 5.2.17.3. Known issue When "debug":true is set within the ScanSettingBinding object, the pods generated by the ScanSettingBinding object are not removed when that binding is deleted. As a workaround, run the following command to delete the remaining pods: USD oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis ( BZ#2092913 ) 5.2.18. OpenShift Compliance Operator 0.1.49 The following advisory is available for the OpenShift Compliance Operator 0.1.49: RHBA-2022:1148 - OpenShift Compliance Operator bug fix and enhancement update 5.2.18.1. 
New features and enhancements The Compliance Operator is now supported on the following architectures: IBM Power(R) IBM Z(R) IBM(R) LinuxONE 5.2.18.2. Bug fixes Previously, the openshift-compliance content did not include platform-specific checks for network types. As a result, OVN- and SDN-specific checks would show as failed instead of not-applicable based on the network configuration. Now, new rules contain platform checks for networking rules, resulting in a more accurate assessment of network-specific checks. ( BZ#1994609 ) Previously, the ocp4-moderate-routes-protected-by-tls rule incorrectly checked TLS settings, resulting in the rule failing the check even if the connection used a secure SSL/TLS protocol. Now, the check properly evaluates TLS settings that are consistent with the networking guidance and profile recommendations. ( BZ#2002695 ) Previously, ocp-cis-configure-network-policies-namespace used pagination when requesting namespaces. This caused the rule to fail because the deployments truncated lists of more than 500 namespaces. Now, the entire namespace list is requested, and the rule for checking configured network policies works for deployments with more than 500 namespaces. ( BZ#2038909 ) Previously, remediations using the sshd jinja macros were hard-coded to specific sshd configurations. As a result, the configurations were inconsistent with the content the rules were checking for and the check would fail. Now, the sshd configuration is parameterized and the rules apply successfully. ( BZ#2049141 ) Previously, the ocp4-cluster-version-operator-verify-integrity check always checked the first entry in the Cluster Version Operator (CVO) history. As a result, the upgrade would fail in situations where subsequent versions of OpenShift Container Platform would be verified. Now, the compliance check result for ocp4-cluster-version-operator-verify-integrity is able to detect verified versions and is accurate with the CVO history. ( BZ#2053602 ) Previously, the ocp4-api-server-no-adm-ctrl-plugins-disabled rule did not check for a list of empty admission controller plugins. As a result, the rule would always fail, even if all admission plugins were enabled. Now, more robust checking of the ocp4-api-server-no-adm-ctrl-plugins-disabled rule accurately passes with all admission controller plugins enabled. ( BZ#2058631 ) Previously, scans did not contain platform checks for running against Linux worker nodes. As a result, running scans against worker nodes that were not Linux-based resulted in a never ending scan loop. Now, scans are scheduled appropriately based on platform type and labels, and complete successfully. ( BZ#2056911 ) 5.2.19. OpenShift Compliance Operator 0.1.48 The following advisory is available for the OpenShift Compliance Operator 0.1.48: RHBA-2022:0416 - OpenShift Compliance Operator bug fix and enhancement update 5.2.19.1. Bug fixes Previously, some rules associated with extended Open Vulnerability and Assessment Language (OVAL) definitions had a checkType of None . This was because the Compliance Operator was not processing extended OVAL definitions when parsing rules. With this update, content from extended OVAL definitions is parsed so that these rules now have a checkType of either Node or Platform . ( BZ#2040282 ) Previously, a manually created MachineConfig object for KubeletConfig prevented a KubeletConfig object from being generated for remediation, leaving the remediation in the Pending state.
With this release, a KubeletConfig object is created by the remediation, regardless of whether there is a manually created MachineConfig object for KubeletConfig . As a result, KubeletConfig remediations now work as expected. ( BZ#2040401 ) 5.2.20. OpenShift Compliance Operator 0.1.47 The following advisory is available for the OpenShift Compliance Operator 0.1.47: RHBA-2022:0014 - OpenShift Compliance Operator bug fix and enhancement update 5.2.20.1. New features and enhancements The Compliance Operator now supports the following compliance benchmarks for the Payment Card Industry Data Security Standard (PCI DSS): ocp4-pci-dss ocp4-pci-dss-node Additional rules and remediations for the FedRAMP moderate impact level are added to the ocp4-moderate, ocp4-moderate-node, and rhcos4-moderate profiles. Remediations for KubeletConfig are now available in node-level profiles. 5.2.20.2. Bug fixes Previously, if your cluster was running OpenShift Container Platform 4.6 or earlier, remediations for USBGuard-related rules would fail for the moderate profile. This is because the remediations created by the Compliance Operator were based on an older version of USBGuard that did not support drop-in directories. Now, invalid remediations for USBGuard-related rules are not created for clusters running OpenShift Container Platform 4.6. If your cluster is using OpenShift Container Platform 4.6, you must manually create remediations for USBGuard-related rules. Additionally, remediations are created only for rules that satisfy minimum version requirements. ( BZ#1965511 ) Previously, when rendering remediations, the Compliance Operator would check that the remediation was well-formed by using a regular expression that was too strict. As a result, some remediations, such as those that render sshd_config , would not pass the regular expression check and, therefore, were not created. The regular expression was found to be unnecessary and removed. Remediations now render correctly. ( BZ#2033009 ) 5.2.21. OpenShift Compliance Operator 0.1.44 The following advisory is available for the OpenShift Compliance Operator 0.1.44: RHBA-2021:4530 - OpenShift Compliance Operator bug fix and enhancement update 5.2.21.1. New features and enhancements In this release, the strictNodeScan option is added to the ComplianceScan , ComplianceSuite and ScanSetting CRs. This option defaults to true , which matches the previous behavior, where an error occurred if a scan was not able to be scheduled on a node. Setting the option to false allows the Compliance Operator to be more permissive about scheduling scans. Environments with ephemeral nodes can set the strictNodeScan value to false, which allows a compliance scan to proceed, even if some of the nodes in the cluster are not available for scheduling. You can now customize the node that is used to schedule the result server workload by configuring the nodeSelector and tolerations attributes of the ScanSetting object. These attributes are used to place the ResultServer pod, the pod that is used to mount a PV storage volume and store the raw Asset Reporting Format (ARF) results. Previously, the nodeSelector and the tolerations parameters defaulted to selecting one of the control plane nodes and tolerating the node-role.kubernetes.io/master taint . This did not work in environments where control plane nodes are not permitted to mount PVs. This feature provides a way for you to select the node and tolerate a different taint in those environments, as illustrated in the sketch that follows.
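A minimal sketch of such a ScanSetting follows. The object name, node label, and taint key are placeholder assumptions for an environment that schedules the ResultServer pod on dedicated infrastructure nodes, and the rawResultStorage layout mirrors the example ScanSetting object shown later in this document:

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: result-server-on-infra        # hypothetical name
  namespace: openshift-compliance
rawResultStorage:
  nodeSelector:
    node-role.kubernetes.io/infra: "" # example label on the nodes that should host the ResultServer pod
  tolerations:
  - key: node-role.kubernetes.io/infra # example taint carried by those nodes
    operator: Exists
    effect: NoSchedule
  rotation: 3
  size: 1Gi
roles:
- worker
- master
scanTolerations:
- operator: Exists
schedule: "0 1 * * *"

Referencing this ScanSetting from a ScanSettingBinding works the same way as with the default and default-auto-apply objects.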
The Compliance Operator can now remediate KubeletConfig objects. A comment containing an error message is now added to help content developers differentiate between objects that do not exist in the cluster and objects that cannot be fetched. Rule objects now contain two new attributes, checkType and description . These attributes allow you to determine if the rule pertains to a node check or a platform check, and also allow you to review what the rule does. This enhancement removes the requirement that you have to extend an existing profile to create a tailored profile. This means the extends field in the TailoredProfile CRD is no longer mandatory. You can now select a list of rule objects to create a tailored profile. Note that you must select whether your profile applies to nodes or the platform by setting the compliance.openshift.io/product-type: annotation or by setting the -node suffix for the TailoredProfile CR. In this release, the Compliance Operator is now able to schedule scans on all nodes irrespective of their taints. Previously, the scan pods only tolerated the node-role.kubernetes.io/master taint , meaning that they would either run on nodes with no taints or only on nodes with the node-role.kubernetes.io/master taint. In deployments that use custom taints for their nodes, this resulted in the scans not being scheduled on those nodes. Now, the scan pods tolerate all node taints. In this release, the Compliance Operator supports the following North American Electric Reliability Corporation (NERC) security profiles: ocp4-nerc-cip ocp4-nerc-cip-node rhcos4-nerc-cip In this release, the Compliance Operator supports the NIST 800-53 Moderate-Impact Baseline for the Red Hat OpenShift - Node level, ocp4-moderate-node, security profile. 5.2.21.2. Templating and variable use In this release, the remediation template now allows multi-value variables. With this update, the Compliance Operator can change remediations based on variables that are set in the compliance profile. This is useful for remediations that include deployment-specific values such as timeouts, NTP server host names, or similar. Additionally, the ComplianceCheckResult objects now use the label compliance.openshift.io/check-has-value that lists the variables a check has used. 5.2.21.3. Bug fixes Previously, while performing a scan, an unexpected termination occurred in one of the scanner containers of the pods. In this release, the Compliance Operator uses the latest OpenSCAP version 1.3.5 to avoid a crash. Previously, using autoApplyRemediations to apply remediations triggered an update of the cluster nodes. This was disruptive if some of the remediations did not include all of the required input variables. Now, if a remediation is missing one or more required input variables, it is assigned a state of NeedsReview . If one or more remediations are in a NeedsReview state, the machine config pool remains paused, and the remediations are not applied until all of the required variables are set. This helps minimize disruption to the nodes. The RBAC Role and Role Binding used for Prometheus metrics are changed to 'ClusterRole' and 'ClusterRoleBinding' to ensure that monitoring works without customization. Previously, if an error occurred while parsing a profile, rules or variables objects were removed and deleted from the profile. Now, if an error occurs during parsing, the profileparser annotates the object with a temporary annotation that prevents the object from being deleted until after parsing completes.
( BZ#1988259 ) Previously, an error occurred if titles or descriptions were missing from a tailored profile. Because the XCCDF standard requires titles and descriptions for tailored profiles, titles and descriptions are now required to be set in TailoredProfile CRs. Previously, when using tailored profiles, TailoredProfile variable values were allowed to be set using only a specific selection set. This restriction is now removed, and TailoredProfile variables can be set to any value. 5.2.22. Release Notes for Compliance Operator 0.1.39 The following advisory is available for the OpenShift Compliance Operator 0.1.39: RHBA-2021:3214 - OpenShift Compliance Operator bug fix and enhancement update 5.2.22.1. New features and enhancements Previously, the Compliance Operator was unable to parse Payment Card Industry Data Security Standard (PCI DSS) references. Now, the Operator can parse compliance content that is provided with PCI DSS profiles. Previously, the Compliance Operator was unable to execute rules for AU-5 control in the moderate profile. Now, permission is added to the Operator so that it can read Prometheusrules.monitoring.coreos.com objects and run the rules that cover AU-5 control in the moderate profile. 5.2.23. Additional resources Understanding the Compliance Operator 5.3. Compliance Operator support 5.3.1. Compliance Operator lifecycle The Compliance Operator is a "Rolling Stream" Operator, meaning updates are available asynchronously of OpenShift Container Platform releases. For more information, see OpenShift Operator Life Cycles on the Red Hat Customer Portal. 5.3.2. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 5.3.3. Using the must-gather tool for the Compliance Operator Starting in Compliance Operator v1.6.0, you can collect data about the Compliance Operator resources by running the must-gather command with the Compliance Operator image. Note Consider using the must-gather tool when opening support cases or filing bug reports, as it provides additional details about the Operator configuration and logs. Procedure Run the following command to collect data about the Compliance Operator: USD oc adm must-gather --image=USD(oc get csv compliance-operator.v1.6.0 -o=jsonpath='{.spec.relatedImages[?(@.name=="must-gather")].image}') 5.3.4. Additional resources About the must-gather tool Product Compliance 5.4. Compliance Operator concepts 5.4.1. Understanding the Compliance Operator The Compliance Operator lets OpenShift Container Platform administrators describe the required compliance state of a cluster and provides them with an overview of gaps and ways to remediate them. 
The Compliance Operator assesses compliance of both the Kubernetes API resources of OpenShift Container Platform, as well as the nodes running the cluster. The Compliance Operator uses OpenSCAP, a NIST-certified tool, to scan and enforce security policies provided by the content. Important The Compliance Operator is available for Red Hat Enterprise Linux CoreOS (RHCOS) deployments only. 5.4.1.1. Compliance Operator profiles There are several profiles available as part of the Compliance Operator installation. You can use the oc get command to view available profiles, profile details, and specific rules. View the available profiles: USD oc get profile.compliance -n openshift-compliance Example output NAME AGE VERSION ocp4-cis 3h49m 1.5.0 ocp4-cis-1-4 3h49m 1.4.0 ocp4-cis-1-5 3h49m 1.5.0 ocp4-cis-node 3h49m 1.5.0 ocp4-cis-node-1-4 3h49m 1.4.0 ocp4-cis-node-1-5 3h49m 1.5.0 ocp4-e8 3h49m ocp4-high 3h49m Revision 4 ocp4-high-node 3h49m Revision 4 ocp4-high-node-rev-4 3h49m Revision 4 ocp4-high-rev-4 3h49m Revision 4 ocp4-moderate 3h49m Revision 4 ocp4-moderate-node 3h49m Revision 4 ocp4-moderate-node-rev-4 3h49m Revision 4 ocp4-moderate-rev-4 3h49m Revision 4 ocp4-nerc-cip 3h49m ocp4-nerc-cip-node 3h49m ocp4-pci-dss 3h49m 3.2.1 ocp4-pci-dss-3-2 3h49m 3.2.1 ocp4-pci-dss-4-0 3h49m 4.0.0 ocp4-pci-dss-node 3h49m 3.2.1 ocp4-pci-dss-node-3-2 3h49m 3.2.1 ocp4-pci-dss-node-4-0 3h49m 4.0.0 ocp4-stig 3h49m V2R1 ocp4-stig-node 3h49m V2R1 ocp4-stig-node-v1r1 3h49m V1R1 ocp4-stig-node-v2r1 3h49m V2R1 ocp4-stig-v1r1 3h49m V1R1 ocp4-stig-v2r1 3h49m V2R1 rhcos4-e8 3h49m rhcos4-high 3h49m Revision 4 rhcos4-high-rev-4 3h49m Revision 4 rhcos4-moderate 3h49m Revision 4 rhcos4-moderate-rev-4 3h49m Revision 4 rhcos4-nerc-cip 3h49m rhcos4-stig 3h49m V2R1 rhcos4-stig-v1r1 3h49m V1R1 rhcos4-stig-v2r1 3h49m V2R1 These profiles represent different compliance benchmarks. Each profile has the product name that it applies to added as a prefix to the profile's name. ocp4-e8 applies the Essential 8 benchmark to the OpenShift Container Platform product, while rhcos4-e8 applies the Essential 8 benchmark to the Red Hat Enterprise Linux CoreOS (RHCOS) product. Run the following command to view the details of the rhcos4-e8 profile: USD oc get -n openshift-compliance -oyaml profiles.compliance rhcos4-e8 Example 5.1. Example output apiVersion: compliance.openshift.io/v1alpha1 description: 'This profile contains configuration checks for Red Hat Enterprise Linux CoreOS that align to the Australian Cyber Security Centre (ACSC) Essential Eight. 
A copy of the Essential Eight in Linux Environments guide can be found at the ACSC website: https://www.cyber.gov.au/acsc/view-all-content/publications/hardening-linux-workstations-and-servers' id: xccdf_org.ssgproject.content_profile_e8 kind: Profile metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/product: redhat_enterprise_linux_coreos_4 compliance.openshift.io/product-type: Node creationTimestamp: "2022-10-19T12:06:49Z" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-e8 namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: "43699" uid: 86353f70-28f7-40b4-bf0e-6289ec33675b rules: - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown - rhcos4-audit-rules-execution-chcon - rhcos4-audit-rules-execution-restorecon - rhcos4-audit-rules-execution-semanage - rhcos4-audit-rules-execution-setfiles - rhcos4-audit-rules-execution-setsebool - rhcos4-audit-rules-execution-seunshare - rhcos4-audit-rules-kernel-module-loading-delete - rhcos4-audit-rules-kernel-module-loading-finit - rhcos4-audit-rules-kernel-module-loading-init - rhcos4-audit-rules-login-events - rhcos4-audit-rules-login-events-faillock - rhcos4-audit-rules-login-events-lastlog - rhcos4-audit-rules-login-events-tallylog - rhcos4-audit-rules-networkconfig-modification - rhcos4-audit-rules-sysadmin-actions - rhcos4-audit-rules-time-adjtimex - rhcos4-audit-rules-time-clock-settime - rhcos4-audit-rules-time-settimeofday - rhcos4-audit-rules-time-stime - rhcos4-audit-rules-time-watch-localtime - rhcos4-audit-rules-usergroup-modification - rhcos4-auditd-data-retention-flush - rhcos4-auditd-freq - rhcos4-auditd-local-events - rhcos4-auditd-log-format - rhcos4-auditd-name-format - rhcos4-auditd-write-logs - rhcos4-configure-crypto-policy - rhcos4-configure-ssh-crypto-policy - rhcos4-no-empty-passwords - rhcos4-selinux-policytype - rhcos4-selinux-state - rhcos4-service-auditd-enabled - rhcos4-sshd-disable-empty-passwords - rhcos4-sshd-disable-gssapi-auth - rhcos4-sshd-disable-rhosts - rhcos4-sshd-disable-root-login - rhcos4-sshd-disable-user-known-hosts - rhcos4-sshd-do-not-permit-user-env - rhcos4-sshd-enable-strictmodes - rhcos4-sshd-print-last-log - rhcos4-sshd-set-loglevel-info - rhcos4-sysctl-kernel-dmesg-restrict - rhcos4-sysctl-kernel-kptr-restrict - rhcos4-sysctl-kernel-randomize-va-space - rhcos4-sysctl-kernel-unprivileged-bpf-disabled - rhcos4-sysctl-kernel-yama-ptrace-scope - rhcos4-sysctl-net-core-bpf-jit-harden title: Australian Cyber Security Centre (ACSC) Essential Eight Run the following command to view the details of the rhcos4-audit-rules-login-events rule: USD oc get -n openshift-compliance -oyaml rules rhcos4-audit-rules-login-events Example 5.2. Example output apiVersion: compliance.openshift.io/v1alpha1 checkType: Node description: |- The audit system already collects login information for all users and root. 
If the auditd daemon is configured to use the augenrules program to read audit rules during daemon startup (the default), add the following lines to a file with suffix.rules in the directory /etc/audit/rules.d in order to watch for attempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins If the auditd daemon is configured to use the auditctl utility to read audit rules during daemon startup, add the following lines to /etc/audit/audit.rules file in order to watch for unattempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins id: xccdf_org.ssgproject.content_rule_audit_rules_login_events kind: Rule metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/rule: audit-rules-login-events control.compliance.openshift.io/NIST-800-53: AU-2(d);AU-12(c);AC-6(9);CM-6(a) control.compliance.openshift.io/PCI-DSS: Req-10.2.3 policies.open-cluster-management.io/controls: AU-2(d),AU-12(c),AC-6(9),CM-6(a),Req-10.2.3 policies.open-cluster-management.io/standards: NIST-800-53,PCI-DSS creationTimestamp: "2022-10-19T12:07:08Z" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-audit-rules-login-events namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: "44819" uid: 75872f1f-3c93-40ca-a69d-44e5438824a4 rationale: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion. severity: medium title: Record Attempts to Alter Logon and Logout Events warning: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion. 5.4.1.1.1. Compliance Operator profile types There are two types of compliance profiles available: Platform and Node. Platform Platform scans target your OpenShift Container Platform cluster. Node Node scans target the nodes of the cluster. Important For compliance profiles that have Node and Platform applications, such as pci-dss compliance profiles, you must run both in your OpenShift Container Platform environment. 5.4.1.2. Additional resources Supported compliance profiles 5.4.2. Understanding the Custom Resource Definitions The Compliance Operator in the OpenShift Container Platform provides you with several Custom Resource Definitions (CRDs) to accomplish the compliance scans. To run a compliance scan, it leverages the predefined security policies, which are derived from the ComplianceAsCode community project. The Compliance Operator converts these security policies into CRDs, which you can use to run compliance scans and get remediations for the issues found. 5.4.2.1. CRDs workflow The CRD provides you the following workflow to complete the compliance scans: Define your compliance scan requirements Configure the compliance scan settings Process compliance requirements with compliance scans settings Monitor the compliance scans Check the compliance scan results 5.4.2.2. Defining the compliance scan requirements By default, the Compliance Operator CRDs include ProfileBundle and Profile objects, in which you can define and set the rules for your compliance scan requirements. 
You can also customize the default profiles by using a TailoredProfile object. 5.4.2.2.1. ProfileBundle object When you install the Compliance Operator, it includes ready-to-run ProfileBundle objects. The Compliance Operator parses the ProfileBundle object and creates a Profile object for each profile in the bundle. It also parses Rule and Variable objects, which are used by the Profile object. Example ProfileBundle object apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle name: <profile bundle name> namespace: openshift-compliance status: dataStreamStatus: VALID 1 1 Indicates whether the Compliance Operator was able to parse the content files. Note When the contentFile fails, an errorMessage attribute appears, which provides details of the error that occurred. Troubleshooting When you roll back to a known content image from an invalid image, the ProfileBundle object stops responding and displays PENDING state. As a workaround, you can move to a different image than the one. Alternatively, you can delete and re-create the ProfileBundle object to return to the working state. 5.4.2.2.2. Profile object The Profile object defines the rules and variables that can be evaluated for a certain compliance standard. It contains parsed out details about an OpenSCAP profile, such as its XCCDF identifier and profile checks for a Node or Platform type. You can either directly use the Profile object or further customize it using a TailorProfile object. Note You cannot create or modify the Profile object manually because it is derived from a single ProfileBundle object. Typically, a single ProfileBundle object can include several Profile objects. Example Profile object apiVersion: compliance.openshift.io/v1alpha1 description: <description of the profile> id: xccdf_org.ssgproject.content_profile_moderate 1 kind: Profile metadata: annotations: compliance.openshift.io/product: <product name> compliance.openshift.io/product-type: Node 2 creationTimestamp: "YYYY-MM-DDTMM:HH:SSZ" generation: 1 labels: compliance.openshift.io/profile-bundle: <profile bundle name> name: rhcos4-moderate namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: <profile bundle name> uid: <uid string> resourceVersion: "<version number>" selfLink: /apis/compliance.openshift.io/v1alpha1/namespaces/openshift-compliance/profiles/rhcos4-moderate uid: <uid string> rules: 3 - rhcos4-account-disable-post-pw-expiration - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown title: <title of the profile> 1 Specify the XCCDF name of the profile. Use this identifier when you define a ComplianceScan object as the value of the profile attribute of the scan. 2 Specify either a Node or Platform . Node profiles scan the cluster nodes and platform profiles scan the Kubernetes platform. 3 Specify the list of rules for the profile. Each rule corresponds to a single check. 5.4.2.2.3. Rule object The Rule object, which forms the profiles, are also exposed as objects. Use the Rule object to define your compliance check requirements and specify how it could be fixed. 
Example Rule object apiVersion: compliance.openshift.io/v1alpha1 checkType: Platform 1 description: <description of the rule> id: xccdf_org.ssgproject.content_rule_configure_network_policies_namespaces 2 instructions: <manual instructions for the scan> kind: Rule metadata: annotations: compliance.openshift.io/rule: configure-network-policies-namespaces control.compliance.openshift.io/CIS-OCP: 5.3.2 control.compliance.openshift.io/NERC-CIP: CIP-003-3 R4;CIP-003-3 R4.2;CIP-003-3 R5;CIP-003-3 R6;CIP-004-3 R2.2.4;CIP-004-3 R3;CIP-007-3 R2;CIP-007-3 R2.1;CIP-007-3 R2.2;CIP-007-3 R2.3;CIP-007-3 R5.1;CIP-007-3 R6.1 control.compliance.openshift.io/NIST-800-53: AC-4;AC-4(21);CA-3(5);CM-6;CM-6(1);CM-7;CM-7(1);SC-7;SC-7(3);SC-7(5);SC-7(8);SC-7(12);SC-7(13);SC-7(18) labels: compliance.openshift.io/profile-bundle: ocp4 name: ocp4-configure-network-policies-namespaces namespace: openshift-compliance rationale: <description of why this rule is checked> severity: high 3 title: <summary of the rule> 1 Specify the type of check this rule executes. Node profiles scan the cluster nodes and Platform profiles scan the Kubernetes platform. An empty value indicates there is no automated check. 2 Specify the XCCDF name of the rule, which is parsed directly from the datastream. 3 Specify the severity of the rule when it fails. Note The Rule object gets an appropriate label for an easy identification of the associated ProfileBundle object. The ProfileBundle also gets specified in the OwnerReferences of this object. 5.4.2.2.4. TailoredProfile object Use the TailoredProfile object to modify the default Profile object based on your organization requirements. You can enable or disable rules, set variable values, and provide justification for the customization. After validation, the TailoredProfile object creates a ConfigMap , which can be referenced by a ComplianceScan object. Tip You can use the TailoredProfile object by referencing it in a ScanSettingBinding object. For more information about ScanSettingBinding , see ScanSettingBinding object. Example TailoredProfile object apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: rhcos4-with-usb spec: extends: rhcos4-moderate 1 title: <title of the tailored profile> disableRules: - name: <name of a rule object to be disabled> rationale: <description of why this rule is checked> status: id: xccdf_compliance.openshift.io_profile_rhcos4-with-usb 2 outputRef: name: rhcos4-with-usb-tp 3 namespace: openshift-compliance state: READY 4 1 This is optional. Name of the Profile object upon which the TailoredProfile is built. If no value is set, a new profile is created from the enableRules list. 2 Specifies the XCCDF name of the tailored profile. 3 Specifies the ConfigMap name, which can be used as the value of the tailoringConfigMap.name attribute of a ComplianceScan . 4 Shows the state of the object such as READY , PENDING , and FAILURE . If the state of the object is ERROR , then the attribute status.errorMessage provides the reason for the failure. With the TailoredProfile object, it is possible to create a new Profile object using the TailoredProfile construct. To create a new Profile , set the following configuration parameters : an appropriate title extends value must be empty scan type annotation on the TailoredProfile object: compliance.openshift.io/product-type: Platform/Node Note If you have not set the product-type annotation, the Compliance Operator defaults to Platform scan type. 
Adding the -node suffix to the name of the TailoredProfile object results in node scan type. 5.4.2.3. Configuring the compliance scan settings After you have defined the requirements of the compliance scan, you can configure it by specifying the type of the scan, occurrence of the scan, and location of the scan. To do so, Compliance Operator provides you with a ScanSetting object. 5.4.2.3.1. ScanSetting object Use the ScanSetting object to define and reuse the operational policies to run your scans. By default, the Compliance Operator creates the following ScanSetting objects: default - it runs a scan every day at 1 AM on both master and worker nodes using a 1Gi Persistent Volume (PV) and keeps the last three results. Remediation is neither applied nor updated automatically. default-auto-apply - it runs a scan every day at 1AM on both control plane and worker nodes using a 1Gi Persistent Volume (PV) and keeps the last three results. Both autoApplyRemediations and autoUpdateRemediations are set to true. Example ScanSetting object apiVersion: compliance.openshift.io/v1alpha1 autoApplyRemediations: true 1 autoUpdateRemediations: true 2 kind: ScanSetting maxRetryOnTimeout: 3 metadata: creationTimestamp: "2022-10-18T20:21:00Z" generation: 1 name: default-auto-apply namespace: openshift-compliance resourceVersion: "38840" uid: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 rawResultStorage: nodeSelector: node-role.kubernetes.io/master: "" pvAccessModes: - ReadWriteOnce rotation: 3 3 size: 1Gi 4 tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists roles: 5 - master - worker scanTolerations: - operator: Exists schedule: 0 1 * * * 6 showNotApplicable: false strictNodeScan: true timeout: 30m 1 Set to true to enable auto remediations. Set to false to disable auto remediations. 2 Set to true to enable auto remediations for content updates. Set to false to disable auto remediations for content updates. 3 Specify the number of stored scans in the raw result format. The default value is 3 . As the older results get rotated, the administrator must store the results elsewhere before the rotation happens. 4 Specify the storage size that should be created for the scan to store the raw results. The default value is 1Gi 6 Specify how often the scan should be run in cron format. Note To disable the rotation policy, set the value to 0 . 5 Specify the node-role.kubernetes.io label value to schedule the scan for Node type. This value has to match the name of a MachineConfigPool . 5.4.2.4. Processing the compliance scan requirements with compliance scans settings When you have defined the compliance scan requirements and configured the settings to run the scans, then the Compliance Operator processes it using the ScanSettingBinding object. 5.4.2.4.1. ScanSettingBinding object Use the ScanSettingBinding object to specify your compliance requirements with reference to the Profile or TailoredProfile object. It is then linked to a ScanSetting object, which provides the operational constraints for the scan. Then the Compliance Operator generates the ComplianceSuite object based on the ScanSetting and ScanSettingBinding objects. 
Example ScanSettingBinding object apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: <name of the scan> profiles: 1 # Node checks - name: rhcos4-with-usb kind: TailoredProfile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-moderate kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: 2 name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1 1 Specify the details of Profile or TailoredProfile object to scan your environment. 2 Specify the operational constraints, such as schedule and storage size. The creation of ScanSetting and ScanSettingBinding objects results in the compliance suite. To get the list of compliance suite, run the following command: USD oc get compliancesuites Important If you delete ScanSettingBinding , then compliance suite also is deleted. 5.4.2.5. Tracking the compliance scans After the creation of compliance suite, you can monitor the status of the deployed scans using the ComplianceSuite object. 5.4.2.5.1. ComplianceSuite object The ComplianceSuite object helps you keep track of the state of the scans. It contains the raw settings to create scans and the overall result. For Node type scans, you should map the scan to the MachineConfigPool , since it contains the remediations for any issues. If you specify a label, ensure it directly applies to a pool. Example ComplianceSuite object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: <name_of_the_suite> spec: autoApplyRemediations: false 1 schedule: "0 1 * * *" 2 scans: 3 - name: workers-scan scanType: Node profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... rule: "xccdf_org.ssgproject.content_rule_no_netrc_files" nodeSelector: node-role.kubernetes.io/worker: "" status: Phase: DONE 4 Result: NON-COMPLIANT 5 scanStatuses: - name: workers-scan phase: DONE result: NON-COMPLIANT 1 Set to true to enable auto remediations. Set to false to disable auto remediations. 2 Specify how often the scan should be run in cron format. 3 Specify a list of scan specifications to run in the cluster. 4 Indicates the progress of the scans. 5 Indicates the overall verdict of the suite. The suite in the background creates the ComplianceScan object based on the scans parameter. You can programmatically fetch the ComplianceSuites events. To get the events for the suite, run the following command: USD oc get events --field-selector involvedObject.kind=ComplianceSuite,involvedObject.name=<name of the suite> Important You might create errors when you manually define the ComplianceSuite , since it contains the XCCDF attributes. 5.4.2.5.2. Advanced ComplianceScan Object The Compliance Operator includes options for advanced users for debugging or integrating with existing tooling. While it is recommended that you not create a ComplianceScan object directly, you can instead manage it using a ComplianceSuite object. Example Advanced ComplianceScan object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceScan metadata: name: <name_of_the_compliance_scan> spec: scanType: Node 1 profile: xccdf_org.ssgproject.content_profile_moderate 2 content: ssg-ocp4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... 
3 rule: "xccdf_org.ssgproject.content_rule_no_netrc_files" 4 nodeSelector: 5 node-role.kubernetes.io/worker: "" status: phase: DONE 6 result: NON-COMPLIANT 7 1 Specify either Node or Platform . Node profiles scan the cluster nodes and Platform profiles scan the Kubernetes platform. 2 Specify the XCCDF identifier of the profile that you want to run. 3 Specify the container image that encapsulates the profile files. 4 Optional: Specify a single rule for the scan to run. The rule must be identified with its XCCDF ID and must belong to the specified profile. Note If you skip the rule parameter, the scan runs for all the available rules of the specified profile. 5 If you are on OpenShift Container Platform and want to generate a remediation, the nodeSelector label must match the MachineConfigPool label. Note If you do not specify the nodeSelector parameter or match the MachineConfigPool label, the scan still runs, but it does not create a remediation. 6 Indicates the current phase of the scan. 7 Indicates the verdict of the scan. Important If you delete a ComplianceSuite object, then all the associated scans get deleted. When the scan is complete, it generates the results as custom resources of the ComplianceCheckResult object. However, the raw results are available in ARF format. These results are stored in a Persistent Volume (PV), which has a Persistent Volume Claim (PVC) associated with the name of the scan. You can programmatically fetch the ComplianceScan events. To get the events for the scan, run the following command: oc get events --field-selector involvedObject.kind=ComplianceScan,involvedObject.name=<name_of_the_compliance_scan> 5.4.2.6. Viewing the compliance results When the compliance suite reaches the DONE phase, you can view the scan results and possible remediations. 5.4.2.6.1. ComplianceCheckResult object When you run a scan with a specific profile, several rules in the profiles are verified. For each of these rules, a ComplianceCheckResult object is created, which provides the state of the cluster for a specific rule. Example ComplianceCheckResult object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceCheckResult metadata: labels: compliance.openshift.io/check-severity: medium compliance.openshift.io/check-status: FAIL compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan name: workers-scan-no-direct-root-logins namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceScan name: workers-scan description: <description of scan check> instructions: <manual instructions for the scan> id: xccdf_org.ssgproject.content_rule_no_direct_root_logins severity: medium 1 status: FAIL 2 1 Describes the severity of the scan check. 2 Describes the result of the check. The possible values are: PASS: check was successful. FAIL: check was unsuccessful. INFO: check was successful and found something not severe enough to be considered an error. MANUAL: check cannot automatically assess the status and manual check is required. INCONSISTENT: different nodes report different results. ERROR: check ran successfully, but could not complete. NOTAPPLICABLE: check did not run because it is not applicable. To get all the check results from a suite, run the following command: oc get compliancecheckresults \ -l compliance.openshift.io/suite=workers-compliancesuite
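You can also combine the labels shown in the example ComplianceCheckResult object to narrow the results. As a sketch, the following command lists only the failing checks for the suite by adding the compliance.openshift.io/check-status label to the selector:
oc get compliancecheckresults \ -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/suite=workers-compliancesuite'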
5.4.2.6.2. ComplianceRemediation object For a specific check, the datastream can specify a fix. However, if a Kubernetes fix is available, then the Compliance Operator creates a ComplianceRemediation object. Example ComplianceRemediation object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: labels: compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan machineconfiguration.openshift.io/role: worker name: workers-scan-disable-users-coredumps namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: workers-scan-disable-users-coredumps uid: <UID> spec: apply: false 1 object: current: 2 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:,%2A%20%20%20%20%20hard%20%20%20core%20%20%20%200 filesystem: root mode: 420 path: /etc/security/limits.d/75-disable_users_coredumps.conf outdated: {} 3 1 true indicates the remediation was applied. false indicates the remediation was not applied. 2 Includes the definition of the remediation. 3 Indicates a remediation that was previously parsed from an earlier version of the content. The Compliance Operator still retains the outdated objects to give the administrator a chance to review the new remediations before applying them. To get all the remediations from a suite, run the following command: oc get complianceremediations \ -l compliance.openshift.io/suite=workers-compliancesuite To list all failing checks that can be remediated automatically, run the following command: oc get compliancecheckresults \ -l 'compliance.openshift.io/check-status in (FAIL),compliance.openshift.io/automated-remediation' To list all failing checks that can be remediated manually, run the following command: oc get compliancecheckresults \ -l 'compliance.openshift.io/check-status in (FAIL),!compliance.openshift.io/automated-remediation' 5.5. Compliance Operator management 5.5.1. Installing the Compliance Operator Before you can use the Compliance Operator, you must ensure it is deployed in the cluster. Important The Compliance Operator might report incorrect results on managed platforms, such as OpenShift Dedicated, Red Hat OpenShift Service on AWS Classic, and Microsoft Azure Red Hat OpenShift. For more information, see the Knowledgebase article Compliance Operator reports incorrect results on Managed Services . Important Before deploying the Compliance Operator, you are required to define persistent storage in your cluster to store the raw results output. For more information, see Persistent storage overview and Managing the default storage class . 5.5.1.1. Installing the Compliance Operator through the web console Prerequisites You must have admin privileges. You must have a StorageClass resource configured. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Search for the Compliance Operator, then click Install . Keep the default selection of Installation mode and namespace to ensure that the Operator will be installed to the openshift-compliance namespace. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Compliance Operator is installed in the openshift-compliance namespace and its status is Succeeded .
If the Operator is not installed successfully: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the logs in any pods in the openshift-compliance project that are reporting issues. Important If the restricted Security Context Constraints (SCC) have been modified to contain the system:authenticated group or has added requiredDropCapabilities , the Compliance Operator may not function properly due to permissions issues. You can create a custom SCC for the Compliance Operator scanner pod service account. For more information, see Creating a custom SCC for the Compliance Operator . 5.5.1.2. Installing the Compliance Operator using the CLI Prerequisites You must have admin privileges. You must have a StorageClass resource configured. Procedure Define a Namespace object: Example namespace-object.yaml apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance 1 In OpenShift Container Platform 4.18, the pod security label must be set to privileged at the namespace level. Create the Namespace object: USD oc create -f namespace-object.yaml Define an OperatorGroup object: Example operator-group-object.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance Create the OperatorGroup object: USD oc create -f operator-group-object.yaml Define a Subscription object: Example subscription-object.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: "stable" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription object: USD oc create -f subscription-object.yaml Note If you are setting the global scheduler feature and enable defaultNodeSelector , you must create the namespace manually and update the annotations of the openshift-compliance namespace, or the namespace where the Compliance Operator was installed, with openshift.io/node-selector: "" . This removes the default node selector and prevents deployment failures. Verification Verify the installation succeeded by inspecting the CSV file: USD oc get csv -n openshift-compliance Verify that the Compliance Operator is up and running: USD oc get deploy -n openshift-compliance 5.5.1.3. Installing the Compliance Operator on ROSA hosted control planes (HCP) As of the Compliance Operator 1.5.0 release, the Operator is tested against Red Hat OpenShift Service on AWS using Hosted control planes. Red Hat OpenShift Service on AWS Hosted control planes clusters have restricted access to the control plane, which is managed by Red Hat. By default, the Compliance Operator will schedule to nodes within the master node pool, which is not available in Red Hat OpenShift Service on AWS Hosted control planes installations. This requires you to configure the Subscription object in a way that allows the Operator to schedule on available node pools. This step is necessary for a successful installation on Red Hat OpenShift Service on AWS Hosted control planes clusters. Prerequisites You must have admin privileges. You must have a StorageClass resource configured. 
Procedure Define a Namespace object: Example namespace-object.yaml file apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance 1 In OpenShift Container Platform 4.18, the pod security label must be set to privileged at the namespace level. Create the Namespace object by running the following command: USD oc create -f namespace-object.yaml Define an OperatorGroup object: Example operator-group-object.yaml file apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance Create the OperatorGroup object by running the following command: USD oc create -f operator-group-object.yaml Define a Subscription object: Example subscription-object.yaml file apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: "stable" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: "" 1 1 Update the Operator deployment to deploy on worker nodes. Create the Subscription object by running the following command: USD oc create -f subscription-object.yaml Verification Verify that the installation succeeded by running the following command to inspect the cluster service version (CSV) file: USD oc get csv -n openshift-compliance Verify that the Compliance Operator is up and running by using the following command: USD oc get deploy -n openshift-compliance Important If the restricted Security Context Constraints (SCC) have been modified to contain the system:authenticated group or has added requiredDropCapabilities , the Compliance Operator may not function properly due to permissions issues. You can create a custom SCC for the Compliance Operator scanner pod service account. For more information, see Creating a custom SCC for the Compliance Operator . 5.5.1.4. Installing the Compliance Operator on Hypershift hosted control planes The Compliance Operator can be installed in hosted control planes using the OperatorHub by creating a Subscription file. Important Hosted control planes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You must have admin privileges. Procedure Define a Namespace object similar to the following: Example namespace-object.yaml apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance 1 In OpenShift Container Platform 4.18, the pod security label must be set to privileged at the namespace level. 
Create the Namespace object by running the following command: USD oc create -f namespace-object.yaml Define an OperatorGroup object: Example operator-group-object.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance Create the OperatorGroup object by running the following command: USD oc create -f operator-group-object.yaml Define a Subscription object: Example subscription-object.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: "stable" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: "" env: - name: PLATFORM value: "HyperShift" Create the Subscription object by running the following command: USD oc create -f subscription-object.yaml Verification Verify the installation succeeded by inspecting the CSV file by running the following command: USD oc get csv -n openshift-compliance Verify that the Compliance Operator is up and running by running the following command: USD oc get deploy -n openshift-compliance Additional resources Hosted control planes overview 5.5.1.5. Additional resources The Compliance Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager in disconnected environments . 5.5.2. Updating the Compliance Operator As a cluster administrator, you can update the Compliance Operator on your OpenShift Container Platform cluster. Important Updating your OpenShift Container Platform cluster to version 4.14 might cause the Compliance Operator to not work as expected. This is due to an ongoing known issue. For more information, see OCPBUGS-18025 . 5.5.2.1. Preparing for an Operator update The subscription of an installed Operator specifies an update channel that tracks and receives updates for the Operator. You can change the update channel to start tracking and receiving updates from a newer channel. The names of update channels in a subscription can differ between Operators, but the naming scheme typically follows a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator ( 1.2 , 1.3 ) or a release frequency ( stable , fast ). Note You cannot change installed Operators to a channel that is older than the current channel. Red Hat Customer Portal Labs include the following application that helps administrators prepare to update their Operators: Red Hat OpenShift Container Platform Operator Update Information Checker You can use the application to search for Operator Lifecycle Manager-based Operators and verify the available Operator version per update channel across different versions of OpenShift Container Platform. Cluster Version Operator-based Operators are not included. 5.5.2.2. Changing the update channel for an Operator You can change the update channel for an Operator by using the OpenShift Container Platform web console. Tip If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). 
Procedure In the Administrator perspective of the web console, navigate to Operators Installed Operators . Click the name of the Operator you want to change the update channel for. Click the Subscription tab. Click the name of the update channel under Update channel . Click the newer update channel that you want to change to, then click Save . For subscriptions with an Automatic approval strategy, the update begins automatically. Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . For subscriptions with a Manual approval strategy, you can manually approve the update from the Subscription tab. 5.5.2.3. Manually approving a pending Operator update If an installed Operator has the approval strategy in its subscription set to Manual , when new updates are released in its current update channel, the update must be manually approved before installation can begin. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators Installed Operators . Operators that have a pending update display a status with Upgrade available . Click the name of the Operator you want to update. Click the Subscription tab. Any updates requiring approval are displayed to Upgrade status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for update. When satisfied, click Approve . Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . 5.5.3. Managing the Compliance Operator This section describes the lifecycle of security content, including how to use an updated version of compliance content and how to create a custom ProfileBundle object. 5.5.3.1. ProfileBundle CR example The ProfileBundle object requires two pieces of information: the URL of a container image that contains the contentImage and the file that contains the compliance content. The contentFile parameter is relative to the root of the file system. You can define the built-in rhcos4 ProfileBundle object as shown in the following example: apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: "2022-10-19T12:06:30Z" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: "46741" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml 1 contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 2 status: conditions: - lastTransitionTime: "2022-10-19T12:07:51Z" message: Profile bundle successfully parsed reason: Valid status: "True" type: Ready dataStreamStatus: VALID 1 Location of the file containing the compliance content. 2 Content image location. Important The base image used for the content images must include coreutils . 5.5.3.2. Updating security content Security content is included as container images that the ProfileBundle objects refer to. 
To accurately track updates to ProfileBundles and the custom resources parsed from the bundles, such as rules or profiles, identify the container image with the compliance content by using a digest instead of a tag: USD oc -n openshift-compliance get profilebundles rhcos4 -oyaml Example output apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: "2022-10-19T12:06:30Z" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: "46741" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 1 status: conditions: - lastTransitionTime: "2022-10-19T12:07:51Z" message: Profile bundle successfully parsed reason: Valid status: "True" type: Ready dataStreamStatus: VALID 1 Security container image. Each ProfileBundle is backed by a deployment. When the Compliance Operator detects that the container image digest has changed, the deployment is updated to reflect the change and parse the content again. Using the digest instead of a tag ensures that you use a stable and predictable set of profiles. 5.5.3.3. Additional resources The Compliance Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager in disconnected environments . 5.5.4. Uninstalling the Compliance Operator You can remove the OpenShift Compliance Operator from your cluster by using the OpenShift Container Platform web console or the CLI. 5.5.4.1. Uninstalling the OpenShift Compliance Operator from OpenShift Container Platform using the web console To remove the Compliance Operator, you must first delete the objects in the namespace. After the objects are removed, you can remove the Operator and its namespace by deleting the openshift-compliance project. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. The OpenShift Compliance Operator must be installed. Procedure To remove the Compliance Operator by using the OpenShift Container Platform web console: Go to the Operators Installed Operators Compliance Operator page. Click All instances . In All namespaces , click the Options menu and delete all ScanSettingBinding, ComplianceSuite, ComplianceScan, and ProfileBundle objects. Switch to the Administration Operators Installed Operators page. Click the Options menu on the Compliance Operator entry and select Uninstall Operator . Switch to the Home Projects page. Search for 'compliance'. Click the Options menu next to the openshift-compliance project, and select Delete Project . Confirm the deletion by typing openshift-compliance in the dialog box, and click Delete .
Delete the ScanSettingBinding objects: USD oc delete ssb --all -n openshift-compliance Delete the ScanSetting objects: USD oc delete ss --all -n openshift-compliance Delete the ComplianceSuite objects: USD oc delete suite --all -n openshift-compliance Delete the ComplianceScan objects: USD oc delete scan --all -n openshift-compliance Delete the ProfileBundle objects: USD oc delete profilebundle.compliance --all -n openshift-compliance Delete the Subscription object: USD oc delete sub --all -n openshift-compliance Delete the CSV object: USD oc delete csv --all -n openshift-compliance Delete the project: USD oc delete project openshift-compliance Example output project.project.openshift.io "openshift-compliance" deleted Verification Confirm the namespace is deleted: USD oc get project/openshift-compliance Example output Error from server (NotFound): namespaces "openshift-compliance" not found 5.6. Compliance Operator scan management 5.6.1. Supported compliance profiles There are several profiles available as part of the Compliance Operator (CO) installation. While you can use the following profiles to assess gaps in a cluster, usage alone does not infer or guarantee compliance with a particular profile and is not an auditor. In order to be compliant or certified under these various standards, you need to engage an authorized auditor such as a Qualified Security Assessor (QSA), Joint Authorization Board (JAB), or other industry recognized regulatory authority to assess your environment. You are required to work with an authorized auditor to achieve compliance with a standard. For more information on compliance support for all Red Hat products, see Product Compliance . Important The Compliance Operator might report incorrect results on some managed platforms, such as OpenShift Dedicated and Azure Red Hat OpenShift. For more information, see the Red Hat Knowledgebase Solution #6983418 . 5.6.1.1. Compliance profiles The Compliance Operator provides profiles to meet industry standard benchmarks. Note The following tables reflect the latest available profiles in the Compliance Operator. 5.6.1.1.1. CIS compliance profiles Table 5.1. Supported CIS compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-cis [1] CIS Red Hat OpenShift Container Platform Benchmark v1.5.0 Platform CIS Benchmarks TM [1] x86_64 ppc64le s390x ocp4-cis-1-4 [3] CIS Red Hat OpenShift Container Platform Benchmark v1.4.0 Platform CIS Benchmarks TM [4] x86_64 ppc64le s390x ocp4-cis-1-5 CIS Red Hat OpenShift Container Platform Benchmark v1.5.0 Platform CIS Benchmarks TM [4] x86_64 ppc64le s390x ocp4-cis-node [1] CIS Red Hat OpenShift Container Platform Benchmark v1.5.0 Node [2] CIS Benchmarks TM [4] x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-cis-node-1-4 [3] CIS Red Hat OpenShift Container Platform Benchmark v1.4.0 Node [2] CIS Benchmarks TM [4] x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-cis-node-1-5 CIS Red Hat OpenShift Container Platform Benchmark v1.5.0 Node [2] CIS Benchmarks TM [4] x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) The ocp4-cis and ocp4-cis-node profiles maintain the most up-to-date version of the CIS benchmark as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as CIS v1.4.0, use the ocp4-cis-1-4 and ocp4-cis-node-1-4 profiles. 
Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . CIS v1.4.0 is superceded by CIS v1.5.0. It is recommended to apply the latest profile to your environment. To locate the CIS OpenShift Container Platform v4 Benchmark, go to CIS Benchmarks and click Download Latest CIS Benchmark , where you can then register to download the benchmark. 5.6.1.1.2. Essential Eight compliance profiles Table 5.2. Supported Essential Eight compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-e8 Australian Cyber Security Centre (ACSC) Essential Eight Platform ACSC Hardening Linux Workstations and Servers x86_64 rhcos4-e8 Australian Cyber Security Centre (ACSC) Essential Eight Node ACSC Hardening Linux Workstations and Servers x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) 5.6.1.1.3. FedRAMP High compliance profiles Table 5.3. Supported FedRAMP High compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-high [1] NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Platform level Platform NIST SP-800-53 Release Search x86_64 ocp4-high-node [1] NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Node level Node [2] NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-high-node-rev-4 NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Node level Node [2] NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-high-rev-4 NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Platform level Platform NIST SP-800-53 Release Search x86_64 rhcos4-high [1] NIST 800-53 High-Impact Baseline for Red Hat Enterprise Linux CoreOS Node NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) rhcos4-high-rev-4 NIST 800-53 High-Impact Baseline for Red Hat Enterprise Linux CoreOS Node NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) The ocp4-high , ocp4-high-node and rhcos4-high profiles maintain the most up-to-date version of the FedRAMP High standard as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as FedRAMP high R4, use the ocp4-high-rev-4 and ocp4-high-node-rev-4 profiles. Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . 5.6.1.1.4. FedRAMP Moderate compliance profiles Table 5.4. 
Supported FedRAMP Moderate compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-moderate [1] NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Platform level Platform NIST SP-800-53 Release Search x86_64 ppc64le s390x ocp4-moderate-node [1] NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Node level Node [2] NIST SP-800-53 Release Search x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-moderate-node-rev-4 NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Node level Node [2] NIST SP-800-53 Release Search x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-moderate-rev-4 NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Platform level Platform NIST SP-800-53 Release Search x86_64 ppc64le s390x rhcos4-moderate [1] NIST 800-53 Moderate-Impact Baseline for Red Hat Enterprise Linux CoreOS Node NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) rhcos4-moderate-rev-4 NIST 800-53 Moderate-Impact Baseline for Red Hat Enterprise Linux CoreOS Node NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) The ocp4-moderate , ocp4-moderate-node and rhcos4-moderate profiles maintain the most up-to-date version of the FedRAMP Moderate standard as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as FedRAMP Moderate R4, use the ocp4-moderate-rev-4 and ocp4-moderate-node-rev-4 profiles. Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . 5.6.1.1.5. NERC-CIP compliance profiles Table 5.5. Supported NERC-CIP compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-nerc-cip North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for the OpenShift Container Platform - Platform level Platform NERC CIP Standards x86_64 ocp4-nerc-cip-node North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for the OpenShift Container Platform - Node level Node [1] NERC CIP Standards x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) rhcos4-nerc-cip North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for Red Hat Enterprise Linux CoreOS Node NERC CIP Standards x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . 5.6.1.1.6. PCI-DSS compliance profiles Table 5.6. 
Supported PCI-DSS compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-pci-dss [1] PCI-DSS v4 Control Baseline for OpenShift Container Platform 4 Platform PCI Security Standards (R) Council Document Library x86_64 ocp4-pci-dss-3-2 [3] PCI-DSS v3.2.1 Control Baseline for OpenShift Container Platform 4 Platform PCI Security Standards (R) Council Document Library x86_64 ppc64le s390x ocp4-pci-dss-4-0 PCI-DSS v4 Control Baseline for OpenShift Container Platform 4 Platform PCI Security Standards (R) Council Document Library x86_64 ocp4-pci-dss-node [1] PCI-DSS v4 Control Baseline for OpenShift Container Platform 4 Node [2] PCI Security Standards (R) Council Document Library x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-pci-dss-node-3-2 [3] PCI-DSS v3.2.1 Control Baseline for OpenShift Container Platform 4 Node [2] PCI Security Standards (R) Council Document Library x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-pci-dss-node-4-0 PCI-DSS v4 Control Baseline for OpenShift Container Platform 4 Node [2] PCI Security Standards (R) Council Document Library x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) The ocp4-pci-dss and ocp4-pci-dss-node profiles maintain the most up-to-date version of the PCI-DSS standard as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as PCI-DSS v3.2.1, use the ocp4-pci-dss-3-2 and ocp4-pci-dss-node-3-2 profiles. Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . PCI-DSS v3.2.1 is superceded by PCI-DSS v4. It is recommended to apply the latest profile to your environment. 5.6.1.1.7. STIG compliance profiles Table 5.7. 
Supported STIG compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-stig [1] Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift Platform DISA-STIG x86_64 ocp4-stig-node [1] Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift Node [2] DISA-STIG x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-stig-node-v1r1 [3] Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V1R1 Node [2] DISA-STIG x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-stig-node-v2r1 Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V2R1 Node [2] DISA-STIG x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-stig-v1r1 [3] Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V1R1 Platform DISA-STIG x86_64 ocp4-stig-v2r1 Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V2R1 Platform DISA-STIG x86_64 rhcos4-stig Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift Node DISA-STIG x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) rhcos4-stig-v1r1 [3] Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V1R1 Node DISA-STIG [3] x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) rhcos4-stig-v2r1 Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V2R1 Node DISA-STIG x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) The ocp4-stig , ocp4-stig-node and rhcos4-stig profiles maintain the most up-to-date version of the DISA-STIG benchmark as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as DISA-STIG V2R1, use the ocp4-stig-v2r1 and ocp4-stig-node-v2r1 profiles. Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . DISA-STIG V1R1 is superceded by DISA-STIG V2R1. It is recommended to apply the latest profile to your environment. 5.6.1.1.8. About extended compliance profiles Some compliance profiles have controls that require following industry best practices, resulting in some profiles extending others. Combining the Center for Internet Security (CIS) best practices with National Institute of Standards and Technology (NIST) security frameworks establishes a path to a secure and compliant environment. For example, the NIST High-Impact and Moderate-Impact profiles extend the CIS profile to achieve compliance. As a result, extended compliance profiles eliminate the need to run both profiles in a single cluster. Table 5.8. Profile extensions Profile Extends ocp4-pci-dss ocp4-cis ocp4-pci-dss-node ocp4-cis-node ocp4-high ocp4-cis ocp4-high-node ocp4-cis-node ocp4-moderate ocp4-cis ocp4-moderate-node ocp4-cis-node ocp4-nerc-cip ocp4-moderate ocp4-nerc-cip-node ocp4-moderate-node 5.6.1.2. Additional resources Compliance Operator profile types 5.6.2. 
Compliance Operator scans The ScanSetting and ScanSettingBinding APIs are recommended to run compliance scans with the Compliance Operator. For more information on these API objects, run: USD oc explain scansettings or USD oc explain scansettingbindings 5.6.2.1. Running compliance scans You can run a scan using the Center for Internet Security (CIS) profiles. For convenience, the Compliance Operator creates a ScanSetting object with reasonable defaults on startup. This ScanSetting object is named default . Note For all-in-one control plane and worker nodes, the compliance scan runs twice on the worker and control plane nodes. The compliance scan might generate inconsistent scan results. You can avoid inconsistent results by defining only a single role in the ScanSetting object. For more information about inconsistent scan results, see Compliance Operator shows INCONSISTENT scan result with worker node . Procedure Inspect the ScanSetting object by running the following command: USD oc describe scansettings default -n openshift-compliance Example output Name: default Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Kind: ScanSetting Max Retry On Timeout: 3 Metadata: Creation Timestamp: 2024-07-16T14:56:42Z Generation: 2 Resource Version: 91655682 UID: 50358cf1-57a8-4f69-ac50-5c7a5938e402 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce 1 Rotation: 3 2 Size: 1Gi 3 Storage Class Name: standard 4 Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master 5 worker 6 Scan Tolerations: 7 Operator: Exists Schedule: 0 1 * * * 8 Show Not Applicable: false Strict Node Scan: true Suspend: false Timeout: 30m Events: <none> 1 The Compliance Operator creates a persistent volume (PV) that contains the results of the scans. By default, the PV will use access mode ReadWriteOnce because the Compliance Operator cannot make any assumptions about the storage classes configured on the cluster. Additionally, ReadWriteOnce access mode is available on most clusters. If you need to fetch the scan results, you can do so by using a helper pod, which also binds the volume. Volumes that use the ReadWriteOnce access mode can be mounted by only one pod at time, so it is important to remember to delete the helper pods. Otherwise, the Compliance Operator will not be able to reuse the volume for subsequent scans. 2 The Compliance Operator keeps results of three subsequent scans in the volume; older scans are rotated. 3 The Compliance Operator will allocate one GB of storage for the scan results. 4 The scansetting.rawResultStorage.storageClassName field specifies the storageClassName value to use when creating the PersistentVolumeClaim object to store the raw results. The default value is null, which will attempt to use the default storage class configured in the cluster. If there is no default class specified, then you must set a default class. 5 6 If the scan setting uses any profiles that scan cluster nodes, scan these node roles. 7 The default scan setting object scans all the nodes. 8 The default scan setting object runs scans at 01:00 each day. 
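To see both ScanSetting objects that the Compliance Operator creates on startup, you can list them in the openshift-compliance namespace. This is a brief sketch; the output columns depend on your client version:
USD oc get scansettings -n openshift-compliance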
As an alternative to the default scan setting, you can use default-auto-apply , which has the following settings: Name: default-auto-apply Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Auto Apply Remediations: true 1 Auto Update Remediations: true 2 Kind: ScanSetting Metadata: Creation Timestamp: 2022-10-18T20:21:00Z Generation: 1 Managed Fields: API Version: compliance.openshift.io/v1alpha1 Fields Type: FieldsV1 fieldsV1: f:autoApplyRemediations: f:autoUpdateRemediations: f:rawResultStorage: .: f:nodeSelector: .: f:node-role.kubernetes.io/master: f:pvAccessModes: f:rotation: f:size: f:tolerations: f:roles: f:scanTolerations: f:schedule: f:showNotApplicable: f:strictNodeScan: Manager: compliance-operator Operation: Update Time: 2022-10-18T20:21:00Z Resource Version: 38840 UID: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce Rotation: 3 Size: 1Gi Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master worker Scan Tolerations: Operator: Exists Schedule: 0 1 * * * Show Not Applicable: false Strict Node Scan: true Events: <none> 1 2 Setting autoUpdateRemediations and autoApplyRemediations flags to true allows you to easily create ScanSetting objects that auto-remediate without extra steps. Create a ScanSettingBinding object that binds to the default ScanSetting object and scans the cluster using the cis and cis-node profiles. For example: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis-compliance namespace: openshift-compliance profiles: - name: ocp4-cis-node kind: Profile apiGroup: compliance.openshift.io/v1alpha1 - name: ocp4-cis kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: default kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1 Create the ScanSettingBinding object by running: USD oc create -f <file-name>.yaml -n openshift-compliance At this point in the process, the ScanSettingBinding object is reconciled and based on the Binding and the Bound settings. The Compliance Operator creates a ComplianceSuite object and the associated ComplianceScan objects. Follow the compliance scan progress by running: USD oc get compliancescan -w -n openshift-compliance The scans progress through the scanning phases and eventually reach the DONE phase when complete. In most cases, the result of the scan is NON-COMPLIANT . You can review the scan results and start applying remediations to make the cluster compliant. See Managing Compliance Operator remediation for more information. 5.6.2.2. Setting custom storage size for results While the custom resources such as ComplianceCheckResult represent an aggregated result of one check across all scanned nodes, it can be useful to review the raw results as produced by the scanner. The raw results are produced in the ARF format and can be large (tens of megabytes per node), it is impractical to store them in a Kubernetes resource backed by the etcd key-value store. Instead, every scan creates a persistent volume (PV) which defaults to 1GB size. Depending on your environment, you may want to increase the PV size accordingly. 
This is done using the rawResultStorage.size attribute that is exposed in both the ScanSetting and ComplianceScan resources. A related parameter is rawResultStorage.rotation which controls how many scans are retained in the PV before the older scans are rotated. The default value is 3, setting the rotation policy to 0 disables the rotation. Given the default rotation policy and an estimate of 100MB per a raw ARF scan report, you can calculate the right PV size for your environment. 5.6.2.2.1. Using custom result storage values Because OpenShift Container Platform can be deployed in a variety of public clouds or bare metal, the Compliance Operator cannot determine available storage configurations. By default, the Compliance Operator will try to create the PV for storing results using the default storage class of the cluster, but a custom storage class can be configured using the rawResultStorage.StorageClassName attribute. Important If your cluster does not specify a default storage class, this attribute must be set. Configure the ScanSetting custom resource to use a standard storage class and create persistent volumes that are 10GB in size and keep the last 10 results: Example ScanSetting CR apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' 5.6.2.3. Scheduling the result server pod on a worker node The result server pod mounts the persistent volume (PV) that stores the raw Asset Reporting Format (ARF) scan results. The nodeSelector and tolerations attributes enable you to configure the location of the result server pod. This is helpful for those environments where control plane nodes are not permitted to mount persistent volumes. Procedure Create a ScanSetting custom resource (CR) for the Compliance Operator: Define the ScanSetting CR, and save the YAML file, for example, rs-workers.yaml : apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: rs-on-workers namespace: openshift-compliance rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: "" 1 pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists 2 roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * * 1 The Compliance Operator uses this node to store scan results in ARF format. 2 The result server pod tolerates all taints. To create the ScanSetting CR, run the following command: USD oc create -f rs-workers.yaml Verification To verify that the ScanSetting object is created, run the following command: USD oc get scansettings rs-on-workers -n openshift-compliance -o yaml Example output apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: creationTimestamp: "2021-11-19T19:36:36Z" generation: 1 name: rs-on-workers namespace: openshift-compliance resourceVersion: "48305" uid: 43fdfc5f-15a7-445a-8bbc-0e4a160cd46e rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: "" pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * * strictNodeScan: true 5.6.2.4. ScanSetting Custom Resource The ScanSetting Custom Resource now allows you to override the default CPU and memory limits of scanner pods through the scan limits attribute. 
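For example, a ScanSetting that raises the scanner pod memory limit through the scan limits attribute might look like the following sketch. The scanLimits stanza and the 1024Mi value are illustrative; verify the attribute against the ScanSetting schema of your installed Compliance Operator version before applying it:
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default
  namespace: openshift-compliance
scanLimits:
  memory: 1024Mi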
The Compliance Operator will use defaults of 500Mi memory, 100m CPU for the scanner container, and 200Mi memory with 100m CPU for the api-resource-collector container. To set the memory limits of the Operator, modify the Subscription object if installed through OLM or the Operator deployment itself. To increase the default CPU and memory limits of the Compliance Operator, see Increasing Compliance Operator resource limits . Important Increasing the memory limit for the Compliance Operator or the scanner pods is needed if the default limits are not sufficient and the Operator or scanner pods are ended by the Out Of Memory (OOM) process. 5.6.2.5. Configuring the hosted control planes management cluster If you are hosting your own Hosted control plane or Hypershift environment and want to scan a Hosted Cluster from the management cluster, you will need to set the name and prefix namespace for the target Hosted Cluster. You can achieve this by creating a TailoredProfile . Important This procedure only applies to users managing their own hosted control planes environment. Note Only ocp4-cis and ocp4-pci-dss profiles are supported in hosted control planes management clusters. Prerequisites The Compliance Operator is installed in the management cluster. Procedure Obtain the name and namespace of the hosted cluster to be scanned by running the following command: USD oc get hostedcluster -A Example output NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE local-cluster 79136a1bdb84b3c13217 4.13.5 79136a1bdb84b3c13217-admin-kubeconfig Completed True False The hosted control plane is available In the management cluster, create a TailoredProfile extending the scan Profile and define the name and namespace of the Hosted Cluster to be scanned: Example management-tailoredprofile.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: hypershift-cisk57aw88gry namespace: openshift-compliance spec: description: This profile test required rules extends: ocp4-cis 1 title: Management namespace profile setValues: - name: ocp4-hypershift-cluster rationale: This value is used for HyperShift version detection value: 79136a1bdb84b3c13217 2 - name: ocp4-hypershift-namespace-prefix rationale: This value is used for HyperShift control plane namespace detection value: local-cluster 3 1 Variable. Only ocp4-cis and ocp4-pci-dss profiles are supported in hosted control planes management clusters. 2 The value is the NAME from the output in the step. 3 The value is the NAMESPACE from the output in the step. Create the TailoredProfile : USD oc create -n openshift-compliance -f mgmt-tp.yaml 5.6.2.6. Applying resource requests and limits When the kubelet starts a container as part of a Pod, the kubelet passes that container's requests and limits for memory and CPU to the container runtime. In Linux, the container runtime configures the kernel cgroups that apply and enforce the limits you defined. The CPU limit defines how much CPU time the container can use. During each scheduling interval, the Linux kernel checks to see if this limit is exceeded. If so, the kernel waits before allowing the cgroup to resume execution. If several different containers (cgroups) want to run on a contended system, workloads with larger CPU requests are allocated more CPU time than workloads with small requests. The memory request is used during Pod scheduling. On a node that uses cgroups v2, the container runtime might use the memory request as a hint to set memory.min and memory.low values. 
If a container attempts to allocate more memory than this limit, the Linux kernel out-of-memory subsystem activates and intervenes by stopping one of the processes in the container that tried to allocate memory. The memory limit for the Pod or container can also apply to pages in memory-backed volumes, such as an emptyDir. The kubelet tracks tmpfs emptyDir volumes as container memory is used, rather than as local ephemeral storage. If a container exceeds its memory request and the node that it runs on becomes short of memory overall, the Pod's container might be evicted. Important A container may not exceed its CPU limit for extended periods. Container run times do not stop Pods or containers for excessive CPU usage. To determine whether a container cannot be scheduled or is being killed due to resource limits, see Troubleshooting the Compliance Operator . 5.6.2.7. Scheduling Pods with container resource requests When a Pod is created, the scheduler selects a Node for the Pod to run on. Each node has a maximum capacity for each resource type in the amount of CPU and memory it can provide for the Pods. The scheduler ensures that the sum of the resource requests of the scheduled containers is less than the capacity nodes for each resource type. Although memory or CPU resource usage on nodes is very low, the scheduler might still refuse to place a Pod on a node if the capacity check fails to protect against a resource shortage on a node. For each container, you can specify the following resource limits and request: spec.containers[].resources.limits.cpu spec.containers[].resources.limits.memory spec.containers[].resources.limits.hugepages-<size> spec.containers[].resources.requests.cpu spec.containers[].resources.requests.memory spec.containers[].resources.requests.hugepages-<size> Although you can specify requests and limits for only individual containers, it is also useful to consider the overall resource requests and limits for a pod. For a particular resource, a container resource request or limit is the sum of the resource requests or limits of that type for each container in the pod. Example container resource requests and limits apiVersion: v1 kind: Pod metadata: name: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: app image: images.my-company.example/app:v4 resources: requests: 1 memory: "64Mi" cpu: "250m" limits: 2 memory: "128Mi" cpu: "500m" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] 1 The container is requesting 64 Mi of memory and 250 m CPU. 2 The container's limits are 128 Mi of memory and 500 m CPU. 5.6.3. Tailoring the Compliance Operator While the Compliance Operator comes with ready-to-use profiles, they must be modified to fit the organizations' needs and requirements. The process of modifying a profile is called tailoring . The Compliance Operator provides the TailoredProfile object to help tailor profiles. 5.6.3.1. Creating a new tailored profile You can write a tailored profile from scratch by using the TailoredProfile object. Set an appropriate title and description and leave the extends field empty. Indicate to the Compliance Operator what type of scan this custom profile will generate: Node scan: Scans the Operating System. 
Platform scan: Scans the OpenShift Container Platform configuration. Procedure Set the following annotation on the TailoredProfile object: Example new-profile.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: new-profile annotations: compliance.openshift.io/product-type: Node 1 spec: extends: ocp4-cis-node 2 description: My custom profile 3 title: Custom profile 4 enableRules: - name: ocp4-etcd-unique-ca rationale: We really need to enable this disableRules: - name: ocp4-file-groupowner-cni-conf rationale: This does not apply to the cluster 1 Set Node or Platform accordingly. 2 The extends field is optional. 3 Use the description field to describe the function of the new TailoredProfile object. 4 Give your TailoredProfile object a title with the title field. Note Adding the -node suffix to the name field of the TailoredProfile object is similar to adding the Node product type annotation and generates an Operating System scan. 5.6.3.2. Using tailored profiles to extend existing ProfileBundles While the TailoredProfile CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. In addition, if your organization has been using OpenScap previously, you may have an existing XCCDF tailoring file and can reuse it. The ComplianceSuite object contains an optional TailoringConfigMap attribute that you can point to a custom tailoring file. The value of the TailoringConfigMap attribute is a name of a config map, which must contain a key called tailoring.xml and the value of this key is the tailoring contents. Procedure Browse the available rules for the Red Hat Enterprise Linux CoreOS (RHCOS) ProfileBundle : USD oc get rules.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4 Browse the available variables in the same ProfileBundle : USD oc get variables.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4 Create a tailored profile named nist-moderate-modified : Choose which rules you want to add to the nist-moderate-modified tailored profile. This example extends the rhcos4-moderate profile by disabling two rules and changing one value. Use the rationale value to describe why these changes were made: Example new-profile-node.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: nist-moderate-modified spec: extends: rhcos4-moderate description: NIST moderate profile title: My modified NIST moderate profile disableRules: - name: rhcos4-file-permissions-var-log-messages rationale: The file contains logs of error messages in the system - name: rhcos4-account-disable-post-pw-expiration rationale: No need to check this as it comes from the IdP setValues: - name: rhcos4-var-selinux-state rationale: Organizational requirements value: permissive Table 5.9. Attributes for spec variables Attribute Description extends Name of the Profile object upon which this TailoredProfile is built. title Human-readable title of the TailoredProfile . disableRules A list of name and rationale pairs. Each name refers to a name of a rule object that is to be disabled. The rationale value is human-readable text describing why the rule is disabled. manualRules A list of name and rationale pairs. When a manual rule is added, the check result status will always be manual and remediation will not be generated. This attribute is automatic and by default has no values when set as a manual rule. 
enableRules A list of name and rationale pairs. Each name refers to a name of a rule object that is to be enabled. The rationale value is human-readable text describing why the rule is enabled. description Human-readable text describing the TailoredProfile . setValues A list of name, rationale, and value groupings. Each name refers to a name of the value set. The rationale is human-readable text describing the set. The value is the actual setting. Add the tailoredProfile.spec.manualRules attribute: Example tailoredProfile.spec.manualRules.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: ocp4-manual-scc-check spec: extends: ocp4-cis description: This profile extends ocp4-cis by forcing the SCC check to always return MANUAL title: OCP4 CIS profile with manual SCC check manualRules: - name: ocp4-scc-limit-container-allowed-capabilities rationale: We use third party software that installs its own SCC with extra privileges Create the TailoredProfile object: USD oc create -n openshift-compliance -f new-profile-node.yaml 1 1 The TailoredProfile object is created in the default openshift-compliance namespace. Example output tailoredprofile.compliance.openshift.io/nist-moderate-modified created Define the ScanSettingBinding object to bind the new nist-moderate-modified tailored profile to the default ScanSetting object. Example new-scansettingbinding.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: nist-moderate-modified profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-moderate - apiGroup: compliance.openshift.io/v1alpha1 kind: TailoredProfile name: nist-moderate-modified settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default Create the ScanSettingBinding object: USD oc create -n openshift-compliance -f new-scansettingbinding.yaml Example output scansettingbinding.compliance.openshift.io/nist-moderate-modified created 5.6.4. Retrieving Compliance Operator raw results When proving compliance for your OpenShift Container Platform cluster, you might need to provide the scan results for auditing purposes. 5.6.4.1. Obtaining Compliance Operator raw results from a persistent volume Procedure The Compliance Operator generates and stores the raw results in a persistent volume. These results are in Asset Reporting Format (ARF). Explore the ComplianceSuite object: USD oc get compliancesuites nist-moderate-modified \ -o json -n openshift-compliance | jq '.status.scanStatuses[].resultsStorage' Example output { "name": "ocp4-moderate", "namespace": "openshift-compliance" } { "name": "nist-moderate-modified-master", "namespace": "openshift-compliance" } { "name": "nist-moderate-modified-worker", "namespace": "openshift-compliance" } This shows the persistent volume claims where the raw results are accessible. 
Verify the raw data location by using the name and namespace of one of the results: USD oc get pvc -n openshift-compliance rhcos4-moderate-worker Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE rhcos4-moderate-worker Bound pvc-548f6cfe-164b-42fe-ba13-a07cfbc77f3a 1Gi RWO gp2 92m Fetch the raw results by spawning a pod that mounts the volume and copying the results: USD oc create -n openshift-compliance -f pod.yaml Example pod.yaml apiVersion: "v1" kind: Pod metadata: name: pv-extract spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: pv-extract-pod image: registry.access.redhat.com/ubi9/ubi command: ["sleep", "3000"] volumeMounts: - mountPath: "/workers-scan-results" name: workers-scan-vol securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: workers-scan-vol persistentVolumeClaim: claimName: rhcos4-moderate-worker After the pod is running, download the results: USD oc cp pv-extract:/workers-scan-results -n openshift-compliance . Important Spawning a pod that mounts the persistent volume will keep the claim as Bound . If the volume's storage class in use has permissions set to ReadWriteOnce , the volume is only mountable by one pod at a time. You must delete the pod upon completion, or it will not be possible for the Operator to schedule a pod and continue storing results in this location. After the extraction is complete, the pod can be deleted: USD oc delete pod pv-extract -n openshift-compliance 5.6.5. Managing Compliance Operator result and remediation Each ComplianceCheckResult represents a result of one compliance rule check. If the rule can be remediated automatically, a ComplianceRemediation object with the same name, owned by the ComplianceCheckResult is created. Unless requested, the remediations are not applied automatically, which gives an OpenShift Container Platform administrator the opportunity to review what the remediation does and only apply a remediation once it has been verified. Important Full remediation for Federal Information Processing Standards (FIPS) compliance requires enabling FIPS mode for the cluster. To enable FIPS mode, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . FIPS mode is supported on the following architectures: x86_64 ppc64le s390x 5.6.5.1. Filters for compliance check results By default, the ComplianceCheckResult objects are labeled with several useful labels that allow you to query the checks and decide on the steps after the results are generated. List checks that belong to a specific suite: USD oc get -n openshift-compliance compliancecheckresults \ -l compliance.openshift.io/suite=workers-compliancesuite List checks that belong to a specific scan: USD oc get -n openshift-compliance compliancecheckresults \ -l compliance.openshift.io/scan=workers-scan Not all ComplianceCheckResult objects create ComplianceRemediation objects. Only ComplianceCheckResult objects that can be remediated automatically do. A ComplianceCheckResult object has a related remediation if it is labeled with the compliance.openshift.io/automated-remediation label. The name of the remediation is the same as the name of the check. 
List all failing checks that can be remediated automatically: USD oc get -n openshift-compliance compliancecheckresults \ -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation' List all failing checks sorted by severity: USD oc get compliancecheckresults -n openshift-compliance \ -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/check-severity=high' Example output NAME STATUS SEVERITY nist-moderate-modified-master-configure-crypto-policy FAIL high nist-moderate-modified-master-coreos-pti-kernel-argument FAIL high nist-moderate-modified-master-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-master-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-master-enable-fips-mode FAIL high nist-moderate-modified-master-no-empty-passwords FAIL high nist-moderate-modified-master-selinux-state FAIL high nist-moderate-modified-worker-configure-crypto-policy FAIL high nist-moderate-modified-worker-coreos-pti-kernel-argument FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-worker-enable-fips-mode FAIL high nist-moderate-modified-worker-no-empty-passwords FAIL high nist-moderate-modified-worker-selinux-state FAIL high ocp4-moderate-configure-network-policies-namespaces FAIL high ocp4-moderate-fips-mode-enabled-on-all-nodes FAIL high List all failing checks that must be remediated manually: USD oc get -n openshift-compliance compliancecheckresults \ -l 'compliance.openshift.io/check-status=FAIL,!compliance.openshift.io/automated-remediation' The manual remediation steps are typically stored in the description attribute in the ComplianceCheckResult object. Table 5.10. ComplianceCheckResult Status ComplianceCheckResult Status Description PASS Compliance check ran to completion and passed. FAIL Compliance check ran to completion and failed. INFO Compliance check ran to completion and found something not severe enough to be considered an error. MANUAL Compliance check does not have a way to automatically assess the success or failure and must be checked manually. INCONSISTENT Compliance check reports different results from different sources, typically cluster nodes. ERROR Compliance check ran, but could not complete properly. NOT-APPLICABLE Compliance check did not run because it is not applicable or not selected. 5.6.5.2. Reviewing a remediation Review both the ComplianceRemediation object and the ComplianceCheckResult object that owns the remediation. The ComplianceCheckResult object contains human-readable descriptions of what the check does and the hardening trying to prevent, as well as other metadata like the severity and the associated security controls. The ComplianceRemediation object represents a way to fix the problem described in the ComplianceCheckResult . After first scan, check for remediations with the state MissingDependencies . Below is an example of a check and a remediation called sysctl-net-ipv4-conf-all-accept-redirects . 
This example is redacted to only show spec and status and omits metadata : spec: apply: false current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf mode: 0644 contents: source: data:,net.ipv4.conf.all.accept_redirects%3D0 outdated: {} status: applicationState: NotApplied The remediation payload is stored in the spec.current attribute. The payload can be any Kubernetes object, but because this remediation was produced by a node scan, the remediation payload in the above example is a MachineConfig object. For Platform scans, the remediation payload is often a different kind of an object (for example, a ConfigMap or Secret object), but typically applying that remediation is up to the administrator, because otherwise the Compliance Operator would have required a very broad set of permissions to manipulate any generic Kubernetes object. An example of remediating a Platform check is provided later in the text. To see exactly what the remediation does when applied, the MachineConfig object contents use the Ignition objects for the configuration. See the Ignition specification for further information about the format. In our example, the spec.config.storage.files[0].path attribute specifies the file that is being create by this remediation ( /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf ) and the spec.config.storage.files[0].contents.source attribute specifies the contents of that file. Note The contents of the files are URL-encoded. Use the following Python script to view the contents: USD echo "net.ipv4.conf.all.accept_redirects%3D0" | python3 -c "import sys, urllib.parse; print(urllib.parse.unquote(''.join(sys.stdin.readlines())))" Example output net.ipv4.conf.all.accept_redirects=0 Important The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results. 5.6.5.3. Applying remediation when using customized machine config pools When you create a custom MachineConfigPool , add a label to the MachineConfigPool so that machineConfigPoolSelector present in the KubeletConfig can match the label with MachineConfigPool . Important Do not set protectKernelDefaults: false in the KubeletConfig file, because the MachineConfigPool object might fail to unpause unexpectedly after the Compliance Operator finishes applying remediation. Procedure List the nodes. USD oc get nodes -n openshift-compliance Example output NAME STATUS ROLES AGE VERSION ip-10-0-128-92.us-east-2.compute.internal Ready master 5h21m v1.31.3 ip-10-0-158-32.us-east-2.compute.internal Ready worker 5h17m v1.31.3 ip-10-0-166-81.us-east-2.compute.internal Ready worker 5h17m v1.31.3 ip-10-0-171-170.us-east-2.compute.internal Ready master 5h21m v1.31.3 ip-10-0-197-35.us-east-2.compute.internal Ready master 5h22m v1.31.3 Add a label to nodes. USD oc -n openshift-compliance \ label node ip-10-0-166-81.us-east-2.compute.internal \ node-role.kubernetes.io/<machine_config_pool_name>= Example output node/ip-10-0-166-81.us-east-2.compute.internal labeled Create custom MachineConfigPool CR. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: <machine_config_pool_name> labels: pools.operator.machineconfiguration.openshift.io/<machine_config_pool_name>: '' 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,<machine_config_pool_name>]} nodeSelector: matchLabels: node-role.kubernetes.io/<machine_config_pool_name>: "" 1 The labels field defines label name to add for Machine config pool(MCP). Verify MCP created successfully. USD oc get mcp -w 5.6.5.4. Evaluating KubeletConfig rules against default configuration values OpenShift Container Platform infrastructure might contain incomplete configuration files at run time, and nodes assume default configuration values for missing configuration options. Some configuration options can be passed as command line arguments. As a result, the Compliance Operator cannot verify if the configuration file on the node is complete because it might be missing options used in the rule checks. To prevent false negative results where the default configuration value passes a check, the Compliance Operator uses the Node/Proxy API to fetch the configuration for each node in a node pool, then all configuration options that are consistent across nodes in the node pool are stored in a file that represents the configuration for all nodes within that node pool. This increases the accuracy of the scan results. No additional configuration changes are required to use this feature with default master and worker node pools configurations. 5.6.5.5. Scanning custom node pools The Compliance Operator does not maintain a copy of each node pool configuration. The Compliance Operator aggregates consistent configuration options for all nodes within a single node pool into one copy of the configuration file. The Compliance Operator then uses the configuration file for a particular node pool to evaluate rules against nodes within that pool. Procedure Add the example role to the ScanSetting object that will be stored in the ScanSettingBinding CR: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master - example scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' Create a scan that uses the ScanSettingBinding CR: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis namespace: openshift-compliance profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis-node settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default Verification The Platform KubeletConfig rules are checked through the Node/Proxy object. You can find those rules by running the following command: USD oc get rules -o json | jq '.items[] | select(.checkType == "Platform") | select(.metadata.name | contains("ocp4-kubelet-")) | .metadata.name' 5.6.5.6. Remediating KubeletConfig sub pools KubeletConfig remediation labels can be applied to MachineConfigPool sub-pools. Procedure Add a label to the sub-pool MachineConfigPool CR: USD oc label mcp <sub-pool-name> pools.operator.machineconfiguration.openshift.io/<sub-pool-name>= 5.6.5.7. 
Applying a remediation The boolean attribute spec.apply controls whether the remediation should be applied by the Compliance Operator. You can apply the remediation by setting the attribute to true : USD oc -n openshift-compliance \ patch complianceremediations/<scan-name>-sysctl-net-ipv4-conf-all-accept-redirects \ --patch '{"spec":{"apply":true}}' --type=merge After the Compliance Operator processes the applied remediation, the status.applicationState attribute changes to Applied , or to Error if the remediation is incorrect. When a machine config remediation is applied, that remediation along with all other applied remediations are rendered into a MachineConfig object named 75-USDscan-name-USDsuite-name . That MachineConfig object is subsequently rendered by the Machine Config Operator and finally applied to all the nodes in a machine config pool by an instance of the machine config daemon running on each node. Note that when the Machine Config Operator applies a new MachineConfig object to nodes in a pool, all the nodes belonging to the pool are rebooted. This might be inconvenient when applying multiple remediations, each of which re-renders the composite 75-USDscan-name-USDsuite-name MachineConfig object. To prevent applying the remediation immediately, you can pause the machine config pool by setting the .spec.paused attribute of a MachineConfigPool object to true . The Compliance Operator can apply remediations automatically. Set autoApplyRemediations: true in the ScanSetting top-level object. Warning Applying remediations automatically should only be done with careful consideration. Important The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results. 5.6.5.8. Remediating a platform check manually Checks for Platform scans typically have to be remediated manually by the administrator for two reasons: It is not always possible to automatically determine the value that must be set. One of the checks requires that a list of allowed registries is provided, but the scanner has no way of knowing which registries the organization wants to allow. Different checks modify different API objects, requiring automated remediation to possess root or superuser access to modify objects in the cluster, which is not advised. Procedure The example below uses the ocp4-ocp-allowed-registries-for-import rule, which would fail on a default OpenShift Container Platform installation.
Inspect the rule by running oc get rule.compliance/ocp4-ocp-allowed-registries-for-import -oyaml . The rule limits the registries that users are allowed to import images from by setting the allowedRegistriesForImport attribute. The warning attribute of the rule also shows the API object that is checked, so you can modify that object to remediate the issue: USD oc edit image.config.openshift.io/cluster Example output apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2020-09-10T10:12:54Z" generation: 2 name: cluster resourceVersion: "363096" selfLink: /apis/config.openshift.io/v1/images/cluster uid: 2dcb614e-2f8a-4a23-ba9a-8e33cd0ff77e spec: allowedRegistriesForImport: - domainName: registry.redhat.io status: externalRegistryHostnames: - default-route-openshift-image-registry.apps.user-cluster-09-10-12-07.devcluster.openshift.com internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 Re-run the scan: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= 5.6.5.9. Updating remediations When a new version of compliance content is used, it might deliver a new and different version of a remediation than the previous version. The Compliance Operator will keep the old version of the remediation applied. The OpenShift Container Platform administrator is also notified of the new version to review and apply. A ComplianceRemediation object that had been applied earlier, but was later updated, changes its status to Outdated . The outdated objects are labeled so that they can be searched for easily. The previously applied remediation contents would then be stored in the spec.outdated attribute of a ComplianceRemediation object and the new updated contents would be stored in the spec.current attribute. After updating the content to a newer version, the administrator then needs to review the remediation. As long as the spec.outdated attribute exists, it would be used to render the resulting MachineConfig object. After the spec.outdated attribute is removed, the Compliance Operator re-renders the resulting MachineConfig object, which causes the Operator to push the configuration to the nodes. Procedure Search for any outdated remediations: USD oc -n openshift-compliance get complianceremediations \ -l complianceoperator.openshift.io/outdated-remediation= Example output NAME STATE workers-scan-no-empty-passwords Outdated The currently applied remediation is stored in the Outdated attribute and the new, unapplied remediation is stored in the Current attribute. If you are satisfied with the new version, remove the Outdated field. If you want to keep the updated content, remove the Current and Outdated attributes. Apply the newer version of the remediation: USD oc -n openshift-compliance patch complianceremediations workers-scan-no-empty-passwords \ --type json -p '[{"op":"remove", "path":"/spec/outdated"}]' The remediation state will switch from Outdated to Applied : USD oc get -n openshift-compliance complianceremediations workers-scan-no-empty-passwords Example output NAME STATE workers-scan-no-empty-passwords Applied The nodes will apply the newer remediation version and reboot. Important The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results. 5.6.5.10. Unapplying a remediation It might be required to unapply a remediation that was previously applied.
Procedure Set the apply flag to false : USD oc -n openshift-compliance \ patch complianceremediations/rhcos4-moderate-worker-sysctl-net-ipv4-conf-all-accept-redirects \ --patch '{"spec":{"apply":false}}' --type=merge The remediation status will change to NotApplied and the composite MachineConfig object would be re-rendered to not include the remediation. Important All affected nodes with the remediation will be rebooted. Important The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results. 5.6.5.11. Removing a KubeletConfig remediation KubeletConfig remediations are included in node-level profiles. In order to remove a KubeletConfig remediation, you must manually remove it from the KubeletConfig objects. This example demonstrates how to remove the compliance check for the one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available remediation. Procedure Locate the scan-name and compliance check for the one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available remediation: USD oc -n openshift-compliance get remediation \ one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -o yaml Example output apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: annotations: compliance.openshift.io/xccdf-value-used: var-kubelet-evictionhard-imagefs-available creationTimestamp: "2022-01-05T19:52:27Z" generation: 1 labels: compliance.openshift.io/scan-name: one-rule-tp-node-master 1 compliance.openshift.io/suite: one-rule-ssb-node name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available uid: fe8e1577-9060-4c59-95b2-3e2c51709adc resourceVersion: "84820" uid: 5339d21a-24d7-40cb-84d2-7a2ebb015355 spec: apply: true current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: kubeletConfig: evictionHard: imagefs.available: 10% 2 outdated: {} type: Configuration status: applicationState: Applied 1 The scan name of the remediation. 2 The remediation that was added to the KubeletConfig objects. Note If the remediation invokes an evictionHard kubelet configuration, you must specify all of the evictionHard parameters: memory.available , nodefs.available , nodefs.inodesFree , imagefs.available , and imagefs.inodesFree . If you do not specify all parameters, only the specified parameters are applied and the remediation will not function properly. 
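For reference, the following is a minimal sketch of a KubeletConfig fragment that sets all five evictionHard parameters together, as the preceding note requires. The threshold values and the pool selector label are illustrative assumptions only and are not taken from any specific remediation; substitute the values that your remediation and environment call for.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: compliance-operator-kubelet-master
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/master: ""  # assumed pool label
  kubeletConfig:
    evictionHard:
      memory.available: "500Mi"   # illustrative value
      nodefs.available: "10%"     # illustrative value
      nodefs.inodesFree: "5%"     # illustrative value
      imagefs.available: "15%"    # illustrative value
      imagefs.inodesFree: "10%"   # illustrative value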
Remove the remediation: Set apply to false for the remediation object: USD oc -n openshift-compliance patch \ complianceremediations/one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available \ -p '{"spec":{"apply":false}}' --type=merge Using the scan-name , find the KubeletConfig object that the remediation was applied to: USD oc -n openshift-compliance get kubeletconfig \ --selector compliance.openshift.io/scan-name=one-rule-tp-node-master Example output NAME AGE compliance-operator-kubelet-master 2m34s Manually remove the remediation, imagefs.available: 10% , from the KubeletConfig object: USD oc edit -n openshift-compliance KubeletConfig compliance-operator-kubelet-master Important All affected nodes with the remediation will be rebooted. Note You must also exclude the rule from any scheduled scans in your tailored profiles that auto-applies the remediation, otherwise, the remediation will be re-applied during the scheduled scan. 5.6.5.12. Inconsistent ComplianceScan The ScanSetting object lists the node roles that the compliance scans generated from the ScanSetting or ScanSettingBinding objects would scan. Each node role usually maps to a machine config pool. Important It is expected that all machines in a machine config pool are identical and all scan results from the nodes in a pool should be identical. If some of the results are different from others, the Compliance Operator flags a ComplianceCheckResult object where some of the nodes will report as INCONSISTENT . All ComplianceCheckResult objects are also labeled with compliance.openshift.io/inconsistent-check . Because the number of machines in a pool might be quite large, the Compliance Operator attempts to find the most common state and list the nodes that differ from the common state. The most common state is stored in the compliance.openshift.io/most-common-status annotation and the annotation compliance.openshift.io/inconsistent-source contains pairs of hostname:status of check statuses that differ from the most common status. If no common state can be found, all the hostname:status pairs are listed in the compliance.openshift.io/inconsistent-source annotation . If possible, a remediation is still created so that the cluster can converge to a compliant status. However, this might not always be possible and correcting the difference between nodes must be done manually. The compliance scan must be re-run to get a consistent result by annotating the scan with the compliance.openshift.io/rescan= option: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= 5.6.5.13. Additional resources Modifying nodes . 5.6.6. Performing advanced Compliance Operator tasks The Compliance Operator includes options for advanced users for the purpose of debugging or integration with existing tooling. 5.6.6.1. Using the ComplianceSuite and ComplianceScan objects directly While it is recommended that users take advantage of the ScanSetting and ScanSettingBinding objects to define the suites and scans, there are valid use cases to define the ComplianceSuite objects directly: Specifying only a single rule to scan. This can be useful for debugging together with the debug: true attribute which increases the OpenSCAP scanner verbosity, as the debug mode tends to get quite verbose otherwise. Limiting the test to one rule helps to lower the amount of debug information. Providing a custom nodeSelector. In order for a remediation to be applicable, the nodeSelector must match a pool. 
Pointing the Scan to a bespoke config map with a tailoring file. For testing or development when the overhead of parsing profiles from bundles is not required. The following example shows a ComplianceSuite that scans the worker machines with only a single rule: apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... debug: true rule: xccdf_org.ssgproject.content_rule_no_direct_root_logins nodeSelector: node-role.kubernetes.io/worker: "" The ComplianceSuite object and the ComplianceScan objects referred to above specify several attributes in a format that OpenSCAP expects. To find out the profile, content, or rule values, you can start by creating a similar Suite from ScanSetting and ScanSettingBinding or inspect the objects parsed from the ProfileBundle objects like rules or profiles. Those objects contain the xccdf_org identifiers you can use to refer to them from a ComplianceSuite . 5.6.6.2. Setting PriorityClass for ScanSetting scans In large scale environments, the default PriorityClass object can be too low to guarantee Pods execute scans on time. For clusters that must maintain compliance or guarantee automated scanning, it is recommended to set the PriorityClass variable to ensure the Compliance Operator is always given priority in resource constrained situations. Procedure Set the PriorityClass variable: apiVersion: compliance.openshift.io/v1alpha1 strictNodeScan: true metadata: name: default namespace: openshift-compliance priorityClass: compliance-high-priority 1 kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker scanTolerations: - operator: Exists 1 If the PriorityClass referenced in the ScanSetting cannot be found, the Operator will leave the PriorityClass empty, issue a warning, and continue scheduling scans without a PriorityClass . 5.6.6.3. Using raw tailored profiles While the TailoredProfile CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. In addition, if your organization has been using OpenScap previously, you may have an existing XCCDF tailoring file and can reuse it. The ComplianceSuite object contains an optional TailoringConfigMap attribute that you can point to a custom tailoring file. The value of the TailoringConfigMap attribute is a name of a config map which must contain a key called tailoring.xml and the value of this key is the tailoring contents. 
Procedure Create the ConfigMap object from a file: USD oc -n openshift-compliance \ create configmap nist-moderate-modified \ --from-file=tailoring.xml=/path/to/the/tailoringFile.xml Reference the tailoring file in a scan that belongs to a suite: apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: debug: true scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... debug: true tailoringConfigMap: name: nist-moderate-modified nodeSelector: node-role.kubernetes.io/worker: "" 5.6.6.4. Performing a rescan Typically you will want to re-run a scan on a defined schedule, like every Monday or daily. It can also be useful to re-run a scan once after fixing a problem on a node. To perform a single scan, annotate the scan with the compliance.openshift.io/rescan= option: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= A rescan generates four additional mc for rhcos-moderate profile: USD oc get mc Example output 75-worker-scan-chronyd-or-ntpd-specify-remote-server 75-worker-scan-configure-usbguard-auditbackend 75-worker-scan-service-usbguard-enabled 75-worker-scan-usbguard-allow-hid-and-hub Important When the scan setting default-auto-apply label is applied, remediations are applied automatically and outdated remediations automatically update. If there are remediations that were not applied due to dependencies, or remediations that had been outdated, rescanning applies the remediations and might trigger a reboot. Only remediations that use MachineConfig objects trigger reboots. If there are no updates or dependencies to be applied, no reboot occurs. 5.6.6.5. Setting custom storage size for results While the custom resources such as ComplianceCheckResult represent an aggregated result of one check across all scanned nodes, it can be useful to review the raw results as produced by the scanner. The raw results are produced in the ARF format and can be large (tens of megabytes per node), it is impractical to store them in a Kubernetes resource backed by the etcd key-value store. Instead, every scan creates a persistent volume (PV) which defaults to 1GB size. Depending on your environment, you may want to increase the PV size accordingly. This is done using the rawResultStorage.size attribute that is exposed in both the ScanSetting and ComplianceScan resources. A related parameter is rawResultStorage.rotation which controls how many scans are retained in the PV before the older scans are rotated. The default value is 3, setting the rotation policy to 0 disables the rotation. Given the default rotation policy and an estimate of 100MB per a raw ARF scan report, you can calculate the right PV size for your environment. 5.6.6.5.1. Using custom result storage values Because OpenShift Container Platform can be deployed in a variety of public clouds or bare metal, the Compliance Operator cannot determine available storage configurations. By default, the Compliance Operator will try to create the PV for storing results using the default storage class of the cluster, but a custom storage class can be configured using the rawResultStorage.StorageClassName attribute. Important If your cluster does not specify a default storage class, this attribute must be set. 
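As a rough, assumed sizing example: a node scan of a ten-node pool at about 100MB of raw ARF output per node consumes about 1GB per scan run, so retaining ten runs ( rotation: 10 ) calls for a persistent volume of roughly 10GB, which matches the sizing used in the following example.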
Configure the ScanSetting custom resource to use a standard storage class and create persistent volumes that are 10GB in size and keep the last 10 results: Example ScanSetting CR apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' 5.6.6.6. Applying remediations generated by suite scans Although you can use the autoApplyRemediations boolean parameter in a ComplianceSuite object, you can alternatively annotate the object with compliance.openshift.io/apply-remediations . This allows the Operator to apply all of the created remediations. Procedure Apply the compliance.openshift.io/apply-remediations annotation by running: USD oc -n openshift-compliance \ annotate compliancesuites/workers-compliancesuite compliance.openshift.io/apply-remediations= 5.6.6.7. Automatically update remediations In some cases, a scan with newer content might mark remediations as OUTDATED . As an administrator, you can apply the compliance.openshift.io/remove-outdated annotation to apply new remediations and remove the outdated ones. Procedure Apply the compliance.openshift.io/remove-outdated annotation: USD oc -n openshift-compliance \ annotate compliancesuites/workers-compliancesuite compliance.openshift.io/remove-outdated= Alternatively, set the autoUpdateRemediations flag in a ScanSetting or ComplianceSuite object to update the remediations automatically. 5.6.6.8. Creating a custom SCC for the Compliance Operator In some environments, you must create a custom Security Context Constraints (SCC) file to ensure the correct permissions are available to the Compliance Operator api-resource-collector . Prerequisites You must have admin privileges. Procedure Define the SCC in a YAML file named restricted-adjusted-compliance.yaml : SecurityContextConstraints object definition allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs kind: SecurityContextConstraints metadata: name: restricted-adjusted-compliance priority: 30 1 readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - SETUID - SETGID - MKNOD runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs supplementalGroups: type: RunAsAny users: - system:serviceaccount:openshift-compliance:api-resource-collector 2 volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret 1 The priority of this SCC must be higher than any other SCC that applies to the system:authenticated group. 2 Service Account used by Compliance Operator Scanner pod. 
Create the SCC: USD oc create -n openshift-compliance -f restricted-adjusted-compliance.yaml Example output securitycontextconstraints.security.openshift.io/restricted-adjusted-compliance created Verification Verify the SCC was created: USD oc get -n openshift-compliance scc restricted-adjusted-compliance Example output NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES restricted-adjusted-compliance false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny 30 false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] 5.6.6.9. Additional resources Managing security context constraints 5.6.7. Troubleshooting Compliance Operator scans This section describes how to troubleshoot the Compliance Operator. The information can be useful either to diagnose a problem or provide information in a bug report. Some general tips: The Compliance Operator emits Kubernetes events when something important happens. You can either view all events in the cluster using the command: USD oc get events -n openshift-compliance Or view events for an object like a scan using the command: USD oc describe -n openshift-compliance compliancescan/cis-compliance The Compliance Operator consists of several controllers, approximately one per API object. It could be useful to filter only those controllers that correspond to the API object having issues. If a ComplianceRemediation cannot be applied, view the messages from the remediationctrl controller. You can filter the messages from a single controller by parsing with jq : USD oc -n openshift-compliance logs compliance-operator-775d7bddbd-gj58f \ | jq -c 'select(.logger == "profilebundlectrl")' The timestamps are logged as seconds since UNIX epoch in UTC. To convert them to a human-readable date, use date -d @timestamp --utc , for example: USD date -d @1596184628.955853 --utc Many custom resources, most importantly ComplianceSuite and ScanSetting , allow the debug option to be set. Enabling this option increases verbosity of the OpenSCAP scanner pods, as well as some other helper pods. If a single rule is passing or failing unexpectedly, it could be helpful to run a single scan or a suite with only that rule to find the rule ID from the corresponding ComplianceCheckResult object and use it as the rule attribute value in a Scan CR. Then, together with the debug option enabled, the scanner container logs in the scanner pod would show the raw OpenSCAP logs. 5.6.7.1. Anatomy of a scan The following sections outline the components and stages of Compliance Operator scans. 5.6.7.1.1. Compliance sources The compliance content is stored in Profile objects that are generated from a ProfileBundle object. The Compliance Operator creates a ProfileBundle object for the cluster and another for the cluster nodes. USD oc get -n openshift-compliance profilebundle.compliance USD oc get -n openshift-compliance profile.compliance The ProfileBundle objects are processed by deployments labeled with the Bundle name. To troubleshoot an issue with the Bundle , you can find the deployment and view logs of the pods in a deployment: USD oc logs -n openshift-compliance -lprofile-bundle=ocp4 -c profileparser USD oc get -n openshift-compliance deployments,pods -lprofile-bundle=ocp4 USD oc logs -n openshift-compliance pods/<pod-name> USD oc describe -n openshift-compliance pod/<pod-name> -c profileparser 5.6.7.1.2. 
The ScanSetting and ScanSettingBinding objects lifecycle and debugging With valid compliance content sources, the high-level ScanSetting and ScanSettingBinding objects can be used to generate ComplianceSuite and ComplianceScan objects: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: my-companys-constraints debug: true # For each role, a separate scan will be created pointing # to a node-role specified in roles roles: - worker --- apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: my-companys-compliance-requirements profiles: # Node checks - name: rhcos4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1 Both ScanSetting and ScanSettingBinding objects are handled by the same controller tagged with logger=scansettingbindingctrl . These objects have no status. Any issues are communicated in form of events: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuiteCreated 9m52s scansettingbindingctrl ComplianceSuite openshift-compliance/my-companys-compliance-requirements created Now a ComplianceSuite object is created. The flow continues to reconcile the newly created ComplianceSuite . 5.6.7.1.3. ComplianceSuite custom resource lifecycle and debugging The ComplianceSuite CR is a wrapper around ComplianceScan CRs. The ComplianceSuite CR is handled by controller tagged with logger=suitectrl . This controller handles creating scans from a suite, reconciling and aggregating individual Scan statuses into a single Suite status. If a suite is set to execute periodically, the suitectrl also handles creating a CronJob CR that re-runs the scans in the suite after the initial run is done: USD oc get cronjobs Example output NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE <cron_name> 0 1 * * * False 0 <none> 151m For the most important issues, events are emitted. View them with oc describe compliancesuites/<name> . The Suite objects also have a Status subresource that is updated when any of Scan objects that belong to this suite update their Status subresource. After all expected scans are created, control is passed to the scan controller. 5.6.7.1.4. ComplianceScan custom resource lifecycle and debugging The ComplianceScan CRs are handled by the scanctrl controller. This is also where the actual scans happen and the scan results are created. Each scan goes through several phases: 5.6.7.1.4.1. Pending phase The scan is validated for correctness in this phase. If some parameters like storage size are invalid, the scan transitions to DONE with ERROR result, otherwise proceeds to the Launching phase. 5.6.7.1.4.2. Launching phase In this phase, several config maps that contain either environment for the scanner pods or directly the script that the scanner pods will be evaluating. List the config maps: USD oc -n openshift-compliance get cm \ -l compliance.openshift.io/scan-name=rhcos4-e8-worker,complianceoperator.openshift.io/scan-script= These config maps will be used by the scanner pods. If you ever needed to modify the scanner behavior, change the scanner debug level or print the raw results, modifying the config maps is the way to go. 
Afterwards, a persistent volume claim is created per scan to store the raw ARF results: USD oc get pvc -n openshift-compliance -lcompliance.openshift.io/scan-name=rhcos4-e8-worker The PVCs are mounted by a per-scan ResultServer deployment. A ResultServer is a simple HTTP server where the individual scanner pods upload the full ARF results to. Each server can run on a different node. The full ARF results might be very large and you cannot presume that it would be possible to create a volume that could be mounted from multiple nodes at the same time. After the scan is finished, the ResultServer deployment is scaled down. The PVC with the raw results can be mounted from another custom pod and the results can be fetched or inspected. The traffic between the scanner pods and the ResultServer is protected by mutual TLS protocols. Finally, the scanner pods are launched in this phase; one scanner pod for a Platform scan instance and one scanner pod per matching node for a node scan instance. The per-node pods are labeled with the node name. Each pod is always labeled with the ComplianceScan name: USD oc get pods -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,workload=scanner --show-labels Example output NAME READY STATUS RESTARTS AGE LABELS rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod 0/2 Completed 0 39m compliance.openshift.io/scan-name=rhcos4-e8-worker,targetNode=ip-10-0-169-90.eu-north-1.compute.internal,workload=scanner + The scan then proceeds to the Running phase. 5.6.7.1.4.3. Running phase The running phase waits until the scanner pods finish. The following terms and processes are in use in the running phase: init container : There is one init container called content-container . It runs the contentImage container and executes a single command that copies the contentFile to the /content directory shared with the other containers in this pod. scanner : This container runs the scan. For node scans, the container mounts the node filesystem as /host and mounts the content delivered by the init container. The container also mounts the entrypoint ConfigMap created in the Launching phase and executes it. The default script in the entrypoint ConfigMap executes OpenSCAP and stores the result files in the /results directory shared between the pod's containers. Logs from this pod can be viewed to determine what the OpenSCAP scanner checked. More verbose output can be viewed with the debug flag. logcollector : The logcollector container waits until the scanner container finishes. Then, it uploads the full ARF results to the ResultServer and separately uploads the XCCDF results along with scan result and OpenSCAP result code as a ConfigMap. These result config maps are labeled with the scan name ( compliance.openshift.io/scan-name=rhcos4-e8-worker ): USD oc describe cm/rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod Example output Name: rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod Namespace: openshift-compliance Labels: compliance.openshift.io/scan-name-scan=rhcos4-e8-worker complianceoperator.openshift.io/scan-result= Annotations: compliance-remediations/processed: compliance.openshift.io/scan-error-msg: compliance.openshift.io/scan-result: NON-COMPLIANT OpenSCAP-scan-result/node: ip-10-0-169-90.eu-north-1.compute.internal Data ==== exit-code: ---- 2 results: ---- <?xml version="1.0" encoding="UTF-8"?> ... 
Scanner pods for Platform scans are similar, except: There is one extra init container called api-resource-collector that reads the OpenSCAP content provided by the content-container init container, figures out which API resources the content needs to examine and stores those API resources to a shared directory where the scanner container would read them from. The scanner container does not need to mount the host file system. When the scanner pods are done, the scans move on to the Aggregating phase. 5.6.7.1.4.4. Aggregating phase In the aggregating phase, the scan controller spawns yet another pod called the aggregator pod. Its purpose is to take the result ConfigMap objects, read the results and for each check result create the corresponding Kubernetes object. If the check failure can be automatically remediated, a ComplianceRemediation object is created. To provide human-readable metadata for the checks and remediations, the aggregator pod also mounts the OpenSCAP content using an init container. When a config map is processed by an aggregator pod, it is labeled with the compliance-remediations/processed label. The results of this phase are ComplianceCheckResult objects: USD oc get compliancecheckresults -lcompliance.openshift.io/scan-name=rhcos4-e8-worker Example output NAME STATUS SEVERITY rhcos4-e8-worker-accounts-no-uid-except-zero PASS high rhcos4-e8-worker-audit-rules-dac-modification-chmod FAIL medium and ComplianceRemediation objects: USD oc get complianceremediations -lcompliance.openshift.io/scan-name=rhcos4-e8-worker Example output NAME STATE rhcos4-e8-worker-audit-rules-dac-modification-chmod NotApplied rhcos4-e8-worker-audit-rules-dac-modification-chown NotApplied rhcos4-e8-worker-audit-rules-execution-chcon NotApplied rhcos4-e8-worker-audit-rules-execution-restorecon NotApplied rhcos4-e8-worker-audit-rules-execution-semanage NotApplied rhcos4-e8-worker-audit-rules-execution-setfiles NotApplied After these CRs are created, the aggregator pod exits and the scan moves on to the Done phase. 5.6.7.1.4.5. Done phase In the final scan phase, the scan resources are cleaned up if needed and the ResultServer deployment is either scaled down (if the scan was one-time) or deleted if the scan is continuous; the next scan instance would then recreate the deployment again. It is also possible to trigger a re-run of a scan in the Done phase by annotating it: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= After the scan reaches the Done phase, nothing else happens on its own unless the remediations are set to be applied automatically with autoApplyRemediations: true . The OpenShift Container Platform administrator would now review the remediations and apply them as needed. If the remediations are set to be applied automatically, the ComplianceSuite controller takes over in the Done phase, pauses the machine config pool to which the scan maps and applies all the remediations in one go. If a remediation is applied, the ComplianceRemediation controller takes over. 5.6.7.1.5. ComplianceRemediation controller lifecycle and debugging The example scan has reported some findings. One of the remediations can be enabled by toggling its apply attribute to true : USD oc patch complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod --patch '{"spec":{"apply":true}}' --type=merge The ComplianceRemediation controller ( logger=remediationctrl ) reconciles the modified object.
The result of the reconciliation is change of status of the remediation object that is reconciled, but also a change of the rendered per-suite MachineConfig object that contains all the applied remediations. The MachineConfig object always begins with 75- and is named after the scan and the suite: USD oc get mc | grep 75- Example output 75-rhcos4-e8-worker-my-companys-compliance-requirements 3.2.0 2m46s The remediations the mc currently consists of are listed in the machine config's annotations: USD oc describe mc/75-rhcos4-e8-worker-my-companys-compliance-requirements Example output Name: 75-rhcos4-e8-worker-my-companys-compliance-requirements Labels: machineconfiguration.openshift.io/role=worker Annotations: remediation/rhcos4-e8-worker-audit-rules-dac-modification-chmod: The ComplianceRemediation controller's algorithm works like this: All currently applied remediations are read into an initial remediation set. If the reconciled remediation is supposed to be applied, it is added to the set. A MachineConfig object is rendered from the set and annotated with names of remediations in the set. If the set is empty (the last remediation was unapplied), the rendered MachineConfig object is removed. If and only if the rendered machine config is different from the one already applied in the cluster, the applied MC is updated (or created, or deleted). Creating or modifying a MachineConfig object triggers a reboot of nodes that match the machineconfiguration.openshift.io/role label - see the Machine Config Operator documentation for more details. The remediation loop ends once the rendered machine config is updated, if needed, and the reconciled remediation object status is updated. In our case, applying the remediation would trigger a reboot. After the reboot, annotate the scan to re-run it: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= The scan will run and finish. Check for the remediation to pass: USD oc -n openshift-compliance \ get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod Example output NAME STATUS SEVERITY rhcos4-e8-worker-audit-rules-dac-modification-chmod PASS medium 5.6.7.1.6. Useful labels Each pod that is spawned by the Compliance Operator is labeled specifically with the scan it belongs to and the work it does. The scan identifier is labeled with the compliance.openshift.io/scan-name label. The workload identifier is labeled with the workload label. The Compliance Operator schedules the following workloads: scanner : Performs the compliance scan. resultserver : Stores the raw results for the compliance scan. aggregator : Aggregates the results, detects inconsistencies and outputs result objects (checkresults and remediations). suitererunner : Will tag a suite to be re-run (when a schedule is set). profileparser : Parses a datastream and creates the appropriate profiles, rules and variables. When debugging and logs are required for a certain workload, run: USD oc logs -l workload=<workload_name> -c <container_name> 5.6.7.2. Increasing Compliance Operator resource limits In some cases, the Compliance Operator might require more memory than the default limits allow. The best way to mitigate this issue is to set custom resource limits. To increase the default memory and CPU limits of scanner pods, see `ScanSetting` Custom resource . 
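Before increasing any limits, it can help to confirm how much memory the Operator and scanner pods are actually consuming. A minimal sketch, assuming the cluster metrics API is available:
oc adm top pods -n openshift-compliance
Pods that repeatedly restart with an OOMKilled reason in their last state are the usual sign that the limits described below need to be raised.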
Procedure To increase the Operator's memory limits to 500 Mi, create the following patch file named co-memlimit-patch.yaml : spec: config: resources: limits: memory: 500Mi Apply the patch file: USD oc patch sub compliance-operator -nopenshift-compliance --patch-file co-memlimit-patch.yaml --type=merge 5.6.7.3. Configuring Operator resource constraints The resources field defines Resource Constraints for all the containers in the Pod created by the Operator Lifecycle Manager (OLM). Note Resource Constraints applied in this process overwrites the existing resource constraints. Procedure Inject a request of 0.25 cpu and 64 Mi of memory, and a limit of 0.5 cpu and 128 Mi of memory in each container by editing the Subscription object: kind: Subscription metadata: name: compliance-operator namespace: openshift-compliance spec: package: package-name channel: stable config: resources: requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" 5.6.7.4. Configuring ScanSetting resources When using the Compliance Operator in a cluster that contains more than 500 MachineConfigs, the ocp4-pci-dss-api-checks-pod pod may pause in the init phase when performing a Platform scan. Note Resource constraints applied in this process overwrites the existing resource constraints. Procedure Confirm the ocp4-pci-dss-api-checks-pod pod is stuck in the Init:OOMKilled status: USD oc get pod ocp4-pci-dss-api-checks-pod -w Example output NAME READY STATUS RESTARTS AGE ocp4-pci-dss-api-checks-pod 0/2 Init:1/2 8 (5m56s ago) 25m ocp4-pci-dss-api-checks-pod 0/2 Init:OOMKilled 8 (6m19s ago) 26m Edit the scanLimits attribute in the ScanSetting CR to increase the available memory for the ocp4-pci-dss-api-checks-pod pod: timeout: 30m strictNodeScan: true metadata: name: default namespace: openshift-compliance kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker apiVersion: compliance.openshift.io/v1alpha1 maxRetryOnTimeout: 3 scanTolerations: - operator: Exists scanLimits: memory: 1024Mi 1 1 The default setting is 500Mi . Apply the ScanSetting CR to your cluster: USD oc apply -f scansetting.yaml 5.6.7.5. Configuring ScanSetting timeout The ScanSetting object has a timeout option that can be specified in the ComplianceScanSetting object as a duration string, such as 1h30m . If the scan does not finish within the specified timeout, the scan reattempts until the maxRetryOnTimeout limit is reached. Procedure To set a timeout and maxRetryOnTimeout in ScanSetting, modify an existing ScanSetting object: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' timeout: '10m0s' 1 maxRetryOnTimeout: 3 2 1 The timeout variable is defined as a duration string, such as 1h30m . The default value is 30m . To disable the timeout, set the value to 0s . 
2 The maxRetryOnTimeout variable defines how many times a retry is attempted. The default value is 3 . 5.6.7.6. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 5.6.8. Using the oc-compliance plugin Although the Compliance Operator automates many of the checks and remediations for the cluster, the full process of bringing a cluster into compliance often requires administrator interaction with the Compliance Operator API and other components. The oc-compliance plugin makes the process easier. 5.6.8.1. Installing the oc-compliance plugin Procedure Extract the oc-compliance image to get the oc-compliance binary: USD podman run --rm -v ~/.local/bin:/mnt/out:Z registry.redhat.io/compliance/oc-compliance-rhel8:stable /bin/cp /usr/bin/oc-compliance /mnt/out/ Example output W0611 20:35:46.486903 11354 manifest.go:440] Chose linux/amd64 manifest from the manifest list. You can now run oc-compliance . 5.6.8.2. Fetching raw results When a compliance scan finishes, the results of the individual checks are listed in the resulting ComplianceCheckResult custom resource (CR). However, an administrator or auditor might require the complete details of the scan. The OpenSCAP tool creates an Advanced Recording Format (ARF) formatted file with the detailed results. This ARF file is too large to store in a config map or other standard Kubernetes resource, so a persistent volume (PV) is created to contain it. Procedure Fetching the results from the PV with the Compliance Operator is a four-step process. However, with the oc-compliance plugin, you can use a single command: USD oc compliance fetch-raw <object-type> <object-name> -o <output-path> <object-type> can be either scansettingbinding , compliancescan or compliancesuite , depending on which of these objects the scans were launched with. <object-name> is the name of the binding, suite, or scan object to gather the ARF file for, and <output-path> is the local directory to place the results. For example: USD oc compliance fetch-raw scansettingbindings my-binding -o /tmp/ Example output Fetching results for my-binding scans: ocp4-cis, ocp4-cis-node-worker, ocp4-cis-node-master Fetching raw compliance results for scan 'ocp4-cis'....... The raw compliance results are available in the following directory: /tmp/ocp4-cis Fetching raw compliance results for scan 'ocp4-cis-node-worker'........... The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-worker Fetching raw compliance results for scan 'ocp4-cis-node-master'...... 
The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-master View the list of files in the directory: USD ls /tmp/ocp4-cis-node-master/ Example output ocp4-cis-node-master-ip-10-0-128-89.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-150-5.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-163-32.ec2.internal-pod.xml.bzip2 Extract the results: USD bunzip2 -c resultsdir/worker-scan/worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 > resultsdir/worker-scan/worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml View the results: USD ls resultsdir/worker-scan/ Example output worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 worker-scan-stage-459-tqkg7-compute-1-pod.xml.bzip2 5.6.8.3. Re-running scans Although it is possible to run scans as scheduled jobs, you must often re-run a scan on demand, particularly after remediations are applied or when other changes to the cluster are made. Procedure Rerunning a scan with the Compliance Operator requires use of an annotation on the scan object. However, with the oc-compliance plugin you can rerun a scan with a single command. Enter the following command to rerun the scans for the ScanSettingBinding object named my-binding : USD oc compliance rerun-now scansettingbindings my-binding Example output Rerunning scans from 'my-binding': ocp4-cis Re-running scan 'openshift-compliance/ocp4-cis' 5.6.8.4. Using ScanSettingBinding custom resources When using the ScanSetting and ScanSettingBinding custom resources (CRs) that the Compliance Operator provides, it is possible to run scans for multiple profiles while using a common set of scan options, such as schedule , machine roles , tolerations , and so on. While that is easier than working with multiple ComplianceSuite or ComplianceScan objects, it can confuse new users. The oc compliance bind subcommand helps you create a ScanSettingBinding CR. Procedure Run: USD oc compliance bind [--dry-run] -N <binding name> [-S <scansetting name>] <objtype/objname> [..<objtype/objname>] If you omit the -S flag, the default scan setting provided by the Compliance Operator is used. The object type is the Kubernetes object type, which can be profile or tailoredprofile . More than one object can be provided. The object name is the name of the Kubernetes resource, such as .metadata.name . Add the --dry-run option to display the YAML file of the objects that are created. 
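As a quick preview, you can combine the --dry-run and -S options to print the ScanSettingBinding that would be generated without creating anything; a sketch, where the binding name is illustrative:
# Print the generated ScanSettingBinding YAML without applying it to the cluster
oc compliance bind --dry-run -N preview-binding -S default \
  profile/ocp4-cis profile/ocp4-cis-node
The fuller example that follows lists the available profiles and scan settings first and then creates a binding for real.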
For example, given the following profiles and scan settings: USD oc get profile.compliance -n openshift-compliance Example output NAME AGE VERSION ocp4-cis 3h49m 1.5.0 ocp4-cis-1-4 3h49m 1.4.0 ocp4-cis-1-5 3h49m 1.5.0 ocp4-cis-node 3h49m 1.5.0 ocp4-cis-node-1-4 3h49m 1.4.0 ocp4-cis-node-1-5 3h49m 1.5.0 ocp4-e8 3h49m ocp4-high 3h49m Revision 4 ocp4-high-node 3h49m Revision 4 ocp4-high-node-rev-4 3h49m Revision 4 ocp4-high-rev-4 3h49m Revision 4 ocp4-moderate 3h49m Revision 4 ocp4-moderate-node 3h49m Revision 4 ocp4-moderate-node-rev-4 3h49m Revision 4 ocp4-moderate-rev-4 3h49m Revision 4 ocp4-nerc-cip 3h49m ocp4-nerc-cip-node 3h49m ocp4-pci-dss 3h49m 3.2.1 ocp4-pci-dss-3-2 3h49m 3.2.1 ocp4-pci-dss-4-0 3h49m 4.0.0 ocp4-pci-dss-node 3h49m 3.2.1 ocp4-pci-dss-node-3-2 3h49m 3.2.1 ocp4-pci-dss-node-4-0 3h49m 4.0.0 ocp4-stig 3h49m V2R1 ocp4-stig-node 3h49m V2R1 ocp4-stig-node-v1r1 3h49m V1R1 ocp4-stig-node-v2r1 3h49m V2R1 ocp4-stig-v1r1 3h49m V1R1 ocp4-stig-v2r1 3h49m V2R1 rhcos4-e8 3h49m rhcos4-high 3h49m Revision 4 rhcos4-high-rev-4 3h49m Revision 4 rhcos4-moderate 3h49m Revision 4 rhcos4-moderate-rev-4 3h49m Revision 4 rhcos4-nerc-cip 3h49m rhcos4-stig 3h49m V2R1 rhcos4-stig-v1r1 3h49m V1R1 rhcos4-stig-v2r1 3h49m V2R1 USD oc get scansettings -n openshift-compliance Example output NAME AGE default 10m default-auto-apply 10m To apply the default settings to the ocp4-cis and ocp4-cis-node profiles, run: USD oc compliance bind -N my-binding profile/ocp4-cis profile/ocp4-cis-node Example output Creating ScanSettingBinding my-binding After the ScanSettingBinding CR is created, the bound profile begins scanning for both profiles with the related settings. Overall, this is the fastest way to begin scanning with the Compliance Operator. 5.6.8.5. Printing controls Compliance standards are generally organized into a hierarchy as follows: A benchmark is the top-level definition of a set of controls for a particular standard. For example, FedRAMP Moderate or Center for Internet Security (CIS) v.1.6.0. A control describes a family of requirements that must be met in order to be in compliance with the benchmark. For example, FedRAMP AC-01 (access control policy and procedures). A rule is a single check that is specific for the system being brought into compliance, and one or more of these rules map to a control. The Compliance Operator handles the grouping of rules into a profile for a single benchmark. It can be difficult to determine which controls that the set of rules in a profile satisfy. Procedure The oc compliance controls subcommand provides a report of the standards and controls that a given profile satisfies: USD oc compliance controls profile ocp4-cis-node Example output +-----------+----------+ | FRAMEWORK | CONTROLS | +-----------+----------+ | CIS-OCP | 1.1.1 | + +----------+ | | 1.1.10 | + +----------+ | | 1.1.11 | + +----------+ ... 5.6.8.6. Fetching compliance remediation details The Compliance Operator provides remediation objects that are used to automate the changes required to make the cluster compliant. The fetch-fixes subcommand can help you understand exactly which configuration remediations are used. Use the fetch-fixes subcommand to extract the remediation objects from a profile, rule, or ComplianceRemediation object into a directory to inspect. 
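For a quick look at a single rule rather than a whole profile, the subcommand also accepts a rule object. The following is a sketch only: the rule name is taken from the example output in the next procedure, the output directory is arbitrary, and the exact object-type spelling (rule or rules) can vary with the plugin version:
# Extract the remediation, if any, that a single rule would apply
mkdir -p /tmp/fixes
oc compliance fetch-fixes rule ocp4-api-server-audit-log-maxsize -o /tmp/fixes
ls /tmp/fixes
The procedure that follows shows the same idea at the profile and ComplianceRemediation level.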
Procedure View the remediations for a profile: USD oc compliance fetch-fixes profile ocp4-cis -o /tmp Example output No fixes to persist for rule 'ocp4-api-server-api-priority-flowschema-catch-all' 1 No fixes to persist for rule 'ocp4-api-server-api-priority-gate-enabled' No fixes to persist for rule 'ocp4-api-server-audit-log-maxbackup' Persisted rule fix to /tmp/ocp4-api-server-audit-log-maxsize.yaml No fixes to persist for rule 'ocp4-api-server-audit-log-path' No fixes to persist for rule 'ocp4-api-server-auth-mode-no-aa' No fixes to persist for rule 'ocp4-api-server-auth-mode-node' No fixes to persist for rule 'ocp4-api-server-auth-mode-rbac' No fixes to persist for rule 'ocp4-api-server-basic-auth' No fixes to persist for rule 'ocp4-api-server-bind-address' No fixes to persist for rule 'ocp4-api-server-client-ca' Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-cipher.yaml Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-config.yaml 1 The No fixes to persist warning is expected whenever there are rules in a profile that do not have a corresponding remediation, because either the rule cannot be remediated automatically or a remediation was not provided. You can view a sample of the YAML file. The head command will show you the first 10 lines: USD head /tmp/ocp4-api-server-audit-log-maxsize.yaml Example output apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: maximumFileSizeMegabytes: 100 View the remediation from a ComplianceRemediation object created after a scan: USD oc get complianceremediations -n openshift-compliance Example output NAME STATE ocp4-cis-api-server-encryption-provider-cipher NotApplied ocp4-cis-api-server-encryption-provider-config NotApplied USD oc compliance fetch-fixes complianceremediations ocp4-cis-api-server-encryption-provider-cipher -o /tmp Example output Persisted compliance remediation fix to /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml You can view a sample of the YAML file. The head command will show you the first 10 lines: USD head /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml Example output apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: encryption: type: aescbc Warning Use caution before applying remediations directly. Some remediations might not be applicable in bulk, such as the usbguard rules in the moderate profile. In these cases, allow the Compliance Operator to apply the rules because it addresses the dependencies and ensures that the cluster remains in a good state. 5.6.8.7. Viewing ComplianceCheckResult object details When scans are finished running, ComplianceCheckResult objects are created for the individual scan rules. The view-result subcommand provides a human-readable output of the ComplianceCheckResult object details. Procedure Run: USD oc compliance view-result ocp4-cis-scheduler-no-bind-address | [
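To review several results at once, you can loop over the failed checks of a scan and render each one. A minimal sketch, assuming the label conventions shown earlier in this document and a scan named ocp4-cis; run it with the openshift-compliance project selected, as in the single-check example above:
# Render a human-readable report for every failed check of one scan
for check in $(oc -n openshift-compliance get compliancecheckresults \
    -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/scan-name=ocp4-cis' \
    -o jsonpath='{.items[*].metadata.name}'); do
  oc compliance view-result "$check"
done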
"oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis",
"oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis",
"oc adm must-gather --image=USD(oc get csv compliance-operator.v1.6.0 -o=jsonpath='{.spec.relatedImages[?(@.name==\"must-gather\")].image}')",
"oc get profile.compliance -n openshift-compliance",
"NAME AGE VERSION ocp4-cis 3h49m 1.5.0 ocp4-cis-1-4 3h49m 1.4.0 ocp4-cis-1-5 3h49m 1.5.0 ocp4-cis-node 3h49m 1.5.0 ocp4-cis-node-1-4 3h49m 1.4.0 ocp4-cis-node-1-5 3h49m 1.5.0 ocp4-e8 3h49m ocp4-high 3h49m Revision 4 ocp4-high-node 3h49m Revision 4 ocp4-high-node-rev-4 3h49m Revision 4 ocp4-high-rev-4 3h49m Revision 4 ocp4-moderate 3h49m Revision 4 ocp4-moderate-node 3h49m Revision 4 ocp4-moderate-node-rev-4 3h49m Revision 4 ocp4-moderate-rev-4 3h49m Revision 4 ocp4-nerc-cip 3h49m ocp4-nerc-cip-node 3h49m ocp4-pci-dss 3h49m 3.2.1 ocp4-pci-dss-3-2 3h49m 3.2.1 ocp4-pci-dss-4-0 3h49m 4.0.0 ocp4-pci-dss-node 3h49m 3.2.1 ocp4-pci-dss-node-3-2 3h49m 3.2.1 ocp4-pci-dss-node-4-0 3h49m 4.0.0 ocp4-stig 3h49m V2R1 ocp4-stig-node 3h49m V2R1 ocp4-stig-node-v1r1 3h49m V1R1 ocp4-stig-node-v2r1 3h49m V2R1 ocp4-stig-v1r1 3h49m V1R1 ocp4-stig-v2r1 3h49m V2R1 rhcos4-e8 3h49m rhcos4-high 3h49m Revision 4 rhcos4-high-rev-4 3h49m Revision 4 rhcos4-moderate 3h49m Revision 4 rhcos4-moderate-rev-4 3h49m Revision 4 rhcos4-nerc-cip 3h49m rhcos4-stig 3h49m V2R1 rhcos4-stig-v1r1 3h49m V1R1 rhcos4-stig-v2r1 3h49m V2R1",
"oc get -n openshift-compliance -oyaml profiles.compliance rhcos4-e8",
"apiVersion: compliance.openshift.io/v1alpha1 description: 'This profile contains configuration checks for Red Hat Enterprise Linux CoreOS that align to the Australian Cyber Security Centre (ACSC) Essential Eight. A copy of the Essential Eight in Linux Environments guide can be found at the ACSC website: https://www.cyber.gov.au/acsc/view-all-content/publications/hardening-linux-workstations-and-servers' id: xccdf_org.ssgproject.content_profile_e8 kind: Profile metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/product: redhat_enterprise_linux_coreos_4 compliance.openshift.io/product-type: Node creationTimestamp: \"2022-10-19T12:06:49Z\" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-e8 namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: \"43699\" uid: 86353f70-28f7-40b4-bf0e-6289ec33675b rules: - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown - rhcos4-audit-rules-execution-chcon - rhcos4-audit-rules-execution-restorecon - rhcos4-audit-rules-execution-semanage - rhcos4-audit-rules-execution-setfiles - rhcos4-audit-rules-execution-setsebool - rhcos4-audit-rules-execution-seunshare - rhcos4-audit-rules-kernel-module-loading-delete - rhcos4-audit-rules-kernel-module-loading-finit - rhcos4-audit-rules-kernel-module-loading-init - rhcos4-audit-rules-login-events - rhcos4-audit-rules-login-events-faillock - rhcos4-audit-rules-login-events-lastlog - rhcos4-audit-rules-login-events-tallylog - rhcos4-audit-rules-networkconfig-modification - rhcos4-audit-rules-sysadmin-actions - rhcos4-audit-rules-time-adjtimex - rhcos4-audit-rules-time-clock-settime - rhcos4-audit-rules-time-settimeofday - rhcos4-audit-rules-time-stime - rhcos4-audit-rules-time-watch-localtime - rhcos4-audit-rules-usergroup-modification - rhcos4-auditd-data-retention-flush - rhcos4-auditd-freq - rhcos4-auditd-local-events - rhcos4-auditd-log-format - rhcos4-auditd-name-format - rhcos4-auditd-write-logs - rhcos4-configure-crypto-policy - rhcos4-configure-ssh-crypto-policy - rhcos4-no-empty-passwords - rhcos4-selinux-policytype - rhcos4-selinux-state - rhcos4-service-auditd-enabled - rhcos4-sshd-disable-empty-passwords - rhcos4-sshd-disable-gssapi-auth - rhcos4-sshd-disable-rhosts - rhcos4-sshd-disable-root-login - rhcos4-sshd-disable-user-known-hosts - rhcos4-sshd-do-not-permit-user-env - rhcos4-sshd-enable-strictmodes - rhcos4-sshd-print-last-log - rhcos4-sshd-set-loglevel-info - rhcos4-sysctl-kernel-dmesg-restrict - rhcos4-sysctl-kernel-kptr-restrict - rhcos4-sysctl-kernel-randomize-va-space - rhcos4-sysctl-kernel-unprivileged-bpf-disabled - rhcos4-sysctl-kernel-yama-ptrace-scope - rhcos4-sysctl-net-core-bpf-jit-harden title: Australian Cyber Security Centre (ACSC) Essential Eight",
"oc get -n openshift-compliance -oyaml rules rhcos4-audit-rules-login-events",
"apiVersion: compliance.openshift.io/v1alpha1 checkType: Node description: |- The audit system already collects login information for all users and root. If the auditd daemon is configured to use the augenrules program to read audit rules during daemon startup (the default), add the following lines to a file with suffix.rules in the directory /etc/audit/rules.d in order to watch for attempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins If the auditd daemon is configured to use the auditctl utility to read audit rules during daemon startup, add the following lines to /etc/audit/audit.rules file in order to watch for unattempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins id: xccdf_org.ssgproject.content_rule_audit_rules_login_events kind: Rule metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/rule: audit-rules-login-events control.compliance.openshift.io/NIST-800-53: AU-2(d);AU-12(c);AC-6(9);CM-6(a) control.compliance.openshift.io/PCI-DSS: Req-10.2.3 policies.open-cluster-management.io/controls: AU-2(d),AU-12(c),AC-6(9),CM-6(a),Req-10.2.3 policies.open-cluster-management.io/standards: NIST-800-53,PCI-DSS creationTimestamp: \"2022-10-19T12:07:08Z\" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-audit-rules-login-events namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: \"44819\" uid: 75872f1f-3c93-40ca-a69d-44e5438824a4 rationale: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion. severity: medium title: Record Attempts to Alter Logon and Logout Events warning: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion.",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle name: <profile bundle name> namespace: openshift-compliance status: dataStreamStatus: VALID 1",
"apiVersion: compliance.openshift.io/v1alpha1 description: <description of the profile> id: xccdf_org.ssgproject.content_profile_moderate 1 kind: Profile metadata: annotations: compliance.openshift.io/product: <product name> compliance.openshift.io/product-type: Node 2 creationTimestamp: \"YYYY-MM-DDTMM:HH:SSZ\" generation: 1 labels: compliance.openshift.io/profile-bundle: <profile bundle name> name: rhcos4-moderate namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: <profile bundle name> uid: <uid string> resourceVersion: \"<version number>\" selfLink: /apis/compliance.openshift.io/v1alpha1/namespaces/openshift-compliance/profiles/rhcos4-moderate uid: <uid string> rules: 3 - rhcos4-account-disable-post-pw-expiration - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown title: <title of the profile>",
"apiVersion: compliance.openshift.io/v1alpha1 checkType: Platform 1 description: <description of the rule> id: xccdf_org.ssgproject.content_rule_configure_network_policies_namespaces 2 instructions: <manual instructions for the scan> kind: Rule metadata: annotations: compliance.openshift.io/rule: configure-network-policies-namespaces control.compliance.openshift.io/CIS-OCP: 5.3.2 control.compliance.openshift.io/NERC-CIP: CIP-003-3 R4;CIP-003-3 R4.2;CIP-003-3 R5;CIP-003-3 R6;CIP-004-3 R2.2.4;CIP-004-3 R3;CIP-007-3 R2;CIP-007-3 R2.1;CIP-007-3 R2.2;CIP-007-3 R2.3;CIP-007-3 R5.1;CIP-007-3 R6.1 control.compliance.openshift.io/NIST-800-53: AC-4;AC-4(21);CA-3(5);CM-6;CM-6(1);CM-7;CM-7(1);SC-7;SC-7(3);SC-7(5);SC-7(8);SC-7(12);SC-7(13);SC-7(18) labels: compliance.openshift.io/profile-bundle: ocp4 name: ocp4-configure-network-policies-namespaces namespace: openshift-compliance rationale: <description of why this rule is checked> severity: high 3 title: <summary of the rule>",
"apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: rhcos4-with-usb spec: extends: rhcos4-moderate 1 title: <title of the tailored profile> disableRules: - name: <name of a rule object to be disabled> rationale: <description of why this rule is checked> status: id: xccdf_compliance.openshift.io_profile_rhcos4-with-usb 2 outputRef: name: rhcos4-with-usb-tp 3 namespace: openshift-compliance state: READY 4",
"compliance.openshift.io/product-type: Platform/Node",
"apiVersion: compliance.openshift.io/v1alpha1 autoApplyRemediations: true 1 autoUpdateRemediations: true 2 kind: ScanSetting maxRetryOnTimeout: 3 metadata: creationTimestamp: \"2022-10-18T20:21:00Z\" generation: 1 name: default-auto-apply namespace: openshift-compliance resourceVersion: \"38840\" uid: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 rawResultStorage: nodeSelector: node-role.kubernetes.io/master: \"\" pvAccessModes: - ReadWriteOnce rotation: 3 3 size: 1Gi 4 tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists roles: 5 - master - worker scanTolerations: - operator: Exists schedule: 0 1 * * * 6 showNotApplicable: false strictNodeScan: true timeout: 30m",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: <name of the scan> profiles: 1 # Node checks - name: rhcos4-with-usb kind: TailoredProfile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-moderate kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: 2 name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1",
"oc get compliancesuites",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: <name_of_the_suite> spec: autoApplyRemediations: false 1 schedule: \"0 1 * * *\" 2 scans: 3 - name: workers-scan scanType: Node profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc rule: \"xccdf_org.ssgproject.content_rule_no_netrc_files\" nodeSelector: node-role.kubernetes.io/worker: \"\" status: Phase: DONE 4 Result: NON-COMPLIANT 5 scanStatuses: - name: workers-scan phase: DONE result: NON-COMPLIANT",
"oc get events --field-selector involvedObject.kind=ComplianceSuite,involvedObject.name=<name of the suite>",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceScan metadata: name: <name_of_the_compliance_scan> spec: scanType: Node 1 profile: xccdf_org.ssgproject.content_profile_moderate 2 content: ssg-ocp4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... 3 rule: \"xccdf_org.ssgproject.content_rule_no_netrc_files\" 4 nodeSelector: 5 node-role.kubernetes.io/worker: \"\" status: phase: DONE 6 result: NON-COMPLIANT 7",
"get events --field-selector involvedObject.kind=ComplianceScan,involvedObject.name=<name_of_the_compliance_scan>",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceCheckResult metadata: labels: compliance.openshift.io/check-severity: medium compliance.openshift.io/check-status: FAIL compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan name: workers-scan-no-direct-root-logins namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceScan name: workers-scan description: <description of scan check> instructions: <manual instructions for the scan> id: xccdf_org.ssgproject.content_rule_no_direct_root_logins severity: medium 1 status: FAIL 2",
"get compliancecheckresults -l compliance.openshift.io/suite=workers-compliancesuite",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: labels: compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan machineconfiguration.openshift.io/role: worker name: workers-scan-disable-users-coredumps namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: workers-scan-disable-users-coredumps uid: <UID> spec: apply: false 1 object: current: 2 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:,%2A%20%20%20%20%20hard%20%20%20core%20%20%20%200 filesystem: root mode: 420 path: /etc/security/limits.d/75-disable_users_coredumps.conf outdated: {} 3",
"get complianceremediations -l compliance.openshift.io/suite=workers-compliancesuite",
"get compliancecheckresults -l 'compliance.openshift.io/check-status in (FAIL),compliance.openshift.io/automated-remediation'",
"get compliancecheckresults -l 'compliance.openshift.io/check-status in (FAIL),!compliance.openshift.io/automated-remediation'",
"apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance",
"oc create -f namespace-object.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance",
"oc create -f operator-group-object.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f subscription-object.yaml",
"oc get csv -n openshift-compliance",
"oc get deploy -n openshift-compliance",
"apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance",
"oc create -f namespace-object.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance",
"oc create -f operator-group-object.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: \"\" 1",
"oc create -f subscription-object.yaml",
"oc get csv -n openshift-compliance",
"oc get deploy -n openshift-compliance",
"apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance",
"oc create -f namespace-object.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance",
"oc create -f operator-group-object.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: \"\" env: - name: PLATFORM value: \"HyperShift\"",
"oc create -f subscription-object.yaml",
"oc get csv -n openshift-compliance",
"oc get deploy -n openshift-compliance",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: \"2022-10-19T12:06:30Z\" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: \"46741\" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml 1 contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 2 status: conditions: - lastTransitionTime: \"2022-10-19T12:07:51Z\" message: Profile bundle successfully parsed reason: Valid status: \"True\" type: Ready dataStreamStatus: VALID",
"oc -n openshift-compliance get profilebundles rhcos4 -oyaml",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: \"2022-10-19T12:06:30Z\" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: \"46741\" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 1 status: conditions: - lastTransitionTime: \"2022-10-19T12:07:51Z\" message: Profile bundle successfully parsed reason: Valid status: \"True\" type: Ready dataStreamStatus: VALID",
"oc delete ssb --all -n openshift-compliance",
"oc delete ss --all -n openshift-compliance",
"oc delete suite --all -n openshift-compliance",
"oc delete scan --all -n openshift-compliance",
"oc delete profilebundle.compliance --all -n openshift-compliance",
"oc delete sub --all -n openshift-compliance",
"oc delete csv --all -n openshift-compliance",
"oc delete project openshift-compliance",
"project.project.openshift.io \"openshift-compliance\" deleted",
"oc get project/openshift-compliance",
"Error from server (NotFound): namespaces \"openshift-compliance\" not found",
"oc explain scansettings",
"oc explain scansettingbindings",
"oc describe scansettings default -n openshift-compliance",
"Name: default Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Kind: ScanSetting Max Retry On Timeout: 3 Metadata: Creation Timestamp: 2024-07-16T14:56:42Z Generation: 2 Resource Version: 91655682 UID: 50358cf1-57a8-4f69-ac50-5c7a5938e402 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce 1 Rotation: 3 2 Size: 1Gi 3 Storage Class Name: standard 4 Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master 5 worker 6 Scan Tolerations: 7 Operator: Exists Schedule: 0 1 * * * 8 Show Not Applicable: false Strict Node Scan: true Suspend: false Timeout: 30m Events: <none>",
"Name: default-auto-apply Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Auto Apply Remediations: true 1 Auto Update Remediations: true 2 Kind: ScanSetting Metadata: Creation Timestamp: 2022-10-18T20:21:00Z Generation: 1 Managed Fields: API Version: compliance.openshift.io/v1alpha1 Fields Type: FieldsV1 fieldsV1: f:autoApplyRemediations: f:autoUpdateRemediations: f:rawResultStorage: .: f:nodeSelector: .: f:node-role.kubernetes.io/master: f:pvAccessModes: f:rotation: f:size: f:tolerations: f:roles: f:scanTolerations: f:schedule: f:showNotApplicable: f:strictNodeScan: Manager: compliance-operator Operation: Update Time: 2022-10-18T20:21:00Z Resource Version: 38840 UID: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce Rotation: 3 Size: 1Gi Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master worker Scan Tolerations: Operator: Exists Schedule: 0 1 * * * Show Not Applicable: false Strict Node Scan: true Events: <none>",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis-compliance namespace: openshift-compliance profiles: - name: ocp4-cis-node kind: Profile apiGroup: compliance.openshift.io/v1alpha1 - name: ocp4-cis kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: default kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1",
"oc create -f <file-name>.yaml -n openshift-compliance",
"oc get compliancescan -w -n openshift-compliance",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: rs-on-workers namespace: openshift-compliance rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: \"\" 1 pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists 2 roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * *",
"oc create -f rs-workers.yaml",
"oc get scansettings rs-on-workers -n openshift-compliance -o yaml",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: creationTimestamp: \"2021-11-19T19:36:36Z\" generation: 1 name: rs-on-workers namespace: openshift-compliance resourceVersion: \"48305\" uid: 43fdfc5f-15a7-445a-8bbc-0e4a160cd46e rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: \"\" pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * * strictNodeScan: true",
"oc get hostedcluster -A",
"NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE local-cluster 79136a1bdb84b3c13217 4.13.5 79136a1bdb84b3c13217-admin-kubeconfig Completed True False The hosted control plane is available",
"apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: hypershift-cisk57aw88gry namespace: openshift-compliance spec: description: This profile test required rules extends: ocp4-cis 1 title: Management namespace profile setValues: - name: ocp4-hypershift-cluster rationale: This value is used for HyperShift version detection value: 79136a1bdb84b3c13217 2 - name: ocp4-hypershift-namespace-prefix rationale: This value is used for HyperShift control plane namespace detection value: local-cluster 3",
"oc create -n openshift-compliance -f mgmt-tp.yaml",
"spec.containers[].resources.limits.cpu spec.containers[].resources.limits.memory spec.containers[].resources.limits.hugepages-<size> spec.containers[].resources.requests.cpu spec.containers[].resources.requests.memory spec.containers[].resources.requests.hugepages-<size>",
"apiVersion: v1 kind: Pod metadata: name: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: app image: images.my-company.example/app:v4 resources: requests: 1 memory: \"64Mi\" cpu: \"250m\" limits: 2 memory: \"128Mi\" cpu: \"500m\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: new-profile annotations: compliance.openshift.io/product-type: Node 1 spec: extends: ocp4-cis-node 2 description: My custom profile 3 title: Custom profile 4 enableRules: - name: ocp4-etcd-unique-ca rationale: We really need to enable this disableRules: - name: ocp4-file-groupowner-cni-conf rationale: This does not apply to the cluster",
"oc get rules.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4",
"oc get variables.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4",
"apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: nist-moderate-modified spec: extends: rhcos4-moderate description: NIST moderate profile title: My modified NIST moderate profile disableRules: - name: rhcos4-file-permissions-var-log-messages rationale: The file contains logs of error messages in the system - name: rhcos4-account-disable-post-pw-expiration rationale: No need to check this as it comes from the IdP setValues: - name: rhcos4-var-selinux-state rationale: Organizational requirements value: permissive",
"apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: ocp4-manual-scc-check spec: extends: ocp4-cis description: This profile extends ocp4-cis by forcing the SCC check to always return MANUAL title: OCP4 CIS profile with manual SCC check manualRules: - name: ocp4-scc-limit-container-allowed-capabilities rationale: We use third party software that installs its own SCC with extra privileges",
"oc create -n openshift-compliance -f new-profile-node.yaml 1",
"tailoredprofile.compliance.openshift.io/nist-moderate-modified created",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: nist-moderate-modified profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-moderate - apiGroup: compliance.openshift.io/v1alpha1 kind: TailoredProfile name: nist-moderate-modified settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default",
"oc create -n openshift-compliance -f new-scansettingbinding.yaml",
"scansettingbinding.compliance.openshift.io/nist-moderate-modified created",
"oc get compliancesuites nist-moderate-modified -o json -n openshift-compliance | jq '.status.scanStatuses[].resultsStorage'",
"{ \"name\": \"ocp4-moderate\", \"namespace\": \"openshift-compliance\" } { \"name\": \"nist-moderate-modified-master\", \"namespace\": \"openshift-compliance\" } { \"name\": \"nist-moderate-modified-worker\", \"namespace\": \"openshift-compliance\" }",
"oc get pvc -n openshift-compliance rhcos4-moderate-worker",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE rhcos4-moderate-worker Bound pvc-548f6cfe-164b-42fe-ba13-a07cfbc77f3a 1Gi RWO gp2 92m",
"oc create -n openshift-compliance -f pod.yaml",
"apiVersion: \"v1\" kind: Pod metadata: name: pv-extract spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: pv-extract-pod image: registry.access.redhat.com/ubi9/ubi command: [\"sleep\", \"3000\"] volumeMounts: - mountPath: \"/workers-scan-results\" name: workers-scan-vol securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: workers-scan-vol persistentVolumeClaim: claimName: rhcos4-moderate-worker",
"oc cp pv-extract:/workers-scan-results -n openshift-compliance .",
"oc delete pod pv-extract -n openshift-compliance",
"oc get -n openshift-compliance compliancecheckresults -l compliance.openshift.io/suite=workers-compliancesuite",
"oc get -n openshift-compliance compliancecheckresults -l compliance.openshift.io/scan=workers-scan",
"oc get -n openshift-compliance compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation'",
"oc get compliancecheckresults -n openshift-compliance -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/check-severity=high'",
"NAME STATUS SEVERITY nist-moderate-modified-master-configure-crypto-policy FAIL high nist-moderate-modified-master-coreos-pti-kernel-argument FAIL high nist-moderate-modified-master-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-master-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-master-enable-fips-mode FAIL high nist-moderate-modified-master-no-empty-passwords FAIL high nist-moderate-modified-master-selinux-state FAIL high nist-moderate-modified-worker-configure-crypto-policy FAIL high nist-moderate-modified-worker-coreos-pti-kernel-argument FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-worker-enable-fips-mode FAIL high nist-moderate-modified-worker-no-empty-passwords FAIL high nist-moderate-modified-worker-selinux-state FAIL high ocp4-moderate-configure-network-policies-namespaces FAIL high ocp4-moderate-fips-mode-enabled-on-all-nodes FAIL high",
"oc get -n openshift-compliance compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,!compliance.openshift.io/automated-remediation'",
"spec: apply: false current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf mode: 0644 contents: source: data:,net.ipv4.conf.all.accept_redirects%3D0 outdated: {} status: applicationState: NotApplied",
"echo \"net.ipv4.conf.all.accept_redirects%3D0\" | python3 -c \"import sys, urllib.parse; print(urllib.parse.unquote(''.join(sys.stdin.readlines())))\"",
"net.ipv4.conf.all.accept_redirects=0",
"oc get nodes -n openshift-compliance",
"NAME STATUS ROLES AGE VERSION ip-10-0-128-92.us-east-2.compute.internal Ready master 5h21m v1.31.3 ip-10-0-158-32.us-east-2.compute.internal Ready worker 5h17m v1.31.3 ip-10-0-166-81.us-east-2.compute.internal Ready worker 5h17m v1.31.3 ip-10-0-171-170.us-east-2.compute.internal Ready master 5h21m v1.31.3 ip-10-0-197-35.us-east-2.compute.internal Ready master 5h22m v1.31.3",
"oc -n openshift-compliance label node ip-10-0-166-81.us-east-2.compute.internal node-role.kubernetes.io/<machine_config_pool_name>=",
"node/ip-10-0-166-81.us-east-2.compute.internal labeled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: <machine_config_pool_name> labels: pools.operator.machineconfiguration.openshift.io/<machine_config_pool_name>: '' 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,<machine_config_pool_name>]} nodeSelector: matchLabels: node-role.kubernetes.io/<machine_config_pool_name>: \"\"",
"oc get mcp -w",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master - example scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis namespace: openshift-compliance profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis-node settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default",
"oc get rules -o json | jq '.items[] | select(.checkType == \"Platform\") | select(.metadata.name | contains(\"ocp4-kubelet-\")) | .metadata.name'",
"oc label mcp <sub-pool-name> pools.operator.machineconfiguration.openshift.io/<sub-pool-name>=",
"oc -n openshift-compliance patch complianceremediations/<scan-name>-sysctl-net-ipv4-conf-all-accept-redirects --patch '{\"spec\":{\"apply\":true}}' --type=merge",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2020-09-10T10:12:54Z\" generation: 2 name: cluster resourceVersion: \"363096\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: 2dcb614e-2f8a-4a23-ba9a-8e33cd0ff77e spec: allowedRegistriesForImport: - domainName: registry.redhat.io status: externalRegistryHostnames: - default-route-openshift-image-registry.apps.user-cluster-09-10-12-07.devcluster.openshift.com internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=",
"oc -n openshift-compliance get complianceremediations -l complianceoperator.openshift.io/outdated-remediation=",
"NAME STATE workers-scan-no-empty-passwords Outdated",
"oc -n openshift-compliance patch complianceremediations workers-scan-no-empty-passwords --type json -p '[{\"op\":\"remove\", \"path\":/spec/outdated}]'",
"oc get -n openshift-compliance complianceremediations workers-scan-no-empty-passwords",
"NAME STATE workers-scan-no-empty-passwords Applied",
"oc -n openshift-compliance patch complianceremediations/rhcos4-moderate-worker-sysctl-net-ipv4-conf-all-accept-redirects --patch '{\"spec\":{\"apply\":false}}' --type=merge",
"oc -n openshift-compliance get remediation \\ one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -o yaml",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: annotations: compliance.openshift.io/xccdf-value-used: var-kubelet-evictionhard-imagefs-available creationTimestamp: \"2022-01-05T19:52:27Z\" generation: 1 labels: compliance.openshift.io/scan-name: one-rule-tp-node-master 1 compliance.openshift.io/suite: one-rule-ssb-node name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available uid: fe8e1577-9060-4c59-95b2-3e2c51709adc resourceVersion: \"84820\" uid: 5339d21a-24d7-40cb-84d2-7a2ebb015355 spec: apply: true current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: kubeletConfig: evictionHard: imagefs.available: 10% 2 outdated: {} type: Configuration status: applicationState: Applied",
"oc -n openshift-compliance patch complianceremediations/one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -p '{\"spec\":{\"apply\":false}}' --type=merge",
"oc -n openshift-compliance get kubeletconfig --selector compliance.openshift.io/scan-name=one-rule-tp-node-master",
"NAME AGE compliance-operator-kubelet-master 2m34s",
"oc edit -n openshift-compliance KubeletConfig compliance-operator-kubelet-master",
"oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc debug: true rule: xccdf_org.ssgproject.content_rule_no_direct_root_logins nodeSelector: node-role.kubernetes.io/worker: \"\"",
"apiVersion: compliance.openshift.io/v1alpha1 strictNodeScan: true metadata: name: default namespace: openshift-compliance priorityClass: compliance-high-priority 1 kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker scanTolerations: - operator: Exists",
"oc -n openshift-compliance create configmap nist-moderate-modified --from-file=tailoring.xml=/path/to/the/tailoringFile.xml",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: debug: true scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc debug: true tailoringConfigMap: name: nist-moderate-modified nodeSelector: node-role.kubernetes.io/worker: \"\"",
"oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=",
"oc get mc",
"75-worker-scan-chronyd-or-ntpd-specify-remote-server 75-worker-scan-configure-usbguard-auditbackend 75-worker-scan-service-usbguard-enabled 75-worker-scan-usbguard-allow-hid-and-hub",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'",
"oc -n openshift-compliance annotate compliancesuites/workers-compliancesuite compliance.openshift.io/apply-remediations=",
"oc -n openshift-compliance annotate compliancesuites/workers-compliancesuite compliance.openshift.io/remove-outdated=",
"allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs kind: SecurityContextConstraints metadata: name: restricted-adjusted-compliance priority: 30 1 readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - SETUID - SETGID - MKNOD runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs supplementalGroups: type: RunAsAny users: - system:serviceaccount:openshift-compliance:api-resource-collector 2 volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret",
"oc create -n openshift-compliance -f restricted-adjusted-compliance.yaml",
"securitycontextconstraints.security.openshift.io/restricted-adjusted-compliance created",
"oc get -n openshift-compliance scc restricted-adjusted-compliance",
"NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES restricted-adjusted-compliance false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny 30 false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"]",
"oc get events -n openshift-compliance",
"oc describe -n openshift-compliance compliancescan/cis-compliance",
"oc -n openshift-compliance logs compliance-operator-775d7bddbd-gj58f | jq -c 'select(.logger == \"profilebundlectrl\")'",
"date -d @1596184628.955853 --utc",
"oc get -n openshift-compliance profilebundle.compliance",
"oc get -n openshift-compliance profile.compliance",
"oc logs -n openshift-compliance -lprofile-bundle=ocp4 -c profileparser",
"oc get -n openshift-compliance deployments,pods -lprofile-bundle=ocp4",
"oc logs -n openshift-compliance pods/<pod-name>",
"oc describe -n openshift-compliance pod/<pod-name> -c profileparser",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: my-companys-constraints debug: true For each role, a separate scan will be created pointing to a node-role specified in roles roles: - worker --- apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: my-companys-compliance-requirements profiles: # Node checks - name: rhcos4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuiteCreated 9m52s scansettingbindingctrl ComplianceSuite openshift-compliance/my-companys-compliance-requirements created",
"oc get cronjobs",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE <cron_name> 0 1 * * * False 0 <none> 151m",
"oc -n openshift-compliance get cm -l compliance.openshift.io/scan-name=rhcos4-e8-worker,complianceoperator.openshift.io/scan-script=",
"oc get pvc -n openshift-compliance -lcompliance.openshift.io/scan-name=rhcos4-e8-worker",
"oc get pods -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,workload=scanner --show-labels",
"NAME READY STATUS RESTARTS AGE LABELS rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod 0/2 Completed 0 39m compliance.openshift.io/scan-name=rhcos4-e8-worker,targetNode=ip-10-0-169-90.eu-north-1.compute.internal,workload=scanner",
"oc describe cm/rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod",
"Name: rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod Namespace: openshift-compliance Labels: compliance.openshift.io/scan-name-scan=rhcos4-e8-worker complianceoperator.openshift.io/scan-result= Annotations: compliance-remediations/processed: compliance.openshift.io/scan-error-msg: compliance.openshift.io/scan-result: NON-COMPLIANT OpenSCAP-scan-result/node: ip-10-0-169-90.eu-north-1.compute.internal Data ==== exit-code: ---- 2 results: ---- <?xml version=\"1.0\" encoding=\"UTF-8\"?>",
"oc get compliancecheckresults -lcompliance.openshift.io/scan-name=rhcos4-e8-worker",
"NAME STATUS SEVERITY rhcos4-e8-worker-accounts-no-uid-except-zero PASS high rhcos4-e8-worker-audit-rules-dac-modification-chmod FAIL medium",
"oc get complianceremediations -lcompliance.openshift.io/scan-name=rhcos4-e8-worker",
"NAME STATE rhcos4-e8-worker-audit-rules-dac-modification-chmod NotApplied rhcos4-e8-worker-audit-rules-dac-modification-chown NotApplied rhcos4-e8-worker-audit-rules-execution-chcon NotApplied rhcos4-e8-worker-audit-rules-execution-restorecon NotApplied rhcos4-e8-worker-audit-rules-execution-semanage NotApplied rhcos4-e8-worker-audit-rules-execution-setfiles NotApplied",
"oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=",
"oc patch complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod --patch '{\"spec\":{\"apply\":true}}' --type=merge",
"oc get mc | grep 75-",
"75-rhcos4-e8-worker-my-companys-compliance-requirements 3.2.0 2m46s",
"oc describe mc/75-rhcos4-e8-worker-my-companys-compliance-requirements",
"Name: 75-rhcos4-e8-worker-my-companys-compliance-requirements Labels: machineconfiguration.openshift.io/role=worker Annotations: remediation/rhcos4-e8-worker-audit-rules-dac-modification-chmod:",
"oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=",
"oc -n openshift-compliance get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod",
"NAME STATUS SEVERITY rhcos4-e8-worker-audit-rules-dac-modification-chmod PASS medium",
"oc logs -l workload=<workload_name> -c <container_name>",
"spec: config: resources: limits: memory: 500Mi",
"oc patch sub compliance-operator -nopenshift-compliance --patch-file co-memlimit-patch.yaml --type=merge",
"kind: Subscription metadata: name: compliance-operator namespace: openshift-compliance spec: package: package-name channel: stable config: resources: requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\"",
"oc get pod ocp4-pci-dss-api-checks-pod -w",
"NAME READY STATUS RESTARTS AGE ocp4-pci-dss-api-checks-pod 0/2 Init:1/2 8 (5m56s ago) 25m ocp4-pci-dss-api-checks-pod 0/2 Init:OOMKilled 8 (6m19s ago) 26m",
"timeout: 30m strictNodeScan: true metadata: name: default namespace: openshift-compliance kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker apiVersion: compliance.openshift.io/v1alpha1 maxRetryOnTimeout: 3 scanTolerations: - operator: Exists scanLimits: memory: 1024Mi 1",
"oc apply -f scansetting.yaml",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' timeout: '10m0s' 1 maxRetryOnTimeout: 3 2",
"podman run --rm -v ~/.local/bin:/mnt/out:Z registry.redhat.io/compliance/oc-compliance-rhel8:stable /bin/cp /usr/bin/oc-compliance /mnt/out/",
"W0611 20:35:46.486903 11354 manifest.go:440] Chose linux/amd64 manifest from the manifest list.",
"oc compliance fetch-raw <object-type> <object-name> -o <output-path>",
"oc compliance fetch-raw scansettingbindings my-binding -o /tmp/",
"Fetching results for my-binding scans: ocp4-cis, ocp4-cis-node-worker, ocp4-cis-node-master Fetching raw compliance results for scan 'ocp4-cis'.... The raw compliance results are available in the following directory: /tmp/ocp4-cis Fetching raw compliance results for scan 'ocp4-cis-node-worker'........ The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-worker Fetching raw compliance results for scan 'ocp4-cis-node-master'... The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-master",
"ls /tmp/ocp4-cis-node-master/",
"ocp4-cis-node-master-ip-10-0-128-89.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-150-5.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-163-32.ec2.internal-pod.xml.bzip2",
"bunzip2 -c resultsdir/worker-scan/worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 > resultsdir/worker-scan/worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml",
"ls resultsdir/worker-scan/",
"worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 worker-scan-stage-459-tqkg7-compute-1-pod.xml.bzip2",
"oc compliance rerun-now scansettingbindings my-binding",
"Rerunning scans from 'my-binding': ocp4-cis Re-running scan 'openshift-compliance/ocp4-cis'",
"oc compliance bind [--dry-run] -N <binding name> [-S <scansetting name>] <objtype/objname> [..<objtype/objname>]",
"oc get profile.compliance -n openshift-compliance",
"NAME AGE VERSION ocp4-cis 3h49m 1.5.0 ocp4-cis-1-4 3h49m 1.4.0 ocp4-cis-1-5 3h49m 1.5.0 ocp4-cis-node 3h49m 1.5.0 ocp4-cis-node-1-4 3h49m 1.4.0 ocp4-cis-node-1-5 3h49m 1.5.0 ocp4-e8 3h49m ocp4-high 3h49m Revision 4 ocp4-high-node 3h49m Revision 4 ocp4-high-node-rev-4 3h49m Revision 4 ocp4-high-rev-4 3h49m Revision 4 ocp4-moderate 3h49m Revision 4 ocp4-moderate-node 3h49m Revision 4 ocp4-moderate-node-rev-4 3h49m Revision 4 ocp4-moderate-rev-4 3h49m Revision 4 ocp4-nerc-cip 3h49m ocp4-nerc-cip-node 3h49m ocp4-pci-dss 3h49m 3.2.1 ocp4-pci-dss-3-2 3h49m 3.2.1 ocp4-pci-dss-4-0 3h49m 4.0.0 ocp4-pci-dss-node 3h49m 3.2.1 ocp4-pci-dss-node-3-2 3h49m 3.2.1 ocp4-pci-dss-node-4-0 3h49m 4.0.0 ocp4-stig 3h49m V2R1 ocp4-stig-node 3h49m V2R1 ocp4-stig-node-v1r1 3h49m V1R1 ocp4-stig-node-v2r1 3h49m V2R1 ocp4-stig-v1r1 3h49m V1R1 ocp4-stig-v2r1 3h49m V2R1 rhcos4-e8 3h49m rhcos4-high 3h49m Revision 4 rhcos4-high-rev-4 3h49m Revision 4 rhcos4-moderate 3h49m Revision 4 rhcos4-moderate-rev-4 3h49m Revision 4 rhcos4-nerc-cip 3h49m rhcos4-stig 3h49m V2R1 rhcos4-stig-v1r1 3h49m V1R1 rhcos4-stig-v2r1 3h49m V2R1",
"oc get scansettings -n openshift-compliance",
"NAME AGE default 10m default-auto-apply 10m",
"oc compliance bind -N my-binding profile/ocp4-cis profile/ocp4-cis-node",
"Creating ScanSettingBinding my-binding",
"oc compliance controls profile ocp4-cis-node",
"+-----------+----------+ | FRAMEWORK | CONTROLS | +-----------+----------+ | CIS-OCP | 1.1.1 | + +----------+ | | 1.1.10 | + +----------+ | | 1.1.11 | + +----------+",
"oc compliance fetch-fixes profile ocp4-cis -o /tmp",
"No fixes to persist for rule 'ocp4-api-server-api-priority-flowschema-catch-all' 1 No fixes to persist for rule 'ocp4-api-server-api-priority-gate-enabled' No fixes to persist for rule 'ocp4-api-server-audit-log-maxbackup' Persisted rule fix to /tmp/ocp4-api-server-audit-log-maxsize.yaml No fixes to persist for rule 'ocp4-api-server-audit-log-path' No fixes to persist for rule 'ocp4-api-server-auth-mode-no-aa' No fixes to persist for rule 'ocp4-api-server-auth-mode-node' No fixes to persist for rule 'ocp4-api-server-auth-mode-rbac' No fixes to persist for rule 'ocp4-api-server-basic-auth' No fixes to persist for rule 'ocp4-api-server-bind-address' No fixes to persist for rule 'ocp4-api-server-client-ca' Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-cipher.yaml Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-config.yaml",
"head /tmp/ocp4-api-server-audit-log-maxsize.yaml",
"apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: maximumFileSizeMegabytes: 100",
"oc get complianceremediations -n openshift-compliance",
"NAME STATE ocp4-cis-api-server-encryption-provider-cipher NotApplied ocp4-cis-api-server-encryption-provider-config NotApplied",
"oc compliance fetch-fixes complianceremediations ocp4-cis-api-server-encryption-provider-cipher -o /tmp",
"Persisted compliance remediation fix to /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml",
"head /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml",
"apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: encryption: type: aescbc",
"oc compliance view-result ocp4-cis-scheduler-no-bind-address"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/security_and_compliance/compliance-operator |
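The command entries above trace the Compliance Operator remediation loop one step at a time. The following condensed sketch strings the same steps together for a single rule; the scan name rhcos4-e8-worker and the audit-rules-dac-modification-chmod rule are reused from the entries above, and the explicit -n openshift-compliance flag is an assumption (the entries above rely on the current project already being set to that namespace).

# Review failing checks and the remediations proposed for one scan
oc -n openshift-compliance get compliancecheckresults -l compliance.openshift.io/scan-name=rhcos4-e8-worker
oc -n openshift-compliance get complianceremediations -l compliance.openshift.io/scan-name=rhcos4-e8-worker

# Apply one remediation, then annotate the scan so it is re-run
oc -n openshift-compliance patch complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod --type=merge --patch '{"spec":{"apply":true}}'
oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=

# Confirm the check now reports PASS
oc -n openshift-compliance get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod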
Chapter 1. Product overview | Chapter 1. Product overview Red Hat Process Automation Manager is an open-source business automation platform that combines business process management (BPM), case management, business rules management, and resource planning. It enables business and IT users to create, manage, validate, and deploy business processes, cases, and business rules. Red Hat Process Automation Manager uses a centralized repository where all resources are stored. This ensures consistency, transparency, and the ability to audit across the business. Business users can modify business logic and business processes without requiring assistance from IT personnel. Red Hat Process Automation Manager 7.13 provides increased stability, several fixed issues, and new features. Red Hat Process Automation Manager is fully supported on Red Hat OpenShift Container Platform and can be installed on various platforms. For information about the support policy for Red Hat Process Automation Manager, see the Release maintenance plan for Red Hat Decision Manager 7.x and Red Hat Process Automation Manager 7.x . | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/release_notes_for_red_hat_process_automation_manager_7.13/rn-intro-con |
Installation Guide | Installation Guide Red Hat Ceph Storage 8 Installing Red Hat Ceph Storage on Red Hat Enterprise Linux Red Hat Ceph Storage Documentation Team | [
"ceph soft nofile unlimited",
"USER_NAME soft nproc unlimited",
"subscription-manager register",
"subscription-manager refresh",
"subscription-manager list --available --matches ' Red Hat Ceph Storage '",
"subscription-manager attach --pool= POOL_ID",
"subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms",
"dnf update",
"subscription-manager repos --enable=rhceph-8-tools-for-rhel-9-x86_64-rpms",
"dnf install cephadm-ansible",
"cd /usr/share/cephadm-ansible",
"mkdir -p inventory/staging inventory/production",
"[defaults] inventory = ./inventory/staging",
"touch inventory/staging/hosts touch inventory/production/hosts",
"NODE_NAME_1 NODE_NAME_2 [admin] ADMIN_NODE_NAME_1",
"host02 host03 host04 [admin] host01",
"ansible-playbook -i inventory/staging/hosts PLAYBOOK.yml",
"ansible-playbook -i inventory/production/hosts PLAYBOOK.yml",
"ssh root@myhostname root@myhostname password: Permission denied, please try again.",
"echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config.d/01-permitrootlogin.conf",
"systemctl restart sshd.service",
"ssh root@ HOST_NAME",
"ssh root@host01",
"ssh root@ HOST_NAME",
"ssh root@host01",
"adduser USER_NAME",
"adduser ceph-admin",
"passwd USER_NAME",
"passwd ceph-admin",
"cat << EOF >/etc/sudoers.d/ USER_NAME USDUSER_NAME ALL = (root) NOPASSWD:ALL EOF",
"cat << EOF >/etc/sudoers.d/ceph-admin ceph-admin ALL = (root) NOPASSWD:ALL EOF",
"chmod 0440 /etc/sudoers.d/ USER_NAME",
"chmod 0440 /etc/sudoers.d/ceph-admin",
"[ceph-admin@admin ~]USD ssh-keygen",
"ssh-copy-id USER_NAME @ HOST_NAME",
"[ceph-admin@admin ~]USD ssh-copy-id ceph-admin@host01",
"[ceph-admin@admin ~]USD touch ~/.ssh/config",
"Host host01 Hostname HOST_NAME User USER_NAME Host host02 Hostname HOST_NAME User USER_NAME",
"Host host01 Hostname host01 User ceph-admin Host host02 Hostname host02 User ceph-admin Host host03 Hostname host03 User ceph-admin",
"[ceph-admin@admin ~]USD chmod 600 ~/.ssh/config",
"host02 host03 host04 [admin] host01",
"host02 host03 host04 [admin] host01",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit GROUP_NAME | NODE_NAME",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit clients [ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host01",
"cephadm bootstrap --cluster-network NETWORK_CIDR --mon-ip IP_ADDRESS --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD --yes-i-know",
"cephadm bootstrap --cluster-network 10.10.128.0/24 --mon-ip 10.10.128.68 --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1 --yes-i-know",
"Ceph Dashboard is now available at: URL: https://host01:8443/ User: admin Password: i8nhu7zham Enabling client.admin keyring and conf on hosts with \"admin\" label You can access the Ceph CLI with: sudo /usr/sbin/cephadm shell --fsid 266ee7a8-2a05-11eb-b846-5254002d4916 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring Please consider enabling telemetry to help improve Ceph: ceph telemetry on For more information see: https://docs.ceph.com/docs/master/mgr/telemetry/ Bootstrap complete.",
"cephadm bootstrap --ssh-user USER_NAME --mon-ip IP_ADDRESS --allow-fqdn-hostname --registry-json REGISTRY_JSON",
"cephadm bootstrap --ssh-user ceph --mon-ip 10.10.128.68 --allow-fqdn-hostname --registry-json /etc/mylogin.json",
"{ \"url\":\" REGISTRY_URL \", \"username\":\" USER_NAME \", \"password\":\" PASSWORD \" }",
"{ \"url\":\"registry.redhat.io\", \"username\":\"myuser1\", \"password\":\"mypassword1\" }",
"cephadm bootstrap --mon-ip IP_ADDRESS --registry-json /etc/mylogin.json",
"cephadm bootstrap --mon-ip 10.10.128.68 --registry-json /etc/mylogin.json",
"service_type: host addr: host01 hostname: host01 --- service_type: host addr: host02 hostname: host02 --- service_type: host addr: host03 hostname: host03 --- service_type: host addr: host04 hostname: host04 --- service_type: mon placement: host_pattern: \"host[0-2]\" --- service_type: osd service_id: my_osds placement: host_pattern: \"host[1-3]\" data_devices: all: true",
"cephadm bootstrap --apply-spec CONFIGURATION_FILE_NAME --mon-ip MONITOR_IP_ADDRESS --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD",
"cephadm bootstrap --apply-spec initial-config.yaml --mon-ip 10.10.128.68 --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1",
"su - SSH_USER_NAME",
"su - ceph Last login: Tue Sep 14 12:00:29 EST 2021 on pts/0",
"[ceph@host01 ~]USD ssh host01 Last login: Tue Sep 14 12:03:29 EST 2021 on pts/0",
"sudo cephadm bootstrap --ssh-user USER_NAME --mon-ip IP_ADDRESS --ssh-private-key PRIVATE_KEY --ssh-public-key PUBLIC_KEY --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD",
"sudo cephadm bootstrap --ssh-user ceph --mon-ip 10.10.128.68 --ssh-private-key /home/ceph/.ssh/id_rsa --ssh-public-key /home/ceph/.ssh/id_rsa.pub --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1",
"subscription-manager register",
"subscription-manager refresh",
"subscription-manager list --available --all --matches=\"*Ceph*\"",
"subscription-manager attach --pool= POOL_ID",
"subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms",
"dnf install -y podman httpd-tools",
"mkdir -p /opt/registry/{auth,certs,data}",
"htpasswd -bBc /opt/registry/auth/htpasswd PRIVATE_REGISTRY_USERNAME PRIVATE_REGISTRY_PASSWORD",
"htpasswd -bBc /opt/registry/auth/htpasswd myregistryusername myregistrypassword1",
"openssl req -newkey rsa:4096 -nodes -sha256 -keyout /opt/registry/certs/domain.key -x509 -days 365 -out /opt/registry/certs/domain.crt -addext \"subjectAltName = DNS: LOCAL_NODE_FQDN \"",
"openssl req -newkey rsa:4096 -nodes -sha256 -keyout /opt/registry/certs/domain.key -x509 -days 365 -out /opt/registry/certs/domain.crt -addext \"subjectAltName = DNS:admin.lab.redhat.com\"",
"ln -s /opt/registry/certs/domain.crt /opt/registry/certs/domain.cert",
"cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust trust list | grep -i \" LOCAL_NODE_FQDN \"",
"cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust trust list | grep -i \"admin.lab.redhat.com\" label: admin.lab.redhat.com",
"scp /opt/registry/certs/domain.crt root@host01:/etc/pki/ca-trust/source/anchors/ ssh root@host01 update-ca-trust trust list | grep -i \"admin.lab.redhat.com\" label: admin.lab.redhat.com",
"run --restart=always --name NAME_OF_CONTAINER -p 5000:5000 -v /opt/registry/data:/var/lib/registry:z -v /opt/registry/auth:/auth:z -v /opt/registry/certs:/certs:z -e \"REGISTRY_AUTH=htpasswd\" -e \"REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm\" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd -e \"REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt\" -e \"REGISTRY_HTTP_TLS_KEY=/certs/domain.key\" -e REGISTRY_COMPATIBILITY_SCHEMA1_ENABLED=true -d registry:2",
"podman run --restart=always --name myprivateregistry -p 5000:5000 -v /opt/registry/data:/var/lib/registry:z -v /opt/registry/auth:/auth:z -v /opt/registry/certs:/certs:z -e \"REGISTRY_AUTH=htpasswd\" -e \"REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm\" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd -e \"REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt\" -e \"REGISTRY_HTTP_TLS_KEY=/certs/domain.key\" -e REGISTRY_COMPATIBILITY_SCHEMA1_ENABLED=true -d registry:2",
"unqualified-search-registries = [\"registry.redhat.io\", \"registry.access.redhat.com\", \"registry.fedoraproject.org\", \"registry.centos.org\", \"docker.io\"]",
"login registry.redhat.io",
"run -v / CERTIFICATE_DIRECTORY_PATH :/certs:Z -v / CERTIFICATE_DIRECTORY_PATH /domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo:8.5-8 skopeo copy --remove-signatures --src-creds RED_HAT_CUSTOMER_PORTAL_LOGIN : RED_HAT_CUSTOMER_PORTAL_PASSWORD --dest-cert-dir=./certs/ --dest-creds PRIVATE_REGISTRY_USERNAME : PRIVATE_REGISTRY_PASSWORD docker://registry.redhat.io/ SRC_IMAGE : SRC_TAG docker:// LOCAL_NODE_FQDN :5000/ DST_IMAGE : DST_TAG",
"podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/rhceph/rhceph-8-rhel9:latest docker://admin.lab.redhat.com:5000/rhceph/rhceph-8-rhel9:latest podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.12 docker://admin.lab.redhat.com:5000/openshift4/ose-prometheus-node-exporter:v4.12 podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/rhceph/grafana-rhel9:latest docker://admin.lab.redhat.com:5000/rhceph/grafana-rhel9:latest podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/openshift4/ose-prometheus:v4.12 docker://admin.lab.redhat.com:5000/openshift4/ose-prometheus:v4.12 podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.12 docker://admin.lab.redhat.com:5000/openshift4/ose-prometheus-alertmanager:v4.12",
"curl -u PRIVATE_REGISTRY_USERNAME : PRIVATE_REGISTRY_PASSWORD https:// LOCAL_NODE_FQDN :5000/v2/_catalog",
"curl -u myregistryusername:myregistrypassword1 https://admin.lab.redhat.com:5000/v2/_catalog {\"repositories\":[\"openshift4/ose-prometheus\",\"openshift4/ose-prometheus-alertmanager\",\"openshift4/ose-prometheus-node-exporter\",\"rhceph/rhceph-8-dashboard-rhel9\",\"rhceph/rhceph-8-rhel9\"]}",
"host02 host03 host04 [admin] host01",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url= CUSTOM_REPO_URL \"",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url=http://mycustomrepo.lab.redhat.com/x86_64/os/\"",
"ansible-playbook -vvv -i INVENTORY_HOST_FILE_ cephadm-set-container-insecure-registries.yml -e insecure_registry= REGISTRY_URL",
"ansible-playbook -vvv -i hosts cephadm-set-container-insecure-registries.yml -e insecure_registry=host01:5050",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url= CUSTOM_REPO_URL \" --limit GROUP_NAME | NODE_NAME",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url=http://mycustomrepo.lab.redhat.com/x86_64/os/\" --limit clients [ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url=http://mycustomrepo.lab.redhat.com/x86_64/os/\" --limit host02",
"cephadm --image PRIVATE_REGISTRY_NODE_FQDN :5000/ CUSTOM_IMAGE_NAME : IMAGE_TAG bootstrap --mon-ip IP_ADDRESS --registry-url PRIVATE_REGISTRY_NODE_FQDN :5000 --registry-username PRIVATE_REGISTRY_USERNAME --registry-password PRIVATE_REGISTRY_PASSWORD",
"cephadm --image admin.lab.redhat.com:5000/rhceph-8-rhel9:latest bootstrap --mon-ip 10.10.128.68 --registry-url admin.lab.redhat.com:5000 --registry-username myregistryusername --registry-password myregistrypassword1",
"Ceph Dashboard is now available at: URL: https://host01:8443/ User: admin Password: i8nhu7zham Enabling client.admin keyring and conf on hosts with \"admin\" label You can access the Ceph CLI with: sudo /usr/sbin/cephadm shell --fsid 266ee7a8-2a05-11eb-b846-5254002d4916 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring Please consider enabling telemetry to help improve Ceph: ceph telemetry on For more information see: https://docs.ceph.com/docs/master/mgr/telemetry/ Bootstrap complete.",
"ceph cephadm registry-login --registry-url CUSTOM_REGISTRY_NAME --registry_username REGISTRY_USERNAME --registry_password REGISTRY_PASSWORD",
"ceph cephadm registry-login --registry-url myregistry --registry_username myregistryusername --registry_password myregistrypassword1",
"ceph config set mgr mgr/cephadm/ OPTION_NAME CUSTOM_REGISTRY_NAME / CONTAINER_NAME",
"container_image_prometheus container_image_grafana container_image_alertmanager container_image_node_exporter",
"ceph config set mgr mgr/cephadm/container_image_prometheus myregistry/mycontainer ceph config set mgr mgr/cephadm/container_image_grafana myregistry/mycontainer ceph config set mgr mgr/cephadm/container_image_alertmanager myregistry/mycontainer ceph config set mgr mgr/cephadm/container_image_node_exporter myregistry/mycontainer",
"ceph orch redeploy node-exporter",
"ceph config rm mgr mgr/cephadm/ OPTION_NAME",
"ceph config rm mgr mgr/cephadm/container_image_prometheus",
"[ansible@admin ~]USD cd /usr/share/cephadm-ansible",
"ansible-playbook -i INVENTORY_HOST_FILE cephadm-distribute-ssh-key.yml -e cephadm_ssh_user= USER_NAME -e cephadm_pubkey_path= home/cephadm/ceph.key -e admin_node= ADMIN_NODE_NAME_1",
"[ansible@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=ceph-admin -e cephadm_pubkey_path=/home/cephadm/ceph.key -e admin_node=host01 [ansible@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=ceph-admin -e admin_node=host01",
"cephadm shell ceph -s",
"cephadm shell ceph -s",
"exit",
"podman ps",
"cephadm shell ceph -s cluster: id: f64f341c-655d-11eb-8778-fa163e914bcc health: HEALTH_OK services: mon: 3 daemons, quorum host01,host02,host03 (age 94m) mgr: host01.lbnhug(active, since 59m), standbys: host02.rofgay, host03.ohipra mds: 1/1 daemons up, 1 standby osd: 18 osds: 18 up (since 10m), 18 in (since 10m) rgw: 4 daemons active (2 hosts, 1 zones) data: volumes: 1/1 healthy pools: 8 pools, 225 pgs objects: 230 objects, 9.9 KiB usage: 271 MiB used, 269 GiB / 270 GiB avail pgs: 225 active+clean io: client: 85 B/s rd, 0 op/s rd, 0 op/s wr",
".Syntax [source,subs=\"verbatim,quotes\"] ---- ceph cephadm registry-login --registry-url _CUSTOM_REGISTRY_NAME_ --registry_username _REGISTRY_USERNAME_ --registry_password _REGISTRY_PASSWORD_ ----",
".Example ---- ceph cephadm registry-login --registry-url myregistry --registry_username myregistryusername --registry_password myregistrypassword1 ----",
"ssh-copy-id -f -i /etc/ceph/ceph.pub user@ NEWHOST",
"ssh-copy-id -f -i /etc/ceph/ceph.pub root@host02 ssh-copy-id -f -i /etc/ceph/ceph.pub root@host03",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"[ceph-admin@admin ~]USD cat hosts host02 host03 host04 [admin] host01",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit NEWHOST",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host02",
"ceph orch host add NEWHOST",
"ceph orch host add host02 Added host 'host02' with addr '10.10.128.69' ceph orch host add host03 Added host 'host03' with addr '10.10.128.70'",
"ceph orch host add HOSTNAME IP_ADDRESS",
"ceph orch host add host02 10.10.128.69 Added host 'host02' with addr '10.10.128.69'",
"ceph orch host ls",
"ceph orch host add HOSTNAME IP_ADDR",
"ceph orch host add host01 10.10.128.68",
"ceph orch host set-addr HOSTNAME IP_ADDR",
"ceph orch host set-addr HOSTNAME IPV4_ADDRESS",
"service_type: host addr: hostname: host02 labels: - mon - osd - mgr --- service_type: host addr: hostname: host03 labels: - mon - osd - mgr --- service_type: host addr: hostname: host04 labels: - mon - osd",
"ceph orch apply -i hosts.yaml Added host 'host02' with addr '10.10.128.69' Added host 'host03' with addr '10.10.128.70' Added host 'host04' with addr '10.10.128.71'",
"cephadm shell --mount hosts.yaml -- ceph orch apply -i /mnt/hosts.yaml",
"ceph orch host ls HOST ADDR LABELS STATUS host02 host02 mon osd mgr host03 host03 mon osd mgr host04 host04 mon osd",
"cephadm shell",
"ceph orch host add HOST_NAME HOST_ADDRESS",
"ceph orch host add host03 10.10.128.70",
"cephadm shell",
"ceph orch host ls",
"ceph orch host drain HOSTNAME",
"ceph orch host drain host02",
"ceph orch osd rm status",
"ceph orch ps HOSTNAME",
"ceph orch ps host02",
"ceph orch host rm HOSTNAME",
"ceph orch host rm host02",
"cephadm shell",
"ceph orch host label add HOSTNAME LABEL",
"ceph orch host label add host02 mon",
"ceph orch host ls",
"cephadm shell",
"ceph orch host label rm HOSTNAME LABEL",
"ceph orch host label rm host02 mon",
"ceph orch host ls",
"cephadm shell",
"ceph orch host ls HOST ADDR LABELS STATUS host01 _admin mon osd mgr host02 mon osd mgr mylabel",
"ceph orch apply DAEMON --placement=\"label: LABEL \"",
"ceph orch apply prometheus --placement=\"label:mylabel\"",
"vi placement.yml",
"service_type: prometheus placement: label: \"mylabel\"",
"ceph orch apply -i FILENAME",
"ceph orch apply -i placement.yml Scheduled prometheus update...",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=prometheus NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID prometheus.host02 host02 *:9095 running (2h) 8m ago 2h 85.3M - 2.22.2 ac25aac5d567 ad8c7593d7c0",
"ceph orch apply mon 5",
"ceph orch apply mon --unmanaged",
"ceph orch host label add HOSTNAME mon",
"ceph orch host label add host01 mon",
"ceph orch host ls",
"ceph orch host label add host02 mon ceph orch host label add host03 mon ceph orch host ls HOST ADDR LABELS STATUS host01 mon host02 mon host03 mon host04 host05 host06",
"ceph orch apply mon label:mon",
"ceph orch apply mon HOSTNAME1 , HOSTNAME2 , HOSTNAME3",
"ceph orch apply mon host01,host02,host03",
"[ceph-admin@admin cephadm-ansible]USD ceph cephadm generate-key",
"[ceph-admin@admin cephadm-ansible]USD ceph cephadm get-pub-key",
"[ceph-admin@admin cephadm-ansible]USDceph cephadm clear-key",
"[ceph-admin@admin cephadm-ansible]USD ceph mgr fail",
"[ceph-admin@admin cephadm-ansible]USD ceph cephadm set-user <user>",
"[ceph-admin@admin cephadm-ansible]USD ceph cephadm set-user user",
"ceph cephadm get-pub-key > ~/ceph.pub",
"[ceph-admin@admin cephadm-ansible]USD ceph cephadm get-pub-key > ~/ceph.pub",
"ssh-copy-id -f -i ~/ceph.pub USER @ HOST",
"[ceph-admin@admin cephadm-ansible]USD ssh-copy-id ceph-admin@host01",
"ceph orch host ls HOST ADDR LABELS STATUS host01 mon,mgr,_admin host02 mon host03 mon,mgr host04 host05 host06",
"ceph orch host label add HOSTNAME _admin",
"ceph orch host label add host03 _admin",
"ceph orch host ls HOST ADDR LABELS STATUS host01 mon,mgr,_admin host02 mon host03 mon,mgr,_admin host04 host05 host06",
"ceph orch host label add HOSTNAME mon",
"ceph orch host label add host02 mon ceph orch host label add host03 mon",
"ceph orch host ls",
"ceph orch host ls HOST ADDR LABELS STATUS host01 mon,mgr,_admin host02 mon host03 mon host04 host05 host06",
"ceph orch apply mon label:mon",
"ceph orch apply mon HOSTNAME1 , HOSTNAME2 , HOSTNAME3",
"ceph orch apply mon host01,host02,host03",
"ceph orch apply mon NODE:IP_ADDRESS_OR_NETWORK_NAME [ NODE:IP_ADDRESS_OR_NETWORK_NAME ...]",
"ceph orch apply mon host02:10.10.128.69 host03:mynetwork",
"ceph orch apply mgr NUMBER_OF_DAEMONS",
"ceph orch apply mgr 3",
"ceph orch apply mgr --placement \" HOSTNAME1 HOSTNAME2 HOSTNAME3 \"",
"ceph orch apply mgr --placement \"host02 host03 host04\"",
"ceph orch device ls [--hostname= HOSTNAME1 HOSTNAME2 ] [--wide] [--refresh]",
"ceph orch device ls --wide --refresh",
"ceph orch daemon add osd HOSTNAME : DEVICE_PATH",
"ceph orch daemon add osd host02:/dev/sdb",
"ceph orch apply osd --all-available-devices",
"ansible-playbook -i hosts cephadm-clients.yml -extra-vars '{\"fsid\":\" FSID \", \"client_group\":\" ANSIBLE_GROUP_NAME \", \"keyring\":\" PATH_TO_KEYRING \", \"conf\":\" CONFIG_FILE \"}'",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-clients.yml --extra-vars '{\"fsid\":\"be3ca2b2-27db-11ec-892b-005056833d58\",\"client_group\":\"fs_clients\",\"keyring\":\"/etc/ceph/fs.keyring\", \"conf\": \"/etc/ceph/ceph.conf\"}'",
"ceph mgr module disable cephadm",
"ceph fsid",
"exit",
"cephadm rm-cluster --force --zap-osds --fsid FSID",
"cephadm rm-cluster --force --zap-osds --fsid a6ca415a-cde7-11eb-a41a-002590fc2544",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"host02 host03 host04 [admin] host01 [clients] client01 client02 client03",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --limit CLIENT_GROUP_NAME | CLIENT_NODE_NAME",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --limit clients",
"ansible-playbook -i INVENTORY_FILE cephadm-clients.yml --extra-vars '{\"fsid\":\" FSID \",\"keyring\":\" KEYRING_PATH \",\"client_group\":\" CLIENT_GROUP_NAME \",\"conf\":\" CEPH_CONFIGURATION_PATH \",\"keyring_dest\":\" KEYRING_DESTINATION_PATH \"}'",
"[ceph-admin@host01 cephadm-ansible]USD ansible-playbook -i hosts cephadm-clients.yml --extra-vars '{\"fsid\":\"266ee7a8-2a05-11eb-b846-5254002d4916\",\"keyring\":\"/etc/ceph/ceph.client.admin.keyring\",\"client_group\":\"clients\",\"conf\":\"/etc/ceph/ceph.conf\",\"keyring_dest\":\"/etc/ceph/custom.name.ceph.keyring\"}'",
"ansible-playbook -i INVENTORY_FILE cephadm-clients.yml --extra-vars '{\"fsid\":\" FSID \",\"keyring\":\" KEYRING_PATH \",\"conf\":\" CONF_PATH \"}'",
"ls -l /etc/ceph/ -rw-------. 1 ceph ceph 151 Jul 11 12:23 custom.name.ceph.keyring -rw-------. 1 ceph ceph 151 Jul 11 12:23 ceph.keyring -rw-------. 1 ceph ceph 269 Jul 11 12:23 ceph.conf",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi INVENTORY_FILE HOST1 labels=\"[' LABEL1 ', ' LABEL2 ']\" HOST2 labels=\"[' LABEL1 ', ' LABEL2 ']\" HOST3 labels=\"[' LABEL1 ']\" [admin] ADMIN_HOST monitor_address= MONITOR_IP_ADDRESS labels=\"[' ADMIN_LABEL ', ' LABEL1 ', ' LABEL2 ']\"",
"[ceph-admin@admin cephadm-ansible]USD sudo vi hosts host02 labels=\"['mon', 'mgr']\" host03 labels=\"['mon', 'mgr']\" host04 labels=\"['osd']\" host05 labels=\"['osd']\" host06 labels=\"['osd']\" [admin] host01 monitor_address=10.10.128.68 labels=\"['_admin', 'mon', 'mgr']\"",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: NAME_OF_PLAY hosts: BOOTSTRAP_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: -name: NAME_OF_TASK cephadm_registry_login: state: STATE registry_url: REGISTRY_URL registry_username: REGISTRY_USER_NAME registry_password: REGISTRY_PASSWORD - name: NAME_OF_TASK cephadm_bootstrap: mon_ip: \"{{ monitor_address }}\" dashboard_user: DASHBOARD_USER dashboard_password: DASHBOARD_PASSWORD allow_fqdn_hostname: ALLOW_FQDN_HOSTNAME cluster_network: NETWORK_CIDR",
"[ceph-admin@admin cephadm-ansible]USD sudo vi bootstrap.yml --- - name: bootstrap the cluster hosts: host01 become: true gather_facts: false tasks: - name: login to registry cephadm_registry_login: state: login registry_url: registry.redhat.io registry_username: user1 registry_password: mypassword1 - name: bootstrap initial cluster cephadm_bootstrap: mon_ip: \"{{ monitor_address }}\" dashboard_user: mydashboarduser dashboard_password: mydashboardpassword allow_fqdn_hostname: true cluster_network: 10.10.128.0/28",
"ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml -vvv",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts bootstrap.yml -vvv",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi INVENTORY_FILE NEW_HOST1 labels=\"[' LABEL1 ', ' LABEL2 ']\" NEW_HOST2 labels=\"[' LABEL1 ', ' LABEL2 ']\" NEW_HOST3 labels=\"[' LABEL1 ']\" [admin] ADMIN_HOST monitor_address= MONITOR_IP_ADDRESS labels=\"[' ADMIN_LABEL ', ' LABEL1 ', ' LABEL2 ']\"",
"[ceph-admin@admin cephadm-ansible]USD sudo vi hosts host02 labels=\"['mon', 'mgr']\" host03 labels=\"['mon', 'mgr']\" host04 labels=\"['osd']\" host05 labels=\"['osd']\" host06 labels=\"['osd']\" [admin] host01 monitor_address= 10.10.128.68 labels=\"['_admin', 'mon', 'mgr']\"",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit NEWHOST",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host02",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: HOSTS_OR_HOST_GROUPS become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_host: name: \"{{ ansible_facts['hostname'] }}\" address: \"{{ ansible_facts['default_ipv4']['address'] }}\" labels: \"{{ labels }}\" delegate_to: HOST_TO_DELEGATE_TASK_TO - name: NAME_OF_TASK when: inventory_hostname in groups['admin'] ansible.builtin.shell: cmd: CEPH_COMMAND_TO_RUN register: REGISTER_NAME - name: NAME_OF_TASK when: inventory_hostname in groups['admin'] debug: msg: \"{{ REGISTER_NAME .stdout }}\"",
"[ceph-admin@admin cephadm-ansible]USD sudo vi add-hosts.yml --- - name: add additional hosts to the cluster hosts: all become: true gather_facts: true tasks: - name: add hosts to the cluster ceph_orch_host: name: \"{{ ansible_facts['hostname'] }}\" address: \"{{ ansible_facts['default_ipv4']['address'] }}\" labels: \"{{ labels }}\" delegate_to: host01 - name: list hosts in the cluster when: inventory_hostname in groups['admin'] ansible.builtin.shell: cmd: ceph orch host ls register: host_list - name: print current list of hosts when: inventory_hostname in groups['admin'] debug: msg: \"{{ host_list.stdout }}\"",
"ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts add-hosts.yml",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: NAME_OF_PLAY hosts: ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_host: name: HOST_TO_REMOVE state: STATE - name: NAME_OF_TASK ceph_orch_host: name: HOST_TO_REMOVE state: STATE retries: NUMBER_OF_RETRIES delay: DELAY until: CONTINUE_UNTIL register: REGISTER_NAME - name: NAME_OF_TASK ansible.builtin.shell: cmd: ceph orch host ls register: REGISTER_NAME - name: NAME_OF_TASK debug: msg: \"{{ REGISTER_NAME .stdout }}\"",
"[ceph-admin@admin cephadm-ansible]USD sudo vi remove-hosts.yml --- - name: remove host hosts: host01 become: true gather_facts: true tasks: - name: drain host07 ceph_orch_host: name: host07 state: drain - name: remove host from the cluster ceph_orch_host: name: host07 state: absent retries: 20 delay: 1 until: result is succeeded register: result - name: list hosts in the cluster ansible.builtin.shell: cmd: ceph orch host ls register: host_list - name: print current list of hosts debug: msg: \"{{ host_list.stdout }}\"",
"ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts remove-hosts.yml",
"TASK [print current hosts] ****************************************************************************************************** Friday 24 June 2022 14:52:40 -0400 (0:00:03.365) 0:02:31.702 *********** ok: [host01] => msg: |- HOST ADDR LABELS STATUS host01 10.10.128.68 _admin mon mgr host02 10.10.128.69 mon mgr host03 10.10.128.70 mon mgr host04 10.10.128.71 osd host05 10.10.128.72 osd host06 10.10.128.73 osd",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_config: action: GET_OR_SET who: DAEMON_TO_SET_CONFIGURATION_TO option: CEPH_CONFIGURATION_OPTION value: VALUE_OF_PARAMETER_TO_SET - name: NAME_OF_TASK ceph_config: action: GET_OR_SET who: DAEMON_TO_SET_CONFIGURATION_TO option: CEPH_CONFIGURATION_OPTION register: REGISTER_NAME - name: NAME_OF_TASK debug: msg: \" MESSAGE_TO_DISPLAY {{ REGISTER_NAME .stdout }}\"",
"[ceph-admin@admin cephadm-ansible]USD sudo vi change_configuration.yml --- - name: set pool delete hosts: host01 become: true gather_facts: false tasks: - name: set the allow pool delete option ceph_config: action: set who: mon option: mon_allow_pool_delete value: true - name: get the allow pool delete setting ceph_config: action: get who: mon option: mon_allow_pool_delete register: verify_mon_allow_pool_delete - name: print current mon_allow_pool_delete setting debug: msg: \"the value of 'mon_allow_pool_delete' is {{ verify_mon_allow_pool_delete.stdout }}\"",
"ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts change_configuration.yml",
"TASK [print current mon_allow_pool_delete setting] ************************************************************* Wednesday 29 June 2022 13:51:41 -0400 (0:00:05.523) 0:00:17.953 ******** ok: [host01] => msg: the value of 'mon_allow_pool_delete' is true",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: HOSTS_OR_HOST_GROUPS become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_apply: spec: | service_type: SERVICE_TYPE service_id: UNIQUE_NAME_OF_SERVICE placement: host_pattern: ' HOST_PATTERN_TO_SELECT_HOSTS ' label: LABEL spec: SPECIFICATION_OPTIONS :",
"[ceph-admin@admin cephadm-ansible]USD sudo vi deploy_osd_service.yml --- - name: deploy osd service hosts: host01 become: true gather_facts: true tasks: - name: apply osd spec ceph_orch_apply: spec: | service_type: osd service_id: osd placement: host_pattern: '*' label: osd spec: data_devices: all: true",
"ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts deploy_osd_service.yml",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_daemon: state: STATE_OF_SERVICE daemon_id: DAEMON_ID daemon_type: TYPE_OF_SERVICE",
"[ceph-admin@admin cephadm-ansible]USD sudo vi restart_services.yml --- - name: start and stop services hosts: host01 become: true gather_facts: false tasks: - name: start osd.0 ceph_orch_daemon: state: started daemon_id: 0 daemon_type: osd - name: stop mon.host02 ceph_orch_daemon: state: stopped daemon_id: host02 daemon_type: mon",
"ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts restart_services.yml",
"cephadm adopt [-h] --name DAEMON_NAME --style STYLE [--cluster CLUSTER ] --legacy-dir [ LEGACY_DIR ] --config-json CONFIG_JSON ] [--skip-firewalld] [--skip-pull]",
"cephadm adopt --style=legacy --name prometheus.host02",
"cephadm ceph-volume inventory/simple/raw/lvm [-h] [--fsid FSID ] [--config-json CONFIG_JSON ] [--config CONFIG , -c CONFIG ] [--keyring KEYRING , -k KEYRING ]",
"cephadm ceph-volume inventory --fsid f64f341c-655d-11eb-8778-fa163e914bcc",
"cephadm check-host [--expect-hostname HOSTNAME ]",
"cephadm check-host --expect-hostname host02",
"cephadm shell deploy DAEMON_TYPE [-h] [--name DAEMON_NAME ] [--fsid FSID ] [--config CONFIG , -c CONFIG ] [--config-json CONFIG_JSON ] [--keyring KEYRING ] [--key KEY ] [--osd-fsid OSD_FSID ] [--skip-firewalld] [--tcp-ports TCP_PORTS ] [--reconfig] [--allow-ptrace] [--memory-request MEMORY_REQUEST ] [--memory-limit MEMORY_LIMIT ] [--meta-json META_JSON ]",
"cephadm shell deploy mon --fsid f64f341c-655d-11eb-8778-fa163e914bcc",
"cephadm enter [-h] [--fsid FSID ] --name NAME [command [command ...]]",
"cephadm enter --name 52c611f2b1d9",
"cephadm help",
"cephadm help",
"cephadm install PACKAGES",
"cephadm install ceph-common ceph-osd",
"cephadm --image IMAGE_ID inspect-image",
"cephadm --image 13ea90216d0be03003d12d7869f72ad9de5cec9e54a27fd308e01e467c0d4a0a inspect-image",
"cephadm list-networks",
"cephadm list-networks",
"cephadm ls [--no-detail] [--legacy-dir LEGACY_DIR ]",
"cephadm ls --no-detail",
"cephadm logs [--fsid FSID ] --name DAEMON_NAME cephadm logs [--fsid FSID ] --name DAEMON_NAME -- -n NUMBER # Last N lines cephadm logs [--fsid FSID ] --name DAEMON_NAME -- -f # Follow the logs",
"cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name osd.8 cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name osd.8 -- -n 20 cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name osd.8 -- -f",
"cephadm prepare-host [--expect-hostname HOSTNAME ]",
"cephadm prepare-host cephadm prepare-host --expect-hostname host01",
"cephadm [-h] [--image IMAGE_ID ] pull",
"cephadm --image 13ea90216d0be03003d12d7869f72ad9de5cec9e54a27fd308e01e467c0d4a0a pull",
"cephadm registry-login --registry-url [ REGISTRY_URL ] --registry-username [ USERNAME ] --registry-password [ PASSWORD ] [--fsid FSID ] [--registry-json JSON_FILE ]",
"cephadm registry-login --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1",
"cat REGISTRY_FILE { \"url\":\" REGISTRY_URL \", \"username\":\" REGISTRY_USERNAME \", \"password\":\" REGISTRY_PASSWORD \" }",
"cat registry_file { \"url\":\"registry.redhat.io\", \"username\":\"myuser\", \"password\":\"mypass\" } cephadm registry-login -i registry_file",
"cephadm rm-daemon [--fsid FSID ] [--name DAEMON_NAME ] [--force ] [--force-delete-data]",
"cephadm rm-daemon --fsid f64f341c-655d-11eb-8778-fa163e914bcc --name osd.8",
"cephadm rm-cluster [--fsid FSID ] [--force]",
"cephadm rm-cluster --fsid f64f341c-655d-11eb-8778-fa163e914bcc",
"ceph mgr module disable cephadm",
"cephadm rm-repo [-h]",
"cephadm rm-repo",
"cephadm run [--fsid FSID ] --name DAEMON_NAME",
"cephadm run --fsid f64f341c-655d-11eb-8778-fa163e914bcc --name osd.8",
"cephadm shell [--fsid FSID ] [--name DAEMON_NAME , -n DAEMON_NAME ] [--config CONFIG , -c CONFIG ] [--mount MOUNT , -m MOUNT ] [--keyring KEYRING , -k KEYRING ] [--env ENV , -e ENV ]",
"cephadm shell -- ceph orch ls cephadm shell",
"cephadm unit [--fsid FSID ] --name DAEMON_NAME start/stop/restart/enable/disable",
"cephadm unit --fsid f64f341c-655d-11eb-8778-fa163e914bcc --name osd.8 start",
"cephadm version",
"cephadm version"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html-single/installation_guide/index |
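The Ceph installation entries above repeat one core workflow: run the cephadm-ansible preflight playbook, bootstrap the first node, then distribute the cluster SSH key and add further hosts. The sketch below condenses that workflow using the same placeholder host names, inventory file, and registry credentials as the entries above; substitute real values before running it.

# On the admin node, prepare every host listed in the Ansible inventory
cd /usr/share/cephadm-ansible
ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs"

# Bootstrap the cluster on the first monitor host
cephadm bootstrap --mon-ip 10.10.128.68 --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1

# Copy the cluster public key to a new host and add it to the cluster
ssh-copy-id -f -i /etc/ceph/ceph.pub root@host02
cephadm shell -- ceph orch host add host02 10.10.128.69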
4.353. xorg-x11-drv-intel | 4.353. xorg-x11-drv-intel 4.353.1. RHBA-2011:1619 - xorg-x11-drv-intel bug fix and enhancement update Updated xorg-x11-drv-intel packages that fix two bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The xorg-x11-drv-intel packages contain an Intel integrated graphics video driver for the X.Org implementation of the X Window System. The xorg-x11-drv-intel packages have been upgraded to upstream version 2.16, which provides a number of bug fixes and enhancements over the version. (BZ# 713767 ) Bug Fixes BZ# 699933 A black screen could appear when attempting to turn on a Lenovo ThinkPad T500 laptop after suspending it. As a consequence, the laptop could not recover from suspend. The source code has been modified so that Lenovo ThinkPad T500 laptops now recover from suspend successfully. BZ# 720702 Prior to this update, arithmetic rounding in the panel fitting algorithm did not work as expected. As a result, the screen was staggered in a diagonal way when scaling up the 1360x768 mode for the Intel Ironlake driver with Low-voltage differential signaling (LVDS) while the scaling mode was set to "None" or "Full aspect". This update modifies the rounding in the panel fitting algorithm so that the screen resolution can now be changed correctly. Enhancement BZ# 684313 This update adds support for future Intel embedded graphics controllers and enables additional 3D features. All users of xorg-x11-drv-intel are advised to upgrade to these updated xorg-x11-drv-intel packages, which fix these bugs and add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/xorg-x11-drv-intel |
probe::tty.poll | probe::tty.poll Name probe::tty.poll - Called when a tty device is being polled Synopsis Values file_name the tty file name wait_key the wait queue key | [
"tty.poll"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-tty-poll |
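Because tty.poll only exposes values, the usual way to exercise it is a one-line script passed to stap. The following invocation is a minimal sketch, not taken from the entry above: it assumes SystemTap and the matching kernel debuginfo are installed, prints the documented file_name value for each poll, and stops itself after ten seconds.

# Print the tty file name on every poll, then stop after 10 seconds
stap -e 'probe tty.poll { printf("polled: %s\n", file_name) } probe timer.s(10) { exit() }'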
7.2. Order Constraints | 7.2. Order Constraints Order constraints determine the order in which the resources run. Use the following command to configure an order constraint. Table 7.3, "Properties of an Order Constraint" summarizes the properties and options for configuring order constraints. Table 7.3. Properties of an Order Constraint Field Description resource_id The name of a resource on which an action is performed. action The action to perform on a resource. Possible values of the action property are as follows: * start - Start the resource. * stop - Stop the resource. * promote - Promote the resource from a slave resource to a master resource. * demote - Demote the resource from a master resource to a slave resource. If no action is specified, the default action is start . For information on master and slave resources, see Section 9.2, "Multistate Resources: Resources That Have Multiple Modes" . kind option How to enforce the constraint. The possible values of the kind option are as follows: * Optional - Only applies if both resources are executing the specified action. For information on optional ordering, see Section 7.2.2, "Advisory Ordering" . * Mandatory - Always (default value). If the first resource you specified is stopping or cannot be started, the second resource you specified must be stopped. For information on mandatory ordering, see Section 7.2.1, "Mandatory Ordering" . * Serialize - Ensure that no two stop/start actions occur concurrently for a set of resources. symmetrical option If true, which is the default, stop the resources in the reverse order. Default value: true 7.2.1. Mandatory Ordering A mandatory constraint indicates that the second resource you specify cannot run without the first resource you specify being active. This is the default value of the kind option. Leaving the default value ensures that the second resource you specify will react when the first resource you specify changes state. If the first resource you specified was running and is stopped, the second resource you specified will also be stopped (if it is running). If the first resource you specified was not running and cannot be started, the second resource you specified will be stopped (if it is running). If the first resource you specified is (re)started while the second resource you specified is running, the second resource you specified will be stopped and restarted. Note, however, that the cluster reacts to each state change. If the first resource is restarted and is in a started state again before the second resource initiated a stop operation, the second resource will not need to be restarted. 7.2.2. Advisory Ordering When the kind=Optional option is specified for an order constraint, the constraint is considered optional and only applies if both resources are executing the specified actions. Any change in state by the first resource you specify will have no effect on the second resource you specify. The following command configures an advisory ordering constraint for the resources named VirtualIP and dummy_resource . 7.2.3. Ordered Resource Sets A common situation is for an administrator to create a chain of ordered resources, where, for example, resource A starts before resource B which starts before resource C. If your configuration requires that you create a set of resources that is colocated and started in order, you can configure a resource group that contains those resources, as described in Section 6.5, "Resource Groups" .
There are some situations, however, where configuring the resources that need to start in a specified order as a resource group is not appropriate: You may need to configure resources to start in order and the resources are not necessarily colocated. You may have a resource C that must start after either resource A or B has started but there is no relationship between A and B. You may have resources C and D that must start after both resources A and B have started, but there is no relationship between A and B or between C and D. In these situations, you can create an order constraint on a set or sets of resources with the pcs constraint order set command. You can set the following options for a set of resources with the pcs constraint order set command. sequential , which can be set to true or false to indicate whether the set of resources must be ordered relative to each other. Setting sequential to false allows a set to be ordered relative to other sets in the ordering constraint, without its members being ordered relative to each other. Therefore, this option makes sense only if multiple sets are listed in the constraint; otherwise, the constraint has no effect. require-all , which can be set to true or false to indicate whether all of the resources in the set must be active before continuing. Setting require-all to false means that only one resource in the set needs to be started before continuing on to the next set. Setting require-all to false has no effect unless used in conjunction with unordered sets, which are sets for which sequential is set to false . action , which can be set to start , promote , demote or stop , as described in Table 7.3, "Properties of an Order Constraint" . You can set the following constraint options for a set of resources following the setoptions parameter of the pcs constraint order set command. id , to provide a name for the constraint you are defining. score , to indicate the degree of preference for this constraint. For information on this option, see Table 7.4, "Properties of a Colocation Constraint" . If you have three resources named D1 , D2 , and D3 , the following command configures them as an ordered resource set. A combined sketch of these set options follows the command list below. 7.2.4. Removing Resources From Ordering Constraints Use the following command to remove resources from any ordering constraint. | [
"pcs constraint order [ action ] resource_id then [ action ] resource_id [ options ]",
"pcs constraint order VirtualIP then dummy_resource kind=Optional",
"pcs constraint order set resource1 resource2 [ resourceN ]... [ options ] [set resourceX resourceY ... [ options ]] [setoptions [ constraint_options ]]",
"pcs constraint order set D1 D2 D3",
"pcs constraint order remove resource1 [ resourceN ]"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-orderconstraints-HAAR |
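To make the set options from the preceding entry concrete, the sketch below combines them in one constraint; the resource names D1, D2, and D3 come from the example in the text, while the specific option values and the constraint id are illustrative assumptions rather than recommendations.

# D1 and D2 are unordered between themselves, and only one of them must be active before D3 starts
pcs constraint order set D1 D2 sequential=false require-all=false set D3 setoptions id=order-d1-d2-then-d3

# Remove D1 from any ordering constraint it belongs to
pcs constraint order remove D1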
Chapter 11. VLAN-aware instances | Chapter 11. VLAN-aware instances 11.1. VLAN trunks and VLAN transparent networks VM instances can send and receive VLAN-tagged traffic over a single virtual NIC. This is particularly useful for NFV applications (VNFs) that expect VLAN-tagged traffic, allowing a single virtual NIC to serve multiple customers or services. In ML2/OVN deployments you can support VLAN-aware instances using VLAN transparent networks. As an alternative in ML2/OVN or ML2/OVS deployments, you can support VLAN-aware instances using trunks. In a VLAN transparent network, you set up VLAN tagging in the VM instances. The VLAN tags are transferred over the network and consumed by the instances on the same VLAN, and ignored by other instances and devices. In a VLAN transparent network, the VLANs are managed in the instance. You do not need to set up the VLAN in the OpenStack Networking Service (neutron). VLAN trunks support VLAN-aware instances by combining VLANs into a single trunked port. For example, a project data network can use VLANs or tunneling (VXLAN, GRE, or Geneve) segmentation, while the instances see the traffic tagged with VLAN IDs. Network packets are tagged immediately before they are injected to the instance and do not need to be tagged throughout the entire network. The following table compares certain features of VLAN transparent networks and VLAN trunks. Transparent Trunk Mechanism driver support ML2/OVN ML2/OVN, ML2/OVS VLAN setup managed by VM instance OpenStack Networking Service (neutron) IP assignment Configured in VM instance Assigned by DHCP VLAN ID Flexible. You can set the VLAN ID in the instance Fixed. Instances must use the VLAN ID configured in the trunk 11.2. Enabling VLAN transparency in ML2/OVN deployments Enable VLAN transparency if you need to send VLAN tagged traffic between virtual machine (VM) instances. In a VLAN transparent network you can configure the VLANS directly in the VMs without configuring them in neutron. Prerequisites Deployment of Red Hat OpenStack Platform 16.1 or higher, with ML2/OVN as the mechanism driver. Provider network of type VLAN or Geneve. Do not use VLAN transparency in deployments with flat type provider networks. Ensure that the external switch supports 802.1q VLAN stacking using ethertype 0x8100 on both VLANs. OVN VLAN transparency does not support 802.1ad QinQ with outer provider VLAN ethertype set to 0x88A8 or 0x9100. You must have RHOSP administrator privileges. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: In an environment file on the undercloud node, set the EnableVLANTransparency parameter to true . For example, add the following lines to ovn-extras.yaml . Include the environment file in the openstack overcloud deploy command with any other environment files that are relevant to your environment and deploy the overcloud: Replace <other_overcloud_environment_files> with the list of environment files that are part of your existing deployment. Create the network using the --transparent-vlan argument. Example Set up a VLAN interface on each participating VM. Set the interface MTU to 4 bytes less than the MTU of the underlay network to accommodate the extra tagging required by VLAN transparency. For example, if the underlay network MTU is 1500, set the interface MTU to 1496. The following example command adds a VLAN interface on eth0 with an MTU of 1496. 
The VLAN is 50 and the interface name is vlan50 : Example Choose one of these alternatives for the IP address you created on the VLAN interface inside the VM in step 4: Set an allowed address pair on the VM port. Example The following example sets an allowed address pair on port, fv82gwk3-qq2e-yu93-go31-56w7sf476mm0 , by using 192.128.111.3 and optionally adding a MAC address, 00:40:96:a8:45:c4 : Disable port security on the port. Disabling port security provides a practical alternative when it is not possible to list all of the possible combinations in allowed address pairs. Example The following example disables port security on port fv82gwk3-qq2e-yu93-go31-56w7sf476mm0 : Verification Ping between two VMs on the VLAN using the vlan50 IP address. Use tcpdump on eth0 to see if the packets arrive with the VLAN tag intact. Additional resources Environment files in the Customizing your Red Hat OpenStack Platform deployment guide Including environment files in overcloud creation in the Customizing your Red Hat OpenStack Platform deployment guide port set in the Command line interface reference 11.3. Reviewing the trunk plug-in During a Red Hat OpenStack deployment, the trunk plug-in is enabled by default. You can review the configuration on the controller nodes: On the controller node, confirm that the trunk plug-in is enabled in the /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf file: 11.4. Creating a trunk connection To implement trunks for VLAN-tagged traffic, create a parent port and attach the new port to an existing neutron network. When you attach the new port, OpenStack Networking adds a trunk connection to the parent port you created. Next, create subports. These subports connect VLANs to instances, which allow connectivity to the trunk. Within the instance operating system, you must also create a sub-interface that tags traffic for the VLAN associated with the subport. Identify the network that contains the instances that require access to the trunked VLANs. In this example, this is the public network: Create the parent trunk port, and attach it to the network that the instance connects to. In this example, create a neutron port named parent-trunk-port on the public network. This trunk is the parent port, as you can use it to create subports . Create a trunk using the port that you created in step 2. In this example the trunk is named parent-trunk . View the trunk connection: View the details of the trunk connection: 11.5. Adding subports to the trunk Create a neutron port. This port is a subport connection to the trunk. You must also specify the MAC address that you assigned to the parent port: Note If you receive the error HttpException: Conflict , confirm that you are creating the subport on a different network to the one that has the parent trunk port. This example uses the public network for the parent trunk port, and private for the subport. Associate the port with the trunk ( parent-trunk ), and specify the VLAN ID ( 55 ): 11.6. Configuring an instance to use a trunk You must configure the VM instance operating system to use the MAC address that the Red Hat OpenStack Platform (RHOSP) Networking service (neutron) assigned to the subport. You can also configure the subport to use a specific MAC address during the subport creation step. Prerequisites If you are performing live migrations of your Compute nodes, ensure that the RHOSP Networking service RPC response timeout is appropriately set for your RHOSP deployment.
The RPC response timeout value can vary between sites and is dependent on the system speed. The general recommendation is to set the value to at least 120 seconds per 100 trunk ports. The best practice is to measure the trunk port bind process time for your RHOSP deployment, and then set the RHOSP Networking service RPC response timeout appropriately. Try to keep the RPC response timeout value low, but also provide enough time for the RHOSP Networking service to receive an RPC response. For more information, see Section 11.7, "Configuring Networking service RPC timeout" . Procedure Review the configuration of your network trunk, using the network trunk command. Example Sample output Example Sample output Create an instance that uses the parent port-id as its vNIC. Example Sample output Additional resources Configuring Networking service RPC timeout 11.7. Configuring Networking service RPC timeout There can be situations when you must modify the Red Hat OpenStack Platform (RHOSP) Networking service (neutron) RPC response timeout. For example, live migrations for Compute nodes that use trunk ports can fail if the timeout value is too low. The RPC response timeout value can vary between sites and is dependent on the system speed. The general recommendation is to set the value to at least 120 seconds per 100 trunk ports. If your site uses trunk ports, the best practice is to measure the trunk port bind process time for your RHOSP deployment, and then set the RHOSP Networking service RPC response timeout appropriately. Try to keep the RPC response timeout value low, but also provide enough time for the RHOSP Networking service to receive an RPC response. By using a manual hieradata override, rpc_response_timeout , you can set the RPC response timeout value for the RHOSP Networking service. Procedure On the undercloud host, logged in as the stack user, create a custom YAML environment file. Example Tip The RHOSP Orchestration service (heat) uses a set of plans called templates to install and configure your environment. You can customize aspects of the overcloud with a custom environment file , which is a special type of template that provides customization for your heat templates. In the YAML environment file under ExtraConfig , set the appropriate value (in seconds) for rpc_response_timeout . (The default value is 60 seconds.) Example Note The RHOSP Orchestration service (heat) updates all RHOSP nodes with the value you set in the custom environment file; however, this value only impacts the RHOSP Networking components. Run the openstack overcloud deploy command and include the core heat templates, environment files, and this new custom environment file. Important The order of the environment files is important as the parameters and resources defined in subsequent environment files take precedence. Example Additional resources Environment files in the Customizing your Red Hat OpenStack Platform deployment guide Including environment files in overcloud creation in the Customizing your Red Hat OpenStack Platform deployment guide 11.8. Understanding trunk states ACTIVE : The trunk is working as expected and there are no current requests. DOWN : The virtual and physical resources for the trunk are not in sync. This can be a temporary state during negotiation. BUILD : There has been a request and the resources are being provisioned. After successful completion the trunk returns to ACTIVE . DEGRADED : The provisioning request did not complete, so the trunk has only been partially provisioned.
It is recommended to remove the subports and try again. ERROR : The provisioning request was unsuccessful. Remove the resource that caused the error to return the trunk to a healthy state. Do not add more subports while in the ERROR state, as this can cause more issues. | [
"source ~/stackrc",
"parameter_defaults: EnableVLANTransparency: true",
"openstack overcloud deploy --templates ... -e <other_overcloud_environment_files> -e ovn-extras.yaml ...",
"openstack network create network-name --transparent-vlan",
"ip link add link eth0 name vlan50 type vlan id 50 mtu 1496 ip link set vlan50 up ip addr add 192.128.111.3/24 dev vlan50",
"openstack port set --allowed-address ip-address=192.128.111.3,mac-address=00:40:96:a8:45:c4 fv82gwk3-qq2e-yu93-go31-56w7sf476mm0",
"openstack port set --no-security-group --disable-port-security fv82gwk3-qq2e-yu93-go31-56w7sf476mm0",
"service_plugins=router,qos,trunk",
"openstack network list +--------------------------------------+---------+--------------------------------------+ | ID | Name | Subnets | +--------------------------------------+---------+--------------------------------------+ | 82845092-4701-4004-add7-838837837621 | private | 434c7982-cd96-4c41-a8c9-b93adbdcb197 | | 8d8bc6d6-5b28-4e00-b99e-157516ff0050 | public | 3fd811b4-c104-44b5-8ff8-7a86af5e332c | +--------------------------------------+---------+--------------------------------------+",
"openstack port create --network public parent-trunk-port +-----------------------+-----------------------------------------------------------------------------+ | Field | Value | +-----------------------+-----------------------------------------------------------------------------+ | admin_state_up | UP | | allowed_address_pairs | | | binding_host_id | | | binding_profile | | | binding_vif_details | | | binding_vif_type | unbound | | binding_vnic_type | normal | | created_at | 2016-10-20T02:02:33Z | | description | | | device_id | | | device_owner | | | extra_dhcp_opts | | | fixed_ips | ip_address='172.24.4.230', subnet_id='dc608964-9af3-4fed-9f06-6d3844fb9b9b' | | headers | | | id | 20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 | | mac_address | fa:16:3e:33:c4:75 | | name | parent-trunk-port | | network_id | 871a6bd8-4193-45d7-a300-dcb2420e7cc3 | | project_id | 745d33000ac74d30a77539f8920555e7 | | project_id | 745d33000ac74d30a77539f8920555e7 | | revision_number | 4 | | security_groups | 59e2af18-93c6-4201-861b-19a8a8b79b23 | | status | DOWN | | updated_at | 2016-10-20T02:02:33Z | +-----------------------+-----------------------------------------------------------------------------+",
"openstack network trunk create --parent-port parent-trunk-port parent-trunk +-----------------+--------------------------------------+ | Field | Value | +-----------------+--------------------------------------+ | admin_state_up | UP | | created_at | 2016-10-20T02:05:17Z | | description | | | id | 0e4263e2-5761-4cf6-ab6d-b22884a0fa88 | | name | parent-trunk | | port_id | 20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 | | revision_number | 1 | | status | DOWN | | sub_ports | | | tenant_id | 745d33000ac74d30a77539f8920555e7 | | updated_at | 2016-10-20T02:05:17Z | +-----------------+--------------------------------------+",
"openstack network trunk list +--------------------------------------+--------------+--------------------------------------+-------------+ | ID | Name | Parent Port | Description | +--------------------------------------+--------------+--------------------------------------+-------------+ | 0e4263e2-5761-4cf6-ab6d-b22884a0fa88 | parent-trunk | 20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 | | +--------------------------------------+--------------+--------------------------------------+-------------+",
"openstack network trunk show parent-trunk +-----------------+--------------------------------------+ | Field | Value | +-----------------+--------------------------------------+ | admin_state_up | UP | | created_at | 2016-10-20T02:05:17Z | | description | | | id | 0e4263e2-5761-4cf6-ab6d-b22884a0fa88 | | name | parent-trunk | | port_id | 20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 | | revision_number | 1 | | status | DOWN | | sub_ports | | | tenant_id | 745d33000ac74d30a77539f8920555e7 | | updated_at | 2016-10-20T02:05:17Z | +-----------------+--------------------------------------+",
"openstack port create --network private --mac-address fa:16:3e:33:c4:75 subport-trunk-port +-----------------------+--------------------------------------------------------------------------+ | Field | Value | +-----------------------+--------------------------------------------------------------------------+ | admin_state_up | UP | | allowed_address_pairs | | | binding_host_id | | | binding_profile | | | binding_vif_details | | | binding_vif_type | unbound | | binding_vnic_type | normal | | created_at | 2016-10-20T02:08:14Z | | description | | | device_id | | | device_owner | | | extra_dhcp_opts | | | fixed_ips | ip_address='10.0.0.11', subnet_id='1a299780-56df-4c0b-a4c0-c5a612cef2e8' | | headers | | | id | 479d742e-dd00-4c24-8dd6-b7297fab3ee9 | | mac_address | fa:16:3e:33:c4:75 | | name | subport-trunk-port | | network_id | 3fe6b758-8613-4b17-901e-9ba30a7c4b51 | | project_id | 745d33000ac74d30a77539f8920555e7 | | project_id | 745d33000ac74d30a77539f8920555e7 | | revision_number | 4 | | security_groups | 59e2af18-93c6-4201-861b-19a8a8b79b23 | | status | DOWN | | updated_at | 2016-10-20T02:08:15Z | +-----------------------+--------------------------------------------------------------------------+",
"openstack network trunk set --subport port=subport-trunk-port,segmentation-type=vlan,segmentation-id=55 parent-trunk",
"openstack network trunk list",
"+---------------------+--------------+---------------------+-------------+ | ID | Name | Parent Port | Description | +---------------------+--------------+---------------------+-------------+ | 0e4263e2-5761-4cf6- | parent-trunk | 20b6fdf8-0d43-475a- | | | ab6d-b22884a0fa88 | | a0f1-ec8f757a4a39 | | +---------------------+--------------+---------------------+-------------+",
"openstack network trunk show parent-trunk",
"+-----------------+------------------------------------------------------+ | Field | Value | +-----------------+------------------------------------------------------+ | admin_state_up | UP | | created_at | 2021-10-20T02:05:17Z | | description | | | id | 0e4263e2-5761-4cf6-ab6d-b22884a0fa88 | | name | parent-trunk | | port_id | 20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 | | revision_number | 2 | | status | DOWN | | sub_ports | port_id='479d742e-dd00-4c24-8dd6-b7297fab3ee9', segm | | | entation_id='55', segmentation_type='vlan' | | tenant_id | 745d33000ac74d30a77539f8920555e7 | | updated_at | 2021-08-20T02:10:06Z | +-----------------+------------------------------------------------------+",
"openstack server create --image cirros --flavor m1.tiny --security-group default --key-name sshaccess --nic port-id=20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 testInstance",
"+--------------------------------------+---------------------------------+ | Property | Value | +--------------------------------------+---------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | | | OS-EXT-SRV-ATTR:host | - | | OS-EXT-SRV-ATTR:hostname | testinstance | | OS-EXT-SRV-ATTR:hypervisor_hostname | - | | OS-EXT-SRV-ATTR:instance_name | | | OS-EXT-SRV-ATTR:kernel_id | | | OS-EXT-SRV-ATTR:launch_index | 0 | | OS-EXT-SRV-ATTR:ramdisk_id | | | OS-EXT-SRV-ATTR:reservation_id | r-juqco0el | | OS-EXT-SRV-ATTR:root_device_name | - | | OS-EXT-SRV-ATTR:user_data | - | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | - | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | adminPass | uMyL8PnZRBwQ | | config_drive | | | created | 2021-08-20T03:02:51Z | | description | - | | flavor | m1.tiny (1) | | hostId | | | host_status | | | id | 88b7aede-1305-4d91-a180-67e7eac | | | 8b70d | | image | cirros (568372f7-15df-4e61-a05f | | | -10954f79a3c4) | | key_name | sshaccess | | locked | False | | metadata | {} | | name | testInstance | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | security_groups | default | | status | BUILD | | tags | [] | | tenant_id | 745d33000ac74d30a77539f8920555e | | | 7 | | updated | 2021-08-20T03:02:51Z | | user_id | 8c4aea738d774967b4ef388eb41fef5 | | | e | +--------------------------------------+---------------------------------+",
"vi /home/stack/templates/my-modules-environment.yaml",
"parameter_defaults: ExtraConfig: neutron::rpc_response_timeout: 120",
"openstack overcloud deploy --templates -e [your-environment-files] -e /usr/share/openstack-tripleo-heat-templates/environments/services/my-modules-environment.yaml"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_red_hat_openstack_platform_networking/vlan-aware-instances_rhosp-network |
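A small addition from the editor, not part of the source chapter: when troubleshooting the trunk states described in Section 11.8, a quick way to read the current state is to query the status column of the trunk directly. The trunk name parent-trunk is the example name used in the chapter.

openstack network trunk show parent-trunk -c status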
Chapter 1. Uninstalling Red Hat OpenShift GitOps | Chapter 1. Uninstalling Red Hat OpenShift GitOps Uninstalling the Red Hat OpenShift GitOps Operator is a two-step process: Delete the Argo CD instances that were added under the default namespace of the Red Hat OpenShift GitOps Operator. Uninstall the Red Hat OpenShift GitOps Operator. Uninstalling only the Operator does not remove the Argo CD instances that were created. 1.1. Deleting the Argo CD instances Delete the Argo CD instances added to the namespace of the GitOps Operator. Procedure In the terminal, type the following command: $ oc delete gitopsservice cluster -n openshift-gitops Note You cannot delete an Argo CD cluster from the web console UI. After the command runs successfully, all the Argo CD instances are deleted from the openshift-gitops namespace. Delete any other Argo CD instances from other namespaces using the same command: $ oc delete gitopsservice cluster -n <namespace> 1.2. Uninstalling the GitOps Operator You can uninstall the Red Hat OpenShift GitOps Operator from the OperatorHub by using the web console. Procedure From the Operators → OperatorHub page, use the Filter by keyword box to search for the Red Hat OpenShift GitOps tile. Click the Red Hat OpenShift GitOps tile. The Operator tile indicates it is installed. In the Red Hat OpenShift GitOps descriptor page, click Uninstall . Additional resources You can learn more about uninstalling Operators on OpenShift Container Platform in the Deleting Operators from a cluster section. | [
"oc delete gitopsservice cluster -n openshift-gitops",
"oc delete gitopsservice cluster -n <namespace>"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.15/html/removing_gitops/uninstalling-openshift-gitops |
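An optional check from the editor, not part of the source procedure: after uninstalling the Operator from the console, you can confirm from the command line that no GitOps ClusterServiceVersion remains. This assumes the oc CLI is logged in to the cluster; an empty result indicates the Operator has been removed.

oc get csv --all-namespaces | grep -i gitops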
Support | Support OpenShift Container Platform 4.18 Getting support for OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"oc api-resources -o name | grep config.openshift.io",
"oc explain <resource_name>.config.openshift.io",
"oc get <resource_name>.config -o yaml",
"oc edit <resource_name>.config -o yaml",
"oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'",
"curl -G -k -H \"Authorization: Bearer USD(oc whoami -t)\" https://USD(oc get route prometheus-k8s-federate -n openshift-monitoring -o jsonpath=\"{.spec.host}\")/federate --data-urlencode 'match[]={__name__=~\"cluster:usage:.*\"}' --data-urlencode 'match[]={__name__=\"count:up0\"}' --data-urlencode 'match[]={__name__=\"count:up1\"}' --data-urlencode 'match[]={__name__=\"cluster_version\"}' --data-urlencode 'match[]={__name__=\"cluster_version_available_updates\"}' --data-urlencode 'match[]={__name__=\"cluster_version_capability\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_up\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_conditions\"}' --data-urlencode 'match[]={__name__=\"cluster_version_payload\"}' --data-urlencode 'match[]={__name__=\"cluster_installer\"}' --data-urlencode 'match[]={__name__=\"cluster_infrastructure_provider\"}' --data-urlencode 'match[]={__name__=\"cluster_feature_set\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_object_counts:sum\"}' --data-urlencode 'match[]={__name__=\"ALERTS\",alertstate=\"firing\"}' --data-urlencode 'match[]={__name__=\"code:apiserver_request_total:rate:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_memory_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"workload:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"workload:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:virt_platform_nodes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:node_instance_type_count:sum\"}' --data-urlencode 'match[]={__name__=\"cnv:vmi_status_running:count\"}' --data-urlencode 'match[]={__name__=\"cluster:vmi_request_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_sockets:sum\"}' --data-urlencode 'match[]={__name__=\"subscription_sync_total\"}' --data-urlencode 'match[]={__name__=\"olm_resolution_duration_seconds\"}' --data-urlencode 'match[]={__name__=\"csv_succeeded\"}' --data-urlencode 'match[]={__name__=\"csv_abnormal\"}' --data-urlencode 'match[]={__name__=\"cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kubelet_volume_stats_used_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_used_raw_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_health_status\"}' --data-urlencode 'match[]={__name__=\"odf_system_raw_capacity_total_bytes\"}' --data-urlencode 'match[]={__name__=\"odf_system_raw_capacity_used_bytes\"}' --data-urlencode 'match[]={__name__=\"odf_system_health_status\"}' --data-urlencode 'match[]={__name__=\"job:ceph_osd_metadata:count\"}' --data-urlencode 'match[]={__name__=\"job:kube_pv:count\"}' --data-urlencode 'match[]={__name__=\"job:odf_system_pvs:count\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops_bytes:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_versions_running:count\"}' 
--data-urlencode 'match[]={__name__=\"job:noobaa_total_unhealthy_buckets:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_bucket_count:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_total_object_count:sum\"}' --data-urlencode 'match[]={__name__=\"odf_system_bucket_count\", system_type=\"OCS\", system_vendor=\"Red Hat\"}' --data-urlencode 'match[]={__name__=\"odf_system_objects_total\", system_type=\"OCS\", system_vendor=\"Red Hat\"}' --data-urlencode 'match[]={__name__=\"noobaa_accounts_num\"}' --data-urlencode 'match[]={__name__=\"noobaa_total_usage\"}' --data-urlencode 'match[]={__name__=\"console_url\"}' --data-urlencode 'match[]={__name__=\"cluster:ovnkube_master_egress_routing_via_host:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_instances:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_enabled_instance_up:max\"}' --data-urlencode 'match[]={__name__=\"cluster:ingress_controller_aws_nlb_active:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:min\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:max\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:avg\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:median\"}' --data-urlencode 'match[]={__name__=\"cluster:openshift_route_info:tls_termination:sum\"}' --data-urlencode 'match[]={__name__=\"insightsclient_request_send_total\"}' --data-urlencode 'match[]={__name__=\"cam_app_workload_migrations\"}' --data-urlencode 'match[]={__name__=\"cluster:apiserver_current_inflight_requests:sum:max_over_time:2m\"}' --data-urlencode 'match[]={__name__=\"cluster:alertmanager_integrations:max\"}' --data-urlencode 'match[]={__name__=\"cluster:telemetry_selected_series:count\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_series:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_samples_appended_total:sum\"}' --data-urlencode 'match[]={__name__=\"monitoring:container_memory_working_set_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_series_added:topk3_sum1h\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_samples_post_metric_relabeling:topk3\"}' --data-urlencode 'match[]={__name__=\"monitoring:haproxy_server_http_responses_total:sum\"}' --data-urlencode 'match[]={__name__=\"rhmi_status\"}' --data-urlencode 'match[]={__name__=\"status:upgrading:version:rhoam_state:max\"}' --data-urlencode 'match[]={__name__=\"state:rhoam_critical_alerts:max\"}' --data-urlencode 'match[]={__name__=\"state:rhoam_warning_alerts:max\"}' --data-urlencode 'match[]={__name__=\"rhoam_7d_slo_percentile:max\"}' --data-urlencode 'match[]={__name__=\"rhoam_7d_slo_remaining_error_budget:max\"}' --data-urlencode 'match[]={__name__=\"cluster_legacy_scheduler_policy\"}' --data-urlencode 'match[]={__name__=\"cluster_master_schedulable\"}' --data-urlencode 'match[]={__name__=\"che_workspace_status\"}' --data-urlencode 'match[]={__name__=\"che_workspace_started_total\"}' --data-urlencode 'match[]={__name__=\"che_workspace_failure_total\"}' --data-urlencode 'match[]={__name__=\"che_workspace_start_time_seconds_sum\"}' --data-urlencode 'match[]={__name__=\"che_workspace_start_time_seconds_count\"}' --data-urlencode 'match[]={__name__=\"cco_credentials_mode\"}' --data-urlencode 
'match[]={__name__=\"cluster:kube_persistentvolume_plugin_type_counts:sum\"}' --data-urlencode 'match[]={__name__=\"visual_web_terminal_sessions_total\"}' --data-urlencode 'match[]={__name__=\"acm_managed_cluster_info\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_vcenter_info:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_esxi_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_node_hw_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:build_by_strategy:sum\"}' --data-urlencode 'match[]={__name__=\"rhods_aggregate_availability\"}' --data-urlencode 'match[]={__name__=\"rhods_total_users\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_wal_fsync_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_network_peer_round_trip_time_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_use_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_backend_commit_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_storage_types\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_strategies\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_agent_strategies\"}' --data-urlencode 'match[]={__name__=\"appsvcs:cores_by_product:sum\"}' --data-urlencode 'match[]={__name__=\"nto_custom_profiles:count\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_configmap\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_secret\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_failures_total\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_requests_total\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_backup_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_restore_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_storage_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_redundancy_policy_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_defined_delete_namespaces_total\"}' --data-urlencode 'match[]={__name__=\"eo_es_misconfigured_memory_resources_info\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_data_nodes_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_created_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_deleted_total:sum\"}' --data-urlencode 'match[]={__name__=\"pod:eo_es_shards_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_cluster_management_state_info\"}' --data-urlencode 'match[]={__name__=\"imageregistry:imagestreamtags_count:sum\"}' --data-urlencode 'match[]={__name__=\"imageregistry:operations_count:sum\"}' --data-urlencode 'match[]={__name__=\"log_logging_info\"}' --data-urlencode 'match[]={__name__=\"log_collector_error_count_total\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_pipeline_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_input_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_output_info\"}' --data-urlencode 'match[]={__name__=\"cluster:log_collected_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:log_logged_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kata_monitor_running_shim_count:sum\"}' --data-urlencode 
'match[]={__name__=\"platform:hypershift_hostedclusters:max\"}' --data-urlencode 'match[]={__name__=\"platform:hypershift_nodepools:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_bucket_claims:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_buckets_claims:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_namespace_resources:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_namespace_resources:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_namespace_buckets:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_namespace_buckets:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_accounts:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_usage:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_system_health_status:max\"}' --data-urlencode 'match[]={__name__=\"ocs_advanced_feature_usage\"}' --data-urlencode 'match[]={__name__=\"os_image_url_override:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:openshift_network_operator_ipsec_state:info\"}'",
"INSIGHTS_OPERATOR_POD=USD(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running)",
"oc cp openshift-insights/USDINSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data",
"oc extract secret/pull-secret -n openshift-config --to=.",
"\"cloud.openshift.com\":{\"auth\":\"<hash>\",\"email\":\"<email_address>\"}",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1",
"oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \" <your_token> \", \"email\": \" <email_address> \" } } }",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' > pull-secret",
"cp pull-secret pull-secret-backup",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull-secret",
"apiVersion: v1 kind: ConfigMap metadata: name: insights-config namespace: openshift-insights data: config.yaml: | dataReporting: obfuscation: - networking - workload_names sca: disabled: false interval: 2h alerting: disabled: false binaryData: {} immutable: false",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | alerting: disabled: true",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | alerting: disabled: false",
"oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running",
"oc cp openshift-insights/<insights_operator_pod_name>:/var/lib/insights-operator ./insights-data 1",
"{ \"name\": \"clusterconfig/authentication\", \"duration_in_ms\": 730, 1 \"records_count\": 1, \"errors\": null, \"panic\": null }",
"apiVersion: insights.openshift.io/v1alpha1 kind: DataGather metadata: name: <your_data_gather> 1 spec: gatherers: 2 - name: workloads state: Disabled",
"oc apply -f <your_datagather_definition>.yaml",
"apiVersion: insights.openshift.io/v1alpha1 kind: DataGather metadata: name: <your_data_gather> 1 spec: gatherers: 2 - name: workloads state: Disabled",
"apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: . spec: 1 gatherConfig: disabledGatherers: - all 2",
"spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info",
"apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: . spec: gatherConfig: 1 disabledGatherers: all",
"spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | dataReporting: obfuscation: - workload_names",
"apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job annotations: config.openshift.io/inject-proxy: insights-operator spec: backoffLimit: 6 ttlSecondsAfterFinished: 600 template: spec: restartPolicy: OnFailure serviceAccountName: operator nodeSelector: beta.kubernetes.io/os: linux node-role.kubernetes.io/master: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 900 - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 900 volumes: - name: snapshots emptyDir: {} - name: service-ca-bundle configMap: name: service-ca-bundle optional: true initContainers: - name: insights-operator image: quay.io/openshift/origin-insights-operator:latest terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - name: snapshots mountPath: /var/lib/insights-operator - name: service-ca-bundle mountPath: /var/run/configmaps/service-ca-bundle readOnly: true ports: - containerPort: 8443 name: https resources: requests: cpu: 10m memory: 70Mi args: - gather - -v=4 - --config=/etc/insights-operator/server.yaml containers: - name: sleepy image: quay.io/openshift/origin-base:latest args: - /bin/sh - -c - sleep 10m volumeMounts: [{name: snapshots, mountPath: /var/lib/insights-operator}]",
"oc get -n openshift-insights deployment insights-operator -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: insights-operator namespace: openshift-insights spec: template: spec: containers: - args: image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1",
"apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job spec: template: spec: initContainers: - name: insights-operator image: image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1 terminationMessagePolicy: FallbackToLogsOnError volumeMounts:",
"oc apply -n openshift-insights -f gather-job.yaml",
"oc describe -n openshift-insights job/insights-operator-job",
"Name: insights-operator-job Namespace: openshift-insights Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 7m18s job-controller Created pod: insights-operator-job-<your_job>",
"oc logs -n openshift-insights insights-operator-job-<your_job> insights-operator",
"I0407 11:55:38.192084 1 diskrecorder.go:34] Wrote 108 records to disk in 33ms",
"oc cp openshift-insights/insights-operator-job- <your_job> :/var/lib/insights-operator ./insights-data",
"oc delete -n openshift-insights job insights-operator-job",
"oc extract secret/pull-secret -n openshift-config --to=.",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \" <your_token> \", \"email\": \"[email protected]\" } }",
"curl -v -H \"User-Agent: insights-operator/one10time200gather184a34f6a168926d93c330 cluster/ <cluster_id> \" -H \"Authorization: Bearer <your_token> \" -F \"upload=@ <path_to_archive> ; type=application/vnd.redhat.openshift.periodic+tar\" https://console.redhat.com/api/ingress/v1/upload",
"* Connection #0 to host console.redhat.com left intact {\"request_id\":\"393a7cf1093e434ea8dd4ab3eb28884c\",\"upload\":{\"account_number\":\"6274079\"}}%",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | sca: interval: 2h",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | sca: disabled: true",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | sca: disabled: false",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0",
"oc adm must-gather -- /usr/bin/gather_audit_logs",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s",
"oc adm must-gather --run-namespace <namespace> --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0",
"oc import-image is/must-gather -n openshift",
"oc adm must-gather",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0 2",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"├── cluster-logging │ ├── clo │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ ├── clusterlogforwarder_cr │ │ ├── cr │ │ ├── csv │ │ ├── deployment │ │ └── logforwarding_cr │ ├── collector │ │ ├── fluentd-2tr64 │ ├── eo │ │ ├── csv │ │ ├── deployment │ │ └── elasticsearch-operator-7dc7d97b9d-jb4r4 │ ├── es │ │ ├── cluster-elasticsearch │ │ │ ├── aliases │ │ │ ├── health │ │ │ ├── indices │ │ │ ├── latest_documents.json │ │ │ ├── nodes │ │ │ ├── nodes_stats.json │ │ │ └── thread_pool │ │ ├── cr │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ └── logs │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ ├── install │ │ ├── co_logs │ │ ├── install_plan │ │ ├── olmo_logs │ │ └── subscription │ └── kibana │ ├── cr │ ├── kibana-9d69668d4-2rkvz ├── cluster-scoped-resources │ └── core │ ├── nodes │ │ ├── ip-10-0-146-180.eu-west-1.compute.internal.yaml │ └── persistentvolumes │ ├── pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml ├── event-filter.html ├── gather-debug.log └── namespaces ├── openshift-logging │ ├── apps │ │ ├── daemonsets.yaml │ │ ├── deployments.yaml │ │ ├── replicasets.yaml │ │ └── statefulsets.yaml │ ├── batch │ │ ├── cronjobs.yaml │ │ └── jobs.yaml │ ├── core │ │ ├── configmaps.yaml │ │ ├── endpoints.yaml │ │ ├── events │ │ │ ├── elasticsearch-im-app-1596020400-gm6nl.1626341a296c16a1.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-9l9n4.1626341a2af81bbd.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-v98tk.1626341a2d821069.yaml │ │ │ ├── elasticsearch-im-app-1596020400-cc5vc.1626341a3019b238.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-s8d5s.1626341a31f7b315.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-7mgv8.1626341a35ea59ed.yaml │ │ ├── events.yaml │ │ ├── persistentvolumeclaims.yaml │ │ ├── pods.yaml │ │ ├── replicationcontrollers.yaml │ │ ├── secrets.yaml │ │ └── services.yaml │ ├── openshift-logging.yaml │ ├── pods │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ │ ├── cluster-logging-operator │ │ │ │ └── cluster-logging-operator │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ └── cluster-logging-operator-74dd5994f-6ttgt.yaml │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff │ │ │ ├── cluster-logging-operator-registry │ │ │ │ └── cluster-logging-operator-registry │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff.yaml │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ ├── elasticsearch-im-app-1596030300-bpgcx │ │ │ ├── elasticsearch-im-app-1596030300-bpgcx.yaml │ │ │ └── indexmanagement │ │ │ └── indexmanagement │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── fluentd-2tr64 │ │ │ ├── fluentd │ │ │ │ └── fluentd │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── fluentd-2tr64.yaml │ │ │ └── fluentd-init │ │ │ └── fluentd-init │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── kibana-9d69668d4-2rkvz │ │ │ ├── kibana │ │ │ │ └── kibana │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── kibana-9d69668d4-2rkvz.yaml │ │ │ └── kibana-proxy │ │ │ └── kibana-proxy │ │ │ └── logs │ │ │ 
├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ └── route.openshift.io │ └── routes.yaml └── openshift-operators-redhat ├──",
"oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=quay.io/kubevirt/must-gather 2",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"oc adm must-gather -- gather_network_logs",
"tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1",
"Disk usage exceeds the volume percentage of 30% for mounted directory. Exiting",
"oc adm must-gather --volume-percentage <storage_percentage>",
"oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'",
"oc get nodes",
"oc debug node/my-cluster-node",
"oc new-project dummy",
"oc patch namespace dummy --type=merge -p '{\"metadata\": {\"annotations\": { \"scheduler.alpha.kubernetes.io/defaultTolerations\": \"[{\\\"operator\\\": \\\"Exists\\\"}]\"}}}'",
"oc debug node/my-cluster-node",
"chroot /host",
"toolbox",
"sos report -k crio.all=on -k crio.logs=on -k podman.all=on -k podman.logs=on 1",
"sos report --all-logs",
"Your sosreport has been generated and saved in: /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1 The checksum is: 382ffc167510fd71b4f12a4f40b97a4e",
"oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz' > /tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1",
"ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service",
"ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done'",
"oc adm node-logs --role=master -u kubelet 1",
"oc adm node-logs --role=master --path=openshift-apiserver",
"oc adm node-logs --role=master --path=openshift-apiserver/audit.log",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log",
"oc adm must-gather --dest-dir /tmp/captures \\// <.> --source-dir '/tmp/tcpdump/' \\// <.> --image registry.redhat.io/openshift4/network-tools-rhel8:latest \\// <.> --node-selector 'node-role.kubernetes.io/worker' \\// <.> --host-network=true \\// <.> --timeout 30s \\// <.> -- tcpdump -i any \\// <.> -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300",
"tmp/captures ├── event-filter.html ├── ip-10-0-192-217-ec2-internal 1 │ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca │ └── 2022-01-13T19:31:31.pcap ├── ip-10-0-201-178-ec2-internal 2 │ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca │ └── 2022-01-13T19:31:30.pcap ├── ip- └── timestamp",
"oc get nodes",
"oc debug node/my-cluster-node",
"chroot /host",
"ip ad",
"toolbox",
"tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1",
"chroot /host crictl ps",
"chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print USD2}'",
"nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1",
"oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap 1",
"oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz 1",
"chroot /host",
"toolbox",
"dnf install -y <package_name>",
"chroot /host",
"REGISTRY=quay.io 1 IMAGE=fedora/fedora:latest 2 TOOLBOX_NAME=toolbox-fedora-latest 3",
"toolbox",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.13.8 True False 8h Cluster version is 4.13.8",
"oc describe clusterversion",
"Name: version Namespace: Labels: <none> Annotations: <none> API Version: config.openshift.io/v1 Kind: ClusterVersion Image: quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce URL: https://access.redhat.com/errata/RHSA-2023:4456 Version: 4.13.8 History: Completion Time: 2023-08-17T13:20:21Z Image: quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce Started Time: 2023-08-17T12:59:45Z State: Completed Verified: false Version: 4.13.8",
"ssh <user_name>@<load_balancer> systemctl status haproxy",
"ssh <user_name>@<load_balancer> netstat -nltupe | grep -E ':80|:443|:6443|:22623'",
"ssh <user_name>@<load_balancer> ss -nltupe | grep -E ':80|:443|:6443|:22623'",
"dig <wildcard_fqdn> @<dns_server>",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete --log-level debug 1",
"./openshift-install create ignition-configs --dir=./install_dir",
"tail -f ~/<installation_directory>/.openshift_install.log",
"ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service",
"oc adm node-logs --role=master -u kubelet",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service",
"oc adm node-logs --role=master -u crio",
"ssh [email protected]_name.sub_domain.domain journalctl -b -f -u crio.service",
"curl -I http://<http_server_fqdn>:<port>/bootstrap.ign 1",
"grep -is 'bootstrap.ign' /var/log/httpd/access_log",
"ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service",
"ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done'",
"curl -I http://<http_server_fqdn>:<port>/master.ign 1",
"grep -is 'master.ign' /var/log/httpd/access_log",
"oc get nodes",
"oc describe node <master_node>",
"oc get daemonsets -n openshift-ovn-kubernetes",
"oc get pods -n openshift-ovn-kubernetes",
"oc logs <ovn-k_pod> -n openshift-ovn-kubernetes",
"oc get network.config.openshift.io cluster -o yaml",
"./openshift-install create manifests",
"oc get pods -n openshift-network-operator",
"oc logs pod/<network_operator_pod_name> -n openshift-network-operator",
"oc adm node-logs --role=master -u kubelet",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service",
"oc adm node-logs --role=master -u crio",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service",
"oc adm node-logs --role=master --path=openshift-apiserver",
"oc adm node-logs --role=master --path=openshift-apiserver/audit.log",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps -a",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"curl https://api-int.<cluster_name>:22623/config/master",
"dig api-int.<cluster_name> @<dns_server>",
"dig -x <load_balancer_mco_ip_address> @<dns_server>",
"ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/master",
"ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking",
"openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text",
"oc get pods -n openshift-etcd",
"oc get pods -n openshift-etcd-operator",
"oc describe pod/<pod_name> -n <namespace>",
"oc logs pod/<pod_name> -n <namespace>",
"oc logs pod/<pod_name> -c <container_name> -n <namespace>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods --name=etcd-",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps | grep '<pod_id>'",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"oc adm node-logs --role=master -u kubelet",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service",
"oc adm node-logs --role=master -u kubelet | grep -is 'x509: certificate has expired'",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service | grep -is 'x509: certificate has expired'",
"curl -I http://<http_server_fqdn>:<port>/worker.ign 1",
"grep -is 'worker.ign' /var/log/httpd/access_log",
"oc get nodes",
"oc describe node <worker_node>",
"oc get pods -n openshift-machine-api",
"oc describe pod/<machine_api_operator_pod_name> -n openshift-machine-api",
"oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c machine-api-operator",
"oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c kube-rbac-proxy",
"oc adm node-logs --role=worker -u kubelet",
"ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service",
"oc adm node-logs --role=worker -u crio",
"ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service",
"oc adm node-logs --role=worker --path=sssd",
"oc adm node-logs --role=worker --path=sssd/sssd.log",
"ssh core@<worker-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/sssd/sssd.log",
"ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl ps -a",
"ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"curl https://api-int.<cluster_name>:22623/config/worker",
"dig api-int.<cluster_name> @<dns_server>",
"dig -x <load_balancer_mco_ip_address> @<dns_server>",
"ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/worker",
"ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking",
"openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text",
"oc get clusteroperators",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 1 csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending 2 csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc describe clusteroperator <operator_name>",
"oc get pods -n <operator_namespace>",
"oc describe pod/<operator_pod_name> -n <operator_namespace>",
"oc logs pod/<operator_pod_name> -n <operator_namespace>",
"oc get pod -o \"jsonpath={range .status.containerStatuses[*]}{.name}{'\\t'}{.state}{'\\t'}{.image}{'\\n'}{end}\" <operator_pod_name> -n <operator_namespace>",
"oc adm release info <image_path>:<tag> --commits",
"./openshift-install gather bootstrap --dir <installation_directory> 1",
"./openshift-install gather bootstrap --dir <installation_directory> \\ 1 --bootstrap <bootstrap_address> \\ 2 --master <master_1_address> \\ 3 --master <master_2_address> \\ 4 --master <master_3_address> 5",
"INFO Pulling debug logs from the bootstrap machine INFO Bootstrap gather logs captured here \"<installation_directory>/log-bundle-<timestamp>.tar.gz\"",
"oc get nodes",
"oc adm top nodes",
"oc adm top node my-node",
"oc debug node/my-node",
"chroot /host",
"systemctl is-active kubelet",
"systemctl status kubelet",
"oc adm node-logs --role=master -u kubelet 1",
"oc adm node-logs --role=master --path=openshift-apiserver",
"oc adm node-logs --role=master --path=openshift-apiserver/audit.log",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log",
"oc debug node/my-node",
"chroot /host",
"systemctl is-active crio",
"systemctl status crio.service",
"oc adm node-logs --role=master -u crio",
"oc adm node-logs <node_name> -u crio",
"ssh core@<node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service",
"Failed to create pod sandbox: rpc error: code = Unknown desc = failed to mount container XXX: error recreating the missing symlinks: error reading name of symlink for XXX: open /var/lib/containers/storage/overlay/XXX/link: no such file or directory",
"can't stat lower layer ... because it does not exist. Going through storage to recreate the missing symlinks.",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --ignore-daemonsets --delete-emptydir-data",
"ssh [email protected] sudo -i",
"systemctl stop kubelet",
".. for pod in USD(crictl pods -q); do if [[ \"USD(crictl inspectp USDpod | jq -r .status.linux.namespaces.options.network)\" != \"NODE\" ]]; then crictl rmp -f USDpod; fi; done",
"crictl rmp -fa",
"systemctl stop crio",
"crio wipe -f",
"systemctl start crio systemctl start kubelet",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready, SchedulingDisabled master 133m v1.31.3",
"oc adm uncordon <node_name>",
"NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready master 133m v1.31.3",
"rpm-ostree kargs --append='crashkernel=256M'",
"systemctl enable kdump.service",
"systemctl reboot",
"variant: openshift version: 4.18.0 metadata: name: 99-worker-kdump 1 labels: machineconfiguration.openshift.io/role: worker 2 openshift: kernel_arguments: 3 - crashkernel=256M storage: files: - path: /etc/kdump.conf 4 mode: 0644 overwrite: true contents: inline: | path /var/crash core_collector makedumpfile -l --message-level 7 -d 31 - path: /etc/sysconfig/kdump 5 mode: 0644 overwrite: true contents: inline: | KDUMP_COMMANDLINE_REMOVE=\"hugepages hugepagesz slub_debug quiet log_buf_len swiotlb\" KDUMP_COMMANDLINE_APPEND=\"irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug transparent_hugepage=never nokaslr novmcoredd hest_disable\" 6 KEXEC_ARGS=\"-s\" KDUMP_IMG=\"vmlinuz\" systemd: units: - name: kdump.service enabled: true",
"nfs server.example.com:/export/cores core_collector makedumpfile -l --message-level 7 -d 31 extra_bins /sbin/mount.nfs extra_modules nfs nfsv3 nfs_layout_nfsv41_files blocklayoutdriver nfs_layout_flexfiles nfs_layout_nfsv41_files",
"butane 99-worker-kdump.bu -o 99-worker-kdump.yaml",
"oc create -f 99-worker-kdump.yaml",
"systemctl --failed",
"journalctl -u <unit>.service",
"NODEIP_HINT=192.0.2.1",
"echo -n 'NODEIP_HINT=192.0.2.1' | base64 -w0",
"Tk9ERUlQX0hJTlQ9MTkyLjAuMCxxxx==",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-nodeip-hint-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<encoded_content> 1 mode: 0644 overwrite: true path: /etc/default/nodeip-configuration",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-nodeip-hint-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<encoded_content> 1 mode: 0644 overwrite: true path: /etc/default/nodeip-configuration",
"[connection] id=eno1 type=ethernet interface-name=eno1 master=bond1 slave-type=bond autoconnect=true autoconnect-priority=20",
"[connection] id=eno2 type=ethernet interface-name=eno2 master=bond1 slave-type=bond autoconnect=true autoconnect-priority=20",
"[connection] id=bond1 type=bond interface-name=bond1 autoconnect=true connection.autoconnect-slaves=1 autoconnect-priority=20 [bond] mode=802.3ad miimon=100 xmit_hash_policy=\"layer3+4\" [ipv4] method=auto",
"base64 <directory_path>/en01.config",
"base64 <directory_path>/eno2.config",
"base64 <directory_path>/bond1.config",
"export ROLE=<machine_role>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 12-USD{ROLE}-sec-bridge-cni spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:;base64,<base-64-encoded-contents-for-bond1.conf> path: /etc/NetworkManager/system-connections/bond1.nmconnection filesystem: root mode: 0600 - contents: source: data:;base64,<base-64-encoded-contents-for-eno1.conf> path: /etc/NetworkManager/system-connections/eno1.nmconnection filesystem: root mode: 0600 - contents: source: data:;base64,<base-64-encoded-contents-for-eno2.conf> path: /etc/NetworkManager/system-connections/eno2.nmconnection filesystem: root mode: 0600",
"oc create -f <machine_config_file_name>",
"bond1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 12-worker-extra-bridge spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/ovnk/extra_bridge mode: 0420 overwrite: true contents: source: data:text/plain;charset=utf-8,bond1 filesystem: root",
"oc create -f <machine_config_file_name>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 12-worker-br-ex-override spec: config: ignition: version: 3.2.0 storage: files: - path: /var/lib/ovnk/iface_default_hint mode: 0420 overwrite: true contents: source: data:text/plain;charset=utf-8,bond0 1 filesystem: root",
"oc create -f <machine_config_file_name>",
"oc get nodes -o json | grep --color exgw-ip-addresses",
"\"k8s.ovn.org/l3-gateway-config\": \\\"exgw-ip-address\\\":\\\"172.xx.xx.yy/24\\\",\\\"next-hops\\\":[\\\"xx.xx.xx.xx\\\"],",
"oc debug node/<node_name> -- chroot /host sh -c \"ip a | grep mtu | grep br-ex\"",
"Starting pod/worker-1-debug To use host binaries, run `chroot /host` 5: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 6: br-ex1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000",
"oc debug node/<node_name> -- chroot /host sh -c \"ip a | grep -A1 -E 'br-ex|bond0'",
"Starting pod/worker-1-debug To use host binaries, run `chroot /host` sh-5.1# ip a | grep -A1 -E 'br-ex|bond0' 2: bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP group default qlen 1000 link/ether fa:16:3e:47:99:98 brd ff:ff:ff:ff:ff:ff -- 5: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/ether fa:16:3e:47:99:98 brd ff:ff:ff:ff:ff:ff inet 10.xx.xx.xx/21 brd 10.xx.xx.255 scope global dynamic noprefixroute br-ex",
"E0514 12:47:17.998892 2281 daemon.go:1350] content mismatch for file /etc/systemd/system/ovs-vswitchd.service: [Unit]",
"oc debug node/<node_name>",
"chroot /host",
"ovs-appctl vlog/list",
"console syslog file ------- ------ ------ backtrace OFF INFO INFO bfd OFF INFO INFO bond OFF INFO INFO bridge OFF INFO INFO bundle OFF INFO INFO bundles OFF INFO INFO cfm OFF INFO INFO collectors OFF INFO INFO command_line OFF INFO INFO connmgr OFF INFO INFO conntrack OFF INFO INFO conntrack_tp OFF INFO INFO coverage OFF INFO INFO ct_dpif OFF INFO INFO daemon OFF INFO INFO daemon_unix OFF INFO INFO dns_resolve OFF INFO INFO dpdk OFF INFO INFO",
"Restart=always ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /var/lib/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /etc/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /run/openvswitch' ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg",
"systemctl daemon-reload",
"systemctl restart ovs-vswitchd",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master 1 name: 99-change-ovs-loglevel spec: config: ignition: version: 3.2.0 systemd: units: - dropins: - contents: | [Service] ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg 2 ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg name: 20-ovs-vswitchd-restart.conf name: ovs-vswitchd.service",
"oc apply -f 99-change-ovs-loglevel.yaml",
"oc adm node-logs <node_name> -u ovs-vswitchd",
"journalctl -b -f -u ovs-vswitchd.service",
"oc get subs -n <operator_namespace>",
"oc describe sub <subscription_name> -n <operator_namespace>",
"Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy",
"oc get catalogsources -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m",
"oc describe catalogsource example-catalog -n openshift-marketplace",
"Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m",
"oc describe pod example-catalog-bwt8z -n openshift-marketplace",
"Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull",
"oc get clusteroperators",
"oc get pod -n <operator_namespace>",
"oc describe pod <operator_pod_name> -n <operator_namespace>",
"oc debug node/my-node",
"chroot /host",
"crictl ps",
"crictl ps --name network-operator",
"oc get pods -n <operator_namespace>",
"oc logs pod/<pod_name> -n <operator_namespace>",
"oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: true 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: false 1",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/master",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/worker",
"oc get machineconfigpool/master --template='{{.spec.paused}}'",
"oc get machineconfigpool/worker --template='{{.spec.paused}}'",
"true",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-33cf0a1254318755d7b48002c597bf91 True False worker rendered-worker-e405a5bdb0db1295acea08bcca33fa60 False False",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/master",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/worker",
"oc get machineconfigpool/master --template='{{.spec.paused}}'",
"oc get machineconfigpool/worker --template='{{.spec.paused}}'",
"false",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True",
"ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"",
"rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host",
"oc get sub,csv -n <namespace>",
"NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded",
"oc delete subscription <subscription_name> -n <namespace>",
"oc delete csv <csv_name> -n <namespace>",
"oc get job,configmap -n openshift-marketplace",
"NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s",
"oc delete job <job_name> -n openshift-marketplace",
"oc delete configmap <configmap_name> -n openshift-marketplace",
"oc get sub,csv,installplan -n <namespace>",
"message: 'Failed to delete all resource types, 1 remaining: Internal error occurred: error resolving resource'",
"oc get namespaces",
"operator-ns-1 Terminating",
"oc get crds",
"oc delete crd <crd_name>",
"oc get EtcdCluster -n <namespace_name>",
"oc get EtcdCluster --all-namespaces",
"oc delete <cr_name> <cr_instance_name> -n <namespace_name>",
"oc get namespace <namespace_name>",
"oc get sub,csv,installplan -n <namespace>",
"oc project <project_name>",
"oc get pods",
"oc status",
"skopeo inspect docker://<image_reference>",
"oc edit deployment/my-deployment",
"oc get pods -w",
"oc get events",
"oc logs <pod_name>",
"oc logs <pod_name> -c <container_name>",
"oc exec <pod_name> -- ls -alh /var/log",
"total 124K drwxr-xr-x. 1 root root 33 Aug 11 11:23 . drwxr-xr-x. 1 root root 28 Sep 6 2022 .. -rw-rw----. 1 root utmp 0 Jul 10 10:31 btmp -rw-r--r--. 1 root root 33K Jul 17 10:07 dnf.librepo.log -rw-r--r--. 1 root root 69K Jul 17 10:07 dnf.log -rw-r--r--. 1 root root 8.8K Jul 17 10:07 dnf.rpm.log -rw-r--r--. 1 root root 480 Jul 17 10:07 hawkey.log -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 lastlog drwx------. 2 root root 23 Aug 11 11:14 openshift-apiserver drwx------. 2 root root 6 Jul 10 10:31 private drwxr-xr-x. 1 root root 22 Mar 9 08:05 rhsm -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 wtmp",
"oc exec <pod_name> cat /var/log/<path_to_log>",
"2023-07-10T10:29:38+0000 INFO --- logging initialized --- 2023-07-10T10:29:38+0000 DDEBUG timer: config: 13 ms 2023-07-10T10:29:38+0000 DEBUG Loaded plugins: builddep, changelog, config-manager, copr, debug, debuginfo-install, download, generate_completion_cache, groups-manager, needs-restarting, playground, product-id, repoclosure, repodiff, repograph, repomanage, reposync, subscription-manager, uploadprofile 2023-07-10T10:29:38+0000 INFO Updating Subscription Management repositories. 2023-07-10T10:29:38+0000 INFO Unable to read consumer identity 2023-07-10T10:29:38+0000 INFO Subscription Manager is operating in container mode. 2023-07-10T10:29:38+0000 INFO",
"oc exec <pod_name> -c <container_name> ls /var/log",
"oc exec <pod_name> -c <container_name> cat /var/log/<path_to_log>",
"oc project <namespace>",
"oc rsh <pod_name> 1",
"oc rsh -c <container_name> pod/<pod_name>",
"oc port-forward <pod_name> <host_port>:<pod_port> 1",
"oc get deployment -n <project_name>",
"oc debug deployment/my-deployment --as-root -n <project_name>",
"oc get deploymentconfigs -n <project_name>",
"oc debug deploymentconfig/my-deployment-configuration --as-root -n <project_name>",
"oc cp <local_path> <pod_name>:/<path> -c <container_name> 1",
"oc cp <pod_name>:/<path> -c <container_name> <local_path> 1",
"oc get pods -w 1",
"oc logs -f pod/<application_name>-<build_number>-build",
"oc logs -f pod/<application_name>-<build_number>-deploy",
"oc logs -f pod/<application_name>-<build_number>-<random_string>",
"oc describe pod/my-app-1-akdlg",
"oc logs -f pod/my-app-1-akdlg",
"oc exec my-app-1-akdlg -- cat /var/log/my-application.log",
"oc debug dc/my-deployment-configuration --as-root -- cat /var/log/my-application.log",
"oc exec -it my-app-1-akdlg /bin/bash",
"oc debug node/my-cluster-node",
"chroot /host",
"crictl ps",
"crictl inspect a7fe32346b120 --output yaml | grep 'pid:' | awk '{print USD2}'",
"nsenter -n -t 31150 -- ip ad",
"Unable to attach or mount volumes: unmounted volumes=[sso-mysql-pvol], unattached volumes=[sso-mysql-pvol default-token-x4rzc]: timed out waiting for the condition Multi-Attach error for volume \"pvc-8837384d-69d7-40b2-b2e6-5df86943eef9\" Volume is already used by pod(s) sso-mysql-1-ns6b4",
"oc delete pod <old_pod> --force=true --grace-period=0",
"oc logs -f deployment/windows-machine-config-operator -n openshift-windows-machine-config-operator",
"ssh -t -o StrictHostKeyChecking=no -o ProxyCommand='ssh -A -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -W %h:%p core@USD(oc get service --all-namespaces -l run=ssh-bastion -o go-template=\"{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}\")' <username>@<windows_node_internal_ip> 1 2",
"oc get nodes <node_name> -o jsonpath={.status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address}",
"ssh -L 2020:<windows_node_internal_ip>:3389 \\ 1 core@USD(oc get service --all-namespaces -l run=ssh-bastion -o go-template=\"{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}\")",
"oc get nodes <node_name> -o jsonpath={.status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address}",
"C:\\> net user <username> * 1",
"oc adm node-logs -l kubernetes.io/os=windows --path= /ip-10-0-138-252.us-east-2.compute.internal containers /ip-10-0-138-252.us-east-2.compute.internal hybrid-overlay /ip-10-0-138-252.us-east-2.compute.internal kube-proxy /ip-10-0-138-252.us-east-2.compute.internal kubelet /ip-10-0-138-252.us-east-2.compute.internal pods",
"oc adm node-logs -l kubernetes.io/os=windows --path=/kubelet/kubelet.log",
"oc adm node-logs -l kubernetes.io/os=windows --path=journal",
"oc adm node-logs -l kubernetes.io/os=windows --path=journal -u docker",
"C:\\> powershell",
"C:\\> Get-EventLog -LogName Application -Source Docker",
"oc -n ns1 get service prometheus-example-app -o yaml",
"labels: app: prometheus-example-app",
"oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml",
"apiVersion: v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app",
"oc -n openshift-user-workload-monitoring get pods",
"NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m",
"oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator",
"level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg=\"skipping servicemonitor\" error=\"it accesses file system via bearer token file which Prometheus specification prohibits\" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug",
"oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"",
"- --log-level=debug",
"oc -n openshift-user-workload-monitoring get pods",
"topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling)))",
"topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h])))",
"HOST=USD(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath='{.status.ingress[].host}')",
"TOKEN=USD(oc whoami -t)",
"curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v1/status/tsdb\"",
"\"status\": \"success\",\"data\":{\"headStats\":{\"numSeries\":507473, \"numLabelPairs\":19832,\"chunkCount\":946298,\"minTime\":1712253600010, \"maxTime\":1712257935346},\"seriesCountByMetricName\": [{\"name\":\"etcd_request_duration_seconds_bucket\",\"value\":51840}, {\"name\":\"apiserver_request_sli_duration_seconds_bucket\",\"value\":47718},",
"oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \\ 1 -c prometheus --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \\ 2 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- sh -c 'cd /prometheus/;du -hs USD(ls -dtr */ | grep -Eo \"[0-9|A-Z]{26}\")'",
"308M 01HVKMPKQWZYWS8WVDAYQHNMW6 52M 01HVK64DTDA81799TBR9QDECEZ 102M 01HVK64DS7TRZRWF2756KHST5X 140M 01HVJS59K11FBVAPVY57K88Z11 90M 01HVH2A5Z58SKT810EM6B9AT50 152M 01HV8ZDVQMX41MKCN84S32RRZ1 354M 01HV6Q2N26BK63G4RYTST71FBF 156M 01HV664H9J9Z1FTZD73RD1563E 216M 01HTHXB60A7F239HN7S2TENPNS 104M 01HTHMGRXGS0WXA3WATRXHR36B",
"oc debug prometheus-k8s-0 -n openshift-monitoring -c prometheus --image=USD(oc get po -n openshift-monitoring prometheus-k8s-0 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- sh -c 'ls -latr /prometheus/ | egrep -o \"[0-9|A-Z]{26}\" | head -3 | while read BLOCK; do rm -r /prometheus/USDBLOCK; done'",
"oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \\ 1 --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \\ 2 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- df -h /prometheus/",
"Starting pod/prometheus-k8s-0-debug-j82w4 Filesystem Size Used Avail Use% Mounted on /dev/nvme0n1p4 40G 15G 40G 37% /prometheus Removing debug pod",
"oc <command> --loglevel <log_level>",
"oc whoami -t",
"sha256~RCV3Qcn7H-OEfqCGVI0CvnZ6"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/support/index |
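A minimal combined sketch of the last two troubleshooting commands above; the namespace placeholder, the loglevel value, and the user API endpoint used to verify the token are illustrative assumptions, not values taken from the documentation:

oc get pods -n <namespace> --loglevel 6        # higher values (up to 10) progressively add request and response detail (assumed client verbosity behavior)
TOKEN=$(oc whoami -t)                          # session token, as printed in the example output above
API_SERVER=$(oc whoami --show-server)
curl -H "Authorization: Bearer $TOKEN" -k "$API_SERVER/apis/user.openshift.io/v1/users/~"   # sanity-checks the token by returning the current user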
Chapter 4. RoleBindingRestriction [authorization.openshift.io/v1] | Chapter 4. RoleBindingRestriction [authorization.openshift.io/v1] Description RoleBindingRestriction is an object that can be matched against a subject (user, group, or service account) to determine whether rolebindings on that subject are allowed in the namespace to which the RoleBindingRestriction belongs. If any one of those RoleBindingRestriction objects matches a subject, rolebindings on that subject in the namespace are allowed. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Spec defines the matcher. 4.1.1. .spec Description Spec defines the matcher. Type object Property Type Description grouprestriction `` GroupRestriction matches against group subjects. serviceaccountrestriction `` ServiceAccountRestriction matches against service-account subjects. userrestriction `` UserRestriction matches against user subjects. 4.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/rolebindingrestrictions GET : list objects of kind RoleBindingRestriction /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindingrestrictions DELETE : delete collection of RoleBindingRestriction GET : list objects of kind RoleBindingRestriction POST : create a RoleBindingRestriction /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindingrestrictions/{name} DELETE : delete a RoleBindingRestriction GET : read the specified RoleBindingRestriction PATCH : partially update the specified RoleBindingRestriction PUT : replace the specified RoleBindingRestriction 4.2.1. /apis/authorization.openshift.io/v1/rolebindingrestrictions HTTP method GET Description list objects of kind RoleBindingRestriction Table 4.1. HTTP responses HTTP code Response body 200 - OK RoleBindingRestrictionList schema 401 - Unauthorized Empty 4.2.2. /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindingrestrictions HTTP method DELETE Description delete collection of RoleBindingRestriction Table 4.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind RoleBindingRestriction Table 4.3. HTTP responses HTTP code Response body 200 - OK RoleBindingRestrictionList schema 401 - Unauthorized Empty HTTP method POST Description create a RoleBindingRestriction Table 4.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.5. Body parameters Parameter Type Description body RoleBindingRestriction schema Table 4.6. HTTP responses HTTP code Response body 200 - OK RoleBindingRestriction schema 201 - Created RoleBindingRestriction schema 202 - Accepted RoleBindingRestriction schema 401 - Unauthorized Empty 4.2.3. /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindingrestrictions/{name} Table 4.7. Global path parameters Parameter Type Description name string name of the RoleBindingRestriction HTTP method DELETE Description delete a RoleBindingRestriction Table 4.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.9. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified RoleBindingRestriction Table 4.10. HTTP responses HTTP code Response body 200 - OK RoleBindingRestriction schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified RoleBindingRestriction Table 4.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.12. HTTP responses HTTP code Response body 200 - OK RoleBindingRestriction schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified RoleBindingRestriction Table 4.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.14. Body parameters Parameter Type Description body RoleBindingRestriction schema Table 4.15. HTTP responses HTTP code Response body 200 - OK RoleBindingRestriction schema 201 - Created RoleBindingRestriction schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/role_apis/rolebindingrestriction-authorization-openshift-io-v1
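Because the specification above only describes the matcher fields, a short hedged example may be useful; the project name, object name, and user names are hypothetical, and the manifest assumes the userrestriction matcher listed under .spec:

oc apply -f - << EOF
apiVersion: authorization.openshift.io/v1
kind: RoleBindingRestriction
metadata:
  name: restrict-rolebindings-example
  namespace: <project_name>
spec:
  userrestriction:
    users:
    - exampleuser1
    - exampleuser2
EOF

With this object present in <project_name>, role bindings in that namespace are allowed only for subjects matched by at least one RoleBindingRestriction, as described in the chapter above.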
Red Hat build of OpenTelemetry | Red Hat build of OpenTelemetry OpenShift Container Platform 4.16 Configuring and using the Red Hat build of OpenTelemetry in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: opentelemetry-operator-controller-manager-metrics-service namespace: openshift-opentelemetry-operator spec: endpoints: - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token path: /metrics port: https scheme: https tlsConfig: insecureSkipVerify: true selector: matchLabels: app.kubernetes.io/name: opentelemetry-operator control-plane: controller-manager --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: otel-operator-prometheus namespace: openshift-opentelemetry-operator annotations: include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" rules: - apiGroups: - \"\" resources: - services - endpoints - pods verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: otel-operator-prometheus namespace: openshift-opentelemetry-operator annotations: include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: otel-operator-prometheus subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <project_of_opentelemetry_collector_instance> spec: mode: deployment config: receivers: 1 otlp: protocols: grpc: http: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} zipkin: {} processors: 2 batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 exporters: 3 debug: {} service: pipelines: traces: receivers: [otlp,jaeger,zipkin] processors: [memory_limiter,batch] exporters: [debug]",
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-opentelemetry-operator openshift.io/cluster-monitoring: \"true\" name: openshift-opentelemetry-operator EOF",
"oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-opentelemetry-operator namespace: openshift-opentelemetry-operator spec: upgradeStrategy: Default EOF",
"oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: opentelemetry-product namespace: openshift-opentelemetry-operator spec: channel: stable installPlanApproval: Automatic name: opentelemetry-product source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc get csv -n openshift-opentelemetry-operator",
"oc new-project <project_of_opentelemetry_collector_instance>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_opentelemetry_collector_instance> EOF",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <project_of_opentelemetry_collector_instance> spec: mode: deployment config: receivers: 1 otlp: protocols: grpc: http: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} zipkin: {} processors: 2 batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 exporters: 3 debug: {} service: pipelines: traces: receivers: [otlp,jaeger,zipkin] processors: [memory_limiter,batch] exporters: [debug]",
"oc apply -f - << EOF <OpenTelemetryCollector_custom_resource> EOF",
"oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> -o yaml",
"oc get service -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: generate-processors-rbac rules: - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings - clusterroles verbs: - create - delete - get - list - patch - update - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: generate-processors-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: generate-processors-rbac subjects: - kind: ServiceAccount name: opentelemetry-operator-controller-manager namespace: openshift-opentelemetry-operator",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment observability: metrics: enableMetrics: true config: receivers: otlp: protocols: grpc: {} http: {} processors: {} exporters: otlp: endpoint: otel-collector-headless.tracing-system.svc:4317 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: 1 pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] metrics: receivers: [otlp] processors: [] exporters: [prometheus]",
"receivers:",
"processors:",
"exporters:",
"connectors:",
"extensions:",
"service: pipelines:",
"service: pipelines: traces: receivers:",
"service: pipelines: traces: processors:",
"service: pipelines: traces: exporters:",
"service: pipelines: metrics: receivers:",
"service: pipelines: metrics: processors:",
"service: pipelines: metrics: exporters:",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: generate-processors-rbac rules: - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings - clusterroles verbs: - create - delete - get - list - patch - update - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: generate-processors-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: generate-processors-rbac subjects: - kind: ServiceAccount name: opentelemetry-operator-controller-manager namespace: openshift-opentelemetry-operator",
"config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem client_ca_file: client.pem 3 reload_interval: 1h 4 http: endpoint: 0.0.0.0:4318 5 tls: {} 6 service: pipelines: traces: receivers: [otlp] metrics: receivers: [otlp]",
"config: receivers: jaeger: protocols: grpc: endpoint: 0.0.0.0:14250 1 thrift_http: endpoint: 0.0.0.0:14268 2 thrift_compact: endpoint: 0.0.0.0:6831 3 thrift_binary: endpoint: 0.0.0.0:6832 4 tls: {} 5 service: pipelines: traces: receivers: [jaeger]",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-hostfs-daemonset namespace: <namespace> --- apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints allowHostDirVolumePlugin: true allowHostIPC: false allowHostNetwork: false allowHostPID: true allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: null defaultAddCapabilities: - SYS_ADMIN fsGroup: type: RunAsAny groups: [] metadata: name: otel-hostmetrics readOnlyRootFilesystem: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny supplementalGroups: type: RunAsAny users: - system:serviceaccount:<namespace>:otel-hostfs-daemonset volumes: - configMap - emptyDir - hostPath - projected --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <namespace> spec: serviceAccount: otel-hostfs-daemonset mode: daemonset volumeMounts: - mountPath: /hostfs name: host readOnly: true volumes: - hostPath: path: / name: host config: receivers: hostmetrics: collection_interval: 10s 1 initial_delay: 1s 2 root_path: / 3 scrapers: 4 cpu: {} memory: {} disk: {} service: pipelines: metrics: receivers: [hostmetrics]",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-k8sobj namespace: <namespace> --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-k8sobj namespace: <namespace> rules: - apiGroups: - \"\" resources: - events - pods verbs: - get - list - watch - apiGroups: - \"events.k8s.io\" resources: - events verbs: - watch - list --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-k8sobj subjects: - kind: ServiceAccount name: otel-k8sobj namespace: <namespace> roleRef: kind: ClusterRole name: otel-k8sobj apiGroup: rbac.authorization.k8s.io --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-k8s-obj namespace: <namespace> spec: serviceAccount: otel-k8sobj mode: deployment config: receivers: k8sobjects: auth_type: serviceAccount objects: - name: pods 1 mode: pull 2 interval: 30s 3 label_selector: 4 field_selector: 5 namespaces: [<namespace>,...] 6 - name: events mode: watch exporters: debug: service: pipelines: logs: receivers: [k8sobjects] exporters: [debug]",
"config: receivers: kubeletstats: collection_interval: 20s auth_type: \"serviceAccount\" endpoint: \"https://USD{env:K8S_NODE_NAME}:10250\" insecure_skip_verify: true service: pipelines: metrics: receivers: [kubeletstats] env: - name: K8S_NODE_NAME 1 valueFrom: fieldRef: fieldPath: spec.nodeName",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['nodes/stats'] verbs: ['get', 'watch', 'list'] - apiGroups: [\"\"] resources: [\"nodes/proxy\"] 1 verbs: [\"get\"]",
"config: receivers: prometheus: config: scrape_configs: 1 - job_name: 'my-app' 2 scrape_interval: 5s 3 static_configs: - targets: ['my-app.example.svc.cluster.local:8888'] 4 service: pipelines: metrics: receivers: [prometheus]",
"config: otlpjsonfile: include: - \"/var/log/*.log\" 1 exclude: - \"/var/log/test.log\" 2",
"config: receivers: zipkin: endpoint: 0.0.0.0:9411 1 tls: {} 2 service: pipelines: traces: receivers: [zipkin]",
"config: receivers: kafka: brokers: [\"localhost:9092\"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: receivers: [kafka]",
"config: receivers: k8s_cluster: distribution: openshift collection_interval: 10s exporters: debug: {} service: pipelines: metrics: receivers: [k8s_cluster] exporters: [debug] logs/entity_events: receivers: [k8s_cluster] exporters: [debug]",
"apiVersion: v1 kind: ServiceAccount metadata: labels: app: otelcontribcol name: otelcontribcol",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otelcontribcol labels: app: otelcontribcol rules: - apiGroups: - quota.openshift.io resources: - clusterresourcequotas verbs: - get - list - watch - apiGroups: - \"\" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otelcontribcol labels: app: otelcontribcol roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otelcontribcol subjects: - kind: ServiceAccount name: otelcontribcol namespace: default",
"config: receivers: opencensus: endpoint: 0.0.0.0:9411 1 tls: 2 cors_allowed_origins: 3 - https://*.<example>.com service: pipelines: traces: receivers: [opencensus]",
"config: receivers: filelog: include: [ /simple.log ] 1 operators: 2 - type: regex_parser regex: '^(?P<time>\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}) (?P<sev>[A-Z]*) (?P<msg>.*)USD' timestamp: parse_from: attributes.time layout: '%Y-%m-%d %H:%M:%S' severity: parse_from: attributes.sev",
"apiVersion: v1 kind: Namespace metadata: name: otel-journald labels: security.openshift.io/scc.podSecurityLabelSync: \"false\" pod-security.kubernetes.io/enforce: \"privileged\" pod-security.kubernetes.io/audit: \"privileged\" pod-security.kubernetes.io/warn: \"privileged\" --- apiVersion: v1 kind: ServiceAccount metadata: name: privileged-sa namespace: otel-journald --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-journald-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:openshift:scc:privileged subjects: - kind: ServiceAccount name: privileged-sa namespace: otel-journald --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-journald-logs namespace: otel-journald spec: mode: daemonset serviceAccount: privileged-sa securityContext: allowPrivilegeEscalation: false capabilities: drop: - CHOWN - DAC_OVERRIDE - FOWNER - FSETID - KILL - NET_BIND_SERVICE - SETGID - SETPCAP - SETUID readOnlyRootFilesystem: true seLinuxOptions: type: spc_t seccompProfile: type: RuntimeDefault config: receivers: journald: files: /var/log/journal/*/* priority: info 1 units: 2 - kubelet - crio - init.scope - dnsmasq all: true 3 retry_on_failure: enabled: true 4 initial_interval: 1s 5 max_interval: 30s 6 max_elapsed_time: 5m 7 processors: exporters: debug: {} service: pipelines: logs: receivers: [journald] exporters: [debug] volumeMounts: - name: journal-logs mountPath: /var/log/journal/ readOnly: true volumes: - name: journal-logs hostPath: path: /var/log/journal tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector labels: app: otel-collector rules: - apiGroups: - \"\" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch",
"serviceAccount: otel-collector 1 config: receivers: k8s_events: namespaces: [project1, project2] 2 service: pipelines: logs: receivers: [k8s_events]",
"config: processors: batch: timeout: 5s send_batch_max_size: 10000 service: pipelines: traces: processors: [batch] metrics: processors: [batch]",
"config: processors: memory_limiter: check_interval: 1s limit_mib: 4000 spike_limit_mib: 800 service: pipelines: traces: processors: [batch] metrics: processors: [batch]",
"kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"config.openshift.io\"] resources: [\"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"config: processors: resourcedetection: detectors: [openshift] override: true service: pipelines: traces: processors: [resourcedetection] metrics: processors: [resourcedetection]",
"config: processors: resourcedetection/env: detectors: [env] 1 timeout: 2s override: false",
"config: processors: attributes/example: actions: - key: db.table action: delete - key: redacted_span value: true action: upsert - key: copy_key from_attribute: key_original action: update - key: account_id value: 2245 action: insert - key: account_password action: delete - key: account_email action: hash - key: http.status_code action: convert converted_type: int",
"config: processors: attributes: - key: cloud.availability_zone value: \"zone-1\" action: upsert - key: k8s.cluster.name from_attribute: k8s-cluster action: insert - key: redundant-attribute action: delete",
"config: processors: span: name: from_attributes: [<key1>, <key2>, ...] 1 separator: <value> 2",
"config: processors: span/to_attributes: name: to_attributes: rules: - ^\\/api\\/v1\\/document\\/(?P<documentId>.*)\\/updateUSD 1",
"config: processors: span/set_status: status: code: Error description: \"<error_description>\"",
"kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['pods', 'namespaces'] verbs: ['get', 'watch', 'list']",
"config: processors: k8sattributes: filter: node_from_env_var: KUBE_NODE_NAME",
"config: processors: filter/ottl: error_mode: ignore 1 traces: span: - 'attributes[\"container.name\"] == \"app_container_1\"' 2 - 'resource.attributes[\"host.name\"] == \"localhost\"' 3",
"config: processors: routing: from_attribute: X-Tenant 1 default_exporters: 2 - jaeger table: 3 - value: acme exporters: [jaeger/acme] exporters: jaeger: endpoint: localhost:14250 jaeger/acme: endpoint: localhost:24250",
"config: processors: cumulativetodelta: include: 1 match_type: strict 2 metrics: 3 - <metric_1_name> - <metric_2_name> exclude: 4 match_type: regexp metrics: - \"<regular_expression_for_metric_names>\"",
"config: processors: groupbyattrs: keys: 1 - <key1> 2 - <key2>",
"config: processors: transform: error_mode: ignore 1 <trace|metric|log>_statements: 2 - context: <string> 3 conditions: 4 - <string> - <string> statements: 5 - <string> - <string> - <string> - context: <string> statements: - <string> - <string> - <string>",
"config: transform: error_mode: ignore trace_statements: 1 - context: resource statements: - keep_keys(attributes, [\"service.name\", \"service.namespace\", \"cloud.region\", \"process.command_line\"]) 2 - replace_pattern(attributes[\"process.command_line\"], \"password\\\\=[^\\\\s]*(\\\\s?)\", \"password=***\") 3 - limit(attributes, 100, []) - truncate_all(attributes, 4096) - context: span 4 statements: - set(status.code, 1) where attributes[\"http.path\"] == \"/health\" - set(name, attributes[\"http.route\"]) - replace_match(attributes[\"http.target\"], \"/user/*/list/*\", \"/user/{userId}/list/{listId}\") - limit(attributes, 100, []) - truncate_all(attributes, 4096)",
"config: exporters: otlp: endpoint: tempo-ingester:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 3 insecure_skip_verify: false # 4 reload_interval: 1h 5 server_name_override: <name> 6 headers: 7 X-Scope-OrgID: \"dev\" service: pipelines: traces: exporters: [otlp] metrics: exporters: [otlp]",
"config: exporters: otlphttp: endpoint: http://tempo-ingester:4318 1 tls: 2 headers: 3 X-Scope-OrgID: \"dev\" disable_keep_alives: false 4 service: pipelines: traces: exporters: [otlphttp] metrics: exporters: [otlphttp]",
"config: exporters: debug: verbosity: detailed 1 sampling_initial: 5 2 sampling_thereafter: 200 3 use_internal_logger: true 4 service: pipelines: traces: exporters: [debug] metrics: exporters: [debug]",
"config: exporters: loadbalancing: routing_key: \"service\" 1 protocol: otlp: 2 timeout: 1s resolver: 3 static: 4 hostnames: - backend-1:4317 - backend-2:4317 dns: 5 hostname: otelcol-headless.observability.svc.cluster.local k8s: 6 service: lb-svc.kube-public ports: - 15317 - 16317",
"config: exporters: prometheus: endpoint: 0.0.0.0:8889 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem namespace: prefix 3 const_labels: 4 label1: value1 enable_open_metrics: true 5 resource_to_telemetry_conversion: 6 enabled: true metric_expiration: 180m 7 add_metric_suffixes: false 8 service: pipelines: metrics: exporters: [prometheus]",
"config: exporters: prometheusremotewrite: endpoint: \"https://my-prometheus:7900/api/v1/push\" 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem target_info: true 3 export_created_metric: true 4 max_batch_size_bytes: 3000000 5 service: pipelines: metrics: exporters: [prometheusremotewrite]",
"config: exporters: kafka: brokers: [\"localhost:9092\"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: exporters: [kafka]",
"config: exporters: awscloudwatchlogs: log_group_name: \"<group_name_of_amazon_cloudwatch_logs>\" 1 log_stream_name: \"<log_stream_of_amazon_cloudwatch_logs>\" 2 region: <aws_region_of_log_stream> 3 endpoint: <service_endpoint_of_amazon_cloudwatch_logs> 4 log_retention: <supported_value_in_days> 5",
"config: exporters: awsemf: log_group_name: \"<group_name_of_amazon_cloudwatch_logs>\" 1 log_stream_name: \"<log_stream_of_amazon_cloudwatch_logs>\" 2 resource_to_telemetry_conversion: 3 enabled: true region: <region> 4 endpoint: <endpoint> 5 log_retention: <supported_value_in_days> 6 namespace: <custom_namespace> 7",
"config: exporters: awsxray: region: \"<region>\" 1 endpoint: <endpoint> 2 resource_arn: \"<aws_resource_arn>\" 3 role_arn: \"<iam_role>\" 4 indexed_attributes: [ \"<indexed_attr_0>\", \"<indexed_attr_1>\" ] 5 aws_log_groups: [\"<group1>\", \"<group2>\"] 6 request_timeout_seconds: 120 7",
"config: | exporters: file: path: /data/metrics.json 1 rotation: 2 max_megabytes: 10 3 max_days: 3 4 max_backups: 3 5 localtime: true 6 format: proto 7 compression: zstd 8 flush_interval: 5 9",
"config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 exporters: prometheus: endpoint: 0.0.0.0:8889 connectors: count: {} service: pipelines: 1 traces/in: receivers: [otlp] exporters: [count] 2 metrics/out: receivers: [count] 3 exporters: [prometheus]",
"config: connectors: count: spans: 1 <custom_metric_name>: 2 description: \"<custom_metric_description>\" conditions: - 'attributes[\"env\"] == \"dev\"' - 'name == \"devevent\"'",
"config: connectors: count: logs: 1 <custom_metric_name>: 2 description: \"<custom_metric_description>\" attributes: - key: env default_value: unknown 3",
"config: connectors: routing: table: 1 - statement: route() where attributes[\"X-Tenant\"] == \"dev\" 2 pipelines: [traces/dev] 3 - statement: route() where attributes[\"X-Tenant\"] == \"prod\" pipelines: [traces/prod] default_pipelines: [traces/dev] 4 error_mode: ignore 5 match_once: false 6 service: pipelines: traces/in: receivers: [otlp] exporters: [routing] traces/dev: receivers: [routing] exporters: [otlp/dev] traces/prod: receivers: [routing] exporters: [otlp/prod]",
"config: receivers: otlp: protocols: grpc: jaeger: protocols: grpc: processors: batch: exporters: otlp: endpoint: tempo-simplest-distributor:4317 tls: insecure: true connectors: forward: {} service: pipelines: traces/regiona: receivers: [otlp] processors: [] exporters: [forward] traces/regionb: receivers: [jaeger] processors: [] exporters: [forward] traces: receivers: [forward] processors: [batch] exporters: [otlp]",
"config: connectors: spanmetrics: metrics_flush_interval: 15s 1 service: pipelines: traces: exporters: [spanmetrics] metrics: receivers: [spanmetrics]",
"config: extensions: bearertokenauth: scheme: \"Bearer\" 1 token: \"<token>\" 2 filename: \"<token_file>\" 3 receivers: otlp: protocols: http: auth: authenticator: bearertokenauth 4 exporters: otlp: auth: authenticator: bearertokenauth 5 service: extensions: [bearertokenauth] pipelines: traces: receivers: [otlp] exporters: [otlp]",
"config: extensions: oauth2client: client_id: <client_id> 1 client_secret: <client_secret> 2 endpoint_params: 3 audience: <audience> token_url: https://example.com/oauth2/default/v1/token 4 scopes: [\"api.metrics\"] 5 # tls settings for the token client tls: 6 insecure: true 7 ca_file: /var/lib/mycert.pem 8 cert_file: <cert_file> 9 key_file: <key_file> 10 timeout: 2s 11 receivers: otlp: protocols: http: {} exporters: otlp: auth: authenticator: oauth2client 12 service: extensions: [oauth2client] pipelines: traces: receivers: [otlp] exporters: [otlp]",
"config: extensions: file_storage/all_settings: directory: /var/lib/otelcol/mydir 1 timeout: 1s 2 compaction: on_start: true 3 directory: /tmp/ 4 max_transaction_size: 65_536 5 fsync: false 6 exporters: otlp: sending_queue: storage: file_storage/all_settings 7 service: extensions: [file_storage/all_settings] 8 pipelines: traces: receivers: [otlp] exporters: [otlp]",
"config: extensions: oidc: attribute: authorization 1 issuer_url: https://example.com/auth/realms/opentelemetry 2 issuer_ca_path: /var/run/tls/issuer.pem 3 audience: otel-collector 4 username_claim: email 5 receivers: otlp: protocols: grpc: auth: authenticator: oidc exporters: debug: {} service: extensions: [oidc] pipelines: traces: receivers: [otlp] exporters: [debug]",
"config: extensions: jaegerremotesampling: source: reload_interval: 30s 1 remote: endpoint: jaeger-collector:14250 2 file: /etc/otelcol/sampling_strategies.json 3 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [jaegerremotesampling] pipelines: traces: receivers: [otlp] exporters: [debug]",
"{ \"service_strategies\": [ { \"service\": \"foo\", \"type\": \"probabilistic\", \"param\": 0.8, \"operation_strategies\": [ { \"operation\": \"op1\", \"type\": \"probabilistic\", \"param\": 0.2 }, { \"operation\": \"op2\", \"type\": \"probabilistic\", \"param\": 0.4 } ] }, { \"service\": \"bar\", \"type\": \"ratelimiting\", \"param\": 5 } ], \"default_strategy\": { \"type\": \"probabilistic\", \"param\": 0.5, \"operation_strategies\": [ { \"operation\": \"/health\", \"type\": \"probabilistic\", \"param\": 0.0 }, { \"operation\": \"/metrics\", \"type\": \"probabilistic\", \"param\": 0.0 } ] } }",
"config: extensions: pprof: endpoint: localhost:1777 1 block_profile_fraction: 0 2 mutex_profile_fraction: 0 3 save_to_file: test.pprof 4 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [pprof] pipelines: traces: receivers: [otlp] exporters: [debug]",
"config: extensions: health_check: endpoint: \"0.0.0.0:13133\" 1 tls: 2 ca_file: \"/path/to/ca.crt\" cert_file: \"/path/to/cert.crt\" key_file: \"/path/to/key.key\" path: \"/health/status\" 3 check_collector_pipeline: 4 enabled: true 5 interval: \"5m\" 6 exporter_failure_threshold: 5 7 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [health_check] pipelines: traces: receivers: [otlp] exporters: [debug]",
"config: extensions: zpages: endpoint: \"localhost:55679\" 1 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [zpages] pipelines: traces: receivers: [otlp] exporters: [debug]",
"oc port-forward pod/USD(oc get pod -l app.kubernetes.io/name=instance-collector -o=jsonpath='{.items[0].metadata.name}') 55679",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: statefulset 1 targetAllocator: enabled: true 2 serviceAccount: 3 prometheusCR: enabled: true 4 scrapeInterval: 10s serviceMonitorSelector: 5 name: app1 podMonitorSelector: 6 name: app2 config: receivers: prometheus: 7 config: scrape_configs: [] processors: exporters: debug: {} service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-targetallocator rules: - apiGroups: [\"\"] resources: - services - pods - namespaces verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"monitoring.coreos.com\"] resources: - servicemonitors - podmonitors - scrapeconfigs - probes verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"discovery.k8s.io\"] resources: - endpointslices verbs: [\"get\", \"list\", \"watch\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-targetallocator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otel-targetallocator subjects: - kind: ServiceAccount name: otel-targetallocator 1 namespace: observability 2",
"apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: java-instrumentation spec: env: - name: OTEL_EXPORTER_OTLP_TIMEOUT value: \"20\" exporter: endpoint: http://production-collector.observability.svc.cluster.local:4317 propagators: - w3c sampler: type: parentbased_traceidratio argument: \"0.25\" java: env: - name: OTEL_JAVAAGENT_DEBUG value: \"true\"",
"apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation spec exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: configMapName: ca-bundle 2 ca_file: service-ca.crt 3",
"apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation spec exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: secretName: serving-certs 2 ca_file: service-ca.crt 3 cert_file: tls.crt 4 key_file: tls.key 5",
"apiVersion: v1 kind: ConfigMap metadata: name: otelcol-cabundle namespace: tutorial-application annotations: service.beta.openshift.io/inject-cabundle: \"true\" --- apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: my-instrumentation spec: exporter: endpoint: https://simplest-collector.tracing-system.svc.cluster.local:4317 tls: configMapName: otelcol-cabundle ca: service-ca.crt",
"instrumentation.opentelemetry.io/inject-apache-httpd: \"true\"",
"instrumentation.opentelemetry.io/inject-dotnet: \"true\"",
"instrumentation.opentelemetry.io/inject-go: \"true\"",
"apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: otel-go-instrumentation-scc allowHostDirVolumePlugin: true allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: - \"SYS_PTRACE\" fsGroup: type: RunAsAny runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny seccompProfiles: - '*' supplementalGroups: type: RunAsAny",
"oc adm policy add-scc-to-user otel-go-instrumentation-scc -z <service_account>",
"instrumentation.opentelemetry.io/inject-java: \"true\"",
"instrumentation.opentelemetry.io/inject-nodejs: \"true\" instrumentation.opentelemetry.io/otel-go-auto-target-exe: \"/path/to/container/executable\"",
"instrumentation.opentelemetry.io/inject-python: \"true\"",
"instrumentation.opentelemetry.io/container-names: \"<container_1>,<container_2>\"",
"instrumentation.opentelemetry.io/<application_language>-container-names: \"<container_1>,<container_2>\" 1",
"apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar namespace: observability",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-sidecar namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: serviceAccount: otel-collector-sidecar mode: sidecar config: serviceAccount: otel-collector-sidecar receivers: otlp: protocols: grpc: {} http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: \"tempo-<example>-gateway:8090\" 1 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp]",
"apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-<example>-distributor:4317\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector spec: mode: deployment observability: metrics: enableMetrics: true 1 config: exporters: prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: telemetry: metrics: address: \":8888\" pipelines: metrics: exporters: [prometheus]",
"apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: otel-collector spec: selector: matchLabels: app.kubernetes.io/name: <cr_name>-collector 1 podMetricsEndpoints: - port: metrics 2 - port: promexporter 3 relabelings: - action: labeldrop regex: pod - action: labeldrop regex: container - action: labeldrop regex: endpoint metricRelabelings: - action: labeldrop regex: instance - action: labeldrop regex: job",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-monitoring-view 1 subjects: - kind: ServiceAccount name: otel-collector namespace: observability --- kind: ConfigMap apiVersion: v1 metadata: name: cabundle namespce: observability annotations: service.beta.openshift.io/inject-cabundle: \"true\" 2 --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: volumeMounts: - name: cabundle-volume mountPath: /etc/pki/ca-trust/source/service-ca readOnly: true volumes: - name: cabundle-volume configMap: name: cabundle mode: deployment config: receivers: prometheus: 3 config: scrape_configs: - job_name: 'federate' scrape_interval: 15s scheme: https tls_config: ca_file: /etc/pki/ca-trust/source/service-ca/service-ca.crt bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token honor_labels: false params: 'match[]': - '{__name__=\"<metric_name>\"}' 4 metrics_path: '/federate' static_configs: - targets: - \"prometheus-k8s.openshift-monitoring.svc.cluster.local:9091\" exporters: debug: 5 verbosity: detailed service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug]",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: {} otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-simplest-distributor:4317\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] 2 processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]",
"apiVersion: batch/v1 kind: Job metadata: name: telemetrygen spec: template: spec: containers: - name: telemetrygen image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:latest args: - traces - --otlp-endpoint=otel-collector:4317 - --otlp-insecure - --duration=30s - --workers=1 restartPolicy: Never backoffLimit: 4",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: openshift-logging",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-logs-writer rules: - apiGroups: [\"loki.grafana.com\"] resourceNames: [\"logs\"] resources: [\"application\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"pods\", \"namespaces\", \"nodes\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"apps\"] resources: [\"replicasets\"] verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"extensions\"] resources: [\"replicasets\"] verbs: [\"get\", \"list\", \"watch\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-logs-writer roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otel-collector-logs-writer subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: openshift-logging",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: openshift-logging spec: serviceAccount: otel-collector-deployment config: extensions: bearertokenauth: filename: \"/var/run/secrets/kubernetes.io/serviceaccount/token\" receivers: otlp: protocols: grpc: {} http: {} processors: k8sattributes: {} resource: attributes: 1 - key: kubernetes.namespace_name from_attribute: k8s.namespace.name action: upsert - key: kubernetes.pod_name from_attribute: k8s.pod.name action: upsert - key: kubernetes.container_name from_attribute: k8s.container.name action: upsert - key: log_type value: application action: upsert transform: log_statements: - context: log statements: - set(attributes[\"level\"], ConvertCase(severity_text, \"lower\")) exporters: otlphttp: endpoint: https://logging-loki-gateway-http.openshift-logging.svc.cluster.local:8080/api/logs/v1/application/otlp encoding: json tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth debug: verbosity: detailed service: extensions: [bearertokenauth] 2 pipelines: logs: receivers: [otlp] processors: [k8sattributes, transform, resource] exporters: [otlphttp] 3 logs/test: receivers: [otlp] processors: [] exporters: [debug]",
"apiVersion: batch/v1 kind: Job metadata: name: telemetrygen spec: template: spec: containers: - name: telemetrygen image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:v0.106.1 args: - logs - --otlp-endpoint=otel-collector.openshift-logging.svc.cluster.local:4317 - --otlp-insecure - --duration=180s - --workers=1 - --logs=10 - --otlp-attributes=k8s.container.name=\"telemetrygen\" restartPolicy: Never backoffLimit: 4",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: <name> spec: observability: metrics: enableMetrics: true",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned-issuer spec: selfSigned: {}",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: ca spec: isCA: true commonName: ca subject: organizations: - <your_organization_name> organizationalUnits: - Widgets secretName: ca-secret privateKey: algorithm: ECDSA size: 256 issuerRef: name: selfsigned-issuer kind: Issuer group: cert-manager.io",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: test-ca-issuer spec: ca: secretName: ca-secret",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: server spec: secretName: server-tls isCA: false usages: - server auth - client auth dnsNames: - \"otel.observability.svc.cluster.local\" 1 issuerRef: name: ca-issuer --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: client spec: secretName: client-tls isCA: false usages: - server auth - client auth dnsNames: - \"otel.observability.svc.cluster.local\" 2 issuerRef: name: ca-issuer",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-<example> roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: otel-collector-<example> spec: mode: daemonset serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlphttp: endpoint: https://observability-cluster.com:443 1 tls: insecure: false cert_file: /certs/server.crt key_file: /certs/server.key ca_file: /certs/ca.crt service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otlp-receiver namespace: observability spec: mode: \"deployment\" ingress: type: route route: termination: \"passthrough\" config: receivers: otlp: protocols: http: tls: 1 cert_file: /certs/server.crt key_file: /certs/server.key client_ca_file: /certs/ca.crt exporters: otlp: endpoint: \"tempo-<simplest>-distributor:4317\" 2 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs",
"oc adm must-gather --image=ghcr.io/open-telemetry/opentelemetry-operator/must-gather -- /usr/bin/must-gather --operator-namespace <operator_namespace> 1",
"config: service: telemetry: logs: level: debug 1",
"config: service: telemetry: metrics: address: \":8888\" 1",
"oc port-forward <collector_pod>",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector spec: mode: deployment observability: metrics: enableMetrics: true",
"config: exporters: debug: verbosity: detailed service: pipelines: traces: exporters: [debug] metrics: exporters: [debug] logs: exporters: [debug]",
"oc get instrumentation -n <workload_project> 1",
"oc get events -n <workload_project> 1",
"... Created container opentelemetry-auto-instrumentation ... Started container opentelemetry-auto-instrumentation",
"oc logs -l app.kubernetes.io/name=opentelemetry-operator --container manager -n openshift-opentelemetry-operator --follow",
"instrumentation.opentelemetry.io/inject-python=\"true\"",
"oc get pods -n <workload_project> -o jsonpath='{range .items[?(@.metadata.annotations[\"instrumentation.opentelemetry.io/inject-python\"]==\"true\")]}{.metadata.name}{\"\\n\"}{end}'",
"instrumentation.opentelemetry.io/inject-nodejs: \"<instrumentation_object>\"",
"instrumentation.opentelemetry.io/inject-nodejs: \"<other_namespace>/<instrumentation_object>\"",
"oc get instrumentation <instrumentation_name> -n <workload_project> -o jsonpath='{.spec.endpoint}'",
"oc logs <application_pod> -n <workload_project>",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <otel-collector-namespace> spec: mode: sidecar config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: \"tempo-<example>-gateway:8090\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp]",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-sidecar rules: 1 - apiGroups: [\"config.openshift.io\"] resources: [\"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-sidecar subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-example-gateway:8090\" tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]",
"exp, err := jaeger.New(jaeger.WithCollectorEndpoint(jaeger.WithEndpoint(url))) 1",
"oc login --username=<your_username>",
"oc get deployments -n <project_of_opentelemetry_instance>",
"oc delete opentelemetrycollectors <opentelemetry_instance_name> -n <project_of_opentelemetry_instance>",
"oc get deployments -n <project_of_opentelemetry_instance>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/red_hat_build_of_opentelemetry/index |
Chapter 6. Installation configuration parameters for Azure Stack Hub | Chapter 6. Installation configuration parameters for Azure Stack Hub Before you deploy an OpenShift Container Platform cluster on Azure Stack Hub, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. 6.1. Available installation configuration parameters for Azure Stack Hub The following tables specify the required, optional, and Azure Stack Hub-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 6.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 6.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . 
If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 6.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. 
By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ). [1] Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. 
false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 6.1.4. Additional Azure Stack Hub configuration parameters Additional Azure configuration parameters are described in the following table: Table 6.4. Additional Azure Stack Hub parameters Parameter Description Values The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . Defines the type of disk. standard_LRS or premium_LRS . The default is premium_LRS . Defines the azure instance type for compute machines. String The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . Defines the type of disk. premium_LRS . Defines the azure instance type for control plane machines. String The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . Defines the type of disk. standard_LRS or premium_LRS . The default is premium_LRS . The Azure instance type for control plane and compute machines. The Azure instance type. The URL of the Azure Resource Manager endpoint that your Azure Stack Hub operator provides. String The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . The name of your Azure Stack Hub local region. String The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. String, for example existing_resource_group . The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . 
The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. AzureStackCloud The URL of a storage blob in the Azure Stack environment that contains an RHCOS VHD. String, for example, https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd A combined example install-config.yaml file that shows how these parameters fit together is provided after the parameter reference list below. | [
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"compute: platform: azure: osDisk: diskSizeGB:",
"compute: platform: azure: osDisk: diskType:",
"compute: platform: azure: type:",
"controlPlane: platform: azure: osDisk: diskSizeGB:",
"controlPlane: platform: azure: osDisk: diskType:",
"controlPlane: platform: azure: type:",
"platform: azure: defaultMachinePlatform: osDisk: diskSizeGB:",
"platform: azure: defaultMachinePlatform: osDisk: diskType:",
"platform: azure: defaultMachinePlatform: type:",
"platform: azure: armEndpoint:",
"platform: azure: baseDomainResourceGroupName:",
"platform: azure: region:",
"platform: azure: resourceGroupName:",
"platform: azure: outboundType:",
"platform: azure: cloudName:",
"clusterOSImage:"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_azure_stack_hub/installation-config-parameters-ash |
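The tables above describe each parameter in isolation. For illustration only, the following minimal install-config.yaml sketch shows how the required, networking, machine pool, and Azure Stack Hub-specific parameters described in this chapter fit together. Every value shown here (base domain, cluster name, ARM endpoint, region, resource group, VHD URL, pull secret, and SSH key) is an assumed placeholder and must be replaced with values from your own Azure Stack Hub environment; this is not a definitive configuration.

apiVersion: v1
baseDomain: example.com                          # placeholder base domain
metadata:
  name: ash-cluster                              # placeholder cluster name; becomes a subdomain of baseDomain
credentialsMode: Manual                          # Azure Stack Hub clusters typically manage cloud credentials manually
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform: {}
  replicas: 3
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    armEndpoint: https://management.local.azurestack.external   # placeholder ARM endpoint provided by your Azure Stack Hub operator
    region: region1                                             # placeholder Azure Stack Hub region name
    baseDomainResourceGroupName: production_cluster             # resource group that contains the DNS zone for baseDomain
    cloudName: AzureStackCloud
    outboundType: LoadBalancer
    clusterOSImage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd
fips: false
publish: External
pullSecret: '{"auths": ...}'                     # paste the pull secret obtained from Red Hat OpenShift Cluster Manager
sshKey: ssh-ed25519 AAAA...                      # paste the public SSH key used to access cluster machines

Only apiVersion, baseDomain, metadata.name, platform, and pullSecret are strictly required; the remaining fields shown here restate or override the defaults listed in the tables above.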
Managing IdM users, groups, hosts, and access control rules | Managing IdM users, groups, hosts, and access control rules Red Hat Enterprise Linux 9 Configuring users and hosts, managing them in groups, and controlling access with host-based and role-based access control rules Red Hat Customer Content Services | [
"ipa help [TOPIC | COMMAND | topics | commands]",
"ipa help topics",
"ipa help user",
"ipa help user | less",
"ipa help commands",
"ipa help user-add",
"ipa user-add",
"---------------------- Added user \"euser\" ---------------------- User login: euser First name: Example Last name: User Full name: Example User Display name: Example User Initials: EU Home directory: /home/euser GECOS: Example User Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 427200006 GID: 427200006 Password: False Member of groups: ipausers Kerberos keys available: False",
"ipa user-add --first=Example --last=User --password",
"ipa user-mod euser --password",
"---------------------- Modified user \"euser\" ---------------------- User login: euser First name: Example Last name: User Home directory: /home/euser Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 427200006 GID: 427200006 Password: True Member of groups: ipausers Kerberos keys available: True",
"ipa permission-add --right=read --permissions=write --permissions=delete",
"ipa permission-add --right={read,write,delete}",
"ipa permission-mod --right=read --right=write --right=delete",
"ipa permission-mod --right={read,write,delete}",
"ipa permission-mod --right=read --right=write",
"ipa permission-mod --right={read,write}",
"ipa certprofile-show certificate_profile --out= exported\\*profile.cfg",
"ipa user-add user_login --first=first_name --last=last_name --email=email_address",
"[a-zA-Z0-9_.][a-zA-Z0-9_.-]{0,252}[a-zA-Z0-9_.USD-]?",
"ipa config-mod --maxusername=64 Maximum username length: 64",
"ipa help user-add",
"ipa user-find",
"ipa stageuser-activate user_login ------------------------- Stage user user_login activated -------------------------",
"ipa user-find",
"ipa user-del --preserve user_login -------------------- Deleted user \"user_login\" --------------------",
"ipa user-del --continue user1 user2 user3",
"ipa user-del user_login -------------------- Deleted user \"user_login\" --------------------",
"ipa user-undel user_login ------------------------------ Undeleted user account \"user_login\" ------------------------------",
"ipa user-stage user_login ------------------------------ Staged user account \"user_login\" ------------------------------",
"ipa user-find",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create user idm_user ipauser: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idm_user first: Alice last: Acme uid: 1000111 gid: 10011 phone: \"+555123457\" email: [email protected] passwordexpiration: \"2023-01-19 23:59:59\" password: \"Password123\" update_password: on_create",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-IdM-user.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa user-show idm_user User login: idm_user First name: Alice Last name: Acme .",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create user idm_users ipauser: ipaadmin_password: \"{{ ipaadmin_password }}\" users: - name: idm_user_1 first: Alice last: Acme uid: 10001 gid: 10011 phone: \"+555123457\" email: [email protected] passwordexpiration: \"2023-01-19 23:59:59\" password: \"Password123\" - name: idm_user_2 first: Bob last: Acme uid: 100011 gid: 10011 - name: idm_user_3 first: Eve last: Acme uid: 1000111 gid: 10011",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-users.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipa user-show idm_user_1 User login: idm_user_1 First name: Alice Last name: Acme Password: True .",
"[ipaserver] server.idm.example.com",
"--- - name: Ensure users' presence hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Include users_present.json include_vars: file: users_present.json - name: Users present ipauser: ipaadmin_password: \"{{ ipaadmin_password }}\" users: \"{{ users }}\"",
"{ \"users\": [ { \"name\": \"idm_user_1\", \"first\": \"First 1\", \"last\": \"Last 1\", \"password\": \"Password123\" }, { \"name\": \"idm_user_2\", \"first\": \"First 2\", \"last\": \"Last 2\" }, { \"name\": \"idm_user_3\", \"first\": \"First 3\", \"last\": \"Last 3\" } ] }",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory /inventory.file path_to_playbooks_directory /ensure-users-present-jsonfile.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipa user-show idm_user_1 User login: idm_user_1 First name: Alice Last name: Acme Password: True .",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Delete users idm_user_1, idm_user_2, idm_user_3 ipauser: ipaadmin_password: \"{{ ipaadmin_password }}\" users: - name: idm_user_1 - name: idm_user_2 - name: idm_user_3 state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory /inventory.file path_to_playbooks_directory /delete-users.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipa user-show idm_user_1 ipa: ERROR: idm_user_1: user not found",
"set -o braceexpand",
"[bjensen@server ~]USD ipa config-mod --userobjectclasses={person,organizationalperson,inetorgperson,inetuser,posixaccount,krbprincipalaux,krbticketpolicyaux,ipaobject,ipasshuser,mepOriginEntry,top,mailRecipient}",
"set -o braceexpand",
"[bjensen@server ~]USD ipa config-mod --groupobjectclasses={top,groupofnames,nestedgroup,ipausergroup,ipaobject,ipasshuser,employeegroup}",
"[bjensen@server ~]USD ipa config-show --all dn: cn=ipaConfig,cn=etc,dc=example,dc=com Maximum username length: 32 Home directory base: /home Default shell: /bin/sh Default users group: ipausers Default e-mail domain: example.com Search time limit: 2 Search size limit: 100 User search fields: uid,givenname,sn,telephonenumber,ou,title Group search fields: cn,description Enable migration mode: FALSE Certificate Subject base: O=EXAMPLE.COM Default group objectclasses: top, groupofnames, nestedgroup, ipausergroup, ipaobject Default user objectclasses: top, person, organizationalperson, inetorgperson, inetuser, posixaccount, krbprincipalaux, krbticketpolicyaux, ipaobject, ipasshuser Password Expiration Notification (days): 4 Password plugin features: AllowNThash SELinux user map order: guest_u:s0USDxguest_u:s0USDuser_u:s0USDstaff_u:s0-s0:c0.c1023USDunconfined_u:s0-s0:c0.c1023 Default SELinux user: unconfined_u:s0-s0:c0.c1023 Default PAC types: MS-PAC, nfs:NONE cn: ipaConfig objectclass: nsContainer, top, ipaGuiConfig, ipaConfigObject",
"[bjensen@server ~]USD ipa config-mod --defaultshell \"/bin/bash\"",
"pwdhash -D /etc/dirsrv/slapd-IDM-EXAMPLE-COM password {PBKDF2_SHA256}AAAgABU0bKhyjY53NcxY33ueoPjOUWtl4iyYN5uW",
"ipactl stop",
"nsslapd-rootpw: {PBKDF2_SHA256}AAAgABU0bKhyjY53NcxY33ueoPjOUWtl4iyYN5uW",
"ipactl start",
"ipa user-mod idm_user --password Password: Enter Password again to verify: -------------------- Modified user \"idm_user\" --------------------",
"ldapmodify -x -D \"cn=Directory Manager\" -W -h server.idm.example.com -p 389 Enter LDAP Password:",
"dn: cn=ipa_pwd_extop,cn=plugins,cn=config",
"changetype: modify",
"add: passSyncManagersDNs",
"passSyncManagersDNs: uid=admin,cn=users,cn=accounts,dc=example,dc=com",
"ldapmodify -x -D \"cn=Directory Manager\" -W -h server.idm.example.com -p 389 Enter LDAP Password: dn: cn=ipa_pwd_extop,cn=plugins,cn=config changetype: modify add: passSyncManagersDNs passSyncManagersDNs: uid=admin,cn=users,cn=accounts,dc=example,dc=com",
"ipa user-status example_user ----------------------- Account disabled: False ----------------------- Server: idm.example.com Failed logins: 8 Last successful authentication: N/A Last failed authentication: 20220229080317Z Time now: 2022-02-29T08:04:46Z ---------------------------- Number of entries returned 1 ----------------------------",
"ipa user-unlock idm_user ----------------------- Unlocked account \"idm_user\" -----------------------",
"ipa config-show | grep \"Password plugin features\" Password plugin features: AllowNThash , KDC:Disable Last Success",
"ipa config-mod --ipaconfigstring='AllowNThash'",
"ipactl restart",
"[ipaserver] server.idm.example.com",
"--- - name: Tests hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure presence of pwpolicy for group ops ipapwpolicy: ipaadmin_password: \"{{ ipaadmin_password }}\" name: ops minlife: 7 maxlife: 49 history: 5 priority: 1 lockouttime: 300 minlength: 8 minclasses: 4 maxfail: 3 failinterval: 5",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory_/new_pwpolicy_present.yml",
"ipa pwpolicy-add Group: group_name Priority: priority_level",
"ipa pwpolicy-find",
"ipa pwpolicy-mod --usercheck=True managers",
"ipa pwpolicy-mod --maxrepeat=2 managers",
"ipa user-add test_user First name: test Last name: user ---------------------------- Added user \"test_user\" ----------------------------",
"kinit test_user",
"Password expired. You must change it now. Enter new password: Enter it again: Password change rejected: Password not changed. Unspecified password quality failure while trying to change password. Please try again.",
"Password change rejected: Password not changed. Unspecified password quality failure while trying to change password. Please try again. Enter new password: Enter it again:",
"Password change rejected: Password not changed. Unspecified password quality failure while trying to change password. Please try again. Enter new password: Enter it again:",
"klist Ticket cache: KCM:0:33945 Default principal: [email protected] Valid starting Expires Service principal 07/07/2021 12:44:44 07/08/2021 12:44:44 [email protected]@IDM.EXAMPLE.COM",
"--- - name: Tests hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure presence of usercheck and maxrepeat pwpolicy for group managers ipapwpolicy: ipaadmin_password: \"{{ ipaadmin_password }}\" name: managers usercheck: True maxrepeat: 2 maxsequence: 3",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory_/manager_pwpolicy_present.yml",
"ipa user-add test_user First name: test Last name: user ---------------------------- Added user \"test_user\" ----------------------------",
"kinit test_user",
"Password expired. You must change it now. Enter new password: Enter it again: Password change rejected: Password not changed. Unspecified password quality failure while trying to change password. Please try again.",
"Password change rejected: Password not changed. Unspecified password quality failure while trying to change password. Please try again. Enter new password: Enter it again:",
"Password change rejected: Password not changed. Unspecified password quality failure while trying to change password. Please try again. Enter new password: Enter it again:",
"Password change rejected: Password not changed. Unspecified password quality failure while trying to change password. Please try again. Enter new password: Enter it again:",
"klist Ticket cache: KCM:0:33945 Default principal: [email protected] Valid starting Expires Service principal 07/07/2021 12:44:44 07/08/2021 12:44:44 [email protected]@IDM.EXAMPLE.COM",
"dnf install ipa-client-epn",
"vi /etc/ipa/epn.conf",
"notify_ttls = 28, 14, 7, 3, 1",
"smtp_server = localhost smtp_port = 25",
"mail_from = [email protected]",
"smtp_client_cert = /etc/pki/tls/certs/client.pem",
"smtp_client_key = /etc/pki/tls/certs/client.key",
"smtp_client_key_pass = Secret123!",
"ipa-epn --dry-run [ { \"uid\": \"user5\", \"cn\": \"user 5\", \"krbpasswordexpiration\": \"2020-04-17 15:51:53\", \"mail\": \"['[email protected]']\" } ] [ { \"uid\": \"user6\", \"cn\": \"user 6\", \"krbpasswordexpiration\": \"2020-12-17 15:51:53\", \"mail\": \"['[email protected]']\" } ] The IPA-EPN command was successful",
"ipa-epn [ { \"uid\": \"user5\", \"cn\": \"user 5\", \"krbpasswordexpiration\": \"2020-10-01 15:51:53\", \"mail\": \"['[email protected]']\" } ] [ { \"uid\": \"user6\", \"cn\": \"user 6\", \"krbpasswordexpiration\": \"2020-12-17 15:51:53\", \"mail\": \"['[email protected]']\" } ] The IPA-EPN command was successful",
"ipa-epn --from-nbdays 8 --to-nbdays 12",
"systemctl start ipa-epn.timer",
"vi /etc/ipa/epn/expire_msg.template",
"Hi {{ fullname }}, Your password will expire on {{ expiration }}. Please change it as soon as possible.",
"kinit admin",
"ipa sudocmd-add /usr/sbin/reboot ------------------------------------- Added Sudo Command \"/usr/sbin/reboot\" ------------------------------------- Sudo Command: /usr/sbin/reboot",
"ipa sudorule-add idm_user_reboot --------------------------------- Added Sudo Rule \"idm_user_reboot\" --------------------------------- Rule name: idm_user_reboot Enabled: TRUE",
"ipa sudorule-add-allow-command idm_user_reboot --sudocmds '/usr/sbin/reboot' Rule name: idm_user_reboot Enabled: TRUE Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-host idm_user_reboot --hosts idmclient.idm.example.com Rule name: idm_user_reboot Enabled: TRUE Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-user idm_user_reboot --users idm_user Rule name: idm_user_reboot Enabled: TRUE Users: idm_user Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-mod idm_user_reboot --setattr sudonotbefore=20251231123400Z",
"ipa sudorule-mod idm_user_reboot --setattr sudonotafter=20261231123400Z",
"[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for idm_user on idmclient : !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User idm_user may run the following commands on idmclient : (root) /usr/sbin/reboot",
"[idm_user@idmclient ~]USD sudo /usr/sbin/reboot [sudo] password for idm_user:",
"ipa group-add --desc='AD users external map' ad_users_external --external ------------------------------- Added group \"ad_users_external\" ------------------------------- Group name: ad_users_external Description: AD users external map",
"ipa group-add --desc='AD users' ad_users ---------------------- Added group \"ad_users\" ---------------------- Group name: ad_users Description: AD users GID: 129600004",
"ipa group-add-member ad_users_external --external \"[email protected]\" [member user]: [member group]: Group name: ad_users_external Description: AD users external map External member: S-1-5-21-3655990580-1375374850-1633065477-513 ------------------------- Number of members added 1 -------------------------",
"ipa group-add-member ad_users --groups ad_users_external Group name: ad_users Description: AD users GID: 129600004 Member groups: ad_users_external ------------------------- Number of members added 1 -------------------------",
"ipa sudocmd-add /usr/sbin/reboot ------------------------------------- Added Sudo Command \"/usr/sbin/reboot\" ------------------------------------- Sudo Command: /usr/sbin/reboot",
"ipa sudorule-add ad_users_reboot --------------------------------- Added Sudo Rule \"ad_users_reboot\" --------------------------------- Rule name: ad_users_reboot Enabled: True",
"ipa sudorule-add-allow-command ad_users_reboot --sudocmds '/usr/sbin/reboot' Rule name: ad_users_reboot Enabled: True Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-host ad_users_reboot --hosts idmclient.idm.example.com Rule name: ad_users_reboot Enabled: True Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-user ad_users_reboot --groups ad_users Rule name: ad_users_reboot Enabled: TRUE User Groups: ad_users Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ssh [email protected]@ipaclient Password:",
"[[email protected]@idmclient ~]USD sudo -l Matching Defaults entries for [email protected] on idmclient : !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User [email protected] may run the following commands on idmclient : (root) /usr/sbin/reboot",
"[[email protected]@idmclient ~]USD sudo /usr/sbin/reboot [sudo] password for [email protected]:",
"sudo /usr/sbin/reboot [sudo] password for idm_user:",
"kinit admin",
"ipa sudocmd-add /opt/third-party-app/bin/report ---------------------------------------------------- Added Sudo Command \"/opt/third-party-app/bin/report\" ---------------------------------------------------- Sudo Command: /opt/third-party-app/bin/report",
"ipa sudorule-add run_third-party-app_report -------------------------------------------- Added Sudo Rule \"run_third-party-app_report\" -------------------------------------------- Rule name: run_third-party-app_report Enabled: TRUE",
"ipa sudorule-add-runasuser run_third-party-app_report --users= thirdpartyapp Rule name: run_third-party-app_report Enabled: TRUE RunAs External User: thirdpartyapp ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-allow-command run_third-party-app_report --sudocmds '/opt/third-party-app/bin/report' Rule name: run_third-party-app_report Enabled: TRUE Sudo Allow Commands: /opt/third-party-app/bin/report RunAs External User: thirdpartyapp ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-host run_third-party-app_report --hosts idmclient.idm.example.com Rule name: run_third-party-app_report Enabled: TRUE Hosts: idmclient.idm.example.com Sudo Allow Commands: /opt/third-party-app/bin/report RunAs External User: thirdpartyapp ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-user run_third-party-app_report --users idm_user Rule name: run_third-party-app_report Enabled: TRUE Users: idm_user Hosts: idmclient.idm.example.com Sudo Allow Commands: /opt/third-party-app/bin/report RunAs External User: thirdpartyapp ------------------------- Number of members added 1",
"[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for [email protected] on idmclient: !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User [email protected] may run the following commands on idmclient: (thirdpartyapp) /opt/third-party-app/bin/report",
"[idm_user@idmclient ~]USD sudo -u thirdpartyapp /opt/third-party-app/bin/report [sudo] password for [email protected]: Executing report Report successful.",
"[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for [email protected] on idmclient: !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User [email protected] may run the following commands on idmclient: (thirdpartyapp) /opt/third-party-app/bin/report",
"[idm_user@idmclient ~]USD sudo -u thirdpartyapp /opt/third-party-app/bin/report [sudo] password for [email protected]: Executing report Report successful.",
"[domain/ <domain_name> ] pam_gssapi_services = sudo, sudo-i",
"systemctl restart sssd",
"authselect current Profile ID: sssd",
"authselect enable-feature with-gssapi",
"authselect select sssd with-gssapi",
"#%PAM-1.0 auth sufficient pam_sss_gss.so auth include system-auth account include system-auth password include system-auth session include system-auth",
"ssh -l [email protected] localhost [email protected]'s password:",
"[idmuser@idmclient ~]USD klist Ticket cache: KCM:1366201107 Default principal: [email protected] Valid starting Expires Service principal 01/08/2021 09:11:48 01/08/2021 19:11:48 krbtgt/[email protected] renew until 01/15/2021 09:11:44",
"[idm_user@idmclient ~]USD kdestroy -A [idm_user@idmclient ~]USD kinit [email protected] Password for [email protected] :",
"[idm_user@idmclient ~]USD sudo /usr/sbin/reboot",
"[domain/ <domain_name> ] pam_gssapi_services = sudo, sudo-i pam_gssapi_indicators_map = sudo:pkinit, sudo-i:pkinit",
"systemctl restart sssd",
"authselect current Profile ID: sssd",
"authselect select sssd",
"authselect enable-feature with-gssapi",
"authselect with-smartcard-required",
"#%PAM-1.0 auth sufficient pam_sss_gss.so auth include system-auth account include system-auth password include system-auth session include system-auth",
"#%PAM-1.0 auth sufficient pam_sss_gss.so auth include sudo account include sudo password include sudo session optional pam_keyinit.so force revoke session include sudo",
"ssh -l [email protected] localhost PIN for smart_card",
"[idm_user@idmclient ~]USD klist Ticket cache: KEYRING:persistent:1358900015:krb_cache_TObtNMd Default principal: [email protected] Valid starting Expires Service principal 02/15/2021 16:29:48 02/16/2021 02:29:48 krbtgt/[email protected] renew until 02/22/2021 16:29:44",
"[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for idmuser on idmclient : !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User idm_user may run the following commands on idmclient : (root) /usr/sbin/reboot",
"[idm_user@idmclient ~]USD sudo /usr/sbin/reboot",
"[pam] pam_gssapi_services = sudo , sudo-i pam_gssapi_indicators_map = sudo:otp pam_gssapi_check_upn = true",
"[domain/ idm.example.com ] pam_gssapi_services = sudo, sudo-i pam_gssapi_indicators_map = sudo:pkinit , sudo-i:otp pam_gssapi_check_upn = true [domain/ ad.example.com ] pam_gssapi_services = sudo pam_gssapi_check_upn = false",
"Server not found in Kerberos database",
"[idm-user@idm-client ~]USD cat /etc/krb5.conf [domain_realm] .example.com = EXAMPLE.COM example.com = EXAMPLE.COM server.example.com = EXAMPLE.COM",
"No Kerberos credentials available",
"[idm-user@idm-client ~]USD kinit [email protected] Password for [email protected] :",
"User with UPN [ <UPN> ] was not found. UPN [ <UPN> ] does not match target user [ <username> ].",
"[idm-user@idm-client ~]USD cat /etc/sssd/sssd.conf pam_gssapi_check_upn = false",
"cat /etc/pam.d/sudo #%PAM-1.0 auth sufficient pam_sss_gss.so debug auth include system-auth account include system-auth password include system-auth session include system-auth",
"cat /etc/pam.d/sudo-i #%PAM-1.0 auth sufficient pam_sss_gss.so debug auth include sudo account include sudo password include sudo session optional pam_keyinit.so force revoke session include sudo",
"[idm-user@idm-client ~]USD sudo ls -l /etc/sssd/sssd.conf pam_sss_gss: Initializing GSSAPI authentication with SSSD pam_sss_gss: Switching euid from 0 to 1366201107 pam_sss_gss: Trying to establish security context pam_sss_gss: SSSD User name: [email protected] pam_sss_gss: User domain: idm.example.com pam_sss_gss: User principal: pam_sss_gss: Target name: [email protected] pam_sss_gss: Using ccache: KCM: pam_sss_gss: Acquiring credentials, principal name will be derived pam_sss_gss: Unable to read credentials from [KCM:] [maj:0xd0000, min:0x96c73ac3] pam_sss_gss: GSSAPI: Unspecified GSS failure. Minor code may provide more information pam_sss_gss: GSSAPI: No credentials cache found pam_sss_gss: Switching euid from 1366200907 to 0 pam_sss_gss: System error [5]: Input/output error",
"[ipaservers] server.idm.example.com",
"--- - name: Playbook to manage sudo command hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure sudo command is present - ipasudocmd: ipaadmin_password: \"{{ ipaadmin_password }}\" name: /usr/sbin/reboot state: present",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory /ensure-reboot-sudocmd-is-present.yml",
"--- - name: Tests hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure a sudorule is present granting idm_user the permission to run /usr/sbin/reboot on idmclient - ipasudorule: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idm_user_reboot description: A test sudo rule. allow_sudocmd: /usr/sbin/reboot host: idmclient.idm.example.com user: idm_user state: present",
"ansible-playbook -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory /ensure-sudorule-for-idmuser-on-idmclient-is-present.yml",
"sudo /usr/sbin/reboot [sudo] password for idm_user:",
"dn: uid=user_login ,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com changetype: add objectClass: top objectClass: inetorgperson uid: user_login sn: surname givenName: first_name cn: full_name",
"dn: uid=user_login,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com changetype: add objectClass: top objectClass: person objectClass: inetorgperson objectClass: organizationalperson objectClass: posixaccount uid: user_login uidNumber: UID_number gidNumber: GID_number sn: surname givenName: first_name cn: full_name homeDirectory: /home/user_login",
"dn: distinguished_name changetype: modify replace: attribute_to_modify attribute_to_modify: new_value",
"dn: distinguished_name changetype: modify replace: nsAccountLock nsAccountLock: TRUE",
"dn: distinguished_name changetype: modify replace: nsAccountLock nsAccountLock: FALSE",
"dn: distinguished_name changetype: modrdn newrdn: uid=user_login deleteoldrdn: 0 newsuperior: cn=deleted users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com",
"ldapsearch -LLL -x -D \"uid= user_allowed_to_modify_user_entries ,cn=users,cn=accounts,dc=idm,dc=example,dc=com\" -w \"Secret123\" -H ldap://r8server.idm.example.com -b \"cn=users,cn=accounts,dc=idm,dc=example,dc=com\" uid=test_user dn: uid=test_user,cn=users,cn=accounts,dc=idm,dc=example,dc=com memberOf: cn=ipausers,cn=groups,cn=accounts,dc=idm,dc=example,dc=com",
"dn: cn=group_name,cn=groups,cn=accounts,dc=idm,dc=example,dc=com changetype: add objectClass: top objectClass: ipaobject objectClass: ipausergroup objectClass: groupofnames objectClass: nestedgroup objectClass: posixgroup uid: group_name cn: group_name gidNumber: GID_number",
"dn: group_distinguished_name changetype: delete",
"dn: group_distinguished_name changetype: modify add: member member: uid=user_login,cn=users,cn=accounts,dc=idm,dc=example,dc=com",
"dn: distinguished_name changetype: modify delete: member member: uid=user_login,cn=users,cn=accounts,dc=idm,dc=example,dc=com",
"ldapsearch -YGSSAPI -H ldap://server.idm.example.com -b \"cn=groups,cn=accounts,dc=idm,dc=example,dc=com\" \"cn=group_name\" dn: cn=group_name,cn=groups,cn=accounts,dc=idm,dc=example,dc=com ipaNTSecurityIdentifier: S-1-5-21-1650388524-2605035987-2578146103-11017 cn: testgroup objectClass: top objectClass: groupofnames objectClass: nestedgroup objectClass: ipausergroup objectClass: ipaobject objectClass: posixgroup objectClass: ipantgroupattrs ipaUniqueID: 569bf864-9d45-11ea-bea3-525400f6f085 gidNumber: 1997010017",
"ldapmodify -Y GSSAPI -H ldap://server.example.com dn: uid=testuser,cn=users,cn=accounts,dc=example,dc=com changetype: modify replace: telephoneNumber telephonenumber: 88888888",
"ldapmodify -Y GSSAPI -H ldap://server.example.com -f ~/example.ldif",
"kinit admin",
"ldapmodify -Y GSSAPI SASL/GSSAPI authentication started SASL username: [email protected] SASL SSF: 256 SASL data security layer installed.",
"dn: uid=user1,cn=users,cn=accounts,dc=idm,dc=example,dc=com",
"changetype: modrdn",
"newrdn: uid=user1",
"deleteoldrdn: 0",
"newsuperior: cn=deleted users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com",
"[Enter] modifying rdn of entry \"uid=user1,cn=users,cn=accounts,dc=idm,dc=example,dc=com\"",
"ipa user-find --preserved=true -------------- 1 user matched -------------- User login: user1 First name: First 1 Last name: Last 1 Home directory: /home/user1 Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 1997010003 GID: 1997010003 Account disabled: True Preserved user: True ---------------------------- Number of entries returned 1 ----------------------------",
"ldapsearch [-x | -Y mechanism] [options] [search_filter] [list_of_attributes]",
"ldapsearch -x -H ldap://ldap.example.com -s sub \"(uid=user01)\"",
"\"(cn=example)\"",
"kinit admin",
"ipa user-add provisionator --first=provisioning --last=account --password",
"ipa role-add --desc \"Responsible for provisioning stage users\" \"System Provisioning\"",
"ipa role-add-privilege \"System Provisioning\" --privileges=\"Stage User Provisioning\"",
"ipa role-add-member --users=provisionator \"System Provisioning\"",
"ipa user-find provisionator --all --raw -------------- 1 user matched -------------- dn: uid=provisionator,cn=users,cn=accounts,dc=idm,dc=example,dc=com uid: provisionator [...]",
"ipa user-add activator --first=activation --last=account --password",
"ipa role-add-member --users=activator \"User Administrator\"",
"ipa group-add application-accounts",
"ipa pwpolicy-add application-accounts --maxlife=10000 --minlife=0 --history=0 --minclasses=4 --minlength=8 --priority=1 --maxfail=0 --failinterval=1 --lockouttime=0",
"ipa pwpolicy-show application-accounts Group: application-accounts Max lifetime (days): 10000 Min lifetime (hours): 0 History size: 0 [...]",
"ipa group-add-member application-accounts --users={provisionator,activator}",
"kpasswd provisionator kpasswd activator",
"ipa-getkeytab -s server.idm.example.com -p \"activator\" -k /etc/krb5.ipa-activation.keytab",
"#!/bin/bash kinit -k -i activator ipa stageuser-find --all --raw | grep \" uid:\" | cut -d \":\" -f 2 | while read uid; do ipa stageuser-activate USD{uid}; done",
"chmod 755 /usr/local/sbin/ipa-activate-all chown root:root /usr/local/sbin/ipa-activate-all",
"[Unit] Description=Scan IdM every minute for any stage users that must be activated [Service] Environment=KRB5_CLIENT_KTNAME=/etc/krb5.ipa-activation.keytab Environment=KRB5CCNAME=FILE:/tmp/krb5cc_ipa-activate-all ExecStart=/usr/local/sbin/ipa-activate-all",
"[Unit] Description=Scan IdM every minute for any stage users that must be activated [Timer] OnBootSec=15min OnUnitActiveSec=1min [Install] WantedBy=multi-user.target",
"systemctl daemon-reload",
"systemctl enable ipa-activate-all.timer",
"systemctl start ipa-activate-all.timer",
"systemctl status ipa-activate-all.timer ● ipa-activate-all.timer - Scan IdM every minute for any stage users that must be activated Loaded: loaded (/etc/systemd/system/ipa-activate-all.timer; enabled; vendor preset: disabled) Active: active (waiting) since Wed 2020-06-10 16:34:55 CEST; 15s ago Trigger: Wed 2020-06-10 16:35:55 CEST; 44s left Jun 10 16:34:55 server.idm.example.com systemd[1]: Started Scan IdM every minute for any stage users that must be activated.",
"dn: uid=stageidmuser,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com changetype: add objectClass: top objectClass: inetorgperson uid: stageidmuser sn: surname givenName: first_name cn: full_name",
"scp add-stageidmuser.ldif [email protected]:/provisionator/ Password: add-stageidmuser.ldif 100% 364 217.6KB/s 00:00",
"ssh [email protected] Password: [provisionator@server ~]USD",
"[provisionator@server ~]USD kinit provisionator",
"~]USD ldapadd -h server.idm.example.com -p 389 -f add-stageidmuser.ldif SASL/GSSAPI authentication started SASL username: [email protected] SASL SSF: 256 SASL data security layer installed. adding the entry \"uid=stageidmuser,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com\"",
"ssh [email protected] Password: [provisionator@server ~]USD",
"kinit provisionator",
"ldapmodify -h server.idm.example.com -p 389 -Y GSSAPI SASL/GSSAPI authentication started SASL username: [email protected] SASL SSF: 56 SASL data security layer installed.",
"dn: uid=stageuser,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com",
"changetype: add",
"objectClass: top objectClass: inetorgperson",
"uid: stageuser",
"cn: Babs Jensen",
"sn: Jensen",
"[Enter] adding new entry \"uid=stageuser,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com\"",
"ipa stageuser-show stageuser --all --raw dn: uid=stageuser,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com uid: stageuser sn: Jensen cn: Babs Jensen has_password: FALSE has_keytab: FALSE nsaccountlock: TRUE objectClass: top objectClass: inetorgperson objectClass: organizationalPerson objectClass: person",
"ipa user-add-principal <user> <useralias> -------------------------------- Added new aliases to user \"user\" -------------------------------- User login: user Principal alias: [email protected], [email protected]",
"kinit -C <useralias> Password for <user>@IDM.EXAMPLE.COM:",
"ipa user-remove-principal <user> <useralias> -------------------------------- Removed aliases from user \"user\" -------------------------------- User login: user Principal alias: [email protected]",
"ipa user-show <user> User login: user Principal name: [email protected] ipa user-remove-principal user user ipa: ERROR: invalid 'krbprincipalname': at least one value equal to the canonical principal name must be present",
"ipa: ERROR: The realm for the principal does not match the realm for this IPA server",
"ipa user-add-principal <user> <user\\\\@example.com> -------------------------------- Added new aliases to user \"user\" -------------------------------- User login: user Principal alias: [email protected], user\\@[email protected]",
"kinit -E <[email protected]> Password for user\\@[email protected]:",
"ipa: ERROR: The realm for the principal does not match the realm for this IPA server",
"ipa user-remove-principal <user> <user\\\\@example.com> -------------------------------- Removed aliases from user \"user\" -------------------------------- User login: user Principal alias: [email protected]",
"ipa config-mod --enable-sid --add-sids",
"ipa user-show admin --all | grep ipantsecurityidentifier ipantsecurityidentifier: S-1-5-21-2633809701-976279387-419745629-500",
"ipa service-add testservice/client.example.com ------------------------------------------------------------- Modified service \"testservice/[email protected]\" ------------------------------------------------------------- Principal name: testservice/[email protected] Principal alias: testservice/[email protected] Managed by: client.example.com",
"ipa-getkeytab -k /etc/testservice.keytab -p testservice/client.example.com Keytab successfully retrieved and stored in: /etc/testservice.keytab",
"ipa service-show testservice/client.example.com Principal name: testservice/[email protected] Principal alias: testservice/[email protected] Keytab: True Managed by: client.example.com",
"klist -ekt /etc/testservice.keytab Keytab name: FILE:/etc/testservice.keytab KVNO Timestamp Principal ---- ------------------- ------------------------------------------------------ 2 04/01/2020 17:52:55 testservice/[email protected] (aes256-cts-hmac-sha1-96) 2 04/01/2020 17:52:55 testservice/[email protected] (aes128-cts-hmac-sha1-96) 2 04/01/2020 17:52:55 testservice/[email protected] (camellia128-cts-cmac) 2 04/01/2020 17:52:55 testservice/[email protected] (camellia256-cts-cmac)",
"host /[email protected] HTTP /[email protected] ldap /[email protected] DNS /[email protected] cifs /[email protected]",
"ipa service-mod testservice/[email protected] --auth-ind otp --auth-ind pkinit ------------------------------------------------------------- Modified service \"testservice/[email protected]\" ------------------------------------------------------------- Principal name: testservice/[email protected] Principal alias: testservice/[email protected] Authentication Indicators: otp, pkinit Managed by: client.example.com",
"ipa service-mod testservice/[email protected] --auth-ind '' ------------------------------------------------------ Modified service \"testservice/[email protected]\" ------------------------------------------------------ Principal name: testservice/[email protected] Principal alias: testservice/[email protected] Managed by: client.example.com",
"ipa service-show testservice/client.example.com Principal name: testservice/[email protected] Principal alias: testservice/[email protected] Authentication Indicators: otp, pkinit Keytab: True Managed by: client.example.com",
"kvno -S testservice client.example.com testservice/[email protected]: kvno = 1",
"kdestroy",
"klist_ Ticket cache: KCM:1000 Default principal: [email protected] Valid starting Expires Service principal 04/01/2020 12:52:42 04/02/2020 12:52:39 krbtgt/[email protected] 04/01/2020 12:54:07 04/02/2020 12:52:39 testservice/[email protected]",
"ipa krbtpolicy-mod --maxlife= USD((8*60*60)) --maxrenew= USD((24*60*60)) Max life: 28800 Max renew: 86400",
"ipa krbtpolicy-reset Max life: 86400 Max renew: 604800",
"ipa krbtpolicy-show Max life: 28800 Max renew: 86640",
"ipa krbtpolicy-mod --otp-maxlife= 604800 --otp-maxrenew= 604800 --pkinit-maxlife= 172800 --pkinit-maxrenew= 172800",
"ipa krbtpolicy-show Max life: 86400 OTP max life: 604800 PKINIT max life: 172800 Max renew: 604800 OTP max renew: 604800 PKINIT max renew: 172800",
"ipa krbtpolicy-mod admin --maxlife= 172800 --maxrenew= 1209600 Max life: 172800 Max renew: 1209600",
"ipa krbtpolicy-reset admin",
"ipa krbtpolicy-show admin Max life: 172800 Max renew: 1209600",
"ipa krbtpolicy-mod admin --otp-maxrenew=USD((2*24*60*60)) OTP max renew: 172800",
"ipa krbtpolicy-reset username",
"ipa krbtpolicy-show admin Max life: 28800 Max renew: 86640",
"ipa pkinit-status Server name: server1.example.com PKINIT status: enabled [...output truncated...] Server name: server2.example.com PKINIT status: disabled [...output truncated...]",
"ipa config-show Maximum username length: 32 Home directory base: /home Default shell: /bin/sh Default users group: ipausers [...output truncated...] IPA masters capable of PKINIT: server1.example.com [...output truncated...]",
"kinit admin Password for [email protected]: ipa pkinit-status --server=server.idm.example.com 1 server matched ---------------- Server name: server.idm.example.com PKINIT status:enabled ---------------------------- Number of entries returned 1 ----------------------------",
"ipa pkinit-status --server server.idm.example.com ----------------- 0 servers matched ----------------- ---------------------------- Number of entries returned 0 ----------------------------",
"ipa-cacert-manage install -t CT,C,C ca.pem",
"ipa-certupdate",
"ipa-cacert-manage list CN=CA,O=Example Organization The ipa-cacert-manage command was successful",
"ipa-server-certinstall --kdc kdc.pem kdc.key systemctl restart krb5kdc.service",
"ipa pkinit-status Server name: server1.example.com PKINIT status: enabled [...output truncated...] Server name: server2.example.com PKINIT status: disabled [...output truncated...]",
"ipa-pkinit-manage enable Configuring Kerberos KDC (krb5kdc) [1/1]: installing X509 Certificate for PKINIT Done configuring Kerberos KDC (krb5kdc). The ipa-pkinit-manage command was successful",
"klist -ekt /etc/krb5.keytab Keytab name: FILE:/etc/krb5.keytab KVNO Timestamp Principal ---- ------------------- ------------------------------------------------------ 2 02/24/2022 20:28:09 host/[email protected] (aes256-cts-hmac-sha1-96) 2 02/24/2022 20:28:09 host/[email protected] (aes128-cts-hmac-sha1-96) 2 02/24/2022 20:28:09 host/[email protected] (camellia128-cts-cmac) 2 02/24/2022 20:28:09 host/[email protected] (camellia256-cts-cmac)",
"klist -ekt /etc/named.keytab Keytab name: FILE:/etc/named.keytab KVNO Timestamp Principal ---- ------------------- ------------------------------------------------------ 2 11/26/2021 13:51:11 DNS/[email protected] (aes256-cts-hmac-sha1-96) 2 11/26/2021 13:51:11 DNS/[email protected] (aes128-cts-hmac-sha1-96) 2 11/26/2021 13:51:11 DNS/[email protected] (camellia128-cts-cmac) 2 11/26/2021 13:51:11 DNS/[email protected] (camellia256-cts-cmac)",
"kvno DNS/[email protected] DNS/[email protected]: kvno = 3",
"kinit admin Password for [email protected]:",
"ipa-getkeytab -s server1.idm.example.com -p DNS/server1.idm.example.com -k /etc/named.keytab",
"klist -ekt /etc/named.keytab Keytab name: FILE:/etc/named.keytab KVNO Timestamp Principal ---- ------------------- ------------------------------------------------------ 4 08/17/2022 14:42:11 DNS/[email protected] (aes256-cts-hmac-sha1-96) 4 08/17/2022 14:42:11 DNS/[email protected] (aes128-cts-hmac-sha1-96) 4 08/17/2022 14:42:11 DNS/[email protected] (camellia128-cts-cmac) 4 08/17/2022 14:42:11 DNS/[email protected] (camellia256-cts-cmac)",
"kvno DNS/[email protected] DNS/[email protected]: kvno = 4",
"kadmin.local getprinc K/M | grep -E '^Key:' Key: vno 1, aes256-cts-hmac-sha1-96",
"dnf install fido2-tools",
"fido2-token -L",
"fido2-token -C passkey_device",
"ipa user-add user01 --first=user --last=01 --user-auth-type=passkey",
"ipa user-add-passkey user01 --register",
"Insert your passkey device, then press ENTER.",
"Enter PIN: Creating home directory for [email protected] .",
"klist Default principal: [email protected]",
"[domain/shadowutils] id_provider = proxy proxy_lib_name = files auth_provider = none local_auth_policy = only",
"kinit -n @IDM.EXAMPLE.COM -c FILE:armor.ccache",
"kinit -T FILE:armor.ccache <username>@IDM.EXAMPLE.COM Enter your PIN:",
"klist -C Ticket cache: KCM:0:58420 Default principal: <username>@IDM.EXAMPLE.COM Valid starting Expires Service principal 05/09/22 07:48:23 05/10/22 07:03:07 krbtgt/[email protected] config: fast_avail(krbtgt/[email protected]) = yes 08/17/2022 20:22:45 08/18/2022 20:22:43 krbtgt/[email protected] config: pa_type(krbtgt/[email protected]) = 153",
"[realms] EXAMPLE.COM = { kdc = https://kdc.example.com/KdcProxy admin_server = https://kdc.example.com/KdcProxy kpasswd_server = https://kdc.example.com/KdcProxy default_domain = example.com }",
"~]# systemctl restart sssd",
"ls -l /etc/httpd/conf.d/ipa-kdc-proxy.conf lrwxrwxrwx. 1 root root 36 Jun 21 2020 /etc/httpd/conf.d/ipa-kdc-proxy.conf -> /etc/ipa/kdcproxy/ipa-kdc-proxy.conf",
"ipa-ldap-updater /usr/share/ipa/kdcproxy-disable.uldif Update complete The ipa-ldap-updater command was successful",
"systemctl restart httpd.service",
"ls -l /etc/httpd/conf.d/ipa-kdc-proxy.conf ls: cannot access '/etc/httpd/conf.d/ipa-kdc-proxy.conf': No such file or directory",
"ipa-ldap-updater /usr/share/ipa/kdcproxy-enable.uldif Update complete The ipa-ldap-updater command was successful",
"systemctl restart httpd.service",
"ls -l /etc/httpd/conf.d/ipa-kdc-proxy.conf lrwxrwxrwx. 1 root root 36 Jun 21 2020 /etc/httpd/conf.d/ipa-kdc-proxy.conf -> /etc/ipa/kdcproxy/ipa-kdc-proxy.conf",
"[global] use_dns = false",
"[AD. EXAMPLE.COM ] kerberos = kerberos+tcp://1.2.3.4:88 kerberos+tcp://5.6.7.8:88 kpasswd = kpasswd+tcp://1.2.3.4:464 kpasswd+tcp://5.6.7.8:464",
"ipactl restart",
"[global] configs = mit use_dns = true",
"[realms] AD. EXAMPLE.COM = { kdc = ad-server.ad.example.com kpasswd_server = ad-server.ad.example.com }",
"ipactl restart",
"ipa selfservice-add \"Users can manage their own name details\" --permissions=write --attrs=givenname --attrs=displayname --attrs=title --attrs=initials ----------------------------------------------------------- Added selfservice \"Users can manage their own name details\" ----------------------------------------------------------- Self-service name: Users can manage their own name details Permissions: write Attributes: givenname, displayname, title, initials",
"ipa selfservice-mod \"Users can manage their own name details\" --attrs=givenname --attrs=displayname --attrs=title --attrs=initials --attrs=surname -------------------------------------------------------------- Modified selfservice \"Users can manage their own name details\" -------------------------------------------------------------- Self-service name: Users can manage their own name details Permissions: write Attributes: givenname, displayname, title, initials",
"ipa selfservice-show \"Users can manage their own name details\" -------------------------------------------------------------- Self-service name: Users can manage their own name details Permissions: write Attributes: givenname, displayname, title, initials",
"ipa selfservice-del \"Users can manage their own name details\" ----------------------------------------------------------- Deleted selfservice \"Users can manage their own name details\" -----------------------------------------------------------",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-present.yml selfservice-present-copy.yml",
"--- - name: Self-service present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure self-service rule \"Users can manage their own name details\" is present ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" permission: read, write attribute: - givenname - displayname - title - initials",
"ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-absent.yml selfservice-absent-copy.yml",
"--- - name: Self-service absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure self-service rule \"Users can manage their own name details\" is absent ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-absent-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-member-present.yml selfservice-member-present-copy.yml",
"--- - name: Self-service member present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure selfservice \"Users can manage their own name details\" member attribute surname is present ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" attribute: - surname action: member",
"ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-member-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-member-absent.yml selfservice-member-absent-copy.yml",
"--- - name: Self-service member absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure selfservice \"Users can manage their own name details\" member attributes givenname and surname are absent ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" attribute: - givenname - surname action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-member-absent-copy.yml",
"ipa group-add group_a --------------------- Added group \"group_a\" --------------------- Group name: group_a GID: 1133400009",
"ipa group-del group_a -------------------------- Deleted group \"group_a\" --------------------------",
"ipa group-add-member group_a --groups=group_b Group name: group_a GID: 1133400009 Member users: user_a Member groups: group_b Indirect Member users: user_b ------------------------- Number of members added 1 -------------------------",
"ipa user-add jsmith --first=John --last=Smith --noprivate --gid 10000",
"kinit admin",
"ipa-managed-entries --list",
"ipa-managed-entries -e \"UPG Definition\" disable Disabling Plugin",
"sudo systemctl restart dirsrv.target",
"ipa-managed-entries -e \"UPG Definition\" disable Plugin already disabled",
"ipa group-add-member-manager group_a --users=test Group name: group_a GID: 1133400009 Membership managed by users: test ------------------------- Number of members added 1 -------------------------",
"ipa group-add-member-manager group_a --groups=group_admins Group name: group_a GID: 1133400009 Membership managed by groups: group_admins Membership managed by users: test ------------------------- Number of members added 1 -------------------------",
"ipa group-show group_a Group name: group_a GID: 1133400009 Membership managed by groups: group_admins Membership managed by users: test",
"ipa group-show group_a Member users: user_a Member groups: group_b Indirect Member users: user_b",
"ipa group-remove-member group_name --users= user1 --users= user2 --groups= group1",
"ipa group-remove-member-manager group_a --users=test Group name: group_a GID: 1133400009 Membership managed by groups: group_admins --------------------------- Number of members removed 1 ---------------------------",
"ipa group-remove-member-manager group_a --groups=group_admins Group name: group_a GID: 1133400009 --------------------------- Number of members removed 1 ---------------------------",
"ipa group-show group_a Group name: group_a GID: 1133400009",
"Allow initgroups to default to the setting for group. initgroups: sss [SUCCESS=merge] files",
"ipa user-add idmuser First name: idm Last name: user --------------------- Added user \"idmuser\" --------------------- User login: idmuser First name: idm Last name: user Full name: idm user Display name: idm user Initials: tu Home directory: /home/idmuser GECOS: idm user Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 19000024 GID: 19000024 Password: False Member of groups: ipausers Kerberos keys available: False",
"getent group audio --------------------- audio:x:63",
"ipa group-add audio --gid 63 ------------------- Added group \"audio\" ------------------- Group name: audio GID: 63",
"ipa group-add-member audio --users= idmuser Group name: audio GID: 63 Member users: idmuser ------------------------- Number of members added 1 -------------------------",
"id idmuser uid=1867800003(idmuser) gid=1867800003(idmuser) groups=1867800003(idmuser),63(audio),10(wheel)",
"Allow initgroups to default to the setting for group. initgroups: sss [SUCCESS=merge] files",
"getent group audio --------------------- audio:x:63",
"--- - name: Playbook to manage idoverrideuser hosts: ipaserver become: false tasks: - name: Add [email protected] user to the Default Trust View ipaidoverrideuser: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: \"Default Trust View\" anchor: [email protected]",
"- name: Add the audio group with the aduser member and GID of 63 ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: audio idoverrideuser: - [email protected] gidnumber: 63",
"ansible-playbook --vault-password-file=password_file -v -i inventory add-aduser-to-audio-group.yml",
"ssh [email protected]@client.idm.example.com",
"id [email protected] uid=702801456([email protected]) gid=63(audio) groups=63(audio)",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle groups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create group ops with gid 1234 ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: ops gidnumber: 1234 - name: Create group sysops ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: sysops user: - idm_user - name: Create group appops ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: appops - name: Add group members sysops and appops to group ops ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: ops group: - sysops - appops",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-group-members.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipaserver]USD ipa group-show ops Group name: ops GID: 1234 Member groups: sysops, appops Indirect Member users: idm_user",
"--- - name: Playbook to add nonposix and external groups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Add nonposix group sysops and external group appops ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" groups: - name: sysops nonposix: true - name: appops external: true",
"ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/hosts <path_to_playbooks_directory>/add-nonposix-and-external-groups.yml",
"cd ~/ MyPlaybooks /",
"--- - name: Playbook to ensure presence of users in a group hosts: ipaserver - name: Ensure the [email protected] user ID override is a member of the admins group: ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: admins idoverrideuser: - [email protected]",
"ansible-playbook --vault-password-file=password_file -v -i inventory add-useridoverride-to-group.yml",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure user test is present for group_a ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_a membermanager_user: test - name: Ensure group_admins is present for group_a ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_a membermanager_group: group_admins",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-member-managers-user-groups.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipaserver]USD ipa group-show group_a Group name: group_a GID: 1133400009 Membership managed by groups: group_admins Membership managed by users: test",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager user and group members are absent for group_a ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_a membermanager_user: test membermanager_group: group_admins action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-member-managers-are-absent.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipaserver]USD ipa group-show group_a Group name: group_a GID: 1133400009",
"ipa automember-add Automember Rule: user_group Grouping Type: group -------------------------------- Added automember rule \"user_group\" -------------------------------- Automember Rule: user_group",
"ipa automember-add-condition Automember Rule: user_group Attribute Key: uid Grouping Type: group [Inclusive Regex]: .* [Exclusive Regex]: ---------------------------------- Added condition(s) to \"user_group\" ---------------------------------- Automember Rule: user_group Inclusive Regex: uid=.* ---------------------------- Number of conditions added 1 ----------------------------",
"ipa automember-add-condition Automember Rule: ad_users Attribute Key: objectclass Grouping Type: group [Inclusive Regex]: ntUser [Exclusive Regex]: ------------------------------------- Added condition(s) to \"ad_users\" ------------------------------------- Automember Rule: ad_users Inclusive Regex: objectclass=ntUser ---------------------------- Number of conditions added 1 ----------------------------",
"ipa automember-find Grouping Type: group --------------- 1 rules matched --------------- Automember Rule: user_group Inclusive Regex: uid=.* ---------------------------- Number of entries returned 1 ----------------------------",
"ipa automember-remove-condition Automember Rule: user_group Attribute Key: uid Grouping Type: group [Inclusive Regex]: .* [Exclusive Regex]: ----------------------------------- Removed condition(s) from \"user_group\" ----------------------------------- Automember Rule: user_group ------------------------------ Number of conditions removed 1 ------------------------------",
"ipa automember-rebuild --type=group -------------------------------------------------------- Automember rebuild task finished. Processed (9) entries. --------------------------------------------------------",
"ipa automember-rebuild --users=target_user1 --users=target_user2 -------------------------------------------------------- Automember rebuild task finished. Processed (2) entries. --------------------------------------------------------",
"ipa automember-default-group-set Default (fallback) Group: default_user_group Grouping Type: group --------------------------------------------------- Set default (fallback) group for automember \"default_user_group\" --------------------------------------------------- Default (fallback) Group: cn=default_user_group,cn=groups,cn=accounts,dc=example,dc=com",
"ipa automember-default-group-show Grouping Type: group Default (fallback) Group: cn=default_user_group,cn=groups,cn=accounts,dc=example,dc=com",
"mkdir ~/MyPlaybooks/",
"cd ~/MyPlaybooks",
"[defaults] inventory = /home/ your_username /MyPlaybooks/inventory [privilege_escalation] become=True",
"[ipaserver] server.idm.example.com [ipareplicas] replica1.idm.example.com replica2.idm.example.com [ipacluster:children] ipaserver ipareplicas [ipacluster:vars] ipaadmin_password=SomeADMINpassword [ipaclients] ipaclient1.example.com ipaclient2.example.com [ipaclients:vars] ipaadmin_password=SomeADMINpassword",
"ssh-keygen",
"ssh-copy-id [email protected] ssh-copy-id [email protected]",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/automember/automember-group-present.yml automember-group-present-copy.yml",
"--- - name: Automember group present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure group automember rule admins is present ipaautomember: ipaadmin_password: \"{{ ipaadmin_password }}\" name: testing_group automember_type: group state: present",
"ansible-playbook --vault-password-file=password_file -v -i inventory automember-group-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/automember/automember-hostgroup-rule-present.yml automember-usergroup-rule-present.yml",
"--- - name: Automember user group rule member present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure an automember condition for a user group is present ipaautomember: ipaadmin_password: \"{{ ipaadmin_password }}\" name: testing_group automember_type: group state: present action: member inclusive: - key: UID expression: . *",
"ansible-playbook --vault-password-file=password_file -v -i inventory automember-usergroup-rule-present.yml",
"kinit admin",
"ipa user-add user101 --first user --last 101 ----------------------- Added user \"user101\" ----------------------- User login: user101 First name: user Last name: 101 Member of groups: ipausers, testing_group",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/automember/automember-hostgroup-rule-absent.yml automember-usergroup-rule-absent.yml",
"--- - name: Automember user group rule member absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure an automember condition for a user group is absent ipaautomember: ipaadmin_password: \"{{ ipaadmin_password }}\" name: testing_group automember_type: group state: absent action: member inclusive: - key: initials expression: dp",
"ansible-playbook --vault-password-file=password_file -v -i inventory automember-usergroup-rule-absent.yml",
"kinit admin",
"ipa automember-show --type=group testing_group Automember Rule: testing_group",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/automember/automember-group-absent.yml automember-group-absent-copy.yml",
"--- - name: Automember group absent example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure group automember rule admins is absent ipaautomember: ipaadmin_password: \"{{ ipaadmin_password }}\" name: testing_group automember_type: group state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory automember-group-absent.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/automember/automember-hostgroup-rule-present.yml automember-hostgroup-rule-present-copy.yml",
"--- - name: Automember user group rule member present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure an automember condition for a user group is present ipaautomember: ipaadmin_password: \"{{ ipaadmin_password }}\" name: primary_dns_domain_hosts automember_type: hostgroup state: present action: member inclusive: - key: fqdn expression: .*.idm.example.com exclusive: - key: fqdn expression: .*.example.org",
"ansible-playbook --vault-password-file=password_file -v -i inventory automember-hostgroup-rule-present-copy.yml",
"ipa delegation-add \"basic manager attributes\" --permissions=read --permissions=write --attrs=businesscategory --attrs=departmentnumber --attrs=employeetype --attrs=employeenumber --group=managers --membergroup=employees ------------------------------------------- Added delegation \"basic manager attributes\" ------------------------------------------- Delegation name: basic manager attributes Permissions: read, write Attributes: businesscategory, departmentnumber, employeetype, employeenumber Member user group: employees User group: managers",
"ipa delegation-find -------------------- 1 delegation matched -------------------- Delegation name: basic manager attributes Permissions: read, write Attributes: businesscategory, departmentnumber, employeenumber, employeetype Member user group: employees User group: managers ---------------------------- Number of entries returned 1 ----------------------------",
"ipa delegation-mod \"basic manager attributes\" --attrs=businesscategory --attrs=departmentnumber --attrs=employeetype --attrs=employeenumber --attrs=displayname ---------------------------------------------- Modified delegation \"basic manager attributes\" ---------------------------------------------- Delegation name: basic manager attributes Permissions: read, write Attributes: businesscategory, departmentnumber, employeetype, employeenumber, displayname Member user group: employees User group: managers",
"ipa delegation-del Delegation name: basic manager attributes --------------------------------------------- Deleted delegation \"basic manager attributes\" ---------------------------------------------",
"mkdir ~/MyPlaybooks/",
"cd ~/MyPlaybooks",
"[defaults] inventory = /home/ <username> /MyPlaybooks/inventory [privilege_escalation] become=True",
"[eu] server.idm.example.com [us] replica.idm.example.com [ipaserver:children] eu us",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/delegation/delegation-present.yml delegation-present-copy.yml",
"--- - name: Playbook to manage a delegation rule hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure delegation \"basic manager attributes\" is present ipadelegation: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"basic manager attributes\" permission: read, write attribute: - businesscategory - departmentnumber - employeenumber - employeetype group: managers membergroup: employees",
"ansible-playbook --vault-password-file=password_file -v -i ~/ MyPlaybooks /inventory delegation-present-copy.yml",
"cd ~/ MyPlaybooks> /",
"cp /usr/share/doc/ansible-freeipa/playbooks/delegation/delegation-present.yml delegation-absent-copy.yml",
"--- - name: Delegation absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure delegation \"basic manager attributes\" is absent ipadelegation: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"basic manager attributes\" state: absent",
"ansible-playbook --vault-password-file=password_file -v -i ~/ MyPlaybooks /inventory delegation-absent-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/delegation/delegation-member-present.yml delegation-member-present-copy.yml",
"--- - name: Delegation member present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure delegation \"basic manager attributes\" member attribute departmentnumber is present ipadelegation: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"basic manager attributes\" attribute: - departmentnumber action: member",
"ansible-playbook --vault-password-file=password_file -v -i ~/ MyPlaybooks /inventory delegation-member-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/delegation/delegation-member-absent.yml delegation-member-absent-copy.yml",
"--- - name: Delegation member absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure delegation \"basic manager attributes\" member attributes employeenumber and employeetype are absent ipadelegation: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"basic manager attributes\" attribute: - employeenumber - employeetype action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i ~/ MyPlaybooks /inventory delegation-member-absent-copy.yml",
"ipa permission-add \"dns admin\"",
"ipa permission-add \"dns admin\" --bindtype=all",
"ipa permission-add \"dns admin\" --right=read --right=write",
"ipa permission-add \"dns admin\" --right={read,write}",
"ipa permission-add \"dns admin\" --attrs=description --attrs=automountKey",
"ipa permission-add \"dns admin\" --attrs={description,automountKey}",
"ipa permission-add \"manage service\" --right=all --type=service --attrs=krbprincipalkey --attrs=krbprincipalname --attrs=managedby",
"ipa permission-add \"manage automount locations\" --subtree=\"ldap://ldap.example.com:389/cn=automount,dc=example,dc=com\" --right=write --attrs=automountmapname --attrs=automountkey --attrs=automountInformation",
"ipa permission-add \"manage Windows groups\" --filter=\"(!(objectclass=posixgroup))\" --right=write --attrs=description",
"ipa permission-add ManageShell --right=\"write\" --type=user --attr=loginshell --memberof=engineers",
"ipa permission-add ManageMembers --right=\"write\" --subtree=cn=groups,cn=accounts,dc=example,dc=test --attr=member --targetgroup=engineers",
"ipa permission-show <permission> --raw",
"ipa privilege-add \"managing filesystems\" --desc=\"for filesystems\"",
"ipa privilege-add-permission \"managing filesystems\" --permissions=\"managing automount\" --permissions=\"managing ftp services\"",
"ipa role-add --desc=\"User Administrator\" useradmin ------------------------ Added role \"useradmin\" ------------------------ Role name: useradmin Description: User Administrator",
"ipa role-add-privilege --privileges=\"user administrators\" useradmin Role name: useradmin Description: User Administrator Privileges: user administrators ---------------------------- Number of privileges added 1 ----------------------------",
"ipa role-add-member --groups=useradmins useradmin Role name: useradmin Description: User Administrator Member groups: useradmins Privileges: user administrators ------------------------- Number of members added 1 -------------------------",
"mkdir ~/MyPlaybooks/",
"cd ~/MyPlaybooks",
"[defaults] inventory = /home/ your_username /MyPlaybooks/inventory [privilege_escalation] become=True",
"[eu] server.idm.example.com [us] replica.idm.example.com [ipaserver:children] eu us",
"ssh-keygen",
"ssh-copy-id [email protected] ssh-copy-id [email protected]",
"cd ~/ <MyPlaybooks> /",
"cp /usr/share/doc/ansible-freeipa/playbooks/role/role-member-user-present.yml role-member-user-present-copy.yml",
"--- - name: Playbook to manage IPA role with members. hosts: ipaserver become: true gather_facts: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - iparole: ipaadmin_password: \"{{ ipaadmin_password }}\" name: user_and_host_administrator user: idm_user01 group: idm_group01 privilege: - Group Administrators - User Administrators - Stage User Administrators - Group Administrators",
"ansible-playbook --vault-password-file=password_file -v -i ~/ <MyPlaybooks> /inventory role-member-user-present-copy.yml",
"cd ~/ <MyPlaybooks> /",
"cp /usr/share/doc/ansible-freeipa/playbooks/role/role-is-absent.yml role-is-absent-copy.yml",
"--- - name: Playbook to manage IPA role with members. hosts: ipaserver become: true gather_facts: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - iparole: ipaadmin_password: \"{{ ipaadmin_password }}\" name: user_and_host_administrator state: absent",
"ansible-playbook --vault-password-file=password_file -v -i ~/ <MyPlaybooks> /inventory role-is-absent-copy.yml",
"cd ~/ <MyPlaybooks> /",
"cp /usr/share/doc/ansible-freeipa/playbooks/role/role-member-group-present.yml role-member-group-present-copy.yml",
"--- - name: Playbook to manage IPA role with members. hosts: ipaserver become: true gather_facts: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - iparole: ipaadmin_password: \"{{ ipaadmin_password }}\" name: helpdesk group: junior_sysadmins action: member",
"ansible-playbook --vault-password-file=password_file -v -i ~/ <MyPlaybooks> /inventory role-member-group-present-copy.yml",
"cd ~/ <MyPlaybooks> /",
"cp /usr/share/doc/ansible-freeipa/playbooks/role/role-member-user-absent.yml role-member-user-absent-copy.yml",
"--- - name: Playbook to manage IPA role with members. hosts: ipaserver become: true gather_facts: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - iparole: ipaadmin_password: \"{{ ipaadmin_password }}\" name: helpdesk user - user_01 - user_02 action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i ~/ <MyPlaybooks> /inventory role-member-user-absent-copy.yml",
"cd ~/ <MyPlaybooks> /",
"cp /usr/share/doc/ansible-freeipa/playbooks/role/role-member-service-present-absent.yml role-member-service-present-copy.yml",
"--- - name: Playbook to manage IPA role with members. hosts: ipaserver become: true gather_facts: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - iparole: ipaadmin_password: \"{{ ipaadmin_password }}\" name: web_administrator service: - HTTP/client01.idm.example.com action: member",
"ansible-playbook --vault-password-file=password_file -v -i ~/ <MyPlaybooks> /inventory role-member-service-present-copy.yml",
"cd ~/ <MyPlaybooks> /",
"cp /usr/share/doc/ansible-freeipa/playbooks/role/role-member-host-present.yml role-member-host-present-copy.yml",
"--- - name: Playbook to manage IPA role with members. hosts: ipaserver become: true gather_facts: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - iparole: ipaadmin_password: \"{{ ipaadmin_password }}\" name: web_administrator host: - client01.idm.example.com action: member",
"ansible-playbook --vault-password-file=password_file -v -i ~/ <MyPlaybooks> /inventory role-member-host-present-copy.yml",
"cd ~/ <MyPlaybooks> /",
"cp /usr/share/doc/ansible-freeipa/playbooks/role/role-member-hostgroup-present.yml role-member-hostgroup-present-copy.yml",
"--- - name: Playbook to manage IPA role with members. hosts: ipaserver become: true gather_facts: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - iparole: ipaadmin_password: \"{{ ipaadmin_password }}\" name: web_administrator hostgroup: - web_servers action: member",
"ansible-playbook --vault-password-file=password_file -v -i ~/ <MyPlaybooks> /inventory role-member-hostgroup-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/privilege/privilege-present.yml privilege-present-copy.yml",
"--- - name: Privilege present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure privilege full_host_administration is present ipaprivilege: ipaadmin_password: \"{{ ipaadmin_password }}\" name: full_host_administration description: This privilege combines all IdM permissions related to host administration",
"ansible-playbook --vault-password-file=password_file -v -i inventory privilege-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/privilege/privilege-member-present.yml privilege-member-present-copy.yml",
"--- - name: Privilege member present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure that permissions are present for the \"full_host_administration\" privilege ipaprivilege: ipaadmin_password: \"{{ ipaadmin_password }}\" name: full_host_administration permission: - \"System: Add krbPrincipalName to a Host\" - \"System: Enroll a Host\" - \"System: Manage Host Certificates\" - \"System: Manage Host Enrollment Password\" - \"System: Manage Host Keytab\" - \"System: Manage Host Principals\" - \"Retrieve Certificates from the CA\" - \"Revoke Certificate\" - \"System: Add Hosts\" - \"System: Add krbPrincipalName to a Host\" - \"System: Enroll a Host\" - \"System: Manage Host Certificates\" - \"System: Manage Host Enrollment Password\" - \"System: Manage Host Keytab\" - \"System: Manage Host Keytab Permissions\" - \"System: Manage Host Principals\" - \"System: Manage Host SSH Public Keys\" - \"System: Manage Service Keytab\" - \"System: Manage Service Keytab Permissions\" - \"System: Modify Hosts\" - \"System: Remove Hosts\" - \"System: Add Hostgroups\" - \"System: Modify Hostgroup Membership\" - \"System: Modify Hostgroups\" - \"System: Remove Hostgroups\"",
"ansible-playbook --vault-password-file=password_file -v -i inventory privilege-member-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/privilege/privilege-member-absent.yml privilege-member-absent-copy.yml",
"--- - name: Privilege absent example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure that the \"Request Certificate ignoring CA ACLs\" permission is absent from the \"Certificate Administrators\" privilege ipaprivilege: ipaadmin_password: \"{{ ipaadmin_password }}\" name: Certificate Administrators permission: - \"Request Certificate ignoring CA ACLs\" action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory privilege-member-absent-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/privilege/privilege-present.yml rename-privilege.yml",
"--- - name: Rename a privilege hosts: ipaserver",
"[...] tasks: - name: Ensure the full_host_administration privilege is renamed to limited_host_administration ipaprivilege: [...]",
"--- - name: Rename a privilege hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure the full_host_administration privilege is renamed to limited_host_administration ipaprivilege: ipaadmin_password: \"{{ ipaadmin_password }}\" name: full_host_administration rename: limited_host_administration state: renamed",
"ansible-playbook --vault-password-file=password_file -v -i inventory rename-privilege.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/privilege/privilege-absent.yml privilege-absent-copy.yml",
"[...] tasks: - name: Ensure privilege \"CA administrator\" is absent ipaprivilege: [...]",
"--- - name: Privilege absent example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure privilege \"CA administrator\" is absent ipaprivilege: ipaadmin_password: \"{{ ipaadmin_password }}\" name: CA administrator state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory privilege-absent-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/permission/permission-present.yml permission-present-copy.yml",
"--- - name: Permission present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure that the \"MyPermission\" permission is present ipapermission: ipaadmin_password: \"{{ ipaadmin_password }}\" name: MyPermission object_type: host right: all",
"ansible-playbook --vault-password-file=password_file -v -i inventory permission-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/permission/permission-present.yml permission-present-with-attribute.yml",
"--- - name: Permission present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure that the \"MyPermission\" permission is present with an attribute ipapermission: ipaadmin_password: \"{{ ipaadmin_password }}\" name: MyPermission object_type: host right: all attrs: description",
"ansible-playbook --vault-password-file=password_file -v -i inventory permission-present-with-attribute.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/permission/permission-absent.yml permission-absent-copy.yml",
"--- - name: Permission absent example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure that the \"MyPermission\" permission is absent ipapermission: ipaadmin_password: \"{{ ipaadmin_password }}\" name: MyPermission state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory permission-absent-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/permission/permission-member-present.yml permission-member-present-copy.yml",
"--- - name: Permission member present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure that the \"gecos\" and \"description\" attributes are present in \"MyPermission\" ipapermission: ipaadmin_password: \"{{ ipaadmin_password }}\" name: MyPermission attrs: - description - gecos action: member",
"ansible-playbook --vault-password-file=password_file -v -i inventory permission-member-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/permission/permission-member-absent.yml permission-member-absent-copy.yml",
"--- - name: Permission absent example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure that an attribute is not a member of \"MyPermission\" ipapermission: ipaadmin_password: \"{{ ipaadmin_password }}\" name: MyPermission attrs: description action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory permission-member-absent-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/permission/permission-renamed.yml permission-renamed-copy.yml",
"--- - name: Permission present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Rename the \"MyPermission\" permission ipapermission: ipaadmin_password: \"{{ ipaadmin_password }}\" name: MyPermission rename: MyNewPermission state: renamed",
"ansible-playbook --vault-password-file=password_file -v -i inventory permission-renamed-copy.yml",
"ipa help idviews ID Views Manage ID Views IPA allows to override certain properties of users and groups[...] [...] Topic commands: idoverridegroup-add Add a new Group ID override idoverridegroup-del Delete a Group ID override [...]",
"ipa idview-add --help Usage: ipa [global-options] idview-add NAME [options] Add a new ID View. Options: -h, --help show this help message and exit --desc=STR Description [...]",
"ipa idview-add example_for_host1 --------------------------- Added ID View \"example_for_host1\" --------------------------- ID View Name: example_for_host1",
"ipa idoverrideuser-add example_for_host1 idm_user --login=user_1234 ----------------------------- Added User ID override \"idm_user\" ----------------------------- Anchor to override: idm_user User login: user_1234",
"ipa idoverrideuser-add-cert example_for_host1 user --certificate=\"MIIEATCC...\"",
"ipa idview-apply example_for_host1 --hosts=host1.idm.example.com ----------------------------- Applied ID View \"example_for_host1\" ----------------------------- hosts: host1.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 1 ---------------------------------------------",
"ssh root@host1 Password:",
"root@host1 ~]# sss_cache -E",
"root@host1 ~]# systemctl restart sssd",
"ssh [email protected] Password: Last login: Sun Jun 21 22:34:25 2020 from 192.168.122.229 [user_1234@host1 ~]USD",
"[user_1234@host1 ~]USD pwd /home/idm_user/",
"id idm_user uid=779800003(user_1234) gid=779800003(idm_user) groups=779800003(idm_user) user_1234 uid=779800003(user_1234) gid=779800003(idm_user) groups=779800003(idm_user)",
"mkdir /home/user_1234/",
"chown idm_user:idm_user /home/user_1234/",
"ipa idview-show example_for_host1 --all dn: cn=example_for_host1,cn=views,cn=accounts,dc=idm,dc=example,dc=com ID View Name: example_for_host1 User object override: idm_user Hosts the view applies to: host1.idm.example.com objectclass: ipaIDView, top, nsContainer",
"ipa idoverrideuser-mod example_for_host1 idm_user --homedir=/home/user_1234 ----------------------------- Modified a User ID override \"idm_user\" ----------------------------- Anchor to override: idm_user User login: user_1234 Home directory: /home/user_1234/",
"ssh root@host1 Password:",
"root@host1 ~]# sss_cache -E",
"root@host1 ~]# systemctl restart sssd",
"ssh [email protected] Password: Last login: Sun Jun 21 22:34:25 2020 from 192.168.122.229 [user_1234@host1 ~]USD",
"[user_1234@host1 ~]USD pwd /home/user_1234/",
"mkdir /home/user_1234/",
"chown idm_user:idm_user /home/user_1234/",
"ipa idview-add example_for_host1 --------------------------- Added ID View \"example_for_host1\" --------------------------- ID View Name: example_for_host1",
"ipa idoverrideuser-add example_for_host1 idm_user --homedir=/home/user_1234 ----------------------------- Added User ID override \"idm_user\" ----------------------------- Anchor to override: idm_user Home directory: /home/user_1234/",
"ipa idview-apply example_for_host1 --hosts=host1.idm.example.com ----------------------------- Applied ID View \"example_for_host1\" ----------------------------- hosts: host1.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 1 ---------------------------------------------",
"ssh root@host1 Password:",
"root@host1 ~]# sss_cache -E",
"root@host1 ~]# systemctl restart sssd",
"ssh [email protected] Password: Activate the web console with: systemctl enable --now cockpit.socket Last login: Sun Jun 21 22:34:25 2020 from 192.168.122.229 [idm_user@host1 /]USD",
"[idm_user@host1 /]USD pwd /home/user_1234/",
"ipa hostgroup-add --desc=\"Baltimore hosts\" baltimore --------------------------- Added hostgroup \"baltimore\" --------------------------- Host-group: baltimore Description: Baltimore hosts",
"ipa hostgroup-add-member --hosts={host102,host103} baltimore Host-group: baltimore Description: Baltimore hosts Member hosts: host102.idm.example.com, host103.idm.example.com ------------------------- Number of members added 2 -------------------------",
"ipa idview-apply --hostgroups=baltimore ID View Name: example_for_host1 ----------------------------------------- Applied ID View \"example_for_host1\" ----------------------------------------- hosts: host102.idm.example.com, host103.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 2 ---------------------------------------------",
"ipa hostgroup-add-member --hosts=somehost.idm.example.com baltimore Host-group: baltimore Description: Baltimore hosts Member hosts: host102.idm.example.com, host103.idm.example.com,somehost.idm.example.com ------------------------- Number of members added 1 -------------------------",
"ipa idview-show example_for_host1 --all dn: cn=example_for_host1,cn=views,cn=accounts,dc=idm,dc=example,dc=com ID View Name: example_for_host1 [...] Hosts the view applies to: host102.idm.example.com, host103.idm.example.com objectclass: ipaIDView, top, nsContainer",
"ipa idview-apply --host=somehost.idm.example.com ID View Name: example_for_host1 ----------------------------------------- Applied ID View \"example_for_host1\" ----------------------------------------- hosts: somehost.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 1 ---------------------------------------------",
"ipa idview-show example_for_host1 --all dn: cn=example_for_host1,cn=views,cn=accounts,dc=idm,dc=example,dc=com ID View Name: example_for_host1 [...] Hosts the view applies to: host102.idm.example.com, host103.idm.example.com, somehost.idm.example.com objectclass: ipaIDView, top, nsContainer",
"--- - name: Playbook to manage idoverrideuser hosts: ipaserver become: false gather_facts: false vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure idview_for_host1 is present idview: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idview_for_host1 - name: Ensure idview_for_host1 is applied to host1.idm.example.com idview: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idview_for_host1 host: host1.idm.example.com action: member - name: Ensure idm_user is present in idview_for_host1 with homedir /home/user_1234 and name user_1234 ipaidoverrideuser: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: idview_for_host1 anchor: idm_user name: user_1234 homedir: /home/user_1234",
"ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/inventory <path_to_playbooks_directory>/add-idoverrideuser-with-name-and-homedir.yml",
"ssh root@host1 Password:",
"root@host1 ~]# sss_cache -E",
"root@host1 ~]# systemctl restart sssd",
"ssh [email protected] Password: Last login: Sun Jun 21 22:34:25 2020 from 192.168.122.229 [user_1234@host1 ~]USD",
"[user_1234@host1 ~]USD pwd /home/user_1234/",
"--- - name: Playbook to manage idoverrideuser hosts: ipaserver become: false gather_facts: false vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure test user idm_user is present in idview idview_for_host1 with sshpubkey ipaidoverrideuser: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: idview_for_host1 anchor: idm_user sshpubkey: - ssh-rsa AAAAB3NzaC1yc2EAAADAQABAAABgQCqmVDpEX5gnSjKuv97Ay - name: Ensure idview_for_host1 is applied to host1.idm.example.com ipaidview: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idview_for_host1 host: host1.idm.example.com action: member",
"ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/inventory <path_to_playbooks_directory>/ensure-idoverrideuser-can-login-with-sshkey.yml",
"ssh root@host1 Password:",
"root@host1 ~]# sss_cache -E",
"root@host1 ~]# systemctl restart sssd",
"ssh -i ~/.ssh/id_rsa.pub [email protected] Last login: Sun Jun 21 22:34:25 2023 from 192.168.122.229 [idm_user@host1 ~]USD",
"Allow initgroups to default to the setting for group. initgroups: sss [SUCCESS=merge] files",
"getent group audio --------------------- audio:x:63",
"--- - name: Playbook to manage idoverrideuser hosts: ipaserver become: false tasks: - name: Add [email protected] user to the Default Trust View ipaidoverrideuser: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: \"Default Trust View\" anchor: [email protected]",
"- name: Add the audio group with the aduser member and GID of 63 ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: audio idoverrideuser: - [email protected] gidnumber: 63",
"ansible-playbook --vault-password-file=password_file -v -i inventory add-aduser-to-audio-group.yml",
"ssh [email protected]@client.idm.example.com",
"id [email protected] uid=702801456([email protected]) gid=63(audio) groups=63(audio)",
"--- - name: Ensure both local user and IdM user have access to same files hosts: ipaserver become: false gather_facts: false tasks: - name: Ensure idview_for_host1 is applied to host1.idm.example.com ipaidview: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idview_for_host01 host: host1.idm.example.com - name: Ensure idmuser is present in idview_for_host01 with the UID of 20001 ipaidoverrideuser: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: idview_for_host01 anchor: idm_user UID: 20001",
"ansible-playbook --vault-password-file=password_file -v -i inventory ensure-idmuser-and-local-user-have-access-to-same-files.yml",
"--- - name: Ensure both local user and IdM user have access to same files hosts: ipaserver become: false gather_facts: false tasks: - name: Ensure idview_for_host1 is applied to host01.idm.example.com ipaidview: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idview_for_host01 host: host01.idm.example.com - name: Ensure an IdM user is present in ID view with two certificates ipaidoverrideuser: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: idview_for_host01 anchor: idm_user certificate: - \"{{ lookup('file', 'cert1.b64', rstrip=False) }}\" - \"{{ lookup('file', 'cert2.b64', rstrip=False) }}\"",
"ansible-playbook --vault-password-file=password_file -v -i inventory ensure-idmuser-present-in-idview-with-certificates.yml",
"getent group audio --------------------- audio:x:63",
"--- - name: Playbook to give IdM group access to sound card on IdM client hosts: ipaserver become: false tasks: - name: Ensure the audio group exists in IdM ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: audio - name: Ensure idview_for_host01 exists and is applied to host01.idm.example.com ipaidview: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idview_for_host01 host: host01.idm.example.com - name: Add an override for the IdM audio group with GID 63 to idview_for_host01 ipaidoverridegroup: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: idview_for_host01 anchor: audio GID: 63",
"ansible-playbook --vault-password-file=password_file -v -i inventory give-idm-group-access-to-sound-card-on-idm-client.yml",
"kinit admin Password:",
"ipa user-add testuser --first test --last user --password User login [tuser]: Password: Enter Password again to verify: ------------------ Added user \"tuser\" ------------------",
"ipa group-add-member --tuser audio",
"ssh [email protected]",
"id tuser uid=702801456(tuser) gid=63(audio) groups=63(audio)",
"ipa idview-show example-view ID View Name: example-view User object overrides: example-user1 Group object overrides: example-group",
"ipa idoverrideuser-add 'Default Trust View' [email protected] --gidnumber=732000006",
"sssctl cache-expire -u [email protected]",
"id [email protected] uid=702801456([email protected]) gid=732000006(ad_admins) groups=732000006(ad_admins),702800513(domain [email protected])",
"ipa idview-add example_for_host1 --------------------------- Added ID View \"example_for_host1\" --------------------------- ID View Name: example_for_host1",
"ipa idoverrideuser-add example_for_host1 [email protected] --gidnumber=732001337 ----------------------------- Added User ID override \"[email protected]\" ----------------------------- Anchor to override: [email protected] GID: 732001337",
"ipa idview-apply example_for_host1 --hosts=host1.idm.example.com ----------------------------- Applied ID View \"example_for_host1\" ----------------------------- hosts: host1.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 1 ---------------------------------------------",
"sssctl cache-expire -u [email protected]",
"ssh [email protected]@host1.idm.example.com",
"[[email protected]@host1 ~]USD id [email protected] uid=702801456([email protected]) gid=732001337(admins2) groups=732001337(admins2),702800513(domain [email protected])",
"ipa hostgroup-add --desc=\"Baltimore hosts\" baltimore --------------------------- Added hostgroup \"baltimore\" --------------------------- Host-group: baltimore Description: Baltimore hosts",
"ipa hostgroup-add-member --hosts={host102,host103} baltimore Host-group: baltimore Description: Baltimore hosts Member hosts: host102.idm.example.com, host103.idm.example.com ------------------------- Number of members added 2 -------------------------",
"ipa idview-apply --hostgroups=baltimore ID View Name: example_for_host1 ----------------------------------------- Applied ID View \"example_for_host1\" ----------------------------------------- hosts: host102.idm.example.com, host103.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 2 ---------------------------------------------",
"ipa hostgroup-add-member --hosts=somehost.idm.example.com baltimore Host-group: baltimore Description: Baltimore hosts Member hosts: host102.idm.example.com, host103.idm.example.com,somehost.idm.example.com ------------------------- Number of members added 1 -------------------------",
"ipa idview-show example_for_host1 --all dn: cn=example_for_host1,cn=views,cn=accounts,dc=idm,dc=example,dc=com ID View Name: example_for_host1 [...] Hosts the view applies to: host102.idm.example.com, host103.idm.example.com objectclass: ipaIDView, top, nsContainer",
"ipa idview-apply --host=somehost.idm.example.com ID View Name: example_for_host1 ----------------------------------------- Applied ID View \"example_for_host1\" ----------------------------------------- hosts: somehost.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 1 ---------------------------------------------",
"ipa idview-show example_for_host1 --all dn: cn=example_for_host1,cn=views,cn=accounts,dc=idm,dc=example,dc=com ID View Name: example_for_host1 [...] Hosts the view applies to: host102.idm.example.com, host103.idm.example.com, somehost.idm.example.com objectclass: ipaIDView, top, nsContainer",
"ipa idrange-find --------------- 1 range matched --------------- Range name: IDM.EXAMPLE.COM_id_range First Posix ID of the range: 882200000 Number of IDs in the range: 200000 Range type: local domain range ---------------------------- Number of entries returned 1 ----------------------------",
"ipa idrange-add IDM.EXAMPLE.COM_new_range --base-id 5000 --range-size 1000 --rid-base 300000 --secondary-rid-base 1300000 --type ipa-local ipa: WARNING: Service [email protected] requires restart on IPA server <all IPA servers> to apply configuration changes. ------------------------------------------ Added ID range \"IDM.EXAMPLE.COM_new_range\" ------------------------------------------ Range name: IDM.EXAMPLE.COM_new_range First Posix ID of the range: 5000 Number of IDs in the range: 1000 First RID of the corresponding RID range: 300000 First RID of the secondary RID range: 1300000 Range type: local domain range",
"systemctl restart [email protected]",
"sss_cache -E",
"systemctl restart sssd",
"ipa idrange-find ---------------- 2 ranges matched ---------------- Range name: IDM.EXAMPLE.COM_id_range First Posix ID of the range: 882200000 Number of IDs in the range: 200000 Range type: local domain range Range name: IDM.EXAMPLE.COM_new_range First Posix ID of the range: 5000 Number of IDs in the range: 1000 First RID of the corresponding RID range: 300000 First RID of the secondary RID range: 1300000 Range type: local domain range ---------------------------- Number of entries returned 2 ----------------------------",
"ipa idrange-show IDM.EXAMPLE.COM_id_range Range name: IDM.EXAMPLE.COM_id_range First Posix ID of the range: 196600000 Number of IDs in the range: 200000 First RID of the corresponding RID range: 1000 First RID of the secondary RID range: 1000000 Range type: local domain range",
"cd ~/ MyPlaybooks /",
"--- - name: Playbook to manage idrange hosts: ipaserver become: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure local idrange is present ipaidrange: ipaadmin_password: \"{{ ipaadmin_password }}\" name: new_id_range base_id: 12000000 range_size: 200000 rid_base: 1000000 secondary_rid_base: 200000000",
"ansible-playbook --vault-password-file=password_file -v -i inventory idrange-present.yml",
"systemctl restart [email protected]",
"sss_cache -E",
"systemctl restart sssd",
"ipa idrange-find ---------------- 2 ranges matched ---------------- Range name: IDM.EXAMPLE.COM_id_range First Posix ID of the range: 882200000 Number of IDs in the range: 200000 Range type: local domain range Range name: IDM.EXAMPLE.COM_new_id_range First Posix ID of the range: 12000000 Number of IDs in the range: 200000 Range type: local domain range ---------------------------- Number of entries returned 2 ----------------------------",
"ipa idrange-find",
"ipa idrange-del AD.EXAMPLE.COM_id_range",
"systemctl restart sssd",
"ipa-replica-manage dnarange-show serverA.example.com: 1001-1500 serverB.example.com: 1501-2000 serverC.example.com: No range set ipa-replica-manage dnarange-show serverA.example.com serverA.example.com: 1001-1500",
"ipa-replica-manage dnanextrange-show serverA.example.com: 2001-2500 serverB.example.com: No on-deck range set serverC.example.com: No on-deck range set ipa-replica-manage dnanextrange-show serverA.example.com serverA.example.com: 2001-2500",
"ipa-replica-manage dnarange-set serverA.example.com 1250-1499",
"ipa-replica-manage dnanextrange-set serverB.example.com 1500-5000",
"ipa subid-find",
"ipa subid-generate --owner=idmuser Added subordinate id \"359dfcef-6b76-4911-bd37-bb5b66b8c418\" Unique ID: 359dfcef-6b76-4911-bd37-bb5b66b8c418 Description: auto-assigned subid Owner: idmuser SubUID range start: 2147483648 SubUID range size: 65536 SubGID range start: 2147483648 SubGID range size: 65536",
"/usr/libexec/ipa/ipa-subids --all-users Found 2 user(s) without subordinate ids Processing user 'user4' (1/2) Processing user 'user5' (2/2) Updated 2 user(s) The ipa-subids command was successful",
"ipa config-mod --user-default-subid=True",
"ipa subid-find --owner=idmuser 1 subordinate id matched Unique ID: 359dfcef-6b76-4911-bd37-bb5b66b8c418 Owner: idmuser SubUID range start: 2147483648 SubUID range size: 65536 SubGID range start: 2147483648 SubGID range size: 65536 Number of entries returned 1",
"ipa subid-show 359dfcef-6b76-4911-bd37-bb5b66b8c418 Unique ID: 359dfcef-6b76-4911-bd37-bb5b66b8c418 Owner: idmuser SubUID range start: 2147483648 SubUID range size: 65536 SubGID range start: 2147483648 SubGID range size: 65536",
"ipa subid-match --subuid=2147483670 1 subordinate id matched Unique ID: 359dfcef-6b76-4911-bd37-bb5b66b8c418 Owner: uid=idmuser SubUID range start: 2147483648 SubUID range size: 65536 SubGID range start: 2147483648 SubGID range size: 65536 Number of entries returned 1",
"[...] subid: sss",
"getsubids idmuser 0: idmuser 2147483648 65536",
"ipa host-add client1.example.com",
"ipa host-add --ip-address=192.168.166.31 client1.example.com",
"ipa host-add --force client1.example.com",
"ipa host-del --updatedns client1.example.com",
"ipa-client-install --force-join",
"User authorized to enroll computers: hostadmin Password for hostadmin @ EXAMPLE.COM :",
"ipa-client-install --keytab /tmp/krb5.keytab",
"[user@client1 ~]USD id admin uid=1254400000(admin) gid=1254400000(admins) groups=1254400000(admins)",
"[user@client1 ~]USD su - idm_user Last login: Thu Oct 18 18:39:11 CEST 2018 from 192.168.122.1 on pts/0 [idm_user@client1 ~]USD",
"ipa service-find old-client-name.example.com",
"find / -name \"*.keytab\"",
"ipa hostgroup-find old-client-name.example.com",
"ipa-client-install --uninstall",
"ipa dnsrecord-del Record name: old-client-client Zone name: idm.example.com No option to delete specific record provided. Delete all? Yes/No (default No): true ------------------------ Deleted record \"old-client-name\"",
"ipa-rmkeytab -k /path/to/keytab -r EXAMPLE.COM",
"ipa host-del client.example.com",
"hostnamectl set-hostname new-client-name.example.com",
"ipa service-add service_name/new-client-name",
"kinit admin ipa host-disable client.example.com",
"ipa-getkeytab -s server.example.com -p host/client.example.com -k /etc/krb5.keytab -D \"cn=directory manager\" -w password",
"ipa service-add-host principal --hosts=<hostname>",
"ipa service-add HTTP/web.example.com ipa service-add-host HTTP/web.example.com --hosts=client1.example.com",
"kinit -kt /etc/krb5.keytab host/client1.example.com ipa-getkeytab -s server.example.com -k /tmp/test.keytab -p HTTP/web.example.com Keytab successfully retrieved and stored in: /tmp/test.keytab",
"kinit -kt /etc/krb5.keytab host/client1.example.com openssl req -newkey rsa:2048 -subj '/CN=web.example.com/O=EXAMPLE.COM' -keyout /etc/pki/tls/web.key -out /tmp/web.csr -nodes Generating a 2048 bit RSA private key .............................................................+++ ............................................................................................+++ Writing new private key to '/etc/pki/tls/private/web.key'",
"ipa cert-request --principal=HTTP/web.example.com web.csr Certificate: MIICETCCAXqgA...[snip] Subject: CN=web.example.com,O=EXAMPLE.COM Issuer: CN=EXAMPLE.COM Certificate Authority Not Before: Tue Feb 08 18:51:51 2011 UTC Not After: Mon Feb 08 18:51:51 2016 UTC Serial number: 1005",
"kinit admin",
"ipa host-add-managedby client2.example.com --hosts=client1.example.com",
"kinit -kt /etc/krb5.keytab host/client1.example.com",
"ipa-getkeytab -s server.example.com -k /tmp/client2.keytab -p host/client2.example.com Keytab successfully retrieved and stored in: /tmp/client2.keytab",
"kinit -kt /etc/krb5.keytab host/[email protected]",
"kinit -kt /etc/httpd/conf/krb5.keytab HTTP/[email protected]",
"[ipaserver] server.idm.example.com",
"--- - name: Host present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Host host01.idm.example.com present ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com state: present force: true",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-is-present.yml",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Host name: host01.idm.example.com Principal name: host/[email protected] Principal alias: host/[email protected] Password: False Keytab: False Managed by: host01.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Host present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure host01.idm.example.com is present ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com description: Example host ip_address: 192.168.0.123 locality: Lab ns_host_location: Lab ns_os_version: CentOS 7 ns_hardware_platform: Lenovo T61 mac_address: - \"08:00:27:E3:B1:2D\" - \"52:54:00:BD:97:1E\" state: present",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-is-present.yml",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Host name: host01.idm.example.com Description: Example host Locality: Lab Location: Lab Platform: Lenovo T61 Operating system: CentOS 7 Principal name: host/[email protected] Principal alias: host/[email protected] MAC address: 08:00:27:E3:B1:2D, 52:54:00:BD:97:1E Password: False Keytab: False Managed by: host01.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Ensure hosts with random password hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Hosts host01.idm.example.com and host02.idm.example.com present with random passwords ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" hosts: - name: host01.idm.example.com random: true force: true - name: host02.idm.example.com random: true force: true register: ipahost",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-are-present.yml [...] TASK [Hosts host01.idm.example.com and host02.idm.example.com present with random passwords] changed: [r8server.idm.example.com] => {\"changed\": true, \"host\": {\"host01.idm.example.com\": {\"randompassword\": \"0HoIRvjUdH0Ycbf6uYdWTxH\"}, \"host02.idm.example.com\": {\"randompassword\": \"5VdLgrf3wvojmACdHC3uA3s\"}}}",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Host name: host01.idm.example.com Password: True Keytab: False Managed by: host01.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Host member IP addresses present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure host101.example.com IP addresses present ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com ip_address: - 192.168.0.123 - fe80::20c:29ff:fe02:a1b3 - 192.168.0.124 - fe80::20c:29ff:fe02:a1b4 force: true",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-with-multiple-IP-addreses-is-present.yml",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Principal name: host/[email protected] Principal alias: host/[email protected] Password: False Keytab: False Managed by: host01.idm.example.com",
"ipa dnsrecord-show idm.example.com host01 [...] Record name: host01 A record: 192.168.0.123, 192.168.0.124 AAAA record: fe80::20c:29ff:fe02:a1b3, fe80::20c:29ff:fe02:a1b4",
"[ipaserver] server.idm.example.com",
"--- - name: Host absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Host host01.idm.example.com absent ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com updatedns: true state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-absent.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipa host-show host01.idm.example.com ipa: ERROR: host01.idm.example.com: host not found",
"ipa hostgroup-find ------------------- 1 hostgroup matched ------------------- Host-group: ipaservers Description: IPA server hosts ---------------------------- Number of entries returned 1 ----------------------------",
"ipa hostgroup-find --all ------------------- 1 hostgroup matched ------------------- dn: cn=ipaservers,cn=hostgroups,cn=accounts,dc=idm,dc=local Host-group: ipaservers Description: IPA server hosts Member hosts: xxx.xxx.xxx.xxx ipauniqueid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx objectclass: top, groupOfNames, nestedGroup, ipaobject, ipahostgroup ---------------------------- Number of entries returned 1 ----------------------------",
"ipa hostgroup-add --desc ' My new host group ' group_name --------------------- Added hostgroup \"group_name\" --------------------- Host-group: group_name Description: My new host group ---------------------",
"ipa hostgroup-del group_name -------------------------- Deleted hostgroup \"group_name\" --------------------------",
"ipa hostgroup-add-member group_name --hosts example_member Host-group: group_name Description: My host group Member hosts: example_member ------------------------- Number of members added 1 -------------------------",
"ipa hostgroup-add-member group_name --hostgroups nested_group Host-group: group_name Description: My host group Member host-groups: nested_group ------------------------- Number of members added 1 -------------------------",
"ipa hostgroup-add-member group_name --hosts={ host1,host2 } --hostgroups={ group1,group2 }",
"ipa hostgroup-remove-member group_name --hosts example_member Host-group: group_name Description: My host group ------------------------- Number of members removed 1 -------------------------",
"ipa hostgroup-remove-member group_name --hostgroups example_member Host-group: group_name Description: My host group ------------------------- Number of members removed 1 -------------------------",
"ipa hostgroup- remove -member group_name --hosts={ host1,host2 } --hostgroups={ group1,group2 }",
"ipa hostgroup-add-member-manager group_name --user example_member Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins Member of netgroups: group_name Membership managed by users: example_member ------------------------- Number of members added 1 -------------------------",
"ipa hostgroup-add-member-manager group_name --groups admin_group Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins Member of netgroups: group_name Membership managed by groups: admin_group Membership managed by users: example_member ------------------------- Number of members added 1 -------------------------",
"ipa hostgroup-show group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins Membership managed by groups: admin_group Membership managed by users: example_member",
"ipa hostgroup-remove-member-manager group_name --user example_member Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins Member of netgroups: group_name Membership managed by groups: nested_group --------------------------- Number of members removed 1 ---------------------------",
"ipa hostgroup-remove-member-manager group_name --groups nested_group Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins Member of netgroups: group_name --------------------------- Number of members removed 1 ---------------------------",
"ipa hostgroup-show group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is present - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases state: present",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hostgroup-is-present.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is present - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases host: - db.idm.example.com action: member",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-present-in-hostgroup.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases Member hosts: db.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure hosts and hostgroups are present in existing databases hostgroup - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases hostgroup: - mysql-server - oracle-server action: member",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-present-in-hostgroup.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases Member hosts: db.idm.example.com Member host-groups: mysql-server, oracle-server",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle host group membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager user example_member is present for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_user: example_member - name: Ensure member manager group project_admins is present for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_group: project_admins",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-member-managers-host-groups.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipaserver]USD ipa hostgroup-show group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: testhostgroup2 Membership managed by groups: project_admins Membership managed by users: example_member",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is absent - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases host: - db.idm.example.com action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-absent-in-hostgroup.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases Member host-groups: mysql-server, oracle-server",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure hosts and hostgroups are absent in existing databases hostgroup - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases hostgroup: - mysql-server - oracle-server action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-absent-in-hostgroup.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - Ensure host-group databases is absent ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hostgroup-is-absent.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases ipa: ERROR: databases: host group not found",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle host group membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager host and host group members are absent for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_user: example_member membermanager_group: project_admins action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-member-managers-host-groups-are-absent.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipaserver]USD ipa hostgroup-show group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: testhostgroup2",
"ipa hbacrule-add Rule name: rule_name --------------------------- Added HBAC rule \" rule_name \" --------------------------- Rule name: rule_name Enabled: TRUE",
"ipa hbacrule-add-user --users= sysadmin Rule name: rule_name Rule name: rule_name Enabled: True Users: sysadmin ------------------------- Number of members added 1 -------------------------",
"ipa hbacrule-mod rule_name --hostcat=all ------------------------------ Modified HBAC rule \" rule_name \" ------------------------------ Rule name: rule_name Host category: all Enabled: TRUE Users: sysadmin",
"ipa hbacrule-mod rule_name --servicecat=all ------------------------------ Modified HBAC rule \" rule_name \" ------------------------------ Rule name: rule_name Host category: all Service category: all Enabled: True Users: sysadmin",
"ipa hbactest --user= sysadmin --host=server.idm.example.com --service=sudo --rules= rule_name --------------------- Access granted: True --------------------- Matched rules: rule_name",
"ipa hbacrule-add --hostcat=all rule2_name ipa hbacrule-add-user --users sysadmin rule2_name ipa hbacrule-add-service --hbacsvcs=sshd rule2_name Rule name: rule2_name Host category: all Enabled: True Users: admin HBAC Services: sshd ------------------------- Number of members added 1 -------------------------",
"ipa hbactest --user= sysadmin --host=server.idm.example.com --service=sudo --rules= rule_name --rules= rule2_name -------------------- Access granted: True -------------------- Matched rules: rule_name Not matched rules: rule2_name",
"ipa hbacrule-disable allow_all ------------------------------ Disabled HBAC rule \"allow_all\" ------------------------------",
"ipa hbacsvc-add tftp ------------------------- Added HBAC service \"tftp\" ------------------------- Service name: tftp",
"ipa hbacsvcgroup-add Service group name: login -------------------------------- Added HBAC service group \" login \" -------------------------------- Service group name: login",
"ipa hbacsvcgroup-add-member Service group name: login [member HBAC service]: sshd Service group name: login Member HBAC service: sshd ------------------------- Number of members added 1 -------------------------",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hbacrules hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure idm_user can access client.idm.example.com via the sshd service - ipahbacrule: ipaadmin_password: \"{{ ipaadmin_password }}\" name: login user: idm_user host: client.idm.example.com hbacsvc: - sshd state: present",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-new-hbacrule-present.yml",
"host.example.com,1.2.3.4 ssh-rsa AAA...ZZZ==",
"\"ssh-rsa ABCD1234...== ipaclient.example.com\"",
"ssh-rsa AAA...ZZZ== host.example.com,1.2.3.4",
"ssh-keygen -t rsa -C [email protected] Generating public/private rsa key pair.",
"Enter file in which to save the key (/home/user/.ssh/id_rsa):",
"Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/user/.ssh/id_rsa. Your public key has been saved in /home/user/.ssh/id_rsa.pub. The key fingerprint is: SHA256:ONxjcMX7hJ5zly8F8ID9fpbqcuxQK+ylVLKDMsJPxGA [email protected] The key's randomart image is: +---[RSA 3072]----+ | ..o | | .o + | | E. . o = | | ..o= o . + | | +oS. = + o.| | . .o .* B =.+| | o + . X.+.= | | + o o.*+. .| | . o=o . | +----[SHA256]-----+",
"server.example.com,1.2.3.4 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEApvjBvSFSkTU0WQW4eOweeo0DZZ08F9Ud21xlLy6FOhzwpXFGIyxvXZ52+siHBHbbqGL5+14N7UvElruyslIHx9LYUR/pPKSMXCGyboLy5aTNl5OQ5EHwrhVnFDIKXkvp45945R7SKYCUtRumm0Iw6wq0XD4o+ILeVbV3wmcB1bXs36ZvC/M6riefn9PcJmh6vNCvIsbMY6S+FhkWUTTiOXJjUDYRLlwM273FfWhzHK+SSQXeBp/zIn1gFvJhSZMRi9HZpDoqxLbBB9QIdIw6U4MIjNmKsSI/ASpkFm2GuQ7ZK9KuMItY2AoCuIRmRAdF8iYNHBTXNfFurGogXwRDjQ==",
"cat /home/user/.ssh/host_keys.pub ssh-rsa AAAAB3NzaC1yc2E...tJG1PK2Mq++wQ== server.example.com,1.2.3.4",
"ipa host-mod --sshpubkey=\"ssh-rsa RjlzYQo==\" --updatedns host1.example.com",
"--sshpubkey=\"RjlzYQo==\" --sshpubkey=\"ZEt0TAo==\"",
"ipa host-show client.ipa.test SSH public key fingerprint: SHA256:qGaqTZM60YPFTngFX0PtNPCKbIuudwf1D2LqmDeOcuA [email protected] (ssh-rsa)",
"kinit admin ipa host-mod --sshpubkey= --updatedns host1.example.com",
"ipa host-show client.ipa.test Host name: client.ipa.test Platform: x86_64 Operating system: 4.18.0-240.el8.x86_64 Principal name: host/[email protected] Principal alias: host/[email protected] Password: False Member of host-groups: ipaservers Roles: helpdesk Member of netgroups: test Member of Sudo rule: test2 Member of HBAC rule: test Keytab: True Managed by: client.ipa.test, server.ipa.test Users allowed to retrieve keytab: user1, user2, user3",
"ipa user-mod user --sshpubkey=\"ssh-rsa AAAAB3Nza...SNc5dv== client.example.com\"",
"--sshpubkey=\"AAAAB3Nza...SNc5dv==\" --sshpubkey=\"RjlzYQo...ZEt0TAo=\"",
"ipa user-mod user --sshpubkey=\"USD(cat ~/.ssh/id_rsa.pub)\" --sshpubkey=\"USD(cat ~/.ssh/id_rsa2.pub)\"",
"ipa user-show user User login: user First name: user Last name: user Home directory: /home/user Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 1118800019 GID: 1118800019 SSH public key fingerprint: SHA256:qGaqTZM60YPFTngFX0PtNPCKbIuudwf1D2LqmDeOcuA [email protected] (ssh-rsa) Account disabled: False Password: False Member of groups: ipausers Subordinate ids: 3167b7cc-8497-4ff2-ab4b-6fcb3cb1b047 Kerberos keys available: False",
"ipa user-mod user --sshpubkey=",
"ipa user-show user User login: user First name: user Last name: user Home directory: /home/user Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 1118800019 GID: 1118800019 Account disabled: False Password: False Member of groups: ipausers Subordinate ids: 3167b7cc-8497-4ff2-ab4b-6fcb3cb1b047 Kerberos keys available: False",
"[user@server ~]USD ipa config-mod --domain-resolution-order='ad.example.com:subdomain1.ad.example.com:idm.example.com' Maximum username length: 32 Home directory base: /home Domain Resolution Order: ad.example.com:subdomain1.ad.example.com:idm.example.com",
"id <ad_username> uid=1916901102(ad_username) gid=1916900513(domain users) groups=1916900513(domain users)",
"[user@server ~]USD ipa idview-add ADsubdomain1_first --desc \"ID view for resolving AD subdomain1 first on client1.idm.example.com\" --domain-resolution-order subdomain1.ad.example.com:ad.example.com:idm.example.com --------------------------------- Added ID View \"ADsubdomain1_first\" --------------------------------- ID View Name: ADsubdomain1_first Description: ID view for resolving AD subdomain1 first on client1.idm.example.com Domain Resolution Order: subdomain1.ad.example.com:ad.example.com:idm.example.com",
"[user@server ~]USD ipa idview-apply ADsubdomain1_first --hosts client1.idm.example.com ----------------------------------- Applied ID View \"ADsubdomain1_first\" ----------------------------------- hosts: client1.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 1 ---------------------------------------------",
"[user@server ~]USD ipa idview-show ADsubdomain1_first --show-hosts ID View Name: ADsubdomain1_first Description: ID view for resolving AD subdomain1 first on client1.idm.example.com Hosts the view applies to: client1.idm.example.com Domain resolution order: subdomain1.ad.example.com:ad.example.com:idm.example.com",
"id <user_from_subdomain1> uid=1916901106(user_from_subdomain1) gid=1916900513(domain users) groups=1916900513(domain users)",
"--- - name: Playbook to add idview and apply it to an IdM client hosts: ipaserver vars_files: - /home/<user_name>/MyPlaybooks/secret.yml become: false gather_facts: false tasks: - name: Add idview and apply it to testhost.idm.example.com ipaidview: ipaadmin_password: \"{{ ipaadmin_password }}\" name: test_idview host: testhost.idm.example.com domain_resolution_order: \"ad.example.com:ipa.example.com\"",
"ansible-playbook --vault-password-file=password_file -v -i inventory add-id-view-with-domain-resolution-order.yml",
"id aduser05 uid=1916901102(aduser05) gid=1916900513(domain users) groups=1916900513(domain users)",
"domain_resolution_order = subdomain1.ad.example.com, ad.example.com, idm.example.com",
"systemctl restart sssd",
"id <user_from_subdomain1> uid=1916901106(user_from_subdomain1) gid=1916900513(domain users) groups=1916900513(domain users)",
"ipa trust-fetch-domains Realm-Name: ad.example.com ------------------------------- No new trust domains were found ------------------------------- ---------------------------- Number of entries returned 0 ----------------------------",
"ipa trust-show Realm-Name: ad.example.com Realm-Name: ad.example.com Domain NetBIOS name: AD Domain Security Identifier: S-1-5-21-796215754-1239681026-23416912 Trust direction: One-way trust Trust type: Active Directory domain UPN suffixes: example.com",
"[global] log level = 10",
"[global] debug = True",
"systemctl restart httpd",
"ipa trust-fetch-domains <ad.example.com>",
"kinit admin ipa idoverrideuser-add 'default trust view' [email protected]",
"ipa group-add-member admins [email protected]",
"ipa role-add-member 'User Administrator' [email protected]",
"cd ~/ MyPlaybooks /",
"--- - name: Playbook to ensure presence of users in a group hosts: ipaserver - name: Ensure the [email protected] user ID override is a member of the admins group: ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: admins idoverrideuser: - [email protected]",
"ansible-playbook --vault-password-file=password_file -v -i inventory add-useridoverride-to-group.yml",
"kdestroy -A",
"kinit [email protected] Password for [email protected]:",
"ipa group-add some-new-group ---------------------------- Added group \"some-new-group\" ---------------------------- Group name: some-new-group GID: 1997000011",
"--- - name: Enable AD administrator to act as a FreeIPA admin hosts: ipaserver become: false gather_facts: false tasks: - name: Ensure idoverride for [email protected] in 'default trust view' ipaidoverrideuser: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: \"Default Trust View\" anchor: [email protected]",
"- name: Add the AD administrator as a member of admins ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: admins idoverrideuser: - [email protected]",
"ansible-playbook --vault-password-file=password_file -v -i inventory enable-ad-admin-to-administer-idm.yml",
"ssh [email protected]@client.idm.example.com",
"klist Ticket cache: KCM:325600500:99540 Default principal: [email protected] Valid starting Expires Service principal 02/04/2024 11:54:16 02/04/2024 21:54:16 krbtgt/[email protected] renew until 02/05/2024 11:54:16",
"ipa user-add testuser --first=test --last=user ------------------------ Added user \"tuser\" ------------------------ User login: tuser First name: test Last name: user Full name: test user [...]",
"kinit admin",
"ipa idp-add my-keycloak-idp --provider keycloak --organization main --base-url keycloak.idm.example.com:8443/auth --client-id id13778 ------------------------------------------------ Added Identity Provider reference \"my-keycloak-idp\" ------------------------------------------------ Identity Provider reference name: my-keycloak-idp Authorization URI: https://keycloak.idm.example.com:8443/auth/realms/main/protocol/openid-connect/auth Device authorization URI: https://keycloak.idm.example.com:8443/auth/realms/main/protocol/openid-connect/auth/device Token URI: https://keycloak.idm.example.com:8443/auth/realms/main/protocol/openid-connect/token User info URI: https://keycloak.idm.example.com:8443/auth/realms/main/protocol/openid-connect/userinfo Client identifier: ipa_oidc_client Scope: openid email External IdP user identifier attribute: email",
"ipa idp-show my-keycloak-idp",
"ipa idp-add my-azure-idp --provider microsoft --organization main --client-id <azure_client_id>",
"ipa idp-add my-google-idp --provider google --client-id <google_client_id>",
"ipa idp-add my-github-idp --provider github --client-id <github_client_id>",
"ipa idp-add my-keycloak-idp --provider keycloak --organization main --base-url keycloak.idm.example.com:8443/auth --client-id <keycloak_client_id>",
"ipa idp-add my-okta-idp --provider okta --base-url dev-12345.okta.com --client-id <okta_client_id>",
"kinit admin",
"ipa idp-find keycloak",
"ipa idp-show my-keycloak-idp",
"ipa idp-mod my-keycloak-idp --secret",
"ipa idp-del my-keycloak-idp",
"ipa user-mod idm-user-with-external-idp --idp my-keycloak-idp --idp-user-id [email protected] --user-auth-type=idp --------------------------------- Modified user \"idm-user-with-external-idp\" --------------------------------- User login: idm-user-with-external-idp First name: Test Last name: User1 Home directory: /home/idm-user-with-external-idp Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 35000003 GID: 35000003 User authentication types: idp External IdP configuration: keycloak External IdP user identifier: [email protected] Account disabled: False Password: False Member of groups: ipausers Kerberos keys available: False",
"ipa user-show idm-user-with-external-idp User login: idm-user-with-external-idp First name: Test Last name: User1 Home directory: /home/idm-user-with-external-idp Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] ID: 35000003 GID: 35000003 User authentication types: idp External IdP configuration: keycloak External IdP user identifier: [email protected] Account disabled: False Password: False Member of groups: ipausers Kerberos keys available: False",
"kinit -n -c ./fast.ccache",
"klist -c fast.ccache Ticket cache: FILE:fast.ccache Default principal: WELLKNOWN/ANONYMOUS@WELLKNOWN:ANONYMOUS Valid starting Expires Service principal 03/03/2024 13:36:37 03/04/2024 13:14:28 krbtgt/[email protected]",
"kinit -T ./fast.ccache idm-user-with-external-idp Authenticate at https://oauth2.idp.com:8443/auth/realms/master/device?user_code=YHMQ-XKTL and press ENTER.:",
"klist -C Ticket cache: KCM:0:58420 Default principal: [email protected] Valid starting Expires Service principal 05/09/22 07:48:23 05/10/22 07:03:07 krbtgt/[email protected] config: fast_avail(krbtgt/[email protected]) = yes 08/17/2022 20:22:45 08/18/2022 20:22:43 krbtgt/[email protected] config: pa_type(krbtgt/[email protected]) = 152",
"[user@client ~]USD ssh [email protected] ([email protected]) Authenticate at https://oauth2.idp.com:8443/auth/realms/main/device?user_code=XYFL-ROYR and press ENTER.",
"[idm-user-with-external-idp@client ~]USD klist -C Ticket cache: KCM:0:58420 Default principal: [email protected] Valid starting Expires Service principal 05/09/22 07:48:23 05/10/22 07:03:07 krbtgt/[email protected] config: fast_avail(krbtgt/[email protected]) = yes 08/17/2022 20:22:45 08/18/2022 20:22:43 krbtgt/[email protected] config: pa_type(krbtgt/[email protected]) = 152",
"ipa idp-add MySSO --provider keycloak --org main --base-url keycloak.domain.com:8443/auth --client-id <your-client-id>",
"ipa idp-add MyOkta --provider okta --base-url dev-12345.okta.com --client-id <your-client-id>",
"--- - name: Configure external IdP hosts: ipaserver become: false gather_facts: false tasks: - name: Ensure a reference to github external provider is available ipaidp: ipaadmin_password: \"{{ ipaadmin_password }}\" name: github_idp provider: github client_ID: 2efe1acffe9e8ab869f4 secret: 656a5228abc5f9545c85fa626aecbf69312d398c idp_user_id: my_github_account_name",
"ansible-playbook --vault-password-file=password_file -v -i inventory configure-external-idp-reference.yml",
"[idmuser@idmclient ~]USD ipa idp-show github_idp",
"--- - name: Ensure an IdM user uses an external IdP to authenticate to IdM hosts: ipaserver become: false gather_facts: false tasks: - name: Retrieve Github user ID ansible.builtin.uri: url: \"https://api.github.com/users/my_github_account_name\" method: GET headers: Accept: \"application/vnd.github.v3+json\" register: user_data - name: Ensure IdM user exists with an external IdP authentication ipauser: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idm-user-with-external-idp first: Example last: User userauthtype: idp idp: github_idp idp_user_id: my_github_account_name",
"ansible-playbook --vault-password-file=password_file -v -i inventory enable-user-to-authenticate-via-external-idp.yml",
"ipa user-show idm-user-with-external-idp User login: idm-user-with-external-idp First name: Example Last name: User Home directory: /home/idm-user-with-external-idp Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] ID: 35000003 GID: 35000003 User authentication types: idp External IdP configuration: github External IdP user identifier: [email protected] Account disabled: False Password: False Member of groups: ipausers Kerberos keys available: False",
"kinit -n -c ./fast.ccache",
"klist -c fast.ccache Ticket cache: FILE:fast.ccache Default principal: WELLKNOWN/ANONYMOUS@WELLKNOWN:ANONYMOUS Valid starting Expires Service principal 03/03/2024 13:36:37 03/04/2024 13:14:28 krbtgt/[email protected]",
"kinit -T ./fast.ccache idm-user-with-external-idp Authenticate at https://oauth2.idp.com:8443/auth/realms/master/device?user_code=YHMQ-XKTL and press ENTER.:",
"klist -C Ticket cache: KCM:0:58420 Default principal: [email protected] Valid starting Expires Service principal 05/09/22 07:48:23 05/10/22 07:03:07 krbtgt/[email protected] config: fast_avail(krbtgt/[email protected]) = yes 08/17/2022 20:22:45 08/18/2022 20:22:43 krbtgt/[email protected] config: pa_type(krbtgt/[email protected]) = 152",
"[user@client ~]USD ssh [email protected] ([email protected]) Authenticate at https://oauth2.idp.com:8443/auth/realms/main/device?user_code=XYFL-ROYR and press ENTER.",
"[idm-user-with-external-idp@client ~]USD klist -C Ticket cache: KCM:0:58420 Default principal: [email protected] Valid starting Expires Service principal 05/09/22 07:48:23 05/10/22 07:03:07 krbtgt/[email protected] config: fast_avail(krbtgt/[email protected]) = yes 08/17/2022 20:22:45 08/18/2022 20:22:43 krbtgt/[email protected] config: pa_type(krbtgt/[email protected]) = 152",
"--- - name: Playbook to manage IPA idp hosts: ipaserver become: false tasks: - name: Ensure keycloak idp my-keycloak-idp is present using provider ipaidp: ipaadmin_password: \"{{ ipaadmin_password }}\" name: my-keycloak-idp provider: keycloak organization: main base_url: keycloak.domain.com:8443/auth client_id: my-keycloak-client-id",
"--- - name: Playbook to manage IPA idp hosts: ipaserver become: false tasks: - name: Ensure okta idp my-okta-idp is present using provider ipaidp: ipaadmin_password: \"{{ ipaadmin_password }}\" name: my-okta-idp provider: okta base_url: dev-12345.okta.com client_id: my-okta-client-id",
"kinit -k",
"ipa service-add-delegation nfs/client.example.test HTTP/client.example.test ------------------------------------------------------- Added new resource delegation to the service principal \"nfs/[email protected]\" ------------------------------------------------------- Principal name: nfs/[email protected] Delegation principal: HTTP/[email protected]",
"ipa service-show nfs/client.example.test Principal name: nfs/[email protected] Principal alias: nfs/[email protected] Delegation principal: HTTP/[email protected] Keytab: True Managed by: client.example.test",
"kinit -kt http.keytab HTTP/client.example.test",
"klist -f Ticket cache: KCM:0:99799 Default principal: HTTP/[email protected] Valid starting Expires Service principal 10/13/2023 14:39:23 10/14/2023 14:05:07 krbtgt/[email protected] Flags: FIA",
"kvno -U testuser -P nfs/client.example.test nfs/[email protected]: kvno = 1",
"klist -f Ticket cache: KCM:0:99799 Default principal: HTTP/[email protected] Valid starting Expires Service principal 10/13/2023 14:39:38 10/14/2023 14:05:07 HTTP/[email protected] for client [email protected], Flags: FAT 10/13/2023 14:39:23 10/14/2023 14:05:07 krbtgt/[email protected] Flags: FIA 10/13/2023 14:39:38 10/14/2023 14:05:07 nfs/[email protected] for client [email protected], Flags: FAT"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/managing_idm_users_groups_hosts_and_access_control_rules/index |
Chapter 21. Monitoring Server and Database Activity | Chapter 21. Monitoring Server and Database Activity This chapter describes monitoring database and Red Hat Directory Server logs. For information on using SNMP to monitor the Directory Server, see Section 21.10, "Monitoring Directory Server Using SNMP" . 21.1. Types of Directory Server Log Files Directory Server provides the following log types: Access log: Contains information on client connections and connection attempts to the Directory Server instance. This log type is enabled by default. Error log: Contains detailed messages of errors and events the directory experiences during normal operations. This log type is enabled by default. Warning If the Directory Server fails to write to the errors log, the server sends an error message to the Syslog service and exits. Audit log: Records changes made to each database as well as to server configuration. This log is not enabled by default. Audit fail log: Records failed audit events. This log is not enabled by default. | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/monitoring_server_and_database_activity
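The audit and audit fail logs above are disabled by default. A minimal sketch of switching them on with dsconf, assuming an instance reachable at ldap://server.example.com and the default Directory Manager bind DN (the attribute names are assumptions here; verify them against your Directory Server release):

# Enable the audit log and the audit fail log (assumed attribute names)
dsconf -D "cn=Directory Manager" ldap://server.example.com config replace nsslapd-auditlog-logging-enabled=on
dsconf -D "cn=Directory Manager" ldap://server.example.com config replace nsslapd-auditfaillog-logging-enabled=on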
9.0 Release Notes | 9.0 Release Notes Red Hat Enterprise Linux 9.0 Release Notes for Red Hat Enterprise Linux 9.0 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.0_release_notes/index |
Chapter 17. Adding Variables to the Watch List | Chapter 17. Adding Variables to the Watch List Overview By adding variables to the watch list, you can focus on particular variables to see whether their values change as expected as they flow through the routing context. Procedure To add a variable to the watch list: If necessary, start the debugger. See Chapter 14, Running the Camel Debugger . In the Variables view, right-click a variable you want to track to open the context menu. Select Watch . A new view, Expressions , opens next to the Breakpoints view. The Expressions view displays the name of the variable being watched and its current value, for example: Repeat the previous two steps to add additional variables to the watch list. Note The variables you add remain in the watch list until you remove them. To stop watching a variable, right-click it in the list to open the context menu, and then click Remove . With the Expressions view open, step through the routing context to track how the value of each variable in the watch list changes as it reaches each step in the route. Related topics Chapter 16, Changing Variable Values | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/addwatchlist
Part V. References | Part V. References | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/references |
Chapter 12. Crimson (Technology Preview) | Chapter 12. Crimson (Technology Preview) As a storage administrator, the Crimson project is an effort to build a replacement for the ceph-osd daemon that is suited to the new reality of low latency, high throughput persistent memory, and NVMe technologies. Important The Crimson feature is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details. 12.1. Crimson overview Crimson is the code name for crimson-osd , the next-generation ceph-osd designed for multi-core scalability. It improves performance with fast network and storage devices, employing state-of-the-art technologies that include DPDK and SPDK. BlueStore continues to support HDDs and SSDs. Crimson aims to be backward compatible with the classic ceph-osd daemon. Built on the SeaStar C++ framework, Crimson is a new implementation of the core Ceph object storage daemon (OSD) component and replaces ceph-osd . The crimson-osd daemon minimizes latency and CPU usage. It uses high-performance asynchronous IO and a new threading architecture that is designed to minimize the context switches and inter-thread communication needed to complete an operation. Caution For Red Hat Ceph Storage 7, you can test RADOS Block Device (RBD) workloads on replicated pools with Crimson only. Do not use Crimson for production data. Crimson goals Crimson OSD is a replacement for the OSD daemon with the following goals: Minimize CPU overhead Minimize cycles per IO operation. Minimize cross-core communication. Minimize copies. Bypass kernel, avoid context switches. Enable emerging storage technologies Zoned namespaces Persistent memory Fast NVMe Seastar features Single reactor thread per CPU Asynchronous IO Scheduling done in user space Includes direct support for DPDK, a high-performance library for user space networking. Benefits SeaStore has an independent metadata collection. Transactional Composed of flat object namespace. Object Names might be Large (>1k). Each object contains a key>value mapping (string>bytes) and data payload. Supports COW object clones. Supports ordered listing of both OMAP and object namespaces. 12.2. Difference between Crimson and Classic Ceph OSD architecture In a classic ceph-osd architecture, a messenger thread reads a client message from the wire, which places the message in the OP queue. The osd-op thread-pool then picks up the message, creates a transaction, and queues it to BlueStore, the current default ObjectStore implementation. BlueStore's kv_queue then picks up this transaction and anything else in the queue, synchronously waits for rocksdb to commit the transaction, and then places the completion callback in the finisher queue. The finisher thread then picks up the completion callback and queues it for the messenger thread to send. Each of these actions requires inter-thread co-ordination over the contents of a queue. For pg state , more than one thread might need to access the internal metadata of any PG, which leads to lock contention.
This lock contention with increased processor usage scales rapidly with the number of tasks and cores, and every locking point might become the scaling bottleneck under certain scenarios. Moreover, these locks and queues incur latency costs even when uncontended. Due to this latency, the thread pools and task queues deteriorate, as the bookkeeping effort delegates tasks between the worker thread and locks can force context-switches. Unlike the ceph-osd architecture, Crimson allows a single I/O operation to complete on a single core without context switches and without blocking if the underlying storage operations do not require it. However, some operations still need to be able to wait for asynchronous processes to complete, probably nondeterministically depending on the state of the system such as recovery or the underlying device. Crimson uses the C++ framework that is called Seastar, a highly asynchronous engine, which generally pre-allocates one thread pinned to each core. These divide work among those cores such that the state can be partitioned between cores and locking can be avoided. With Seastar, the I/O operations are partitioned among a group of threads based on the target object. Rather than splitting the stages of running an I/O operation among different groups of threads, run all the pipeline stages within a single thread. If an operation needs to be blocked, the core's Seastar reactor switches to another concurrent operation and progresses. Ideally, all the locks and context-switches are no longer needed as each running nonblocking task owns the CPU until it completes or cooperatively yields. No other thread can preempt the task at the same time. If the communication is not needed with other shards in the data path, the ideal performance scales linearly with the number of cores until the I/O device reaches its limit. This design fits the Ceph OSD well because, at the OSD level, the PG shard all IOs. Unlike ceph-osd , crimson-osd does not daemonize itself even if the daemonize option is enabled. Do not daemonize crimson-osd since supported Linux distributions use systemd , which is able to daemonize the application. With sysvinit , use start-stop-daemon to daemonize crimson-osd . ObjectStore backend The crimson-osd offers both native and alienized object store backend. The native object store backend performs I/O with the Seastar reactor. Following three ObjectStore backend is supported for Crimson: AlienStore - Provides compatibility with an earlier version of object store, that is, BlueStore. CyanStore - A dummy backend for tests, which are implemented by volatile memory. This object store is modeled after the memstore in the classic OSD. SeaStore - The new object store designed specifically for Crimson OSD. The paths toward multiple shard support are different depending on the specific goal of the backend. Following are the other two classic OSD ObjectStore backends: MemStore - The memory as the backend object store. BlueStore - The object store used by the classic ceph-osd . 12.3. Crimson metrics Crimson has three ways to report statistics and metrics: PG stats reported to manager. Prometheus text protocol. The asock command. PG stats reported to manager Crimson collects the per-pg , per-pool , and per-osd stats in MPGStats message, which is sent to the Ceph Managers. Prometheus text protocol Configure the listening port and address by using the --prometheus-port command-line option. The asock command An admin socket command is offered to dump metrics. 
Syntax Example Here, reactor_utilization is an optional string to filter the dumped metrics by prefix. 12.4. Crimson configuration options Run the crimson-osd --help-seastar command for Seastar specific command-line options. Following are the options that you can use to configure Crimson: --crimson , Description Start crimson-osd instead of ceph-osd . --nodaemon , Description Do not daemonize the service. --redirect-output , Description Redirect the stdout and stderr to out/USDtype.USDnum.stdout --osd-args , Description Pass extra command-line options to crimson-osd or ceph-osd . This option is useful for passing Seastar options to crimson-osd . For example, one can supply --osd-args "--memory 2G" to set the amount of memory to use. --cyanstore , Description Use CyanStore as the object store backend. --bluestore , Description Use the alienized BlueStore as the object store backend. --bluestore is the default memory store. --memstore , Description Use the alienized MemStore as the object store backend. --seastore , Description Use SeaStore as the back end object store. --seastore-devs , Description Specify the block device used by SeaStore. --seastore-secondary-devs , Description Optional. SeaStore supports multiple devices. Enable this feature by passing the block device to this option. --seastore-secondary-devs-type , Description Optional. Specify the type of secondary devices. When the secondary device is slower than main device passed to --seastore-devs , the cold data in faster device will be evicted to the slower devices over time. Valid types include HDD , SSD , (default) , ZNS , and RANDOM_BLOCK_SSD . Note that secondary devices should not be faster than the main device. 12.5. Configuring Crimson Configure crimson-osd by installing a new storage cluster. Install a new cluster by using the bootstrap option. You cannot upgrade this cluster as it is in the experimental phase. WARNING: Do not use production data as it might result in data loss. Prerequisites An IP address for the first Ceph Monitor container, which is also the IP address for the first node in the storage cluster. Login access to registry.redhat.io . A minimum of 10 GB of free space for /var/lib/containers/ . Root-level access to all nodes. Procedure While bootstrapping, use the --image flag to use Crimson build. Example Log in to the cephadm shell: Example Enable Crimson globally as an experimental feature. Example This step enables crimson . Crimson is highly experimental, and malfunctions including crashes and data loss are to be expected. Enable the OSD Map flag. Example The monitor allows crimson-osd to boot only with the --yes-i-really-mean-it flag. Enable Crimson parameter for the monitor to direct the default pools to be created as Crimson pools. Example The crimson-osd does not initiate placement groups (PG) for non-crimson pools. 12.6. Crimson configuration parameters Following are the parameters that you can use to configure Crimson. crimson_osd_obc_lru_size Description Number of obcs to cache. Type uint Default 10 crimson_osd_scheduler_concurrency Description The maximum number concurrent IO operations, 0 for unlimited. Type uint Default 0 crimson_alien_op_num_threads Description The number of threads for serving alienized ObjectStore. Type uint Default 6 crimson_seastar_smp Description Number of seastar reactor threads to use for the OSD. Type uint Default 1 crimson_alien_thread_cpu_cores Description String CPU cores on which alienstore threads run in cpuset(7) format. 
Type String seastore_segment_size Description Segment size to use for Segment Manager. Type Size Default 64_M seastore_device_size Description Total size to use for SegmentManager block file if created. Type Size Default 50_G seastore_block_create Description Create SegmentManager file if it does not exist. Type Boolean Default true seastore_journal_batch_capacity Description The number limit of records in a journal batch. Type uint Default 16 seastore_journal_batch_flush_size Description The size threshold to force flush a journal batch. Type Size Default 16_M seastore_journal_iodepth_limit Description The IO depth limit to submit journal records. Type uint Default 5 seastore_journal_batch_preferred_fullness Description The record fullness threshold to flush a journal batch. Type Float Default 0.95 seastore_default_max_object_size Description The default logical address space reservation for seastore objects' data. Type uint Default 16777216 seastore_default_object_metadata_reservation Description The default logical address space reservation for seastore objects' metadata. Type uint Default 16777216 seastore_cache_lru_size Description Size in bytes of extents to keep in cache. Type Size Default 64_M seastore_cbjournal_size Description Total size to use for CircularBoundedJournal if created, it is valid only if seastore_main_device_type is RANDOM_BLOCK. Type Size Default 5_G seastore_obj_data_write_amplification Description Split extent if ratio of total extent size to write size exceeds this value. Type Float Default 1.25 seastore_max_concurrent_transactions Description The maximum concurrent transactions that seastore allows. Type uint Default 8 seastore_main_device_type Description The main device type seastore uses (SSD or RANDOM_BLOCK_SSD). Type String Default SSD seastore_multiple_tiers_stop_evict_ratio Description When the used ratio of main tier is less than this value, then stop evict cold data to the cold tier. Type Float Default 0.5 seastore_multiple_tiers_default_evict_ratio Description Begin evicting cold data to the cold tier when the used ratio of the main tier reaches this value. Type Float Default 0.6 seastore_multiple_tiers_fast_evict_ratio Description Begin fast eviction when the used ratio of the main tier reaches this value. Type Float Default 0.7 12.7. Profiling Crimson Profiling Crimson is a methodology to do performance testing with Crimson. Two types of profiling are supported: Flexible I/O (FIO) - The crimson-store-nbd shows the configurable FuturizedStore internals as an NBD server for use with FIO. Ceph benchmarking tool (CBT) - A testing harness in python to test the performance of a Ceph cluster. Procedure Install libnbd and compile FIO: Example Build crimson-store-nbd : Example Run the crimson-store-nbd server with a block device. Specify the path to the raw device, like /dev/nvme1n1 : Example Create an FIO job named nbd.fio: Example Test the Crimson object with the FIO compiled: Example Ceph Benchmarking Tool (CBT) Run the same test against two branches. One is main (master), another is topic branch of your choice. Compare the test results. Along with every test case, a set of rules is defined to check whether you need to perform regressions when two sets of test results are compared. If a possible regression is found, the rule and corresponding test results are highlighted. Procedure From the main branch and the topic branch, run make crimson osd : Example Compare the test results: Example | [
"ceph tell OSD_ID dump_metrics ceph tell OSD_ID dump_metrics reactor_utilization",
"ceph tell osd.0 dump_metrics ceph tell osd.0 dump_metrics reactor_utilization",
"cephadm --image quay.ceph.io/ceph-ci/ceph:b682861f8690608d831f58603303388dd7915aa7-crimson bootstrap --mon-ip 10.1.240.54 --allow-fqdn-hostname --initial-dashboard-password Ceph_Crims",
"cephadm shell",
"ceph config set global 'enable_experimental_unrecoverable_data_corrupting_features' crimson",
"ceph osd set-allow-crimson --yes-i-really-mean-it",
"ceph config set mon osd_pool_default_crimson true",
"dnf install libnbd git clone git://git.kernel.dk/fio.git cd fio ./configure --enable-libnbd make",
"cd build ninja crimson-store-nbd",
"export disk_img=/tmp/disk.img export unix_socket=/tmp/store_nbd_socket.sock rm -f USDdisk_img USDunix_socket truncate -s 512M USDdisk_img ./bin/crimson-store-nbd --device-path USDdisk_img --smp 1 --mkfs true --type transaction_manager --uds-path USD{unix_socket} & --smp is the CPU cores. --mkfs initializes the device first. --type is the backend.",
"[global] ioengine=nbd uri=nbd+unix:///?socket=USD{unix_socket} rw=randrw time_based runtime=120 group_reporting iodepth=1 size=512M [job0] offset=0",
"./fio nbd.fio",
"git checkout master make crimson-osd ../src/script/run-cbt.sh --cbt ~/dev/cbt -a /tmp/baseline ../src/test/crimson/cbt/radosbench_4K_read.yaml git checkout topic make crimson-osd ../src/script/run-cbt.sh --cbt ~/dev/cbt -a /tmp/yap ../src/test/crimson/cbt/radosbench_4K_read.yaml",
"~/dev/cbt/compare.py -b /tmp/baseline -a /tmp/yap -v"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/administration_guide/crimson |
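The configuration parameters listed in section 12.6 above follow the usual Ceph option mechanism, so a sketch of adjusting them with the standard ceph config commands could look like the following; the option names come from the table, but the values are purely illustrative and several of these settings only take effect when the OSD starts:
ceph config set osd crimson_seastar_smp 3
ceph config set osd seastore_cache_lru_size 128M
ceph config get osd crimson_seastar_smp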
Chapter 11. Multicloud Object Gateway | Chapter 11. Multicloud Object Gateway 11.1. About the Multicloud Object Gateway The Multicloud Object Gateway (MCG) is a lightweight object storage service for OpenShift, allowing users to start small and then scale as needed on-premise, in multiple clusters, and with cloud-native storage. 11.2. Accessing the Multicloud Object Gateway with your applications You can access the object service with any application targeting AWS S3 or code that uses AWS S3 Software Development Kit (SDK). Applications need to specify the Multicloud Object Gateway (MCG) endpoint, an access key, and a secret access key. You can use your terminal or the MCG CLI to retrieve this information. Prerequisites A running OpenShift Data Foundation Platform. 11.3. Adding storage resources for hybrid or Multicloud 11.3.1. Creating a new backing store Use this procedure to create a new backing store in OpenShift Data Foundation. Prerequisites Administrator access to OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage Object Storage . Click the Backing Store tab. Click Create Backing Store . On the Create New Backing Store page, perform the following: Enter a Backing Store Name . Select a Provider . Select a Region . Optional: Enter an Endpoint . Select a Secret from the drop-down list, or create your own secret. Optionally, you can Switch to Credentials view which lets you fill in the required secrets. For more information on creating an OCP secret, see the section Creating the secret in the Openshift Container Platform documentation. Each backingstore requires a different secret. For more information on creating the secret for a particular backingstore, see the Section 11.3.2, "Adding storage resources for hybrid or Multicloud using the MCG command line interface" and follow the procedure for the addition of storage resources using a YAML. Note This menu is relevant for all providers except Google Cloud and local PVC. Enter the Target bucket . The target bucket is a container storage that is hosted on the remote cloud service. It allows you to create a connection that tells the MCG that it can use this bucket for the system. Click Create Backing Store . Verification steps In the OpenShift Web Console, click Storage Object Storage . Click the Backing Store tab to view all the backing stores. 11.3.2. Adding storage resources for hybrid or Multicloud using the MCG command line interface The Multicloud Object Gateway (MCG) simplifies the process of spanning data across cloud provider and clusters. You must add a backing storage that can be used by the MCG. Depending on the type of your deployment, you can choose one of the following procedures to create a backing storage: For creating an AWS-backed backingstore, see Section 11.3.2.1, "Creating an AWS-backed backingstore" For creating an IBM COS-backed backingstore, see Section 11.3.2.2, "Creating an IBM COS-backed backingstore" For creating an Azure-backed backingstore, see Section 11.3.2.3, "Creating an Azure-backed backingstore" For creating a GCP-backed backingstore, see Section 11.3.2.4, "Creating a GCP-backed backingstore" For creating a local Persistent Volume-backed backingstore, see Section 11.3.2.5, "Creating a local Persistent Volume-backed backingstore" For VMware deployments, skip to Section 11.3.3, "Creating an s3 compatible Multicloud Object Gateway backingstore" for further instructions. 11.3.2.1. 
Creating an AWS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Using MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> Supply and encode your own AWS access key ID and secret access key using Base64, and use the results for <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . <backingstore-secret-name> The name of the backingstore secret created in the step. Apply the following YAML for a specific backing store: <bucket-name> The existing AWS bucket name. <backingstore-secret-name> The name of the backingstore secret created in the step. 11.3.2.2. Creating an IBM COS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Using command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , and <IBM COS ENDPOINT> An IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. To generate the above keys on IBM cloud, you must include HMAC credentials while creating the service credentials for your target bucket. <bucket-name> An existing IBM bucket name. This argument indicates MCG about the bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using an YAML Create a secret with the credentials: <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> Provide and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of these attributes respectively. <backingstore-secret-name> The name of the backingstore secret. Apply the following YAML for a specific backing store: <bucket-name> an existing IBM COS bucket name. This argument indicates to MCG about the bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. <endpoint> A regional endpoint that corresponds to the location of the existing IBM bucket name. This argument indicates to MCG about the endpoint to use for its backingstore, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the step. 11.3.2.3. 
Creating an Azure-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Using the MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <AZURE ACCOUNT KEY> and <AZURE ACCOUNT NAME> An AZURE account key and account name you created for this purpose. <blob container name> An existing Azure blob container name. This argument indicates to MCG about the bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <AZURE ACCOUNT NAME ENCODED IN BASE64> and <AZURE ACCOUNT KEY ENCODED IN BASE64> Supply and encode your own Azure Account Name and Account Key using Base64, and use the results in place of these attributes respectively. <backingstore-secret-name> A unique name of backingstore secret. Apply the following YAML for a specific backing store: <blob-container-name> An existing Azure blob container name. This argument indicates to the MCG about the bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. <backingstore-secret-name> with the name of the secret created in the step. 11.3.2.4. Creating a GCP-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Using the MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> Name of the backingstore. <PATH TO GCP PRIVATE KEY JSON FILE> A path to your GCP private key created for this purpose. <GCP bucket name> An existing GCP object storage bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <GCP PRIVATE KEY ENCODED IN BASE64> Provide and encode your own GCP service account private key using Base64, and use the results for this attribute. <backingstore-secret-name> A unique name of the backingstore secret. Apply the following YAML for a specific backing store: <target bucket> An existing Google storage bucket. This argument indicates to the MCG about the bucket to use as a target bucket for its backing store, and subsequently, data storage dfdand administration. <backingstore-secret-name> The name of the secret created in the step. 11.3.2.5. Creating a local Persistent Volume-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Adding storage resources using the MCG command-line interface From the MCG command-line interface, run the following command: Note This command must be run from within the openshift-storage namespace. 
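The CLI command itself is not reproduced in this entry; as a minimal sketch, assuming the standard noobaa CLI flags that correspond to the placeholders explained in the YAML variant that follows (the flag names are an assumption, not taken from this entry):
noobaa -n openshift-storage backingstore create pv-pool <backingstore_name> --num-volumes <NUMBER OF VOLUMES> --pv-size-gb <VOLUME SIZE> --storage-class <LOCAL STORAGE CLASS>
Additional flags for CPU and memory requests and limits may also be available, mirroring the YAML fields listed in the next step.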
Adding storage resources using YAML Apply the following YAML for a specific backing store: <backingstore_name > The name of the backingstore. <NUMBER OF VOLUMES> The number of volumes you would like to create. Note that increasing the number of volumes scales up the storage. <VOLUME SIZE> Required size in GB of each volume. <CPU REQUEST> Guaranteed amount of CPU requested in CPU unit m . <MEMORY REQUEST> Guaranteed amount of memory requested. <CPU LIMIT> Maximum amount of CPU that can be consumed in CPU unit m . <MEMORY LIMIT> Maximum amount of memory that can be consumed. <LOCAL STORAGE CLASS> The local storage class name, recommended to use ocs-storagecluster-ceph-rbd . The output will be similar to the following: 11.3.3. Creating an s3 compatible Multicloud Object Gateway backingstore The Multicloud Object Gateway (MCG) can use any S3 compatible object storage as a backing store, for example, Red Hat Ceph Storage's RADOS Object Gateway (RGW). The following procedure shows how to create an S3 compatible MCG backing store for Red Hat Ceph Storage's RGW. Note that when the RGW is deployed, OpenShift Data Foundation operator creates an S3 compatible backingstore for MCG automatically. Procedure From the MCG command-line interface, run the following command: Note This command must be run from within the openshift-storage namespace. To get the <RGW ACCESS KEY> and <RGW SECRET KEY> , run the following command using your RGW user secret name: Decode the access key ID and the access key from Base64 and keep them. Replace <RGW USER ACCESS KEY> and <RGW USER SECRET ACCESS KEY> with the appropriate, decoded data from the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . The output will be similar to the following: You can also create the backingstore using a YAML: Create a CephObjectStore user. This also creates a secret containing the RGW credentials: Replace <RGW-Username> and <Display-name> with a unique username and display name. Apply the following YAML for an S3-Compatible backing store: Replace <backingstore-secret-name> with the name of the secret that was created with CephObjectStore in the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . 11.3.4. Adding storage resources for hybrid and Multicloud using the user interface Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Storage Systems tab, select the storage system and then click Overview Object tab. Select the Multicloud Object Gateway link. Select the Resources tab in the left, highlighted below. From the list that populates, select Add Cloud Resource . Select Add new connection . Select the relevant native cloud provider or S3 compatible option and fill in the details. Select the newly created connection and map it to the existing bucket. Repeat these steps to create as many backing stores as needed. Note Resources created in NooBaa UI cannot be used by OpenShift UI or MCG CLI. 11.3.5. Creating a new bucket class Bucket class is a CRD representing a class of buckets that defines tiering policies and data placements for an Object Bucket Class. 
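As a rough sketch of what such a bucket class CR can look like (assuming the noobaa.io/v1alpha1 API group and the default backing store created at deployment; neither is quoted from this entry):
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: my-bucket-class
  namespace: openshift-storage
spec:
  placementPolicy:
    tiers:
    - placement: Spread
      backingStores:
      - noobaa-default-backing-store
A Mirror tier would list at least two backing stores and set placement: Mirror instead.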
Use this procedure to create a bucket class in OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage Object Storage . Click the Bucket Class tab. Click Create Bucket Class . On the Create new Bucket Class page, perform the following: Select the bucket class type and enter a bucket class name. Select the BucketClass type . Choose one of the following options: Standard : data will be consumed by a Multicloud Object Gateway (MCG), deduped, compressed and encrypted. Namespace : data is stored on the NamespaceStores without performing de-duplication, compression or encryption. By default, Standard is selected. Enter a Bucket Class Name . Click . In Placement Policy , select Tier 1 - Policy Type and click . You can choose either one of the options as per your requirements. Spread allows spreading of the data across the chosen resources. Mirror allows full duplication of the data across the chosen resources. Click Add Tier to add another policy tier. Select at least one Backing Store resource from the available list if you have selected Tier 1 - Policy Type as Spread and click . Alternatively, you can also create a new backing store . Note You need to select at least 2 backing stores when you select Policy Type as Mirror in step. Review and confirm Bucket Class settings. Click Create Bucket Class . Verification steps In the OpenShift Web Console, click Storage Object Storage . Click the Bucket Class tab and search the new Bucket Class. 11.3.6. Editing a bucket class Use the following procedure to edit the bucket class components through the YAML file by clicking the edit button on the Openshift web console. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, click Storage Object Storage . Click the Bucket Class tab. Click the Action Menu (...) to the Bucket class you want to edit. Click Edit Bucket Class . You are redirected to the YAML file, make the required changes in this file and click Save . 11.3.7. Editing backing stores for bucket class Use the following procedure to edit an existing Multicloud Object Gateway (MCG) bucket class to change the underlying backing stores used in a bucket class. Prerequisites Administrator access to OpenShift Web Console. A bucket class. Backing stores. Procedure In the OpenShift Web Console, click Storage Object Storage . Click the Bucket Class tab. Click the Action Menu (...) to the Bucket class you want to edit. Click Edit Bucket Class Resources . On the Edit Bucket Class Resources page, edit the bucket class resources either by adding a backing store to the bucket class or by removing a backing store from the bucket class. You can also edit bucket class resources created with one or two tiers and different placement policies. To add a backing store to the bucket class, select the name of the backing store. To remove a backing store from the bucket class, uncheck the name of the backing store. Click Save . 11.4. Managing namespace buckets Namespace buckets let you connect data repositories on different providers together, so you can interact with all of your data through a single unified view. Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider. 
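Once such a namespace bucket exists, a client can address it through the MCG S3 endpoint like any ordinary bucket; a hedged sketch with placeholder endpoint, credentials, and bucket name (none of which appear in this entry):
AWS_ACCESS_KEY_ID=<access-key> AWS_SECRET_ACCESS_KEY=<secret-key> aws --endpoint-url https://<BUCKET_HOST> s3 ls s3://<namespace-bucket-name>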
You can interact with objects in a namespace bucket using the S3 API. See S3 API endpoints for objects in namespace buckets for more information. Note A namespace bucket can only be used if its write target is available and functional. 11.4.1. Amazon S3 API endpoints for objects in namespace buckets You can interact with objects in the namespace buckets using the Amazon Simple Storage Service (S3) API. Ensure that the credentials provided for the Multicloud Object Gateway (MCG) enables you to perform the AWS S3 namespace bucket operations. You can use the AWS tool, aws-cli to verify that all the operations can be performed on the target bucket. Also, the list bucket which is using this MCG account shows the target bucket. Red Hat OpenShift Data Foundation supports the following namespace bucket operations: ListBuckets ListObjects ListMultipartUploads ListObjectVersions GetObject HeadObject CopyObject PutObject CreateMultipartUpload UploadPartCopy UploadPart ListParts AbortMultipartUpload PubObjectTagging DeleteObjectTagging GetObjectTagging GetObjectAcl PutObjectAcl DeleteObject DeleteObjects See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them. Additional resources Amazon S3 REST API Reference Amazon S3 CLI Reference 11.4.2. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML For more information about namespace buckets, see Managing namespace buckets . Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway CLI, choose one of the following procedures to add a namespace bucket: Adding an AWS S3 namespace bucket using YAML Adding an IBM COS namespace bucket using YAML Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI 11.4.2.1. Adding an AWS S3 namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). For information, see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: where <namespacestore-secret-name> is a unique NamespaceStore name. You must provide and encode your own AWS access key ID and secret access key using Base64 , and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <resource-name> The name you want to give to the resource. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . A namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. A namespace policy of type multi requires the following configuration: <my-bucket-class> A unique bucket class name. 
<write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the names of the NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step using the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 11.4.2.2. Adding an IBM COS namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: <namespacestore-secret-name> A unique NamespaceStore name. You must provide and encode your own IBM COS access key ID and secret access key using Base64 , and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <IBM COS ENDPOINT> The appropriate IBM COS endpoint. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . The namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. The namespace policy of type multi requires the following configuration: <my-bucket-class> The unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the NamespaceStores names that defines the read targets of the namespace bucket. To create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step, apply the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 11.4.2.3. Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. 
Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy can be either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single namespace-store that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single namespace-store that defines the write target of the namespace bucket. <read-resources>s A list of namespace-stores separated by commas that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and a ConfigMap with the same name and in the same namespace as that of the OBC. 11.4.2.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface binary from the customer portal and make it executable. Note Choose either Linux(x86_64), Windows, or Mac OS. Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> An IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. <bucket-name> An existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single NamespaceStore that defines the read and write target of the namespace bucket. 
To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A comma-separated list of NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 11.4.3. Adding a namespace bucket using the OpenShift Container Platform user interface You can add namespace buckets using the OpenShift Container Platform user interface. For information about namespace buckets, see Managing namespace buckets . Prerequisites Ensure that Openshift Container Platform with OpenShift Data Foundation operator is already installed. Access to the Multicloud Object Gateway (MCG). Procedure On the OpenShift Web Console, navigate to Storage Object Storage Namespace Store tab. Click Create namespace store to create a namespacestore resources to be used in the namespace bucket. Enter a namespacestore name. Choose a provider and region. Either select an existing secret, or click Switch to credentials to create a secret by entering a secret key and secret access key. Enter a target bucket. Click Create . On the Namespace Store tab, verify that the newly created namespacestore is in the Ready state. Repeat steps 2 and 3 until you have created all the desired amount of resources. Navigate to Bucket Class tab and click Create Bucket Class . Choose Namespace BucketClass type radio button. Enter a BucketClass name and click . Choose a Namespace Policy Type for your namespace bucket, and then click . If your namespace policy type is Single , you need to choose a read resource. If your namespace policy type is Multi , you need to choose read resources and a write resource. If your namespace policy type is Cache , you need to choose a Hub namespace store that defines the read and write target of the namespace bucket. Select one Read and Write NamespaceStore which defines the read and write targets of the namespace bucket and click . Review your new bucket class details, and then click Create Bucket Class . Navigate to Bucket Class tab and verify that your newly created resource is in the Ready phase. Navigate to Object Bucket Claims tab and click Create Object Bucket Claim . Enter ObjectBucketClaim Name for the namespace bucket. Select StorageClass as openshift-storage.noobaa.io . Select the BucketClass that you created earlier for your namespacestore from the list. By default, noobaa-default-bucket-class gets selected. Click Create . The namespace bucket is created along with Object Bucket Claim for your namespace. Navigate to Object Bucket Claims tab and verify that the Object Bucket Claim created is in Bound state. Navigate to Object Buckets tab and verify that the your namespace bucket is present in the list and is in Bound state. 11.5. Mirroring data for hybrid and Multicloud buckets You can use the simplified process of the Multicloud Object Gateway (MCG) to span data across cloud providers and clusters. 
Before you create a bucket class that reflects the data management policy and mirroring, you must add a backing storage that can be used by the MCG. For information, see Chapter 4, Section 11.3, "Adding storage resources for hybrid or Multicloud" . You can set up mirroring data by using the OpenShift UI, YAML or MCG command-line interface. See the following sections: Section 11.5.1, "Creating bucket classes to mirror data using the MCG command-line-interface" Section 11.5.2, "Creating bucket classes to mirror data using a YAML" 11.5.1. Creating bucket classes to mirror data using the MCG command-line-interface Prerequisites Ensure to download Multicloud Object Gateway (MCG) command-line interface. Procedure From the Multicloud Object Gateway (MCG) command-line interface, run the following command to create a bucket class with a mirroring policy: Set the newly created bucket class to a new bucket claim to generate a new bucket that will be mirrored between two locations: 11.5.2. Creating bucket classes to mirror data using a YAML Apply the following YAML. This YAML is a hybrid example that mirrors data between local Ceph storage and AWS: Add the following lines to your standard Object Bucket Claim (OBC): For more information about OBCs, see Section 11.7, "Object Bucket Claim" . 11.6. Bucket policies in the Multicloud Object Gateway OpenShift Data Foundation supports AWS S3 bucket policies. Bucket policies allow you to grant users access permissions for buckets and the objects in them. 11.6.1. Introduction to bucket policies Bucket policies are an access policy option available for you to grant permission to your AWS S3 buckets and objects. Bucket policies use JSON-based access policy language. For more information about access policy language, see AWS Access Policy Language Overview . 11.6.2. Using bucket policies in Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Section 11.2, "Accessing the Multicloud Object Gateway with your applications" A valid Multicloud Object Gateway user account. See Creating a user in the Multicloud Object Gateway for instructions to create a user account. Procedure To use bucket policies in the MCG: Create the bucket policy in JSON format. For example: Replace [email protected] with a valid Multicloud Object Gateway user account. Using AWS S3 client, use the put-bucket-policy command to apply the bucket policy to your S3 bucket: Replace ENDPOINT with the S3 endpoint. Replace MyBucket with the bucket to set the policy on. Replace BucketPolicy with the bucket policy JSON file. Add --no-verify-ssl if you are using the default self signed certificates. For example: For more information on the put-bucket-policy command, see the AWS CLI Command Reference for put-bucket-policy . Note The principal element specifies the user that is allowed or denied access to a resource, such as a bucket. Currently, Only NooBaa accounts can be used as principals. In the case of object bucket claims, NooBaa automatically create an account obc-account.<generated bucket name>@noobaa.io . Note Bucket policy conditions are not supported. Additional resources There are many available elements for bucket policies with regard to access permissions. For details on these elements and examples of how they can be used to control the access permissions, see AWS Access Policy Language Overview . For more examples of bucket policies, see AWS Bucket Policy Examples . 
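The JSON referred to as "For example:" above is not reproduced in this entry; a minimal sketch of a policy that grants read access to a single MCG account, with the account name, bucket name, and policy file name as placeholder assumptions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": [ "<noobaa-account-name>" ] },
      "Action": [ "s3:GetObject", "s3:ListBucket" ],
      "Resource": [ "arn:aws:s3:::MyBucket", "arn:aws:s3:::MyBucket/*" ]
    }
  ]
}
It would then be applied with the put-bucket-policy call described above, for example: aws --endpoint <ENDPOINT> --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy file://BucketPolicy.json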
OpenShift Data Foundation version 4.17 introduces the bucket policy elements NotPrincipal , NotAction , and NotResource . For more information on these elements, see IAM JSON policy elements reference . 11.6.3. Creating a user in the Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Execute the following command to create an MCG user account: <noobaa-account-name> Specify the name of the new MCG user account. --allow_bucket_create Allows the user to create new buckets. --allowed_buckets Sets the user's allowed bucket list (use commas or multiple flags). --default_resource Sets the default resource.The new buckets are created on this default resource (including the future ones). --full_permission Allows this account to access all existing and future buckets. Important You need to provide permission to access atleast one bucket or full permission to access all the buckets. 11.7. Object Bucket Claim An Object Bucket Claim can be used to request an S3 compatible bucket backend for your workloads. You can create an Object Bucket Claim in three ways: Section 11.7.1, "Dynamic Object Bucket Claim" Section 11.7.2, "Creating an Object Bucket Claim using the command line interface" Section 11.7.3, "Creating an Object Bucket Claim using the OpenShift Web Console" An object bucket claim creates a new bucket and an application account in NooBaa with permissions to the bucket, including a new access key and secret access key. The application account is allowed to access only a single bucket and can't create new buckets by default. 11.7.1. Dynamic Object Bucket Claim Similar to Persistent Volumes, you can add the details of the Object Bucket claim (OBC) to your application's YAML, and get the object service endpoint, access key, and secret access key available in a configuration map and secret. It is easy to read this information dynamically into environment variables of your application. Note The Multicloud Object Gateway endpoints uses self-signed certificates only if OpenShift uses self-signed certificates. Using signed certificates in OpenShift automatically replaces the Multicloud Object Gateway endpoints certificates with signed certificates. Get the certificate currently used by Multicloud Object Gateway by accessing the endpoint via the browser. See Accessing the Multicloud Object Gateway with your applications for more information. Procedure Add the following lines to your application YAML: These lines are the OBC itself. Replace <obc-name> with the a unique OBC name. Replace <obc-bucket-name> with a unique bucket name for your OBC. To automate the use of the OBC add more lines to the YAML file. For example: The example is the mapping between the bucket claim result, which is a configuration map with data and a secret with the credentials. This specific job claims the Object Bucket from NooBaa, which creates a bucket and an account. Replace all instances of <obc-name> with your OBC name. Replace <your application image> with your application image. Apply the updated YAML file: Replace <yaml.file> with the name of your YAML file. To view the new configuration map, run the following: Replace obc-name with the name of your OBC. 
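The apply and inspection commands referenced in the steps above are not shown in this entry; a minimal sketch using standard oc commands and placeholder names:
oc apply -f <yaml.file>
oc get cm <obc-name> -o yaml
Run the oc get cm command in the namespace where the OBC was created.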
You can expect the following environment variables in the output: BUCKET_HOST - Endpoint to use in the application. BUCKET_PORT - The port available for the application. The port is related to the BUCKET_HOST . For example, if the BUCKET_HOST is https://my.example.com , and the BUCKET_PORT is 443, the endpoint for the object service would be https://my.example.com:443 . BUCKET_NAME - Requested or generated bucket name. AWS_ACCESS_KEY_ID - Access key that is part of the credentials. AWS_SECRET_ACCESS_KEY - Secret access key that is part of the credentials. Important Retrieve the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY . The names are used so that it is compatible with the AWS S3 API. You need to specify the keys while performing S3 operations, especially when you read, write or list from the Multicloud Object Gateway (MCG) bucket. The keys are encoded in Base64. Decode the keys before using them. <obc_name> Specify the name of the object bucket claim. 11.7.2. Creating an Object Bucket Claim using the command line interface When creating an Object Bucket Claim (OBC) using the command-line interface, you get a configuration map and a Secret that together contain all the information your application needs to use the object storage service. Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Use the command-line interface to generate the details of a new bucket and credentials. Run the following command: Replace <obc-name> with a unique OBC name, for example, myappobc . Additionally, you can use the --app-namespace option to specify the namespace where the OBC configuration map and secret will be created, for example, myapp-namespace . For example: The MCG command-line-interface has created the necessary configuration and has informed OpenShift about the new OBC. Run the following command to view the OBC: For example: Run the following command to view the YAML file for the new OBC: For example: Inside of your openshift-storage namespace, you can find the configuration map and the secret to use this OBC. The CM and the secret have the same name as the OBC. Run the following command to view the secret: For example: The secret gives you the S3 access credentials. Run the following command to view the configuration map: For example: The configuration map contains the S3 endpoint information for your application. 11.7.3. Creating an Object Bucket Claim using the OpenShift Web Console You can create an Object Bucket Claim (OBC) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. In order for your applications to communicate with the OBC, you need to use the configmap and secret. For more information about this, see Section 11.7.1, "Dynamic Object Bucket Claim" . Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage Object Storage Object Bucket Claims Create Object Bucket Claim . 
Enter a name for your object bucket claim and select the appropriate storage class based on your deployment, internal or external, from the dropdown menu: Internal mode The following storage classes, which were created after deployment, are available for use: ocs-storagecluster-ceph-rgw uses the Ceph Object Gateway (RGW) openshift-storage.noobaa.io uses the Multicloud Object Gateway (MCG) External mode The following storage classes, which were created after deployment, are available for use: ocs-external-storagecluster-ceph-rgw uses the RGW openshift-storage.noobaa.io uses the MCG Note The RGW OBC storage class is only available with fresh installations of OpenShift Data Foundation version 4.5. It does not apply to clusters upgraded from previous OpenShift Data Foundation releases. Click Create . Once you create the OBC, you are redirected to its detail page. 11.7.4. Attaching an Object Bucket Claim to a deployment Once created, Object Bucket Claims (OBCs) can be attached to specific deployments. Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage Object Storage Object Bucket Claims . Click the Action menu (...) next to the OBC you created. From the drop-down menu, select Attach to Deployment . Select the desired deployment from the Deployment Name list, then click Attach . 11.7.5. Viewing object buckets using the OpenShift Web Console You can view the details of object buckets created for Object Bucket Claims (OBCs) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage Object Storage Object Buckets . Optional: You can also navigate to the details page of a specific OBC, and click the Resource link to view the object buckets for that OBC. Select the object bucket of which you want to see the details. Once selected, you are navigated to the Object Bucket Details page. 11.7.6. Deleting Object Bucket Claims Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage Object Storage Object Bucket Claims . Click the Action menu (...) next to the Object Bucket Claim (OBC) you want to delete. Select Delete Object Bucket Claim . Click Delete . 11.8. Caching policy for object buckets A cache bucket is a namespace bucket with a hub target and a cache target. The hub target is an S3 compatible large object storage bucket. The cache bucket is the local Multicloud Object Gateway bucket. You can create a cache bucket that caches an AWS bucket or an IBM COS bucket. Important Cache buckets are a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . AWS S3 IBM COS 11.8.1. Creating an AWS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Create a NamespaceStore resource.
A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the namespacestore. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose. Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First, create a secret with credentials: You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <namespacestore-secret-name> with the secret created in the previous step. Replace <namespace-secret> with the namespace used to create the secret in the previous step. Replace <target-bucket> with the AWS S3 bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-cache-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the previous step. Run the following command to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. 11.8.2. Creating an IBM COS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the NamespaceStore. Replace <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> with an IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First, create a secret with the credentials: You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <IBM COS ENDPOINT> with the appropriate IBM COS endpoint. Replace <backingstore-secret-name> with the secret created in the previous step. Replace <namespace-secret> with the namespace used to create the secret in the previous step.
Replace <target-bucket> with the IBM COS bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the previous step. Run the following command to create a bucket using an Object Bucket Claim resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. 11.9. Scaling Multicloud Object Gateway performance by adding endpoints The Multicloud Object Gateway performance may vary from one environment to another. In some cases, specific applications require faster performance, which can be easily addressed by scaling S3 endpoints. The Multicloud Object Gateway resource pool is a group of NooBaa daemon containers that provide two types of services enabled by default: Storage service S3 endpoint service 11.9.1. Scaling the Multicloud Object Gateway with storage nodes Prerequisites A running OpenShift Data Foundation cluster on OpenShift Container Platform with access to the Multicloud Object Gateway (MCG). A storage node in the MCG is a NooBaa daemon container attached to one or more Persistent Volumes (PVs) and used for local object service data storage. NooBaa daemons can be deployed on Kubernetes nodes. This can be done by creating a Kubernetes pool consisting of StatefulSet pods. Procedure Log in to OpenShift Web Console . From the MCG user interface, click Overview Add Storage Resources . In the window, click Deploy Kubernetes Pool . In the Create Pool step, create the target pool for the future installed nodes. In the Configure step, configure the number of requested pods and the size of each PV. For each new pod, one PV is to be created. In the Review step, you can find the details of the new pool and select the deployment method you wish to use: local or external deployment. If local deployment is selected, the Kubernetes nodes will deploy within the cluster. If external deployment is selected, you will be provided with a YAML file to run externally. All nodes will be assigned to the pool you chose in the first step, and can be found under Resources Storage resources Resource name . 11.10. Automatic scaling of MultiCloud Object Gateway endpoints The number of MultiCloud Object Gateway (MCG) endpoints scales automatically when the load on the MCG S3 service increases or decreases. OpenShift Data Foundation clusters are deployed with one active MCG endpoint. Each MCG endpoint pod is configured by default with 1 CPU and 2Gi memory request, with limits matching the request. When the CPU load on the endpoint crosses over an 80% usage threshold for a consistent period of time, a second endpoint is deployed, lowering the load on the first endpoint. When the average CPU load on both endpoints falls below the 80% threshold for a consistent period of time, one of the endpoints is deleted. This feature improves performance and serviceability of the MCG. You can scale the Horizontal Pod Autoscaler (HPA) for noobaa-endpoint using the following oc patch command, for example: The example above sets the minCount to 3 and the maxCount to 10 .
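After you apply the patch, you can confirm that the new endpoint counts were recorded and watch the endpoint deployment react to S3 load. The following commands are a minimal verification sketch: they assume the default ocs-storagecluster storage cluster name used in the patch example and the openshift-storage namespace; the exact names of the noobaa-endpoint deployment and its autoscaler can differ in your environment.

# Read back the endpoint counts that the patch set on the storage cluster.
oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.spec.multiCloudGateway.endpoints}{"\n"}'
# List the autoscaler and the endpoint pods to watch scaling under sustained S3 load.
oc get hpa -n openshift-storage
oc get pods -n openshift-storage | grep noobaa-endpoint

If the endpoint count does not change under sustained S3 load, check the CPU requests on the endpoint pods, because Horizontal Pod Autoscaler CPU utilization is measured against the pod's CPU request.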
"noobaa backingstore create aws-s3 <backingstore_name> --access-key=<AWS ACCESS KEY> --secret-key=<AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"aws-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-aws-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: awsS3: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: aws-s3",
"noobaa backingstore create ibm-cos <backingstore_name> --access-key=<IBM ACCESS KEY> --secret-key=<IBM SECRET ACCESS KEY> --endpoint=<IBM COS ENDPOINT> --target-bucket <bucket-name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"ibm-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-ibm-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: ibmCos: endpoint: <endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: ibm-cos",
"noobaa backingstore create azure-blob <backingstore_name> --account-key=<AZURE ACCOUNT KEY> --account-name=<AZURE ACCOUNT NAME> --target-blob-container <blob container name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"azure-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-azure-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64> AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: azureBlob: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBlobContainer: <blob-container-name> type: azure-blob",
"noobaa backingstore create google-cloud-storage <backingstore_name> --private-key-json-file=<PATH TO GCP PRIVATE KEY JSON FILE> --target-bucket <GCP bucket name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"google-gcp\" INFO[0002] ✅ Created: Secret \"backing-store-google-cloud-storage-gcp\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: GoogleServiceAccountPrivateKeyJson: <GCP PRIVATE KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: googleCloudStorage: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <target bucket> type: google-cloud-storage",
"noobaa -n openshift-storage backingstore create pv-pool <backingstore_name> --num-volumes <NUMBER OF VOLUMES> --pv-size-gb <VOLUME SIZE> --request-cpu <CPU REQUEST> --request-memory <MEMORY REQUEST> --limit-cpu <CPU LIMIT> --limit-memory <MEMORY LIMIT> --storage-class <LOCAL STORAGE CLASS>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore_name> namespace: openshift-storage spec: pvPool: numVolumes: <NUMBER OF VOLUMES> resources: requests: storage: <VOLUME SIZE> cpu: <CPU REQUEST> memory: <MEMORY REQUEST> limits: cpu: <CPU LIMIT> memory: <MEMORY LIMIT> storageClass: <LOCAL STORAGE CLASS> type: pv-pool",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Exists: BackingStore \"local-mcg-storage\"",
"noobaa backingstore create s3-compatible rgw-resource --access-key=<RGW ACCESS KEY> --secret-key=<RGW SECRET KEY> --target-bucket=<bucket-name> --endpoint=<RGW endpoint> -n openshift-storage",
"get secret <RGW USER SECRET NAME> -o yaml -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"rgw-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-rgw-resource\"",
"apiVersion: ceph.rook.io/v1 kind: CephObjectStoreUser metadata: name: <RGW-Username> namespace: openshift-storage spec: store: ocs-storagecluster-cephobjectstore displayName: \"<Display-name>\"",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore-name> namespace: openshift-storage spec: s3Compatible: endpoint: <RGW endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage signatureVersion: v4 targetBucket: <RGW-bucket-name> type: s3-compatible",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <resource-name> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <namespacestore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>",
"noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage",
"noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>",
"noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage",
"noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>",
"noobaa bucketclass create placement-bucketclass mirror-to-aws --backingstores=azure-resource,aws-resource --placement Mirror",
"noobaa obc create mirrored-bucket --bucketclass=mirror-to-aws",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <bucket-class-name> namespace: openshift-storage spec: placementPolicy: tiers: - backingStores: - <backing-store-1> - <backing-store-2> placement: Mirror",
"additionalConfig: bucketclass: mirror-to-aws",
"{ \"Version\": \"NewVersion\", \"Statement\": [ { \"Sid\": \"Example\", \"Effect\": \"Allow\", \"Principal\": [ \"[email protected]\" ], \"Action\": [ \"s3:GetObject\" ], \"Resource\": [ \"arn:aws:s3:::john_bucket\" ] } ] }",
"aws --endpoint ENDPOINT --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy file:// BucketPolicy",
"aws --endpoint https://s3-openshift-storage.apps.gogo44.noobaa.org --no-verify-ssl s3api put-bucket-policy -bucket MyBucket --policy file://BucketPolicy",
"noobaa account create <noobaa-account-name> [--allow_bucket_create=true] [--allowed_buckets=[]] [--default_resource=''] [--full_permission=false]",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <obc-name> spec: generateBucketName: <obc-bucket-name> storageClassName: openshift-storage.noobaa.io",
"apiVersion: batch/v1 kind: Job metadata: name: testjob spec: template: spec: restartPolicy: OnFailure containers: - image: <your application image> name: test env: - name: BUCKET_NAME valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_NAME - name: BUCKET_HOST valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_HOST - name: BUCKET_PORT valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_PORT - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: <obc-name> key: AWS_ACCESS_KEY_ID - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: <obc-name> key: AWS_SECRET_ACCESS_KEY",
"oc apply -f <yaml.file>",
"oc get cm <obc-name> -o yaml",
"oc get secret <obc_name> -o yaml",
"noobaa obc create <obc-name> -n openshift-storage",
"INFO[0001] ✅ Created: ObjectBucketClaim \"test21obc\"",
"oc get obc -n openshift-storage",
"NAME STORAGE-CLASS PHASE AGE test21obc openshift-storage.noobaa.io Bound 38s",
"oc get obc test21obc -o yaml -n openshift-storage",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer generation: 2 labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage resourceVersion: \"40756\" selfLink: /apis/objectbucket.io/v1alpha1/namespaces/openshift-storage/objectbucketclaims/test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af spec: ObjectBucketName: obc-openshift-storage-test21obc bucketName: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 generateBucketName: test21obc storageClassName: openshift-storage.noobaa.io status: phase: Bound",
"oc get -n openshift-storage secret test21obc -o yaml",
"apiVersion: v1 data: AWS_ACCESS_KEY_ID: c0M0R2xVanF3ODR3bHBkVW94cmY= AWS_SECRET_ACCESS_KEY: Wi9kcFluSWxHRzlWaFlzNk1hc0xma2JXcjM1MVhqa051SlBleXpmOQ== kind: Secret metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40751\" selfLink: /api/v1/namespaces/openshift-storage/secrets/test21obc uid: 65117c1c-f662-11e9-9094-0a5305de57bb type: Opaque",
"oc get -n openshift-storage cm test21obc -o yaml",
"apiVersion: v1 data: BUCKET_HOST: 10.0.171.35 BUCKET_NAME: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 BUCKET_PORT: \"31242\" BUCKET_REGION: \"\" BUCKET_SUBREGION: \"\" kind: ConfigMap metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40752\" selfLink: /api/v1/namespaces/openshift-storage/configmaps/test21obc uid: 651c6501-f662-11e9-9094-0a5305de57bb",
"noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3",
"noobaa bucketclass create namespace-bucketclass cache <my-cache-bucket-class> --backingstores <backing-store> --hub-resource <namespacestore>",
"noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>",
"noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <backingstore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos",
"noobaa bucketclass create namespace-bucketclass cache <my-bucket-class> --backingstores <backing-store> --hubResource <namespacestore>",
"noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>",
"oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"multiCloudGateway\": {\"endpoints\": {\"minCount\": 3,\"maxCount\": 10}}}}'"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/multicloud-object-gateway_osp |
Chapter 11. Advanced migration options | Chapter 11. Advanced migration options You can automate your migrations and modify the MigPlan and MigrationController custom resources in order to perform large-scale migrations and to improve performance. 11.1. Terminology Table 11.1. MTC terminology Term Definition Source cluster Cluster from which the applications are migrated. Destination cluster [1] Cluster to which the applications are migrated. Replication repository Object storage used for copying images, volumes, and Kubernetes objects during indirect migration or for Kubernetes objects during direct volume migration or direct image migration. The replication repository must be accessible to all clusters. Host cluster Cluster on which the migration-controller pod and the web console are running. The host cluster is usually the destination cluster but this is not required. The host cluster does not require an exposed registry route for direct image migration. Remote cluster A remote cluster is usually the source cluster but this is not required. A remote cluster requires a Secret custom resource that contains the migration-controller service account token. A remote cluster requires an exposed secure registry route for direct image migration. Indirect migration Images, volumes, and Kubernetes objects are copied from the source cluster to the replication repository and then from the replication repository to the destination cluster. Direct volume migration Persistent volumes are copied directly from the source cluster to the destination cluster. Direct image migration Images are copied directly from the source cluster to the destination cluster. Stage migration Data is copied to the destination cluster without stopping the application. Running a stage migration multiple times reduces the duration of the cutover migration. Cutover migration The application is stopped on the source cluster and its resources are migrated to the destination cluster. State migration Application state is migrated by copying specific persistent volume claims to the destination cluster. Rollback migration Rollback migration rolls back a completed migration. 1 Called the target cluster in the MTC web console. 11.2. Migrating an application from on-premises to a cloud-based cluster You can migrate from a source cluster that is behind a firewall to a cloud-based destination cluster by establishing a network tunnel between the two clusters. The crane tunnel-api command establishes such a tunnel by creating a VPN tunnel on the source cluster and then connecting to a VPN server running on the destination cluster. The VPN server is exposed to the client using a load balancer address on the destination cluster. A service created on the destination cluster exposes the source cluster's API to MTC, which is running on the destination cluster. Prerequisites The system that creates the VPN tunnel must have access and be logged in to both clusters. It must be possible to create a load balancer on the destination cluster. Refer to your cloud provider to ensure this is possible. Have names prepared to assign to namespaces, on both the source cluster and the destination cluster, in which to run the VPN tunnel. These namespaces should not be created in advance. For information about namespace rules, see https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names. When connecting multiple firewall-protected source clusters to the cloud cluster, each source cluster requires its own namespace. 
OpenVPN server is installed on the destination cluster. OpenVPN client is installed on the source cluster. When configuring the source cluster in MTC, the API URL takes the form of https://proxied-cluster.<namespace>.svc.cluster.local:8443 . If you use the API, see Create a MigCluster CR manifest for each remote cluster . If you use the MTC web console, see Migrating your applications using the MTC web console . The MTC web console and Migration Controller must be installed on the target cluster. Procedure Install the crane utility: USD podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-controller-rhel8:v1.7):/crane ./ Log in remotely to a node on the source cluster and a node on the destination cluster. Obtain the cluster context for both clusters after logging in: USD oc config view Establish a tunnel by entering the following command on the command system: USD crane tunnel-api [--namespace <namespace>] \ --destination-context <destination-cluster> \ --source-context <source-cluster> If you don't specify a namespace, the command uses the default value openvpn . For example: USD crane tunnel-api --namespace my_tunnel \ --destination-context openshift-migration/c131-e-us-east-containers-cloud-ibm-com/admin \ --source-context default/192-168-122-171-nip-io:8443/admin Tip See all available parameters for the crane tunnel-api command by entering crane tunnel-api --help . The command generates TSL/SSL Certificates. This process might take several minutes. A message appears when the process completes. The OpenVPN server starts on the destination cluster and the OpenVPN client starts on the source cluster. After a few minutes, the load balancer resolves on the source node. Tip You can view the log for the OpenVPN pods to check the status of this process by entering the following commands with root privileges: # oc get po -n <namespace> Example output NAME READY STATUS RESTARTS AGE <pod_name> 2/2 Running 0 44s # oc logs -f -n <namespace> <pod_name> -c openvpn When the address of the load balancer is resolved, the message Initialization Sequence Completed appears at the end of the log. On the OpenVPN server, which is on a destination control node, verify that the openvpn service and the proxied-cluster service are running: USD oc get service -n <namespace> On the source node, get the service account (SA) token for the migration controller: # oc sa get-token -n openshift-migration migration-controller Open the MTC web console and add the source cluster, using the following values: Cluster name : The source cluster name. URL : proxied-cluster.<namespace>.svc.cluster.local:8443 . If you did not define a value for <namespace> , use openvpn . Service account token : The token of the migration controller service account. Exposed route host to image registry : proxied-cluster.<namespace>.svc.cluster.local:5000 . If you did not define a value for <namespace> , use openvpn . After MTC has successfully validated the connection, you can proceed to create and run a migration plan. The namespace for the source cluster should appear in the list of namespaces. Additional resources For information about creating a MigCluster CR manifest for each remote cluster, see Migrating an application by using the MTC API . For information about adding a cluster using the web console, see Migrating your applications by using the MTC web console 11.3. 
Migrating applications by using the command line You can migrate applications with the MTC API by using the command line interface (CLI) in order to automate the migration. 11.3.1. Migration prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Direct image migration You must ensure that the secure OpenShift image registry of the source cluster is exposed. You must create a route to the exposed registry. Direct volume migration If your clusters use proxies, you must configure an Stunnel TCP proxy. Internal images If your application uses internal images from the openshift namespace, you must ensure that the required versions of the images are present on the target cluster. You can manually update an image stream tag in order to use a deprecated OpenShift Container Platform 3 image on an OpenShift Container Platform 4.10 cluster. Clusters The source cluster must be upgraded to the latest MTC z-stream release. The MTC version must be the same on all clusters. Network The clusters have unrestricted network access to each other and to the replication repository. If you copy the persistent volumes with move , the clusters must have unrestricted network access to the remote volumes. You must enable the following ports on an OpenShift Container Platform 3 cluster: 8443 (API server) 443 (routes) 53 (DNS) You must enable the following ports on an OpenShift Container Platform 4 cluster: 6443 (API server) 443 (routes) 53 (DNS) You must enable port 443 on the replication repository if you are using TLS. Persistent volumes (PVs) The PVs must be valid. The PVs must be bound to persistent volume claims. If you use snapshots to copy the PVs, the following additional prerequisites apply: The cloud provider must support snapshots. The PVs must have the same cloud provider. The PVs must be located in the same geographic region. The PVs must have the same storage class. 11.3.2. Creating a registry route for direct image migration For direct image migration, you must create a route to the exposed OpenShift image registry on all remote clusters. Prerequisites The OpenShift image registry must be exposed to external traffic on all remote clusters. The OpenShift Container Platform 4 registry is exposed by default. The OpenShift Container Platform 3 registry must be exposed manually . Procedure To create a route to an OpenShift Container Platform 3 registry, run the following command: USD oc create route passthrough --service=docker-registry -n default To create a route to an OpenShift Container Platform 4 registry, run the following command: USD oc create route passthrough --service=image-registry -n openshift-image-registry 11.3.3. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.10, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 11.3.3.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. 
If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 11.3.3.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 11.3.3.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 11.3.3.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 11.3.3.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 11.3.3.2.1. NetworkPolicy configuration 11.3.3.2.1.1. 
Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 11.3.3.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 11.3.3.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 11.3.3.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 11.3.3.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 11.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 11.3.3.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... 
spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration 11.3.4. Migrating an application by using the MTC API You can migrate an application from the command line by using the Migration Toolkit for Containers (MTC) API. Procedure Create a MigCluster CR manifest for the host cluster: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF Create a Secret object manifest for each remote cluster: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF 1 Specify the base64-encoded migration-controller service account (SA) token of the remote cluster. You can obtain the token by running the following command: USD oc sa get-token migration-controller -n openshift-migration | base64 -w 0 Create a MigCluster CR manifest for each remote cluster: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF 1 Specify the Cluster CR of the remote cluster. 2 Optional: For direct image migration, specify the exposed registry route. 3 SSL verification is enabled if false . CA certificates are not required or checked if true . 4 Specify the Secret object of the remote cluster. 5 Specify the URL of the remote cluster. Verify that all clusters are in a Ready state: USD oc describe cluster <cluster> Create a Secret object manifest for the replication repository: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF 1 Specify the key ID in base64 format. 2 Specify the secret key in base64 format. AWS credentials are base64-encoded by default. For other storage providers, you must encode your credentials by running the following command with each key: USD echo -n "<key>" | base64 -w 0 1 1 Specify the key ID or the secret key. Both keys must be base64-encoded. 
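If you are scripting these steps, you can combine the key encoding and the Secret creation into a single block. The following is an illustrative sketch only: <key_id> and <secret_key> are placeholders for your storage provider credentials, and the data field names match the AWS-style keys shown in the manifest above; adjust them if your provider expects different field names.

# Encode the replication repository credentials and create the Secret in one pass.
KEY_ID_B64=$(echo -n "<key_id>" | base64 -w 0)
SECRET_KEY_B64=$(echo -n "<secret_key>" | base64 -w 0)
cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  namespace: openshift-config
  name: <migstorage_creds>
type: Opaque
data:
  aws-access-key-id: ${KEY_ID_B64}
  aws-secret-access-key: ${SECRET_KEY_B64}
EOF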
Create a MigStorage CR manifest for the replication repository: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF 1 Specify the bucket name. 2 Specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct. 3 Specify the storage provider. 4 Optional: If you are copying data by using snapshots, specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct. 5 Optional: If you are copying data by using snapshots, specify the storage provider. Verify that the MigStorage CR is in a Ready state: USD oc describe migstorage <migstorage> Create a MigPlan CR manifest: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF 1 Direct image migration is enabled if false . 2 Direct volume migration is enabled if false . 3 Specify the name of the MigStorage CR instance. 4 Specify one or more source namespaces. By default, the destination namespace has the same name. 5 Specify a destination namespace if it is different from the source namespace. 6 Specify the name of the source cluster MigCluster instance. Verify that the MigPlan instance is in a Ready state: USD oc describe migplan <migplan> -n openshift-migration Create a MigMigration CR manifest to start the migration defined in the MigPlan instance: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF 1 Specify the MigPlan CR name. 2 The pods on the source cluster are stopped before migration if true . 3 A stage migration, which copies most of the data without stopping the application, is performed if true . 4 A completed migration is rolled back if true . Verify the migration by watching the MigMigration CR progress: USD oc watch migmigration <migmigration> -n openshift-migration The output resembles the following: Example output Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration ... 
Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47 11.3.5. State migration You can perform repeatable, state-only migrations by using Migration Toolkit for Containers (MTC) to migrate persistent volume claims (PVCs) that constitute an application's state. You migrate specified PVCs by excluding other PVCs from the migration plan. You can map the PVCs to ensure that the source and the target PVCs are synchronized. Persistent volume (PV) data is copied to the target cluster. The PV references are not moved, and the application pods continue to run on the source cluster. State migration is specifically designed to be used in conjunction with external CD mechanisms, such as OpenShift Gitops. You can migrate application manifests using GitOps while migrating the state using MTC. If you have a CI/CD pipeline, you can migrate stateless components by deploying them on the target cluster. Then you can migrate stateful components by using MTC. You can perform a state migration between clusters or within the same cluster. Important State migration migrates only the components that constitute an application's state. If you want to migrate an entire namespace, use stage or cutover migration. Prerequisites The state of the application on the source cluster is persisted in PersistentVolumes provisioned through PersistentVolumeClaims . The manifests of the application are available in a central repository that is accessible from both the source and the target clusters. Procedure Migrate persistent volume data from the source to the target cluster. You can perform this step as many times as needed. The source application continues running. 
Quiesce the source application. You can do this by setting the replicas of workload resources to 0 , either directly on the source cluster or by updating the manifests in GitHub and re-syncing the Argo CD application. Clone application manifests to the target cluster. You can use Argo CD to clone the application manifests to the target cluster. Migrate the remaining volume data from the source to the target cluster. Migrate any new data created by the application during the state migration process by performing a final data migration. If the cloned application is in a quiesced state, unquiesce it. Switch the DNS record to the target cluster to re-direct user traffic to the migrated application. Note MTC 1.6 cannot quiesce applications automatically when performing state migration. It can only migrate PV data. Therefore, you must use your CD mechanisms for quiescing or unquiescing applications. MTC 1.7 introduces explicit Stage and Cutover flows. You can use staging to perform initial data transfers as many times as needed. Then you can perform a cutover, in which the source applications are quiesced automatically. Additional resources See Excluding PVCs from migration to select PVCs for state migration. See Mapping PVCs to migrate source PV data to provisioned PVCs on the destination cluster. See Migrating Kubernetes objects to migrate the Kubernetes objects that constitute an application's state. 11.4. Migration hooks You can add up to four migration hooks to a single migration plan, with each hook running at a different phase of the migration. Migration hooks perform tasks such as customizing application quiescence, manually migrating unsupported data types, and updating applications after migration. A migration hook runs on a source or a target cluster at one of the following migration steps: PreBackup : Before resources are backed up on the source cluster. PostBackup : After resources are backed up on the source cluster. PreRestore : Before resources are restored on the target cluster. PostRestore : After resources are restored on the target cluster. You can create a hook by creating an Ansible playbook that runs with the default Ansible image or with a custom hook container. Ansible playbook The Ansible playbook is mounted on a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan custom resource. The job continues to run until it reaches the default limit of 6 retries or a successful completion. This continues even if the initial pod is evicted or killed. The default Ansible runtime image is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel7:1.7 . This image is based on the Ansible Runner image and includes python-openshift for Ansible Kubernetes resources and an updated oc binary. Custom hook container You can use a custom hook container instead of the default Ansible image. 11.4.1. Writing an Ansible playbook for a migration hook You can write an Ansible playbook to use as a migration hook. The hook is added to a migration plan by using the MTC web console or by specifying values for the spec.hooks parameters in the MigPlan custom resource (CR) manifest. The Ansible playbook is mounted onto a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan CR. The hook container uses a specified service account token so that the tasks do not require authentication before they run in the cluster. 11.4.1.1. 
Ansible modules You can use the Ansible shell module to run oc commands. Example shell module - hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces You can use kubernetes.core modules, such as k8s_info , to interact with Kubernetes resources. Example k8s_facts module - hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: "{{ lookup( 'env', 'HOSTNAME') }}" register: pods - name: Print pod name debug: msg: "{{ pods.resources[0].metadata.name }}" You can use the fail module to produce a non-zero exit status in cases where a non-zero exit status would not normally be produced, ensuring that the success or failure of a hook is detected. Hooks run as jobs and the success or failure status of a hook is based on the exit status of the job container. Example fail module - hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: "fail" fail: msg: "Cause a failure" when: do_fail 11.4.1.2. Environment variables The MigPlan CR name and migration namespaces are passed as environment variables to the hook container. These variables are accessed by using the lookup plugin. Example environment variables - hosts: localhost gather_facts: false tasks: - set_fact: namespaces: "{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}" - debug: msg: "{{ item }}" with_items: "{{ namespaces }}" - debug: msg: "{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}" 11.5. Migration plan options You can exclude, edit, and map components in the MigPlan custom resource (CR). 11.5.1. Excluding resources You can exclude resources, for example, image streams, persistent volumes (PVs), or subscriptions, from a Migration Toolkit for Containers (MTC) migration plan to reduce the resource load for migration or to migrate images or PVs with a different tool. By default, the MTC excludes service catalog resources and Operator Lifecycle Manager (OLM) resources from migration. These resources are parts of the service catalog API group and the OLM API group, neither of which is supported for migration at this time. Procedure Edit the MigrationController custom resource manifest: USD oc edit migrationcontroller <migration_controller> -n openshift-migration Update the spec section by adding parameters to exclude specific resources. For those resources that do not have their own exclusion parameters, add the additional_excluded_resources parameter: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2 ... 1 Add disable_image_migration: true to exclude image streams from the migration. imagestreams is added to the excluded_resources list in main.yml when the MigrationController pod restarts. 2 Add disable_pv_migration: true to exclude PVs from the migration plan. persistentvolumes and persistentvolumeclaims are added to the excluded_resources list in main.yml when the MigrationController pod restarts. Disabling PV migration also disables PV discovery when you create the migration plan. 3 You can add OpenShift Container Platform resources that you want to exclude to the additional_excluded_resources list. Wait two minutes for the MigrationController pod to restart so that the changes are applied. 
Verify that the resource is excluded: USD oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1 The output contains the excluded resources: Example output name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims 11.5.2. Mapping namespaces If you map namespaces in the MigPlan custom resource (CR), you must ensure that the namespaces are not duplicated on the source or the destination clusters because the UID and GID ranges of the namespaces are copied during migration. Two source namespaces mapped to the same destination namespace spec: namespaces: - namespace_2 - namespace_1:namespace_2 If you want the source namespace to be mapped to a namespace of the same name, you do not need to create a mapping. By default, a source namespace and a target namespace have the same name. Incorrect namespace mapping spec: namespaces: - namespace_1:namespace_1 Correct namespace reference spec: namespaces: - namespace_1 11.5.3. Excluding persistent volume claims You select persistent volume claims (PVCs) for state migration by excluding the PVCs that you do not want to migrate. You exclude PVCs by setting the spec.persistentVolumes.pvc.selection.action parameter of the MigPlan custom resource (CR) after the persistent volumes (PVs) have been discovered. Prerequisites MigPlan CR is in a Ready state. Procedure Add the spec.persistentVolumes.pvc.selection.action parameter to the MigPlan CR and set it to skip : apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: ... persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: ... selection: action: skip 11.5.4. Mapping persistent volume claims You can migrate persistent volume (PV) data from the source cluster to persistent volume claims (PVCs) that are already provisioned in the destination cluster in the MigPlan CR by mapping the PVCs. This mapping ensures that the destination PVCs of migrated applications are synchronized with the source PVCs. You map PVCs by updating the spec.persistentVolumes.pvc.name parameter in the MigPlan custom resource (CR) after the PVs have been discovered. Prerequisites MigPlan CR is in a Ready state. Procedure Update the spec.persistentVolumes.pvc.name parameter in the MigPlan CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: ... persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1 1 Specify the PVC on the source cluster and the PVC on the destination cluster. If the destination PVC does not exist, it will be created. You can use this mapping to change the PVC name during migration. 11.5.5. Editing persistent volume attributes After you create a MigPlan custom resource (CR), the MigrationController CR discovers the persistent volumes (PVs). The spec.persistentVolumes block and the status.destStorageClasses block are added to the MigPlan CR. You can edit the values in the spec.persistentVolumes.selection block. If you change values outside the spec.persistentVolumes.selection block, the values are overwritten when the MigPlan CR is reconciled by the MigrationController CR. 
Note The default value for the spec.persistentVolumes.selection.storageClass parameter is determined by the following logic: If the source cluster PV is Gluster or NFS, the default is either cephfs , for accessMode: ReadWriteMany , or cephrbd , for accessMode: ReadWriteOnce . If the PV is neither Gluster nor NFS or if cephfs or cephrbd are not available, the default is a storage class for the same provisioner. If a storage class for the same provisioner is not available, the default is the default storage class of the destination cluster. You can change the storageClass value to the value of any name parameter in the status.destStorageClasses block of the MigPlan CR. If the storageClass value is empty, the PV will have no storage class after migration. This option is appropriate if, for example, you want to move the PV to an NFS volume on the destination cluster. Prerequisites MigPlan CR is in a Ready state. Procedure Edit the spec.persistentVolumes.selection values in the MigPlan CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs 1 Allowed values are move , copy , and skip . If only one action is supported, the default value is the supported action. If multiple actions are supported, the default value is copy . 2 Allowed values are snapshot and filesystem . Default value is filesystem . 3 The verify parameter is displayed if you select the verification option for file system copy in the MTC web console. You can set it to false . 4 You can change the default value to the value of any name parameter in the status.destStorageClasses block of the MigPlan CR. If no value is specified, the PV will have no storage class after migration. 5 Allowed values are ReadWriteOnce and ReadWriteMany . If this value is not specified, the default is the access mode of the source cluster PVC. You can only edit the access mode in the MigPlan CR. You cannot edit it by using the MTC web console. Additional resources For details about the move and copy actions, see MTC workflow . For details about the skip action, see Excluding PVCs from migration . For details about the file system and snapshot copy methods, see About data copy methods . 11.5.6. Performing a state migration of Kubernetes objects by using the MTC API After you migrate all the PV data, you can use the Migration Toolkit for Containers (MTC) API to perform a one-time state migration of Kubernetes objects that constitute an application. You do this by configuring MigPlan custom resource (CR) fields to provide a list of Kubernetes resources with an additional label selector to further filter those resources, and then performing a migration by creating a MigMigration CR. The MigPlan resource is closed after the migration. Note Selecting Kubernetes resources is an API-only feature. You must update the MigPlan CR and create a MigMigration CR for it by using the CLI. The MTC web console does not support migrating Kubernetes objects. Note After migration, the closed parameter of the MigPlan CR is set to true . You cannot create another MigMigration CR for this MigPlan CR. 
You add Kubernetes objects to the MigPlan CR by using one of the following options: Adding the Kubernetes objects to the includedResources section. When the includedResources field is specified in the MigPlan CR, the plan takes a list of group-kind as input. Only resources present in the list are included in the migration. Adding the optional labelSelector parameter to filter the includedResources in the MigPlan . When this field is specified, only resources matching the label selector are included in the migration. For example, you can filter a list of Secret and ConfigMap resources by using the label app: frontend as a filter. Procedure Update the MigPlan CR to include Kubernetes resources and, optionally, to filter the included resources by adding the labelSelector parameter: To update the MigPlan CR to include Kubernetes resources: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: "" - kind: <kind> group: "" 1 Specify the Kubernetes object, for example, Secret or ConfigMap . Optional: To filter the included resources by adding the labelSelector parameter: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: "" - kind: <kind> group: "" ... labelSelector: matchLabels: <label> 2 1 Specify the Kubernetes object, for example, Secret or ConfigMap . 2 Specify the label of the resources to migrate, for example, app: frontend . Create a MigMigration CR to migrate the selected Kubernetes resources. Verify that the correct MigPlan is referenced in migPlanRef : apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false 11.6. Migration controller options You can edit migration plan limits, enable persistent volume resizing, or enable cached Kubernetes clients in the MigrationController custom resource (CR) for large migrations and improved performance. 11.6.1. Increasing limits for large migrations You can increase the limits on migration objects and container resources for large migrations with the Migration Toolkit for Containers (MTC). Important You must test these changes before you perform a migration in a production environment. Procedure Edit the MigrationController custom resource (CR) manifest: USD oc edit migrationcontroller -n openshift-migration Update the following parameters: ... mig_controller_limits_cpu: "1" 1 mig_controller_limits_memory: "10Gi" 2 ... mig_controller_requests_cpu: "100m" 3 mig_controller_requests_memory: "350Mi" 4 ... mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7 ... 1 Specifies the number of CPUs available to the MigrationController CR. 2 Specifies the amount of memory available to the MigrationController CR. 3 Specifies the number of CPU units available for MigrationController CR requests. 100m represents 0.1 CPU units (100 * 1e-3). 4 Specifies the amount of memory available for MigrationController CR requests. 5 Specifies the number of persistent volumes that can be migrated. 6 Specifies the number of pods that can be migrated. 7 Specifies the number of namespaces that can be migrated. Create a migration plan that uses the updated parameters to verify the changes. 
If your migration plan exceeds the MigrationController CR limits, the MTC console displays a warning message when you save the migration plan. 11.6.2. Enabling persistent volume resizing for direct volume migration You can enable persistent volume (PV) resizing for direct volume migration to avoid running out of disk space on the destination cluster. When the disk usage of a PV reaches a configured level, the MigrationController custom resource (CR) compares the requested storage capacity of a persistent volume claim (PVC) to its actual provisioned capacity. Then, it calculates the space required on the destination cluster. A pv_resizing_threshold parameter determines when PV resizing is used. The default threshold is 3% . This means that PV resizing occurs when the disk usage of a PV is more than 97% . You can increase this threshold so that PV resizing occurs at a lower disk usage level. PVC capacity is calculated according to the following criteria: If the requested storage capacity ( spec.resources.requests.storage ) of the PVC is not equal to its actual provisioned capacity ( status.capacity.storage ), the greater value is used. If a PV is provisioned through a PVC and then subsequently changed so that its PV and PVC capacities no longer match, the greater value is used. Prerequisites The PVCs must be attached to one or more running pods so that the MigrationController CR can execute commands. Procedure Log in to the host cluster. Enable PV resizing by patching the MigrationController CR: USD oc patch migrationcontroller migration-controller -p '{"spec":{"enable_dvm_pv_resizing":true}}' \ 1 --type='merge' -n openshift-migration 1 Set the value to false to disable PV resizing. Optional: Update the pv_resizing_threshold parameter to increase the threshold: USD oc patch migrationcontroller migration-controller -p '{"spec":{"pv_resizing_threshold":41}}' \ 1 --type='merge' -n openshift-migration 1 The default value is 3 . When the threshold is exceeded, the following status message is displayed in the MigPlan CR status: status: conditions: ... - category: Warn durable: true lastTransitionTime: "2021-06-17T08:57:01Z" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: "False" type: PvCapacityAdjustmentRequired Note For AWS gp2 storage, this message does not appear unless the pv_resizing_threshold is 42% or greater because of the way gp2 calculates volume usage and size. ( BZ#1973148 ) 11.6.3. Enabling cached Kubernetes clients You can enable cached Kubernetes clients in the MigrationController custom resource (CR) for improved performance during migration. The greatest performance benefit is displayed when migrating between clusters in different regions or with significant network latency. Note Delegated tasks, for example, Rsync backup for direct volume migration or Velero backup and restore, however, do not show improved performance with cached clients. Cached clients require extra memory because the MigrationController CR caches all API resources that are required for interacting with MigCluster CRs. Requests that are normally sent to the API server are directed to the cache instead. The cache watches the API server for updates. You can increase the memory limits and requests of the MigrationController CR if OOMKilled errors occur after you enable cached clients. 
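Before running the patch commands in the following procedure, it can help to see which MigrationController fields they modify. The following is a minimal sketch of the relevant spec fields, using only parameter names that appear elsewhere in this document; the memory values are illustrative and should be tuned to your cluster:

apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  # Enables the cached Kubernetes clients described above
  mig_controller_enable_cache: true
  # Raise these values if OOMKilled errors occur after enabling the cache
  mig_controller_limits_memory: "10Gi"
  mig_controller_requests_memory: "350Mi"

You can set these fields either by editing the CR directly or by running the oc patch commands shown in the procedure.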
Procedure Enable cached clients by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_enable_cache", "value": true}]' Optional: Increase the MigrationController CR memory limits by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_limits_memory", "value": <10Gi>}]' Optional: Increase the MigrationController CR memory requests by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_requests_memory", "value": <350Mi>}]' | [
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-controller-rhel8:v1.7):/crane ./",
"oc config view",
"crane tunnel-api [--namespace <namespace>] --destination-context <destination-cluster> --source-context <source-cluster>",
"crane tunnel-api --namespace my_tunnel --destination-context openshift-migration/c131-e-us-east-containers-cloud-ibm-com/admin --source-context default/192-168-122-171-nip-io:8443/admin",
"oc get po -n <namespace>",
"NAME READY STATUS RESTARTS AGE <pod_name> 2/2 Running 0 44s",
"oc logs -f -n <namespace> <pod_name> -c openvpn",
"oc get service -n <namespace>",
"oc sa get-token -n openshift-migration migration-controller",
"oc create route passthrough --service=docker-registry -n default",
"oc create route passthrough --service=image-registry -n openshift-image-registry",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF",
"oc sa get-token migration-controller -n openshift-migration | base64 -w 0",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF",
"oc describe cluster <cluster>",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF",
"echo -n \"<key>\" | base64 -w 0 1",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF",
"oc describe migstorage <migstorage>",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF",
"oc describe migplan <migplan> -n openshift-migration",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF",
"oc watch migmigration <migmigration> -n openshift-migration",
"Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47",
"- hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces",
"- hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: \"{{ lookup( 'env', 'HOSTNAME') }}\" register: pods - name: Print pod name debug: msg: \"{{ pods.resources[0].metadata.name }}\"",
"- hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: \"fail\" fail: msg: \"Cause a failure\" when: do_fail",
"- hosts: localhost gather_facts: false tasks: - set_fact: namespaces: \"{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}\" - debug: msg: \"{{ item }}\" with_items: \"{{ namespaces }}\" - debug: msg: \"{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}\"",
"oc edit migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2",
"oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1",
"name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims",
"spec: namespaces: - namespace_2 - namespace_1:namespace_2",
"spec: namespaces: - namespace_1:namespace_1",
"spec: namespaces: - namespace_1",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: selection: action: skip",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\"",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\" labelSelector: matchLabels: <label> 2",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false",
"oc edit migrationcontroller -n openshift-migration",
"mig_controller_limits_cpu: \"1\" 1 mig_controller_limits_memory: \"10Gi\" 2 mig_controller_requests_cpu: \"100m\" 3 mig_controller_requests_memory: \"350Mi\" 4 mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7",
"oc patch migrationcontroller migration-controller -p '{\"spec\":{\"enable_dvm_pv_resizing\":true}}' \\ 1 --type='merge' -n openshift-migration",
"oc patch migrationcontroller migration-controller -p '{\"spec\":{\"pv_resizing_threshold\":41}}' \\ 1 --type='merge' -n openshift-migration",
"status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-06-17T08:57:01Z\" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: \"False\" type: PvCapacityAdjustmentRequired",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_enable_cache\", \"value\": true}]'",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_limits_memory\", \"value\": <10Gi>}]'",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_requests_memory\", \"value\": <350Mi>}]'"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/migrating_from_version_3_to_4/advanced-migration-options-3-4 |
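The Ansible module and environment variable examples in the migration hooks section above can be combined into a single hook playbook. The following is a minimal sketch that iterates over the namespaces passed to the hook and lists the pods in each one; it uses only the modules and environment variables documented above, and the task names and the final debug step are illustrative rather than a prescribed hook:

- hosts: localhost
  gather_facts: false
  tasks:
  - name: Read the namespaces passed to the hook
    set_fact:
      namespaces: "{{ (lookup('env', 'MIGRATION_NAMESPACES')).split(',') }}"
  - name: List pods in each migration namespace
    k8s_info:
      kind: pods
      api: v1
      namespace: "{{ item }}"
    register: pod_info
    loop: "{{ namespaces }}"
  - name: Print the pod names found in each namespace
    debug:
      msg: "{{ item.resources | map(attribute='metadata.name') | list }}"
    loop: "{{ pod_info.results }}"

A real hook would typically replace the debug task with the quiescing or data preparation steps that your application requires, and use the fail module shown above so that a failed task produces a non-zero exit status for the hook job.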
Chapter 6. Starting to use Red Hat Quay | Chapter 6. Starting to use Red Hat Quay With Red Hat Quay now running, you can: Select Tutorial from the Quay home page to try the 15-minute tutorial. In the tutorial, you learn to log into Quay, start a container, create images, push repositories, view repositories, and change repository permissions with Quay. Refer to the Use Red Hat Quay guide for information on working with Red Hat Quay repositories. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/deploy_red_hat_quay_-_high_availability/starting_to_use_red_hat_quay |
Chapter 1. OpenShift Container Platform CI/CD overview | Chapter 1. OpenShift Container Platform CI/CD overview OpenShift Container Platform is an enterprise-ready Kubernetes platform for developers, which enables organizations to automate the application delivery process through DevOps practices, such as continuous integration (CI) and continuous delivery (CD). To meet your organizational needs, the OpenShift Container Platform provides the following CI/CD solutions: OpenShift Builds OpenShift Pipelines OpenShift GitOps 1.1. OpenShift Builds With OpenShift Builds, you can create cloud-native apps by using a declarative build process. You can define the build process in a YAML file that you use to create a BuildConfig object. This definition includes attributes such as build triggers, input parameters, and source code. When deployed, the BuildConfig object typically builds a runnable image and pushes it to a container image registry. OpenShift Builds provides the following extensible support for build strategies: Docker build Source-to-image (S2I) build Custom build For more information, see Understanding image builds 1.2. OpenShift Pipelines OpenShift Pipelines provides a Kubernetes-native CI/CD framework to design and run each step of the CI/CD pipeline in its own container. It can scale independently to meet the on-demand pipelines with predictable outcomes. For more information, see Understanding OpenShift Pipelines 1.3. OpenShift GitOps OpenShift GitOps is an Operator that uses Argo CD as the declarative GitOps engine. It enables GitOps workflows across multicluster OpenShift and Kubernetes infrastructure. Using OpenShift GitOps, administrators can consistently configure and deploy Kubernetes-based infrastructure and applications across clusters and development lifecycles. For more information, see Understanding OpenShift GitOps 1.4. Jenkins Jenkins automates the process of building, testing, and deploying applications and projects. OpenShift Developer Tools provides a Jenkins image that integrates directly with the OpenShift Container Platform. Jenkins can be deployed on OpenShift by using the Samples Operator templates or certified Helm chart. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/cicd/ci-cd-overview |
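The OpenShift Builds description above mentions defining the build process in a YAML file that creates a BuildConfig object. The following is a minimal sketch of such a manifest for a source-to-image (S2I) build; the application name, Git repository URL, and builder image are illustrative assumptions, not values taken from this document:

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: example-app
spec:
  source:
    git:
      uri: https://github.com/example/example-app.git
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:latest
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: example-app:latest
  triggers:
  - type: ConfigChange

When this BuildConfig is created, the build produces a runnable image and pushes it to the example-app:latest image stream tag, following the pattern described in Understanding image builds.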
Chapter 16. Compiler Toolset Images | Chapter 16. Compiler Toolset Images Red Hat Developer Tools container images are available for the AMD64 and Intel 64, 64-bit IBM Z, and IBM POWER, little endian architectures for the following compiler toolsets: Clang and LLVM Toolset Rust Toolset Go Toolset For details, see the Red Hat Developer Tools documentation . | null | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/using_red_hat_software_collections_container_images/compiler-toolsets-images |
Chapter 2. Configuring your firewall | Chapter 2. Configuring your firewall If you use a firewall, you must configure it so that OpenShift Container Platform can access the sites that it requires to function. You must always grant access to some sites, and you grant access to more if you use Red Hat Insights, the Telemetry service, a cloud to host your cluster, and certain build strategies. 2.1. Configuring your firewall for OpenShift Container Platform Before you install OpenShift Container Platform, you must configure your firewall to grant access to the sites that OpenShift Container Platform requires. When using a firewall, make additional configurations to the firewall so that OpenShift Container Platform can access the sites that it requires to function. There are no special configuration considerations for services running on only controller nodes compared to worker nodes. Note If your environment has a dedicated load balancer in front of your OpenShift Container Platform cluster, review the allowlists between your firewall and load balancer to prevent unwanted network restrictions to your cluster. Procedure Set the following registry URLs for your firewall's allowlist: URL Port Function registry.redhat.io 443 Provides core container images access.redhat.com 443 Hosts a signature store that a container client requires for verifying images pulled from registry.access.redhat.com . In a firewall environment, ensure that this resource is on the allowlist. registry.access.redhat.com 443 Hosts all the container images that are stored on the Red Hat Ecosystem Catalog, including core container images. quay.io 443 Provides core container images cdn.quay.io 443 Provides core container images cdn01.quay.io 443 Provides core container images cdn02.quay.io 443 Provides core container images cdn03.quay.io 443 Provides core container images cdn04.quay.io 443 Provides core container images cdn05.quay.io 443 Provides core container images cdn06.quay.io 443 Provides core container images sso.redhat.com 443 The https://console.redhat.com site uses authentication from sso.redhat.com You can use the wildcards *.quay.io and *.openshiftapps.com instead of cdn.quay.io and cdn0[1-6].quay.io in your allowlist. You can use the wildcard *.access.redhat.com to simplify the configuration and ensure that all subdomains, including registry.access.redhat.com , are allowed. When you add a site, such as quay.io , to your allowlist, do not add a wildcard entry, such as *.quay.io , to your denylist. In most cases, image registries use a content delivery network (CDN) to serve images. If a firewall blocks access, image downloads are denied when the initial download request redirects to a hostname such as cdn01.quay.io . Set your firewall's allowlist to include any site that provides resources for a language or framework that your builds require. If you do not disable Telemetry, you must grant access to the following URLs to access Red Hat Insights: URL Port Function cert-api.access.redhat.com 443 Required for Telemetry api.access.redhat.com 443 Required for Telemetry infogw.api.openshift.com 443 Required for Telemetry console.redhat.com 443 Required for Telemetry and for insights-operator If you use Alibaba Cloud, Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) to host your cluster, you must grant access to the URLs that offer the cloud provider API and DNS for that cloud: Cloud URL Port Function Alibaba *.aliyuncs.com 443 Required to access Alibaba Cloud services and resources. 
Review the Alibaba endpoints_config.go file to find the exact endpoints to allow for the regions that you use. AWS aws.amazon.com 443 Used to install and manage clusters in an AWS environment. *.amazonaws.com Alternatively, if you choose to not use a wildcard for AWS APIs, you must include the following URLs in your allowlist: 443 Required to access AWS services and resources. Review the AWS Service Endpoints in the AWS documentation to find the exact endpoints to allow for the regions that you use. ec2.amazonaws.com 443 Used to install and manage clusters in an AWS environment. events.amazonaws.com 443 Used to install and manage clusters in an AWS environment. iam.amazonaws.com 443 Used to install and manage clusters in an AWS environment. route53.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.dualstack.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. tagging.us-east-1.amazonaws.com 443 Used to install and manage clusters in an AWS environment. This endpoint is always us-east-1 , regardless of the region the cluster is deployed in. ec2.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. elasticloadbalancing.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. servicequotas.<aws_region>.amazonaws.com 443 Required. Used to confirm quotas for deploying the service. tagging.<aws_region>.amazonaws.com 443 Allows the assignment of metadata about AWS resources in the form of tags. *.cloudfront.net 443 Used to provide access to CloudFront. If you use the AWS Security Token Service (STS) and the private S3 bucket, you must provide access to CloudFront. GCP *.googleapis.com 443 Required to access GCP services and resources. Review Cloud Endpoints in the GCP documentation to find the endpoints to allow for your APIs. accounts.google.com 443 Required to access your GCP account. Microsoft Azure management.azure.com 443 Required to access Microsoft Azure services and resources. Review the Microsoft Azure REST API reference in the Microsoft Azure documentation to find the endpoints to allow for your APIs. *.blob.core.windows.net 443 Required to download Ignition files. login.microsoftonline.com 443 Required to access Microsoft Azure services and resources. Review the Azure REST API reference in the Microsoft Azure documentation to find the endpoints to allow for your APIs. Allowlist the following URLs: URL Port Function *.apps.<cluster_name>.<base_domain> 443 Required to access the default cluster routes unless you set an ingress wildcard during installation. api.openshift.com 443 Required both for your cluster token and to check if updates are available for the cluster. console.redhat.com 443 Required for your cluster token. mirror.openshift.com 443 Required to access mirrored installation content and images. This site is also a source of release image signatures, although the Cluster Version Operator needs only a single functioning source. quayio-production-s3.s3.amazonaws.com 443 Required to access Quay image content in AWS. 
rhcos.mirror.openshift.com 443 Required to download Red Hat Enterprise Linux CoreOS (RHCOS) images. sso.redhat.com 443 The https://console.redhat.com site uses authentication from sso.redhat.com storage.googleapis.com/openshift-release 443 A source of release image signatures, although the Cluster Version Operator needs only a single functioning source. Operators require route access to perform health checks. Specifically, the authentication and web console Operators connect to two routes to verify that the routes work. If you are the cluster administrator and do not want to allow *.apps.<cluster_name>.<base_domain> , then allow these routes: oauth-openshift.apps.<cluster_name>.<base_domain> canary-openshift-ingress-canary.apps.<cluster_name>.<base_domain> console-openshift-console.apps.<cluster_name>.<base_domain> , or the hostname that is specified in the spec.route.hostname field of the consoles.operator/cluster object if the field is not empty. Allowlist the following URLs for optional third-party content: URL Port Function registry.connect.redhat.com 443 Required for all third-party images and certified operators. rhc4tp-prod-z8cxf-image-registry-us-east-1-evenkyleffocxqvofrk.s3.dualstack.us-east-1.amazonaws.com 443 Provides access to container images hosted on registry.connect.redhat.com oso-rhc4tp-docker-registry.s3-us-west-2.amazonaws.com 443 Required for Sonatype Nexus, F5 Big IP operators. If you use a default Red Hat Network Time Protocol (NTP) server allow the following URLs: 1.rhel.pool.ntp.org 2.rhel.pool.ntp.org 3.rhel.pool.ntp.org Note If you do not use a default Red Hat NTP server, verify the NTP server for your platform and allow it in your firewall. Additional resources OpenID Connect requirements for AWS STS 2.2. OpenShift Container Platform network flow matrix The network flow matrix describes the ingress flows to OpenShift Container Platform services. The network information in the matrix is accurate for both bare-metal and cloud environments. Use the information in the network flow matrix to help you manage ingress traffic. You can restrict ingress traffic to essential flows to improve network security. To view or download the raw CSV content, see this resource . Additionally, consider the following dynamic port ranges when managing ingress traffic: 9000-9999 : Host level services 30000-32767 : Kubernetes node ports 49152-65535 : Dynamic or private ports Note The network flow matrix describes ingress traffic flows for a base OpenShift Container Platform installation. It does not describe network flows for additional components, such as optional Operators available from the Red Hat Marketplace. The matrix does not apply for hosted control planes, Red Hat build of MicroShift, or standalone clusters. Table 2.1. 
Network flow matrix Direction Protocol Port Namespace Service Pod Container Node Role Optional Ingress TCP 22 Host system service sshd master TRUE Ingress TCP 53 openshift-dns dns-default dnf-default dns master FALSE Ingress TCP 80 openshift-ingress router-default router-default router master FALSE Ingress TCP 111 Host system service rpcbind master TRUE Ingress TCP 443 openshift-ingress router-default router-default router master FALSE Ingress TCP 1936 openshift-ingress router-default router-default router master FALSE Ingress TCP 2379 openshift-etcd etcd etcd etcdctl master FALSE Ingress TCP 2380 openshift-etcd healthz etcd etcd master FALSE Ingress TCP 5050 openshift-machine-api ironic-proxy ironic-proxy master FALSE Ingress TCP 5051 openshift-machine-api metal3-state metal3 metal3-httpd master FALSE Ingress TCP 6080 openshift-kube-apiserver kube-apiserver kube-apiserver-insecure-readyz master FALSE Ingress TCP 6180 openshift-machine-api metal3-state metal3 metal3-httpd master FALSE Ingress TCP 6183 openshift-machine-api metal3-state metal3 metal3-httpd master FALSE Ingress TCP 6385 openshift-machine-api ironic-proxy ironic-proxy master FALSE Ingress TCP 6388 openshift-machine-api metal3-state metal3 metal3-httpd master FALSE Ingress TCP 6443 openshift-kube-apiserver apiserver kube-apiserver kube-apiserver master FALSE Ingress TCP 8080 openshift-network-operator network-operator network-operator master FALSE Ingress TCP 8798 openshift-machine-config-operator machine-config-daemon machine-config-daemon machine-config-daemon master FALSE Ingress TCP 9001 openshift-machine-config-operator machine-config-daemon machine-config-daemon kube-rbac-proxy master FALSE Ingress TCP 9099 openshift-cluster-version cluster-version-operator cluster-version-operator cluster-version-operator master FALSE Ingress TCP 9100 openshift-monitoring node-exporter node-exporter kube-rbac-proxy master FALSE Ingress TCP 9103 openshift-ovn-kubernetes ovn-kubernetes-node ovnkube-node kube-rbac-proxy-node master FALSE Ingress TCP 9104 openshift-network-operator metrics network-operator network-operator master FALSE Ingress TCP 9105 openshift-ovn-kubernetes ovn-kubernetes-node ovnkube-node kube-rbac-proxy-ovn-metrics master FALSE Ingress TCP 9107 openshift-ovn-kubernetes egressip-node-healthcheck ovnkube-node ovnkube-controller master FALSE Ingress TCP 9108 openshift-ovn-kubernetes ovn-kubernetes-control-plane ovnkube-control-plane kube-rbac-proxy master FALSE Ingress TCP 9192 openshift-cluster-machine-approver machine-approver machine-approver kube-rbac-proxy master FALSE Ingress TCP 9258 openshift-cloud-controller-manager-operator machine-approver cluster-cloud-controller-manager cluster-cloud-controller-manager master FALSE Ingress TCP 9444 openshift-kni-infra haproxy haproxy master FALSE Ingress TCP 9445 openshift-kni-infra haproxy haproxy master FALSE Ingress TCP 9447 openshift-machine-api metal3-baremetal-operator master FALSE Ingress TCP 9537 Host system service crio-metrics master FALSE Ingress TCP 9637 openshift-machine-config-operator kube-rbac-proxy-crio kube-rbac-proxy-crio kube-rbac-proxy-crio master FALSE Ingress TCP 9978 openshift-etcd etcd etcd etcd-metrics master FALSE Ingress TCP 9979 openshift-etcd etcd etcd etcd-metrics master FALSE Ingress TCP 9980 openshift-etcd etcd etcd etcd master FALSE Ingress TCP 10250 Host system service kubelet master FALSE Ingress TCP 10256 openshift-ovn-kubernetes ovnkube ovnkube ovnkube-controller master FALSE Ingress TCP 10257 openshift-kube-controller-manager 
kube-controller-manager kube-controller-manager kube-controller-manager master FALSE Ingress TCP 10258 openshift-cloud-controller-manager-operator cloud-controller cloud-controller-manager cloud-controller-manager master FALSE Ingress TCP 10259 openshift-kube-scheduler scheduler openshift-kube-scheduler kube-scheduler master FALSE Ingress TCP 10260 openshift-cloud-controller-manager-operator cloud-controller cloud-controller-manager cloud-controller-manager master FALSE Ingress TCP 10300 openshift-cluster-csi-drivers csi-livenessprobe csi-driver-node csi-driver master FALSE Ingress TCP 10309 openshift-cluster-csi-drivers csi-node-driver csi-driver-node csi-node-driver-registrar master FALSE Ingress TCP 10357 openshift-kube-apiserver openshift-kube-apiserver-healthz kube-apiserver kube-apiserver-check-endpoints master FALSE Ingress TCP 17697 openshift-kube-apiserver openshift-kube-apiserver-healthz kube-apiserver kube-apiserver-check-endpoints master FALSE Ingress TCP 18080 openshift-kni-infra coredns coredns master FALSE Ingress TCP 22623 openshift-machine-config-operator machine-config-server machine-config-server machine-config-server master FALSE Ingress TCP 22624 openshift-machine-config-operator machine-config-server machine-config-server machine-config-server master FALSE Ingress UDP 53 openshift-dns dns-default dnf-default dns master FALSE Ingress UDP 111 Host system service rpcbind master TRUE Ingress UDP 6081 openshift-ovn-kubernetes ovn-kubernetes geneve master FALSE Ingress TCP 22 Host system service sshd worker TRUE Ingress TCP 53 openshift-dns dns-default dnf-default dns worker FALSE Ingress TCP 80 openshift-ingress router-default router-default router worker FALSE Ingress TCP 111 Host system service rpcbind worker TRUE Ingress TCP 443 openshift-ingress router-default router-default router worker FALSE Ingress TCP 1936 openshift-ingress router-default router-default router worker FALSE Ingress TCP 8798 openshift-machine-config-operator machine-config-daemon machine-config-daemon machine-config-daemon worker FALSE Ingress TCP 9001 openshift-machine-config-operator machine-config-daemon machine-config-daemon kube-rbac-proxy worker FALSE Ingress TCP 9100 openshift-monitoring node-exporter node-exporter kube-rbac-proxy worker FALSE Ingress TCP 9103 openshift-ovn-kubernetes ovn-kubernetes-node ovnkube-node kube-rbac-proxy-node worker FALSE Ingress TCP 9105 openshift-ovn-kubernetes ovn-kubernetes-node ovnkube-node kube-rbac-proxy-ovn-metrics worker FALSE Ingress TCP 9107 openshift-ovn-kubernetes egressip-node-healthcheck ovnkube-node ovnkube-controller worker FALSE Ingress TCP 9537 Host system service crio-metrics worker FALSE Ingress TCP 9637 openshift-machine-config-operator kube-rbac-proxy-crio kube-rbac-proxy-crio kube-rbac-proxy-crio worker FALSE Ingress TCP 10250 Host system service kubelet worker FALSE Ingress TCP 10256 openshift-ovn-kubernetes ovnkube ovnkube ovnkube-controller worker TRUE Ingress TCP 10300 openshift-cluster-csi-drivers csi-livenessprobe csi-driver-node csi-driver worker FALSE Ingress TCP 10309 openshift-cluster-csi-drivers csi-node-driver csi-driver-node csi-node-driver-registrar worker FALSE Ingress TCP 18080 openshift-kni-infra coredns coredns worker FALSE Ingress UDP 53 openshift-dns dns-default dnf-default dns worker FALSE Ingress UDP 111 Host system service rpcbind worker TRUE Ingress UDP 6081 openshift-ovn-kubernetes ovn-kubernetes geneve worker FALSE | null | 
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installation_configuration/configuring-firewall |
Chapter 31. extension | Chapter 31. extension This chapter describes the commands under the extension command. 31.1. extension list List API extensions Usage: Table 31.1. Command arguments Value Summary -h, --help Show this help message and exit --compute List extensions for the compute api --identity List extensions for the identity api --network List extensions for the network api --volume List extensions for the block storage api --long List additional fields in output Table 31.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 31.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 31.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 31.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 31.2. extension show Show API extension Usage: Table 31.6. Positional arguments Value Summary <extension> Extension to display. currently, only network extensions are supported. (Name or Alias) Table 31.7. Command arguments Value Summary -h, --help Show this help message and exit Table 31.8. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 31.9. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 31.10. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 31.11. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack extension list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--compute] [--identity] [--network] [--volume] [--long]",
"openstack extension show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <extension>"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/extension |
probe::json_data | probe::json_data Name probe::json_data - Fires whenever JSON data is wanted by a reader. Synopsis json_data Values None Context This probe fires when the JSON data is about to be read. The probe handler must gather the data and then call the following macros to output it in JSON format. First, @json_output_data_start must be called. That call is followed by one or more of the following (one call for each data item): @json_output_string_value, @json_output_numeric_value, @json_output_array_string_value, and @json_output_array_numeric_value. Finally, @json_output_data_end must be called. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-json-data |
Chapter 41. JSLT Action | Chapter 41. JSLT Action Apply a JSLT query or transformation on JSON. 41.1. Configuration Options The following table summarizes the configuration options available for the jslt-action Kamelet: Property Name Description Type Default Example template * Template The inline template for JSLT Transformation string "file://template.json" Note Fields marked with an asterisk (*) are mandatory. 41.2. Dependencies At runtime, the jslt-action Kamelet relies upon the presence of the following dependencies: camel:jslt camel:kamelet 41.3. Usage This section describes how you can use the jslt-action . 41.3.1. Knative Action You can use the jslt-action Kamelet as an intermediate step in a Knative binding. jslt-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jslt-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: {"foo" : "bar"} steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jslt-action properties: template: "file://template.json" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 41.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you are connected to. 41.3.1.2. Procedure for using the cluster CLI Save the jslt-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f jslt-action-binding.yaml 41.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step jslt-action -p "step-0.template=file://template.json" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. If the template points to a file that is not in the current directory, and if file:// or classpath:// is used, supply the transformation using the secret or the configmap. To view examples, see with secret and with configmap . For details about necessary traits, see Mount trait and JVM classpath trait . 41.3.2. Kafka Action You can use the jslt-action Kamelet as an intermediate step in a Kafka binding. jslt-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jslt-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: {"foo" : "bar"} steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jslt-action properties: template: "file://template.json" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 41.3.2.1. Prerequisites Ensure that you have installed the AMQ Streams operator in your OpenShift cluster and create a topic named my-topic in the current namespace. Also, you must have "Red Hat Integration - Camel K" installed into the OpenShift cluster you are connected to. 41.3.2.2. Procedure for using the cluster CLI Save the jslt-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f jslt-action-binding.yaml 41.3.2.3. 
Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step jslt-action -p "step-0.template=file://template.json" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 41.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/blob/main/jslt-action.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jslt-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: {\"foo\" : \"bar\"} steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jslt-action properties: template: \"file://template.json\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f jslt-action-binding.yaml",
"kamel bind timer-source?message=Hello --step jslt-action -p \"step-0.template=file://template.json\" channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jslt-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: {\"foo\" : \"bar\"} steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jslt-action properties: template: \"file://template.json\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f jslt-action-binding.yaml",
"kamel bind timer-source?message=Hello --step jslt-action -p \"step-0.template=file://template.json\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/jslt-action |
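The procedures above point the template property at a template.json file without showing its contents. As a minimal sketch only, a JSLT template that reshapes the {"foo" : "bar"} body produced by the timer source could look like the following; the output field names are invented for illustration:

{
  "greeting" : .foo,
  "source" : "timer"
}

JSLT path expressions such as .foo select fields from the incoming JSON body, so this template turns {"foo" : "bar"} into {"greeting" : "bar", "source" : "timer"}. When the file is supplied through a secret or configmap as described above, it only needs to be mounted so that the file://template.json reference resolves inside the integration container.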
Chapter 2. Cluster Observability Operator overview | Chapter 2. Cluster Observability Operator overview The Cluster Observability Operator (COO) is an optional component of the OpenShift Container Platform designed for creating and managing highly customizable monitoring stacks. It enables cluster administrators to automate configuration and management of monitoring needs extensively, offering a more tailored and detailed view of each namespace compared to the default OpenShift Container Platform monitoring system. The COO deploys the following monitoring components: Prometheus - A highly available Prometheus instance capable of sending metrics to an external endpoint by using remote write. Thanos Querier (optional) - Enables querying of Prometheus instances from a central location. Alertmanager (optional) - Provides alert configuration capabilities for different services. UI plugins (optional) - Enhances the observability capabilities with plugins for monitoring, logging, distributed tracing and troubleshooting. Korrel8r (optional) - Provides observability signal correlation, powered by the open source Korrel8r project. 2.1. COO compared to default monitoring stack The COO components function independently of the default in-cluster monitoring stack, which is deployed and managed by the Cluster Monitoring Operator (CMO). Monitoring stacks deployed by the two Operators do not conflict. You can use a COO monitoring stack in addition to the default platform monitoring components deployed by the CMO. The key differences between COO and the default in-cluster monitoring stack are shown in the following table: Feature COO Default monitoring stack Scope and integration Offers comprehensive monitoring and analytics for enterprise-level needs, covering cluster and workload performance. However, it lacks direct integration with OpenShift Container Platform and typically requires an external Grafana instance for dashboards. Limited to core components within the cluster, for example, API server and etcd, and to OpenShift-specific namespaces. There is deep integration into OpenShift Container Platform including console dashboards and alert management in the console. Configuration and customization Broader configuration options including data retention periods, storage methods, and collected data types. The COO can delegate ownership of single configurable fields in custom resources to users by using Server-Side Apply (SSA), which enhances customization. Built-in configurations with limited customization options. Data retention and storage Long-term data retention, supporting historical analysis and capacity planning Shorter data retention times, focusing on short-term monitoring and real-time detection. 2.2. Key advantages of using COO Deploying COO helps you address monitoring requirements that are hard to achieve using the default monitoring stack. 2.2.1. Extensibility You can add more metrics to a COO-deployed monitoring stack, which is not possible with core platform monitoring without losing support. You can receive cluster-specific metrics from core platform monitoring through federation. COO supports advanced monitoring scenarios like trend forecasting and anomaly detection. 2.2.2. Multi-tenancy support You can create monitoring stacks per user namespace. You can deploy multiple stacks per namespace or a single stack for multiple namespaces. COO enables independent configuration of alerts and receivers for different teams. 2.2.3. 
Scalability Supports multiple monitoring stacks on a single cluster. Enables monitoring of large clusters through manual sharding. Addresses cases where metrics exceed the capabilities of a single Prometheus instance. 2.2.4. Flexibility Decoupled from OpenShift Container Platform release cycles. Faster release iterations and rapid response to changing requirements. Independent management of alerting rules. 2.3. Target users for COO COO is ideal for users who need high customizability, scalability, and long-term data retention, especially in complex, multi-tenant enterprise environments. 2.3.1. Enterprise-level users and administrators Enterprise users require in-depth monitoring capabilities for OpenShift Container Platform clusters, including advanced performance analysis, long-term data retention, trend forecasting, and historical analysis. These features help enterprises better understand resource usage, prevent performance issues, and optimize resource allocation. 2.3.2. Operations teams in multi-tenant environments With multi-tenancy support, COO allows different teams to configure monitoring views for their projects and applications, making it suitable for teams with flexible monitoring needs. 2.3.3. Development and operations teams COO provides fine-grained monitoring and customizable observability views for in-depth troubleshooting, anomaly detection, and performance tuning during development and operations. 2.4. Using Server-Side Apply to customize Prometheus resources Server-Side Apply is a feature that enables collaborative management of Kubernetes resources. The control plane tracks how different users and controllers manage fields within a Kubernetes object. It introduces the concept of field managers and tracks ownership of fields. This centralized control provides conflict detection and resolution, and reduces the risk of unintended overwrites. Compared to Client-Side Apply, it is more declarative, and tracks field management instead of last applied state. Server-Side Apply Declarative configuration management by updating a resource's state without needing to delete and recreate it. Field management Users can specify which fields of a resource they want to update, without affecting the other fields. Managed fields Kubernetes stores metadata about who manages each field of an object in the managedFields field within metadata. Conflicts If multiple managers try to modify the same field, a conflict occurs. The applier can choose to overwrite, relinquish control, or share management. Merge strategy Server-Side Apply merges fields based on the actor who manages them. Procedure Add a MonitoringStack resource using the following configuration: Example MonitoringStack object apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: labels: coo: example name: sample-monitoring-stack namespace: coo-demo spec: logLevel: debug retention: 1d resourceSelector: matchLabels: app: demo A Prometheus resource named sample-monitoring-stack is generated in the coo-demo namespace. 
Retrieve the managed fields of the generated Prometheus resource by running the following command: USD oc -n coo-demo get Prometheus.monitoring.rhobs -oyaml --show-managed-fields Example output managedFields: - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:ownerReferences: k:{"uid":"81da0d9a-61aa-4df3-affc-71015bcbde5a"}: {} f:spec: f:additionalScrapeConfigs: {} f:affinity: f:podAntiAffinity: f:requiredDuringSchedulingIgnoredDuringExecution: {} f:alerting: f:alertmanagers: {} f:arbitraryFSAccessThroughSMs: {} f:logLevel: {} f:podMetadata: f:labels: f:app.kubernetes.io/component: {} f:app.kubernetes.io/part-of: {} f:podMonitorSelector: {} f:replicas: {} f:resources: f:limits: f:cpu: {} f:memory: {} f:requests: f:cpu: {} f:memory: {} f:retention: {} f:ruleSelector: {} f:rules: f:alert: {} f:securityContext: f:fsGroup: {} f:runAsNonRoot: {} f:runAsUser: {} f:serviceAccountName: {} f:serviceMonitorSelector: {} f:thanos: f:baseImage: {} f:resources: {} f:version: {} f:tsdb: {} manager: observability-operator operation: Apply - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:status: .: {} f:availableReplicas: {} f:conditions: .: {} k:{"type":"Available"}: .: {} f:lastTransitionTime: {} f:observedGeneration: {} f:status: {} f:type: {} k:{"type":"Reconciled"}: .: {} f:lastTransitionTime: {} f:observedGeneration: {} f:status: {} f:type: {} f:paused: {} f:replicas: {} f:shardStatuses: .: {} k:{"shardID":"0"}: .: {} f:availableReplicas: {} f:replicas: {} f:shardID: {} f:unavailableReplicas: {} f:updatedReplicas: {} f:unavailableReplicas: {} f:updatedReplicas: {} manager: PrometheusOperator operation: Update subresource: status Check the metadata.managedFields values, and observe that some fields in metadata and spec are managed by the MonitoringStack resource. Modify a field that is not controlled by the MonitoringStack resource: Change spec.enforcedSampleLimit , which is a field not set by the MonitoringStack resource. Create the file prom-spec-edited.yaml : prom-spec-edited.yaml apiVersion: monitoring.rhobs/v1 kind: Prometheus metadata: name: sample-monitoring-stack namespace: coo-demo spec: enforcedSampleLimit: 1000 Apply the YAML by running the following command: USD oc apply -f ./prom-spec-edited.yaml --server-side Note You must use the --server-side flag. 
Get the changed Prometheus object and note that there is one more section in managedFields which has spec.enforcedSampleLimit : USD oc get prometheus -n coo-demo Example output managedFields: 1 - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:spec: f:enforcedSampleLimit: {} 2 manager: kubectl operation: Apply 1 managedFields 2 spec.enforcedSampleLimit Modify a field that is managed by the MonitoringStack resource: Change spec.logLevel , which is a field managed by the MonitoringStack resource, using the following YAML configuration: # changing the logLevel from debug to info apiVersion: monitoring.rhobs/v1 kind: Prometheus metadata: name: sample-monitoring-stack namespace: coo-demo spec: logLevel: info 1 1 spec.logLevel has been added Apply the YAML by running the following command: USD oc apply -f ./prom-spec-edited.yaml --server-side Example output error: Apply failed with 1 conflict: conflict with "observability-operator": .spec.logLevel Please review the fields above--they currently have other managers. Here are the ways you can resolve this warning: * If you intend to manage all of these fields, please re-run the apply command with the `--force-conflicts` flag. * If you do not intend to manage all of the fields, please edit your manifest to remove references to the fields that should keep their current managers. * You may co-own fields by updating your manifest to match the existing value; in this case, you'll become the manager if the other manager(s) stop managing the field (remove it from their configuration). See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts Notice that the field spec.logLevel cannot be changed using Server-Side Apply, because it is already managed by observability-operator . Use the --force-conflicts flag to force the change. USD oc apply -f ./prom-spec-edited.yaml --server-side --force-conflicts Example output prometheus.monitoring.rhobs/sample-monitoring-stack serverside-applied With --force-conflicts flag, the field can be forced to change, but since the same field is also managed by the MonitoringStack resource, the Observability Operator detects the change, and reverts it back to the value set by the MonitoringStack resource. Note Some Prometheus fields generated by the MonitoringStack resource are influenced by the fields in the MonitoringStack spec stanza, for example, logLevel . These can be changed by changing the MonitoringStack spec . To change the logLevel in the Prometheus object, apply the following YAML to change the MonitoringStack resource: apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: name: sample-monitoring-stack labels: coo: example spec: logLevel: info To confirm that the change has taken place, query for the log level by running the following command: USD oc -n coo-demo get Prometheus.monitoring.rhobs -o=jsonpath='{.items[0].spec.logLevel}' Example output info Note If a new version of an Operator generates a field that was previously generated and controlled by an actor, the value set by the actor will be overridden. For example, you are managing a field enforcedSampleLimit which is not generated by the MonitoringStack resource. If the Observability Operator is upgraded, and the new version of the Operator generates a value for enforcedSampleLimit , this will override the value you have previously set.
The Prometheus object generated by the MonitoringStack resource may contain some fields which are not explicitly set by the monitoring stack. These fields appear because they have default values. Additional resources Kubernetes documentation for Server-Side Apply (SSA) | [
"apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: labels: coo: example name: sample-monitoring-stack namespace: coo-demo spec: logLevel: debug retention: 1d resourceSelector: matchLabels: app: demo",
"oc -n coo-demo get Prometheus.monitoring.rhobs -oyaml --show-managed-fields",
"managedFields: - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:ownerReferences: k:{\"uid\":\"81da0d9a-61aa-4df3-affc-71015bcbde5a\"}: {} f:spec: f:additionalScrapeConfigs: {} f:affinity: f:podAntiAffinity: f:requiredDuringSchedulingIgnoredDuringExecution: {} f:alerting: f:alertmanagers: {} f:arbitraryFSAccessThroughSMs: {} f:logLevel: {} f:podMetadata: f:labels: f:app.kubernetes.io/component: {} f:app.kubernetes.io/part-of: {} f:podMonitorSelector: {} f:replicas: {} f:resources: f:limits: f:cpu: {} f:memory: {} f:requests: f:cpu: {} f:memory: {} f:retention: {} f:ruleSelector: {} f:rules: f:alert: {} f:securityContext: f:fsGroup: {} f:runAsNonRoot: {} f:runAsUser: {} f:serviceAccountName: {} f:serviceMonitorSelector: {} f:thanos: f:baseImage: {} f:resources: {} f:version: {} f:tsdb: {} manager: observability-operator operation: Apply - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:status: .: {} f:availableReplicas: {} f:conditions: .: {} k:{\"type\":\"Available\"}: .: {} f:lastTransitionTime: {} f:observedGeneration: {} f:status: {} f:type: {} k:{\"type\":\"Reconciled\"}: .: {} f:lastTransitionTime: {} f:observedGeneration: {} f:status: {} f:type: {} f:paused: {} f:replicas: {} f:shardStatuses: .: {} k:{\"shardID\":\"0\"}: .: {} f:availableReplicas: {} f:replicas: {} f:shardID: {} f:unavailableReplicas: {} f:updatedReplicas: {} f:unavailableReplicas: {} f:updatedReplicas: {} manager: PrometheusOperator operation: Update subresource: status",
"apiVersion: monitoring.rhobs/v1 kind: Prometheus metadata: name: sample-monitoring-stack namespace: coo-demo spec: enforcedSampleLimit: 1000",
"oc apply -f ./prom-spec-edited.yaml --server-side",
"oc get prometheus -n coo-demo",
"managedFields: 1 - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:spec: f:enforcedSampleLimit: {} 2 manager: kubectl operation: Apply",
"changing the logLevel from debug to info apiVersion: monitoring.rhobs/v1 kind: Prometheus metadata: name: sample-monitoring-stack namespace: coo-demo spec: logLevel: info 1",
"oc apply -f ./prom-spec-edited.yaml --server-side",
"error: Apply failed with 1 conflict: conflict with \"observability-operator\": .spec.logLevel Please review the fields above--they currently have other managers. Here are the ways you can resolve this warning: * If you intend to manage all of these fields, please re-run the apply command with the `--force-conflicts` flag. * If you do not intend to manage all of the fields, please edit your manifest to remove references to the fields that should keep their current managers. * You may co-own fields by updating your manifest to match the existing value; in this case, you'll become the manager if the other manager(s) stop managing the field (remove it from their configuration). See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts",
"oc apply -f ./prom-spec-edited.yaml --server-side --force-conflicts",
"prometheus.monitoring.rhobs/sample-monitoring-stack serverside-applied",
"apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: name: sample-monitoring-stack labels: coo: example spec: logLevel: info",
"oc -n coo-demo get Prometheus.monitoring.rhobs -o=jsonpath='{.items[0].spec.logLevel}'",
"info"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/cluster_observability_operator/cluster-observability-operator-overview |
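The sample-monitoring-stack above selects workloads through resourceSelector with the label app: demo . As a sketch of how a workload opts in, the ServiceMonitor below carries that label so the stack's Prometheus instance scrapes it; the service selector, port name, and namespace are assumptions for illustration, and the monitoring.rhobs API group is used because that is the group of the Operator-managed resources shown above:

apiVersion: monitoring.rhobs/v1
kind: ServiceMonitor
metadata:
  name: demo-app
  namespace: coo-demo
  labels:
    app: demo          # matched by the MonitoringStack resourceSelector
spec:
  selector:
    matchLabels:
      app: demo        # selects the Service exposing the metrics endpoint (assumed)
  endpoints:
    - port: metrics    # named port on the demo Service (assumed)
      interval: 30s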
Chapter 34. Case Management Model and Notation | Chapter 34. Case Management Model and Notation You can use Business Central to import, view, and modify the content of Case Management Model and Notation (CMMN) files. When authoring a project, you can import your case management model and then select it from the asset list to view or modify it in a standard XML editor. The following CMMN constructs are currently available: Tasks (human task, process task, decision task, case task) Discretionary tasks (same as above) Stages Milestones Case file items Sentries (entry and exit) The following tasks are not supported: Required Repeat Manual activation Sentries for individual tasks are limited to entry criteria while entry and exit criteria are supported for stages and milestones. Decision tasks map by default to a DMN decision. Event listeners are not supported. Red Hat Process Automation Manager does not provide any modeling capabilities for CMMN and focuses solely on the execution of the model. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/case-management-cmmn-con-case-management-design |
Chapter 165. StrimziPodSetStatus schema reference | Chapter 165. StrimziPodSetStatus schema reference Used in: StrimziPodSet Property Property type Description conditions Condition array List of status conditions. observedGeneration integer The generation of the CRD that was last reconciled by the operator. pods integer Number of pods managed by this StrimziPodSet resource. readyPods integer Number of pods managed by this StrimziPodSet resource that are ready. currentPods integer Number of pods managed by this StrimziPodSet resource that have the current revision. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-strimzipodsetstatus-reference |
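Because these are plain status fields, they can be read straight from the resource. The command below is a sketch of how the readiness counters might be checked; the StrimziPodSet name and namespace are assumptions for illustration:

oc get strimzipodset my-cluster-kafka -n kafka \
  -o jsonpath='{.status.readyPods}/{.status.pods} ready, {.status.currentPods} at current revision{"\n"}'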
9.6. Utilization and Placement Strategy | Pacemaker decides where to place a resource according to the resource allocation scores on every node. The resource will be allocated to the node where the resource has the highest score. This allocation score is derived from a combination of factors, including resource constraints, resource-stickiness settings, prior failure history of a resource on each node, and utilization of each node. If the resource allocation scores on all the nodes are equal, by the default placement strategy Pacemaker will choose a node with the least number of allocated resources for balancing the load. If the number of resources on each node is equal, the first eligible node listed in the CIB will be chosen to run the resource. Often, however, different resources use significantly different proportions of a node's capacities (such as memory or I/O). You cannot always balance the load ideally by taking into account only the number of resources allocated to a node. In addition, if resources are placed such that their combined requirements exceed the provided capacity, they may fail to start completely or they may run with degraded performance. To take these factors into account, Pacemaker allows you to configure the following components: the capacity a particular node provides the capacity a particular resource requires an overall strategy for placement of resources The following sections describe how to configure these components. 9.6.1. Utilization Attributes To configure the capacity that a node provides or a resource requires, you can use utilization attributes for nodes and resources. You do this by setting a utilization variable for a resource and assigning a value to that variable to indicate what the resource requires, and then setting that same utilization variable for a node and assigning a value to that variable to indicate what that node provides. You can name utilization attributes according to your preferences and define as many name and value pairs as your configuration needs. The values of utilization attributes must be integers. As of Red Hat Enterprise Linux 7.3, you can set utilization attributes with the pcs command. The following example configures a utilization attribute of CPU capacity for two nodes, naming the attribute cpu . It also configures a utilization attribute of RAM capacity, naming the attribute memory . In this example: Node 1 is defined as providing a CPU capacity of two and a RAM capacity of 2048 Node 2 is defined as providing a CPU capacity of four and a RAM capacity of 2048 The following example specifies the same utilization attributes that three different resources require. In this example: resource dummy-small requires a CPU capacity of 1 and a RAM capacity of 1024 resource dummy-medium requires a CPU capacity of 2 and a RAM capacity of 2048 resource dummy-large requires a CPU capacity of 3 and a RAM capacity of 3072 A node is considered eligible for a resource if it has sufficient free capacity to satisfy the resource's requirements, as defined by the utilization attributes. 9.6.2. Placement Strategy After you have configured the capacities your nodes provide and the capacities your resources require, you need to set the placement-strategy cluster property, otherwise the capacity configurations have no effect. For information on setting cluster properties, see Chapter 12, Pacemaker Cluster Properties . 
Four values are available for the placement-strategy cluster property: default - Utilization values are not taken into account at all. Resources are allocated according to allocation scores. If scores are equal, resources are evenly distributed across nodes. utilization - Utilization values are taken into account only when deciding whether a node is considered eligible (that is, whether it has sufficient free capacity to satisfy the resource's requirements). Load-balancing is still done based on the number of resources allocated to a node. balanced - Utilization values are taken into account when deciding whether a node is eligible to serve a resource and when load-balancing, so an attempt is made to spread the resources in a way that optimizes resource performance. minimal - Utilization values are taken into account only when deciding whether a node is eligible to serve a resource. For load-balancing, an attempt is made to concentrate the resources on as few nodes as possible, thereby enabling possible power savings on the remaining nodes. The following example command sets the value of placement-strategy to balanced . After running this command, Pacemaker will ensure the load from your resources will be distributed evenly throughout the cluster, without the need for complicated sets of colocation constraints. 9.6.3. Resource Allocation The following subsections summarize how Pacemaker allocates resources. 9.6.3.1. Node Preference Pacemaker determines which node is preferred when allocating resources according to the following strategy. The node with the highest node weight gets consumed first. Node weight is a score maintained by the cluster to represent node health. If multiple nodes have the same node weight: If the placement-strategy cluster property is default or utilization : The node that has the least number of allocated resources gets consumed first. If the numbers of allocated resources are equal, the first eligible node listed in the CIB gets consumed first. If the placement-strategy cluster property is balanced : The node that has the most free capacity gets consumed first. If the free capacities of the nodes are equal, the node that has the least number of allocated resources gets consumed first. If the free capacities of the nodes are equal and the number of allocated resources is equal, the first eligible node listed in the CIB gets consumed first. If the placement-strategy cluster property is minimal , the first eligible node listed in the CIB gets consumed first. 9.6.3.2. Node Capacity Pacemaker determines which node has the most free capacity according to the following strategy. If only one type of utilization attribute has been defined, free capacity is a simple numeric comparison. If multiple types of utilization attributes have been defined, then the node that is numerically highest in the most attribute types has the most free capacity. For example: If NodeA has more free CPUs, and NodeB has more free memory, then their free capacities are equal. If NodeA has more free CPUs, while NodeB has more free memory and storage, then NodeB has more free capacity. 9.6.3.3. Resource Allocation Preference Pacemaker determines which resource is allocated first according to the following strategy. The resource that has the highest priority gets allocated first. For information on setting priority for a resource, see Table 6.3, "Resource Meta Options" . 
If the priorities of the resources are equal, the resource that has the highest score on the node where it is running gets allocated first, to prevent resource shuffling. If the resource scores on the nodes where the resources are running are equal or the resources are not running, the resource that has the highest score on the preferred node gets allocated first. If the resource scores on the preferred node are equal in this case, the first runnable resource listed in the CIB gets allocated first. 9.6.4. Resource Placement Strategy Guidelines To ensure that Pacemaker's placement strategy for resources works most effectively, you should take the following considerations into account when configuring your system. Make sure that you have sufficient physical capacity. If the physical capacity of your nodes is being used to near maximum under normal conditions, then problems could occur during failover. Even without the utilization feature, you may start to experience timeouts and secondary failures. Build some buffer into the capabilities you configure for the nodes. Advertise slightly more node resources than you physically have, on the assumption that a Pacemaker resource will not use 100% of the configured amount of CPU, memory, and so forth all the time. This practice is sometimes called overcommit. Specify resource priorities. If the cluster is going to sacrifice services, it should be the ones you care about least. Ensure that resource priorities are properly set so that your most important resources are scheduled first. For information on setting resource priorities, see Table 6.3, "Resource Meta Options" . 9.6.5. The NodeUtilization Resource Agent (Red Hat Enterprise Linux 7.4 and later) Red Hat Enterprise Linux 7.4 supports the NodeUtilization resource agent. The NodeUtilization agent can detect the system parameters of available CPU, host memory availability, and hypervisor memory availability and add these parameters into the CIB. You can run the agent as a clone resource to have it automatically populate these parameters on each node. For information on the NodeUtilization resource agent and the resource options for this agent, run the pcs resource describe NodeUtilization command. | [
"pcs node utilization node1 cpu=2 memory=2048 pcs node utilization node2 cpu=4 memory=2048",
"pcs resource utilization dummy-small cpu=1 memory=1024 pcs resource utilization dummy-medium cpu=2 memory=2048 pcs resource utilization dummy-large cpu=3 memory=3072",
"pcs property set placement-strategy=balanced"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-utilization-haar |
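After configuring the utilization attributes and placement strategy shown in this section, the settings and the resulting allocation scores can be reviewed from the command line. The commands below are a sketch to run on a cluster node; exact option names may vary slightly between pcs versions:

# Review the configured capacities and the active placement strategy
pcs node utilization
pcs resource utilization
pcs property show placement-strategy

# Show the allocation scores Pacemaker computes for the live cluster
crm_simulate -sL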
Appendix E. Command Line Tools Summary | Appendix E. Command Line Tools Summary Table E.1, "Command Line Tool Summary" summarizes preferred command-line tools for configuring and managing the High Availability Add-On. For more information about commands and variables, see the man page for each command-line tool. Table E.1. Command Line Tool Summary Command Line Tool Used With Purpose ccs_config_dump - Cluster Configuration Dump Tool Cluster Infrastructure ccs_config_dump generates XML output of running configuration. The running configuration is, sometimes, different from the stored configuration on file because some subsystems store or set some default information into the configuration. Those values are generally not present on the on-disk version of the configuration but are required at runtime for the cluster to work properly. For more information about this tool, see the ccs_config_dump(8) man page. ccs_config_validate - Cluster Configuration Validation Tool Cluster Infrastructure ccs_config_validate validates cluster.conf against the schema, cluster.rng (located in /usr/share/cluster/cluster.rng on each node). For more information about this tool, see the ccs_config_validate(8) man page. clustat - Cluster Status Utility High-availability Service Management Components The clustat command displays the status of the cluster. It shows membership information, quorum view, and the state of all configured user services. For more information about this tool, see the clustat(8) man page. clusvcadm - Cluster User Service Administration Utility High-availability Service Management Components The clusvcadm command allows you to enable, disable, relocate, and restart high-availability services in a cluster. For more information about this tool, see the clusvcadm(8) man page. cman_tool - Cluster Management Tool Cluster Infrastructure cman_tool is a program that manages the CMAN cluster manager. It provides the capability to join a cluster, leave a cluster, kill a node, or change the expected quorum votes of a node in a cluster. For more information about this tool, see the cman_tool(8) man page. fence_tool - Fence Tool Cluster Infrastructure fence_tool is a program used to join and leave the fence domain. For more information about this tool, see the fence_tool(8) man page. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/ap-cli-tools-CA |
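A few representative invocations tie the table together; the service and node names below are placeholders, so substitute your own:

# Validate the cluster configuration against the schema
ccs_config_validate

# Check membership, quorum, and service status
cman_tool status
clustat

# Relocate a high-availability service to another node, then disable it
clusvcadm -r service:web -m node2.example.com
clusvcadm -d service:web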
Chapter 1. Getting started with RPM packaging | Chapter 1. Getting started with RPM packaging The following section introduces the concept of RPM packaging and its main advantages. 1.1. Introduction to RPM packaging The RPM Package Manager (RPM) is a package management system that runs on RHEL, CentOS, and Fedora. You can use RPM to distribute, manage, and update software that you create for any of the operating systems mentioned above. 1.2. RPM advantages The RPM package management system brings several advantages over distribution of software in conventional archive files. RPM enables you to: Install, reinstall, remove, upgrade and verify packages with standard package management tools, such as Yum or PackageKit. Use a database of installed packages to query and verify packages. Use metadata to describe packages, their installation instructions, and other package parameters. Package software sources, patches and complete build instructions into source and binary packages. Add packages to Yum repositories. Digitally sign your packages by using GNU Privacy Guard (GPG) signing keys. 1.3. Creating your first rpm package Creating an RPM package can be complicated. Here is a complete, working RPM Spec file with several things skipped and simplified. Name: hello-world Version: 1 Release: 1 Summary: Most simple RPM package License: FIXME %description This is my first RPM package, which does nothing. %prep # we have no source, so nothing here %build cat > hello-world.sh <<EOF #!/usr/bin/bash echo Hello world EOF %install mkdir -p %{buildroot}/usr/bin/ install -m 755 hello-world.sh %{buildroot}/usr/bin/hello-world.sh %files /usr/bin/hello-world.sh %changelog # let's skip this for now Save this file as hello-world.spec . Now use these commands: USD rpmdev-setuptree USD rpmbuild -ba hello-world.spec The command rpmdev-setuptree creates several working directories. As those directories are stored permanently in USDHOME, this command does not need to be used again. The command rpmbuild creates the actual rpm package. The output of this command can be similar to: ... [SNIP] Wrote: /home/<username>/rpmbuild/SRPMS/hello-world-1-1.src.rpm Wrote: /home/<username>/rpmbuild/RPMS/x86_64/hello-world-1-1.x86_64.rpm Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.wgaJzv + umask 022 + cd /home/<username>/rpmbuild/BUILD + /usr/bin/rm -rf /home/<username>/rpmbuild/BUILDROOT/hello-world-1-1.x86_64 + exit 0 The file /home/<username>/rpmbuild/RPMS/x86_64/hello-world-1-1.x86_64.rpm is your first RPM package. It can be installed in the system and tested. | [
"Name: hello-world Version: 1 Release: 1 Summary: Most simple RPM package License: FIXME %description This is my first RPM package, which does nothing. %prep we have no source, so nothing here %build cat > hello-world.sh <<EOF #!/usr/bin/bash echo Hello world EOF %install mkdir -p %{buildroot}/usr/bin/ install -m 755 hello-world.sh %{buildroot}/usr/bin/hello-world.sh %files /usr/bin/hello-world.sh %changelog let's skip this for now",
"rpmdev-setuptree rpmbuild -ba hello-world.spec",
"... [SNIP] Wrote: /home/<username>/rpmbuild/SRPMS/hello-world-1-1.src.rpm Wrote: /home/<username>/rpmbuild/RPMS/x86_64/hello-world-1-1.x86_64.rpm Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.wgaJzv + umask 022 + cd /home/<username>/rpmbuild/BUILD + /usr/bin/rm -rf /home/<username>/rpmbuild/BUILDROOT/hello-world-1-1.x86_64 + exit 0"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/rpm_packaging_guide/getting-started-with-rpm-packaging |
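Once the package is built, it can be installed and exercised like any other RPM. The commands below are a sketch of a quick test on the build host; the exact package path is the one printed by rpmbuild and may differ on your system:

# Install the freshly built package
sudo yum localinstall ~/rpmbuild/RPMS/x86_64/hello-world-1-1.x86_64.rpm

# Run the packaged script and inspect the package metadata and file list
hello-world.sh
rpm -qi hello-world
rpm -ql hello-world

# Remove the package when finished
sudo yum remove hello-world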
Chapter 58. Regex Router Action | Chapter 58. Regex Router Action Update the destination using the configured regular expression and replacement string 58.1. Configuration Options The following table summarizes the configuration options available for the regex-router-action Kamelet: Property Name Description Type Default Example regex * Regex Regular Expression for destination string replacement * Replacement Replacement when matching string Note Fields marked with an asterisk (*) are mandatory. 58.2. Dependencies At runtime, the regex-router-action Kamelet relies upon the presence of the following dependencies: github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT camel:kamelet camel:core 58.3. Usage This section describes how you can use the regex-router-action . 58.3.1. Knative Action You can use the regex-router-action Kamelet as an intermediate step in a Knative binding. regex-router-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: regex-router-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: regex-router-action properties: regex: "The Regex" replacement: "The Replacement" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 58.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you are connected to. 58.3.1.2. Procedure for using the cluster CLI Save the regex-router-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f regex-router-action-binding.yaml 58.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step regex-router-action -p "step-0.regex=The Regex" -p "step-0.replacement=The Replacement" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 58.3.2. Kafka Action You can use the regex-router-action Kamelet as an intermediate step in a Kafka binding. regex-router-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: regex-router-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: regex-router-action properties: regex: "The Regex" replacement: "The Replacement" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 58.3.2.1. Prerequisites Ensure that you have installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you are connected to. 58.3.2.2. Procedure for using the cluster CLI Save the regex-router-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f regex-router-action-binding.yaml 58.3.2.3. 
Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step regex-router-action -p "step-0.regex=The Regex" -p "step-0.replacement=The Replacement" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 58.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/regex-router-action.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: regex-router-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: regex-router-action properties: regex: \"The Regex\" replacement: \"The Replacement\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f regex-router-action-binding.yaml",
"kamel bind timer-source?message=Hello --step regex-router-action -p \"step-0.regex=The Regex\" -p \"step-0.replacement=The Replacement\" channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: regex-router-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: regex-router-action properties: regex: \"The Regex\" replacement: \"The Replacement\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f regex-router-action-binding.yaml",
"kamel bind timer-source?message=Hello --step regex-router-action -p \"step-0.regex=The Regex\" -p \"step-0.replacement=The Replacement\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/regex-router-action |
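The placeholder values "The Regex" and "The Replacement" above stand in for a real pattern and substitution. As an illustrative sketch only, the step configuration below would rewrite any destination containing topic-legacy so that it points at topic-current instead; both topic names are invented for the example:

- ref:
    kind: Kamelet
    apiVersion: camel.apache.org/v1alpha1
    name: regex-router-action
  properties:
    regex: "topic-legacy"
    replacement: "topic-current"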
Chapter 11. Advanced migration options | Chapter 11. Advanced migration options You can automate your migrations and modify the MigPlan and MigrationController custom resources in order to perform large-scale migrations and to improve performance. 11.1. Terminology Table 11.1. MTC terminology Term Definition Source cluster Cluster from which the applications are migrated. Destination cluster [1] Cluster to which the applications are migrated. Replication repository Object storage used for copying images, volumes, and Kubernetes objects during indirect migration or for Kubernetes objects during direct volume migration or direct image migration. The replication repository must be accessible to all clusters. Host cluster Cluster on which the migration-controller pod and the web console are running. The host cluster is usually the destination cluster but this is not required. The host cluster does not require an exposed registry route for direct image migration. Remote cluster A remote cluster is usually the source cluster but this is not required. A remote cluster requires a Secret custom resource that contains the migration-controller service account token. A remote cluster requires an exposed secure registry route for direct image migration. Indirect migration Images, volumes, and Kubernetes objects are copied from the source cluster to the replication repository and then from the replication repository to the destination cluster. Direct volume migration Persistent volumes are copied directly from the source cluster to the destination cluster. Direct image migration Images are copied directly from the source cluster to the destination cluster. Stage migration Data is copied to the destination cluster without stopping the application. Running a stage migration multiple times reduces the duration of the cutover migration. Cutover migration The application is stopped on the source cluster and its resources are migrated to the destination cluster. State migration Application state is migrated by copying specific persistent volume claims to the destination cluster. Rollback migration Rollback migration rolls back a completed migration. 1 Called the target cluster in the MTC web console. 11.2. Migrating an application from on-premises to a cloud-based cluster You can migrate from a source cluster that is behind a firewall to a cloud-based destination cluster by establishing a network tunnel between the two clusters. The crane tunnel-api command establishes such a tunnel by creating a VPN tunnel on the source cluster and then connecting to a VPN server running on the destination cluster. The VPN server is exposed to the client using a load balancer address on the destination cluster. A service created on the destination cluster exposes the source cluster's API to MTC, which is running on the destination cluster. Prerequisites The system that creates the VPN tunnel must have access and be logged in to both clusters. It must be possible to create a load balancer on the destination cluster. Refer to your cloud provider to ensure this is possible. Have names prepared to assign to namespaces, on both the source cluster and the destination cluster, in which to run the VPN tunnel. These namespaces should not be created in advance. For information about namespace rules, see https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names. When connecting multiple firewall-protected source clusters to the cloud cluster, each source cluster requires its own namespace. 
OpenVPN server is installed on the destination cluster. OpenVPN client is installed on the source cluster. When configuring the source cluster in MTC, the API URL takes the form of https://proxied-cluster.<namespace>.svc.cluster.local:8443 . If you use the API, see Create a MigCluster CR manifest for each remote cluster . If you use the MTC web console, see Migrating your applications using the MTC web console . The MTC web console and Migration Controller must be installed on the target cluster. Procedure Install the crane utility: USD podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-controller-rhel8:v1.8):/crane ./ Log in remotely to a node on the source cluster and a node on the destination cluster. Obtain the cluster context for both clusters after logging in: USD oc config view Establish a tunnel by entering the following command on the command system: USD crane tunnel-api [--namespace <namespace>] \ --destination-context <destination-cluster> \ --source-context <source-cluster> If you don't specify a namespace, the command uses the default value openvpn . For example: USD crane tunnel-api --namespace my_tunnel \ --destination-context openshift-migration/c131-e-us-east-containers-cloud-ibm-com/admin \ --source-context default/192-168-122-171-nip-io:8443/admin Tip See all available parameters for the crane tunnel-api command by entering crane tunnel-api --help . The command generates TSL/SSL Certificates. This process might take several minutes. A message appears when the process completes. The OpenVPN server starts on the destination cluster and the OpenVPN client starts on the source cluster. After a few minutes, the load balancer resolves on the source node. Tip You can view the log for the OpenVPN pods to check the status of this process by entering the following commands with root privileges: # oc get po -n <namespace> Example output NAME READY STATUS RESTARTS AGE <pod_name> 2/2 Running 0 44s # oc logs -f -n <namespace> <pod_name> -c openvpn When the address of the load balancer is resolved, the message Initialization Sequence Completed appears at the end of the log. On the OpenVPN server, which is on a destination control node, verify that the openvpn service and the proxied-cluster service are running: USD oc get service -n <namespace> On the source node, get the service account (SA) token for the migration controller: # oc sa get-token -n openshift-migration migration-controller Open the MTC web console and add the source cluster, using the following values: Cluster name : The source cluster name. URL : proxied-cluster.<namespace>.svc.cluster.local:8443 . If you did not define a value for <namespace> , use openvpn . Service account token : The token of the migration controller service account. Exposed route host to image registry : proxied-cluster.<namespace>.svc.cluster.local:5000 . If you did not define a value for <namespace> , use openvpn . After MTC has successfully validated the connection, you can proceed to create and run a migration plan. The namespace for the source cluster should appear in the list of namespaces. Additional resources For information about creating a MigCluster CR manifest for each remote cluster, see Migrating an application by using the MTC API . For information about adding a cluster using the web console, see Migrating your applications by using the MTC web console 11.3. 
Migrating applications by using the command line You can migrate applications with the MTC API by using the command line interface (CLI) in order to automate the migration. 11.3.1. Migration prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Direct image migration You must ensure that the secure OpenShift image registry of the source cluster is exposed. You must create a route to the exposed registry. Direct volume migration If your clusters use proxies, you must configure a Stunnel TCP proxy. Internal images If your application uses internal images from the openshift namespace, you must ensure that the required versions of the images are present on the target cluster. You can manually update an image stream tag in order to use a deprecated OpenShift Container Platform 3 image on an OpenShift Container Platform 4.15 cluster. Clusters The source cluster must be upgraded to the latest MTC z-stream release. The MTC version must be the same on all clusters. Network The clusters have unrestricted network access to each other and to the replication repository. If you copy the persistent volumes with move , the clusters must have unrestricted network access to the remote volumes. You must enable the following ports on an OpenShift Container Platform 3 cluster: 8443 (API server) 443 (routes) 53 (DNS) You must enable the following ports on an OpenShift Container Platform 4 cluster: 6443 (API server) 443 (routes) 53 (DNS) You must enable port 443 on the replication repository if you are using TLS. Persistent volumes (PVs) The PVs must be valid. The PVs must be bound to persistent volume claims. If you use snapshots to copy the PVs, the following additional prerequisites apply: The cloud provider must support snapshots. The PVs must have the same cloud provider. The PVs must be located in the same geographic region. The PVs must have the same storage class. 11.3.2. Creating a registry route for direct image migration For direct image migration, you must create a route to the exposed OpenShift image registry on all remote clusters. Prerequisites The OpenShift image registry must be exposed to external traffic on all remote clusters. The OpenShift Container Platform 4 registry is exposed by default. The OpenShift Container Platform 3 registry must be exposed manually . Procedure To create a route to an OpenShift Container Platform 3 registry, run the following command: USD oc create route passthrough --service=docker-registry -n default To create a route to an OpenShift Container Platform 4 registry, run the following command: USD oc create route passthrough --service=image-registry -n openshift-image-registry 11.3.3. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.15, the MTC inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 11.3.3.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. 
If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 11.3.3.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 11.3.3.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 11.3.3.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 11.3.3.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 11.3.3.2.1. NetworkPolicy configuration 11.3.3.2.1.1. 
Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 11.3.3.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 11.3.3.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 11.3.3.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 11.3.3.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 11.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 11.3.3.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... 
spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration 11.3.4. Migrating an application by using the MTC API You can migrate an application from the command line by using the Migration Toolkit for Containers (MTC) API. Procedure Create a MigCluster CR manifest for the host cluster: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF Create a Secret object manifest for each remote cluster: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF 1 Specify the base64-encoded migration-controller service account (SA) token of the remote cluster. You can obtain the token by running the following command: USD oc sa get-token migration-controller -n openshift-migration | base64 -w 0 Create a MigCluster CR manifest for each remote cluster: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF 1 Specify the Cluster CR of the remote cluster. 2 Optional: For direct image migration, specify the exposed registry route. 3 SSL verification is enabled if false . CA certificates are not required or checked if true . 4 Specify the Secret object of the remote cluster. 5 Specify the URL of the remote cluster. Verify that all clusters are in a Ready state: USD oc describe MigCluster <cluster> Create a Secret object manifest for the replication repository: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF 1 Specify the key ID in base64 format. 2 Specify the secret key in base64 format. AWS credentials are base64-encoded by default. For other storage providers, you must encode your credentials by running the following command with each key: USD echo -n "<key>" | base64 -w 0 1 1 Specify the key ID or the secret key. Both keys must be base64-encoded. 
Create a MigStorage CR manifest for the replication repository: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF 1 Specify the bucket name. 2 Specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct. 3 Specify the storage provider. 4 Optional: If you are copying data by using snapshots, specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct. 5 Optional: If you are copying data by using snapshots, specify the storage provider. Verify that the MigStorage CR is in a Ready state: USD oc describe migstorage <migstorage> Create a MigPlan CR manifest: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF 1 Direct image migration is enabled if false . 2 Direct volume migration is enabled if false . 3 Specify the name of the MigStorage CR instance. 4 Specify one or more source namespaces. By default, the destination namespace has the same name. 5 Specify a destination namespace if it is different from the source namespace. 6 Specify the name of the source cluster MigCluster instance. Verify that the MigPlan instance is in a Ready state: USD oc describe migplan <migplan> -n openshift-migration Create a MigMigration CR manifest to start the migration defined in the MigPlan instance: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF 1 Specify the MigPlan CR name. 2 The pods on the source cluster are stopped before migration if true . 3 A stage migration, which copies most of the data without stopping the application, is performed if true . 4 A completed migration is rolled back if true . Verify the migration by watching the MigMigration CR progress: USD oc watch migmigration <migmigration> -n openshift-migration The output resembles the following: Example output Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration ... 
Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47 11.3.5. State migration You can perform repeatable, state-only migrations by using Migration Toolkit for Containers (MTC) to migrate persistent volume claims (PVCs) that constitute an application's state. You migrate specified PVCs by excluding other PVCs from the migration plan. You can map the PVCs to ensure that the source and the target PVCs are synchronized. Persistent volume (PV) data is copied to the target cluster. The PV references are not moved, and the application pods continue to run on the source cluster. State migration is specifically designed to be used in conjunction with external CD mechanisms, such as OpenShift Gitops. You can migrate application manifests using GitOps while migrating the state using MTC. If you have a CI/CD pipeline, you can migrate stateless components by deploying them on the target cluster. Then you can migrate stateful components by using MTC. You can perform a state migration between clusters or within the same cluster. Important State migration migrates only the components that constitute an application's state. If you want to migrate an entire namespace, use stage or cutover migration. Prerequisites The state of the application on the source cluster is persisted in PersistentVolumes provisioned through PersistentVolumeClaims . The manifests of the application are available in a central repository that is accessible from both the source and the target clusters. Procedure Migrate persistent volume data from the source to the target cluster. You can perform this step as many times as needed. The source application continues running. 
Quiesce the source application. You can do this by setting the replicas of workload resources to 0 , either directly on the source cluster or by updating the manifests in GitHub and re-syncing the Argo CD application. Clone application manifests to the target cluster. You can use Argo CD to clone the application manifests to the target cluster. Migrate the remaining volume data from the source to the target cluster. Migrate any new data created by the application during the state migration process by performing a final data migration. If the cloned application is in a quiesced state, unquiesce it. Switch the DNS record to the target cluster to re-direct user traffic to the migrated application. Note MTC 1.6 cannot quiesce applications automatically when performing state migration. It can only migrate PV data. Therefore, you must use your CD mechanisms for quiescing or unquiescing applications. MTC 1.7 introduces explicit Stage and Cutover flows. You can use staging to perform initial data transfers as many times as needed. Then you can perform a cutover, in which the source applications are quiesced automatically. Additional resources See Excluding PVCs from migration to select PVCs for state migration. See Mapping PVCs to migrate source PV data to provisioned PVCs on the destination cluster. See Migrating Kubernetes objects to migrate the Kubernetes objects that constitute an application's state. 11.4. Migration hooks You can add up to four migration hooks to a single migration plan, with each hook running at a different phase of the migration. Migration hooks perform tasks such as customizing application quiescence, manually migrating unsupported data types, and updating applications after migration. A migration hook runs on a source or a target cluster at one of the following migration steps: PreBackup : Before resources are backed up on the source cluster. PostBackup : After resources are backed up on the source cluster. PreRestore : Before resources are restored on the target cluster. PostRestore : After resources are restored on the target cluster. You can create a hook by creating an Ansible playbook that runs with the default Ansible image or with a custom hook container. Ansible playbook The Ansible playbook is mounted on a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan custom resource. The job continues to run until it reaches the default limit of 6 retries or a successful completion. This continues even if the initial pod is evicted or killed. The default Ansible runtime image is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel7:1.8 . This image is based on the Ansible Runner image and includes python-openshift for Ansible Kubernetes resources and an updated oc binary. Custom hook container You can use a custom hook container instead of the default Ansible image. 11.4.1. Writing an Ansible playbook for a migration hook You can write an Ansible playbook to use as a migration hook. The hook is added to a migration plan by using the MTC web console or by specifying values for the spec.hooks parameters in the MigPlan custom resource (CR) manifest. The Ansible playbook is mounted onto a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan CR. The hook container uses a specified service account token so that the tasks do not require authentication before they run in the cluster. 11.4.1.1. 
Ansible modules You can use the Ansible shell module to run oc commands. Example shell module - hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces You can use kubernetes.core modules, such as k8s_info , to interact with Kubernetes resources. Example k8s_facts module - hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: "{{ lookup( 'env', 'HOSTNAME') }}" register: pods - name: Print pod name debug: msg: "{{ pods.resources[0].metadata.name }}" You can use the fail module to produce a non-zero exit status in cases where a non-zero exit status would not normally be produced, ensuring that the success or failure of a hook is detected. Hooks run as jobs and the success or failure status of a hook is based on the exit status of the job container. Example fail module - hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: "fail" fail: msg: "Cause a failure" when: do_fail 11.4.1.2. Environment variables The MigPlan CR name and migration namespaces are passed as environment variables to the hook container. These variables are accessed by using the lookup plugin. Example environment variables - hosts: localhost gather_facts: false tasks: - set_fact: namespaces: "{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}" - debug: msg: "{{ item }}" with_items: "{{ namespaces }}" - debug: msg: "{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}" 11.5. Migration plan options You can exclude, edit, and map components in the MigPlan custom resource (CR). 11.5.1. Excluding resources You can exclude resources, for example, image streams, persistent volumes (PVs), or subscriptions, from a Migration Toolkit for Containers (MTC) migration plan to reduce the resource load for migration or to migrate images or PVs with a different tool. By default, the MTC excludes service catalog resources and Operator Lifecycle Manager (OLM) resources from migration. These resources are parts of the service catalog API group and the OLM API group, neither of which is supported for migration at this time. Procedure Edit the MigrationController custom resource manifest: USD oc edit migrationcontroller <migration_controller> -n openshift-migration Update the spec section by adding parameters to exclude specific resources. For those resources that do not have their own exclusion parameters, add the additional_excluded_resources parameter: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2 ... 1 Add disable_image_migration: true to exclude image streams from the migration. imagestreams is added to the excluded_resources list in main.yml when the MigrationController pod restarts. 2 Add disable_pv_migration: true to exclude PVs from the migration plan. persistentvolumes and persistentvolumeclaims are added to the excluded_resources list in main.yml when the MigrationController pod restarts. Disabling PV migration also disables PV discovery when you create the migration plan. 3 You can add OpenShift Container Platform resources that you want to exclude to the additional_excluded_resources list. Wait two minutes for the MigrationController pod to restart so that the changes are applied. 
Verify that the resource is excluded: USD oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1 The output contains the excluded resources: Example output name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims 11.5.2. Mapping namespaces If you map namespaces in the MigPlan custom resource (CR), you must ensure that the namespaces are not duplicated on the source or the destination clusters because the UID and GID ranges of the namespaces are copied during migration. Two source namespaces mapped to the same destination namespace spec: namespaces: - namespace_2 - namespace_1:namespace_2 If you want the source namespace to be mapped to a namespace of the same name, you do not need to create a mapping. By default, a source namespace and a target namespace have the same name. Incorrect namespace mapping spec: namespaces: - namespace_1:namespace_1 Correct namespace reference spec: namespaces: - namespace_1 11.5.3. Excluding persistent volume claims You select persistent volume claims (PVCs) for state migration by excluding the PVCs that you do not want to migrate. You exclude PVCs by setting the spec.persistentVolumes.pvc.selection.action parameter of the MigPlan custom resource (CR) after the persistent volumes (PVs) have been discovered. Prerequisites MigPlan CR is in a Ready state. Procedure Add the spec.persistentVolumes.pvc.selection.action parameter to the MigPlan CR and set it to skip : apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: ... persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: ... selection: action: skip 11.5.4. Mapping persistent volume claims You can migrate persistent volume (PV) data from the source cluster to persistent volume claims (PVCs) that are already provisioned in the destination cluster in the MigPlan CR by mapping the PVCs. This mapping ensures that the destination PVCs of migrated applications are synchronized with the source PVCs. You map PVCs by updating the spec.persistentVolumes.pvc.name parameter in the MigPlan custom resource (CR) after the PVs have been discovered. Prerequisites MigPlan CR is in a Ready state. Procedure Update the spec.persistentVolumes.pvc.name parameter in the MigPlan CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: ... persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1 1 Specify the PVC on the source cluster and the PVC on the destination cluster. If the destination PVC does not exist, it will be created. You can use this mapping to change the PVC name during migration. 11.5.5. Editing persistent volume attributes After you create a MigPlan custom resource (CR), the MigrationController CR discovers the persistent volumes (PVs). The spec.persistentVolumes block and the status.destStorageClasses block are added to the MigPlan CR. You can edit the values in the spec.persistentVolumes.selection block. If you change values outside the spec.persistentVolumes.selection block, the values are overwritten when the MigPlan CR is reconciled by the MigrationController CR. 
Note The default value for the spec.persistentVolumes.selection.storageClass parameter is determined by the following logic: If the source cluster PV is Gluster or NFS, the default is either cephfs , for accessMode: ReadWriteMany , or cephrbd , for accessMode: ReadWriteOnce . If the PV is neither Gluster nor NFS or if cephfs or cephrbd are not available, the default is a storage class for the same provisioner. If a storage class for the same provisioner is not available, the default is the default storage class of the destination cluster. You can change the storageClass value to the value of any name parameter in the status.destStorageClasses block of the MigPlan CR. If the storageClass value is empty, the PV will have no storage class after migration. This option is appropriate if, for example, you want to move the PV to an NFS volume on the destination cluster. Prerequisites MigPlan CR is in a Ready state. Procedure Edit the spec.persistentVolumes.selection values in the MigPlan CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs 1 Allowed values are move , copy , and skip . If only one action is supported, the default value is the supported action. If multiple actions are supported, the default value is copy . 2 Allowed values are snapshot and filesystem . Default value is filesystem . 3 The verify parameter is displayed if you select the verification option for file system copy in the MTC web console. You can set it to false . 4 You can change the default value to the value of any name parameter in the status.destStorageClasses block of the MigPlan CR. If no value is specified, the PV will have no storage class after migration. 5 Allowed values are ReadWriteOnce and ReadWriteMany . If this value is not specified, the default is the access mode of the source cluster PVC. You can only edit the access mode in the MigPlan CR. You cannot edit it by using the MTC web console. Additional resources For details about the move and copy actions, see MTC workflow . For details about the skip action, see Excluding PVCs from migration . For details about the file system and snapshot copy methods, see About data copy methods . 11.5.6. Performing a state migration of Kubernetes objects by using the MTC API After you migrate all the PV data, you can use the Migration Toolkit for Containers (MTC) API to perform a one-time state migration of Kubernetes objects that constitute an application. You do this by configuring MigPlan custom resource (CR) fields to provide a list of Kubernetes resources with an additional label selector to further filter those resources, and then performing a migration by creating a MigMigration CR. The MigPlan resource is closed after the migration. Note Selecting Kubernetes resources is an API-only feature. You must update the MigPlan CR and create a MigMigration CR for it by using the CLI. The MTC web console does not support migrating Kubernetes objects. Note After migration, the closed parameter of the MigPlan CR is set to true . You cannot create another MigMigration CR for this MigPlan CR. 
You add Kubernetes objects to the MigPlan CR by using one of the following options: Adding the Kubernetes objects to the includedResources section. When the includedResources field is specified in the MigPlan CR, the plan takes a list of group-kind as input. Only resources present in the list are included in the migration. Adding the optional labelSelector parameter to filter the includedResources in the MigPlan . When this field is specified, only resources matching the label selector are included in the migration. For example, you can filter a list of Secret and ConfigMap resources by using the label app: frontend as a filter. Procedure Update the MigPlan CR to include Kubernetes resources and, optionally, to filter the included resources by adding the labelSelector parameter: To update the MigPlan CR to include Kubernetes resources: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: "" - kind: <kind> group: "" 1 Specify the Kubernetes object, for example, Secret or ConfigMap . Optional: To filter the included resources by adding the labelSelector parameter: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: "" - kind: <kind> group: "" ... labelSelector: matchLabels: <label> 2 1 Specify the Kubernetes object, for example, Secret or ConfigMap . 2 Specify the label of the resources to migrate, for example, app: frontend . Create a MigMigration CR to migrate the selected Kubernetes resources. Verify that the correct MigPlan is referenced in migPlanRef : apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false 11.6. Migration controller options You can edit migration plan limits, enable persistent volume resizing, or enable cached Kubernetes clients in the MigrationController custom resource (CR) for large migrations and improved performance. 11.6.1. Increasing limits for large migrations You can increase the limits on migration objects and container resources for large migrations with the Migration Toolkit for Containers (MTC). Important You must test these changes before you perform a migration in a production environment. Procedure Edit the MigrationController custom resource (CR) manifest: USD oc edit migrationcontroller -n openshift-migration Update the following parameters: ... mig_controller_limits_cpu: "1" 1 mig_controller_limits_memory: "10Gi" 2 ... mig_controller_requests_cpu: "100m" 3 mig_controller_requests_memory: "350Mi" 4 ... mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7 ... 1 Specifies the number of CPUs available to the MigrationController CR. 2 Specifies the amount of memory available to the MigrationController CR. 3 Specifies the number of CPU units available for MigrationController CR requests. 100m represents 0.1 CPU units (100 * 1e-3). 4 Specifies the amount of memory available for MigrationController CR requests. 5 Specifies the number of persistent volumes that can be migrated. 6 Specifies the number of pods that can be migrated. 7 Specifies the number of namespaces that can be migrated. Create a migration plan that uses the updated parameters to verify the changes. 
If your migration plan exceeds the MigrationController CR limits, the MTC console displays a warning message when you save the migration plan. 11.6.2. Enabling persistent volume resizing for direct volume migration You can enable persistent volume (PV) resizing for direct volume migration to avoid running out of disk space on the destination cluster. When the disk usage of a PV reaches a configured level, the MigrationController custom resource (CR) compares the requested storage capacity of a persistent volume claim (PVC) to its actual provisioned capacity. Then, it calculates the space required on the destination cluster. A pv_resizing_threshold parameter determines when PV resizing is used. The default threshold is 3% . This means that PV resizing occurs when the disk usage of a PV is more than 97% . You can increase this threshold so that PV resizing occurs at a lower disk usage level. PVC capacity is calculated according to the following criteria: If the requested storage capacity ( spec.resources.requests.storage ) of the PVC is not equal to its actual provisioned capacity ( status.capacity.storage ), the greater value is used. If a PV is provisioned through a PVC and then subsequently changed so that its PV and PVC capacities no longer match, the greater value is used. Prerequisites The PVCs must be attached to one or more running pods so that the MigrationController CR can execute commands. Procedure Log in to the host cluster. Enable PV resizing by patching the MigrationController CR: USD oc patch migrationcontroller migration-controller -p '{"spec":{"enable_dvm_pv_resizing":true}}' \ 1 --type='merge' -n openshift-migration 1 Set the value to false to disable PV resizing. Optional: Update the pv_resizing_threshold parameter to increase the threshold: USD oc patch migrationcontroller migration-controller -p '{"spec":{"pv_resizing_threshold":41}}' \ 1 --type='merge' -n openshift-migration 1 The default value is 3 . When the threshold is exceeded, the following status message is displayed in the MigPlan CR status: status: conditions: ... - category: Warn durable: true lastTransitionTime: "2021-06-17T08:57:01Z" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: "False" type: PvCapacityAdjustmentRequired Note For AWS gp2 storage, this message does not appear unless the pv_resizing_threshold is 42% or greater because of the way gp2 calculates volume usage and size. ( BZ#1973148 ) 11.6.3. Enabling cached Kubernetes clients You can enable cached Kubernetes clients in the MigrationController custom resource (CR) for improved performance during migration. The greatest performance benefit is displayed when migrating between clusters in different regions or with significant network latency. Note Delegated tasks, for example, Rsync backup for direct volume migration or Velero backup and restore, however, do not show improved performance with cached clients. Cached clients require extra memory because the MigrationController CR caches all API resources that are required for interacting with MigCluster CRs. Requests that are normally sent to the API server are directed to the cache instead. The cache watches the API server for updates. You can increase the memory limits and requests of the MigrationController CR if OOMKilled errors occur after you enable cached clients. 
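For example, before raising these values you can check whether the migration-controller pod restarts are actually caused by memory pressure. The following check is a minimal sketch and is not part of the documented procedure; the pod name placeholder is hypothetical and must be replaced with the name returned by the first command:
$ oc -n openshift-migration get pods
$ oc -n openshift-migration describe pod <migration_controller_pod> | grep -A 3 'Last State'
If the Last State is Terminated with the reason OOMKilled, increase the memory limits and requests as shown in the following procedure.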
Procedure Enable cached clients by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_enable_cache", "value": true}]' Optional: Increase the MigrationController CR memory limits by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_limits_memory", "value": <10Gi>}]' Optional: Increase the MigrationController CR memory requests by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_requests_memory", "value": <350Mi>}]' | [
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-controller-rhel8:v1.8):/crane ./",
"oc config view",
"crane tunnel-api [--namespace <namespace>] --destination-context <destination-cluster> --source-context <source-cluster>",
"crane tunnel-api --namespace my_tunnel --destination-context openshift-migration/c131-e-us-east-containers-cloud-ibm-com/admin --source-context default/192-168-122-171-nip-io:8443/admin",
"oc get po -n <namespace>",
"NAME READY STATUS RESTARTS AGE <pod_name> 2/2 Running 0 44s",
"oc logs -f -n <namespace> <pod_name> -c openvpn",
"oc get service -n <namespace>",
"oc sa get-token -n openshift-migration migration-controller",
"oc create route passthrough --service=docker-registry -n default",
"oc create route passthrough --service=image-registry -n openshift-image-registry",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF",
"oc sa get-token migration-controller -n openshift-migration | base64 -w 0",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF",
"oc describe MigCluster <cluster>",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF",
"echo -n \"<key>\" | base64 -w 0 1",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF",
"oc describe migstorage <migstorage>",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF",
"oc describe migplan <migplan> -n openshift-migration",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF",
"oc watch migmigration <migmigration> -n openshift-migration",
"Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47",
"- hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces",
"- hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: \"{{ lookup( 'env', 'HOSTNAME') }}\" register: pods - name: Print pod name debug: msg: \"{{ pods.resources[0].metadata.name }}\"",
"- hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: \"fail\" fail: msg: \"Cause a failure\" when: do_fail",
"- hosts: localhost gather_facts: false tasks: - set_fact: namespaces: \"{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}\" - debug: msg: \"{{ item }}\" with_items: \"{{ namespaces }}\" - debug: msg: \"{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}\"",
"oc edit migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2",
"oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1",
"name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims",
"spec: namespaces: - namespace_2 - namespace_1:namespace_2",
"spec: namespaces: - namespace_1:namespace_1",
"spec: namespaces: - namespace_1",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: selection: action: skip",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\"",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\" labelSelector: matchLabels: <label> 2",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false",
"oc edit migrationcontroller -n openshift-migration",
"mig_controller_limits_cpu: \"1\" 1 mig_controller_limits_memory: \"10Gi\" 2 mig_controller_requests_cpu: \"100m\" 3 mig_controller_requests_memory: \"350Mi\" 4 mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7",
"oc patch migrationcontroller migration-controller -p '{\"spec\":{\"enable_dvm_pv_resizing\":true}}' \\ 1 --type='merge' -n openshift-migration",
"oc patch migrationcontroller migration-controller -p '{\"spec\":{\"pv_resizing_threshold\":41}}' \\ 1 --type='merge' -n openshift-migration",
"status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-06-17T08:57:01Z\" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: \"False\" type: PvCapacityAdjustmentRequired",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_enable_cache\", \"value\": true}]'",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_limits_memory\", \"value\": <10Gi>}]'",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_requests_memory\", \"value\": <350Mi>}]'"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/migrating_from_version_3_to_4/advanced-migration-options-3-4 |
Chapter 7. Installing OpenShift DR Hub Operator on Hub cluster | Chapter 7. Installing OpenShift DR Hub Operator on Hub cluster Prerequisites Ensure that the values for the access and secret key are base-64 encoded . The encoded values for the keys were retrieved in the prior section and the resulting Secrets are exactly the same as those created already on the managed clusters. Procedure On the Hub cluster, navigate to OperatorHub and use the search filter for OpenShift DR Hub Operator . Follow the screen instructions to Install the operator into the project openshift-dr-system . Create S3 secrets for the Hub cluster using the following S3 secret YAML format for the Primary managed cluster . Run the following command to create this secret on the Hub cluster. Example output: Create S3 secrets using the following S3 secret YAML format for the Secondary managed cluster . Run the following command to create this secret on the Hub cluster. Example output: Configure ConfigMap for the OpenShift DR Hub Operator. After the operator is successfully created, a new ConfigMap called ramen-hub-operator-config is created. Run the following command to edit the file. Add the following new content starting at s3StoreProfiles to the ConfigMap on the Hub cluster. Note Make sure to replace <primary clusterID> , <secondary clusterID> , baseDomain , odrbucket-<your value1> , and odrbucket-<your value2> variables with exact same values as used for the ramen-cluster-operator-config ConfigMap on the managed clusters. | [
"apiVersion: v1 data: AWS_ACCESS_KEY_ID: <primary cluster base-64 encoded access key> AWS_SECRET_ACCESS_KEY: <primary cluster base-64 encoded secret access key> kind: Secret metadata: name: odr-s3secret-primary namespace: openshift-dr-system",
"oc create -f odr-s3secret-primary.yaml",
"secret/odr-s3secret-primary created",
"apiVersion: v1 data: AWS_ACCESS_KEY_ID: <secondary cluster base-64 encoded access key> AWS_SECRET_ACCESS_KEY: <secondary cluster base-64 encoded secret access key> kind: Secret metadata: name: odr-s3secret-secondary namespace: openshift-dr-system",
"oc create -f odr-s3secret-secondary.yaml",
"secret/odr-s3secret-secondary created",
"oc edit configmap ramen-hub-operator-config -n openshift-dr-system",
"[...] apiVersion: v1 data: ramen_manager_config.yaml: | apiVersion: ramendr.openshift.io/v1alpha1 kind: RamenConfig [...] ramenControllerType: \"dr-hub\" ### Start of new content to be added s3StoreProfiles: - s3ProfileName: s3-primary s3CompatibleEndpoint: https://s3-openshift-storage.apps.<primary clusterID>.<baseDomain> s3Region: primary s3Bucket: odrbucket-<your value1> s3SecretRef: name: odr-s3secret-primary namespace: openshift-dr-system - s3ProfileName: s3-secondary s3CompatibleEndpoint: https://s3-openshift-storage.apps.<secondary clusterID>.<baseDomain> s3Region: secondary s3Bucket: odrbucket-<your value2> s3SecretRef: name: odr-s3secret-secondary namespace: openshift-dr-system [...]"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/configuring_openshift_data_foundation_for_regional-dr_with_advanced_cluster_management/installing-openshift-dr-hub-operator-on-hub-cluster_rhodf |
Chapter 2. Release notes | Chapter 2. Release notes 2.1. Red Hat OpenShift support for Windows Containers release notes The release notes for Red Hat OpenShift for Windows Containers track the development of the Windows Machine Config Operator (WMCO), which provides all Windows container workload capabilities in OpenShift Container Platform. 2.1.1. Windows Machine Config Operator numbering Starting with this release, y-stream releases of the WMCO will be in step with OpenShift Container Platform, with only z-stream releases between OpenShift Container Platform releases. The WMCO numbering reflects the associated OpenShift Container Platform version in the y-stream position. For example, the current release of WMCO is associated with OpenShift Container Platform version 4.15. Thus, the numbering is WMCO 10.15.z. 2.1.2. Release notes for Red Hat Windows Machine Config Operator 10.15.3 This release of the WMCO provides new features and bug fixes for running Windows compute nodes in an OpenShift Container Platform cluster. The components of the WMCO 10.15.3 were released in RHSA-2024:5745 . 2.1.2.1. Bug fixes Previously, after rotating the kube-apiserver-to-kubelet-client-ca certificate , the contents of the kubelet-ca.crt file on Windows nodes were not populated correctly. With this fix, after certificate rotation, the kubelet-ca.crt file contains the correct certificates. ( OCPBUGS-33875 ) Previously, if reverse DNS lookup failed due to an error, such as the reverse DNS lookup services being unavailable, the WMCO would not fall back to using the VM hostname to determine if a certificate signing request (CSR) should be approved. As a consequence, Bring-Your-Own-Host (BYOH) Windows nodes configured with an IP address would not become available. With this fix, BYOH nodes are properly added if reverse DNS is not available. ( OCPBUGS-37533 ) Previously, if there were multiple service account token secrets in the WMCO namespace, the scaling of Windows nodes would fail. With this fix, the WMCO uses only the secret it creates, ignoring any other service account token secrets in the WMCO namespace. As a result, Windows nodes scale properly. ( OCPBUGS-38485 ) 2.2. Release notes for past releases of the Windows Machine Config Operator The following release notes are for previous versions of the Windows Machine Config Operator (WMCO). 2.2.1. Release notes for Red Hat Windows Machine Config Operator 10.15.2 This release of the WMCO provides new features and bug fixes for running Windows compute nodes in an OpenShift Container Platform cluster. The components of the WMCO 10.15.2 were released in RHBA-2024:2704 . 2.2.1.1. Bug fixes Previously, on Azure clusters the WMCO would check if an external Cloud Controller Manager (CCM) was being used on the cluster. CCM use is the default. If a CCM is being used, the Operator would adjust configuration logic accordingly. Because the status condition that the WMCO used to check for the CCM was removed, the WMCO proceeded as if a CCM was not in use. This fix removes the check. As a result, the WMCO always configures the required logic on Azure clusters. ( OCPBUGS-31704 ) Previously, the kubelet was unable to authenticate with private Elastic Container Registry (ECR) registries. Because of this error, the kubelet was not able to pull images from these registries. With this fix, the kubelet is able to pull images from these registries as expected.
( OCPBUGS-26602 ) Previously, the WMCO was logging error messages when any commands being run through an SSH connection to a Windows instance failed. This was incorrect behavior because some commands are expected to fail. For example, when the WMCO reboots a node, the Operator runs PowerShell commands on the instance until they fail, meaning the SSH connection was dropped because the instance rebooted as expected. With this fix, only actual errors are now logged. ( OCPBUGS-20255 ) 2.2.2. Release notes for Red Hat Windows Machine Config Operator 10.15.1 This release of the WMCO provides new features and bug fixes for running Windows compute nodes in an OpenShift Container Platform cluster. The components of the WMCO 10.15.1 were released in RHBA-2024:1191 . Due to an internal issue, the planned WMCO 10.15.0 could not be released. Multiple bug and security fixes, described below, were included in WMCO 10.15.0. These fixes are included in WMCO 10.15.1. For specific details about these bug and security fixes, see the RHSA-2024:0954 errata. 2.2.2.1. New features and improvements 2.2.2.1.1. CPU and memory usage metrics are now available CPU and memory usage metrics for Windows pods are now available in Prometheus. The metrics are shown in the OpenShift Container Platform web console on the Metrics tab for each Windows pod and can be queried by users. 2.2.2.1.2. Operator SDK upgrade The WMCO now uses the Operator SDK version 1.32.0. 2.2.2.1.3. Kubernetes upgrade The WMCO now uses Kubernetes 1.28. 2.2.2.2. Bug fixes Previously, there was a flaw in the handling of multiplexed streams in the HTTP/2 protocol, which is utilized by the WMCO. A client could repeatedly make a request for a new multiplex stream and then immediately send an RST_STREAM frame to cancel those requests. This activity created additional work for the server by setting up and dismantling streams, but avoided any server-side limitations on the maximum number of active streams per connection. As a result, a denial of service occurred due to server resource consumption. This issue has been fixed. ( BZ-2243296 ) Previously, there was a flaw in Kubernetes, where a user who can create pods and persistent volumes on Windows nodes was able to escalate to admin privileges on those nodes. Kubernetes clusters were only affected if they were using an in-tree storage plugin for Windows nodes. This issue has been fixed. ( BZ-2247163 ) Previously, there was a flaw in the SSH channel integrity. By manipulating sequence numbers during the handshake, an attacker could remove the initial messages on the secure channel without causing a MAC failure. For example, an attacker could disable the ping extension and thus disable the new countermeasure in OpenSSH 9.5 against keystroke timing attacks. This issue has been fixed. ( BZ-2254210 ) Previously, the routes from a Windows Bring-Your-Own-Host (BYOH) VM to the metadata endpoint were being added as non-persistent routes, so the routes were removed when a VM was removed (deconfigured) or re-configured. This would cause the node to fail if configured again, as the metadata endpoint was unreachable. With this fix, the WMCO runs the AWS EC2Launch v2 service after removal or re-configuration. As a result, the routes are restored so that the VM can be configured into a node, as expected. ( OCPBUGS-15988 ) Previously, the WMCO did not properly wait for Windows virtual machines (VMs) to finish rebooting.
This led to occasional timing issues where the WMCO would attempt to interact with a node that was in the middle of a reboot, causing WMCO to log an error and restart node configuration. Now, the WMCO waits for the instance to completely reboot. ( OCPBUGS-17217 ) Previously, the WMCO configuration was missing the DeleteEmptyDirData: true field, which is required for draining nodes that have emptyDir volumes attached. As a consequence, customers that had nodes with emptyDir volumes would see the following error in the logs: cannot delete Pods with local storage . With this fix, the DeleteEmptyDirData: true field was added to the node drain helper struct in the WMCO. As a result, customers are able to drain nodes with emptyDir volumes attached. ( OCPBUGS-27300 ) Previously, because of a lack of synchronization between Windows machine set nodes and BYOH instances, during an update the machine set nodes and the BYOH instances could update simultaneously. This could impact running workloads. This fix introduces a locking mechanism so that machine set nodes and BYOH instances update individually. ( OCPBUGS-8996 ) Previously, because of a missing secret, the WMCO could not configure proper credentials for the WICD on Nutanix clusters. As a consequence, the WMCO could not create Windows nodes. With this fix, the WMCO creates long-lived credentials for the WICD service account. As a result, the WMCO is able to configure a Windows node on Nutanix clusters. ( OCPBUGS-25350 ) Previously, because of bad logic in the networking configuration script, the WICD was incorrectly reading carriage returns in the CNI configuration file as changes, and identified the file as modified. This caused the CNI configuration to be unnecessarily reloaded, potentially resulting in container restarts and brief network outages. With this fix, the WICD now reloads the CNI configuration only when the CNI configuration is actually modified. ( OCPBUGS-25756 ) 2.3. Windows Machine Config Operator prerequisites The following information details the supported platform versions, Windows Server versions, and networking configurations for the Windows Machine Config Operator. See the vSphere documentation for any information that is relevant to only that platform. 2.3.1. WMCO supported installation method The WMCO fully supports installing Windows nodes into installer-provisioned infrastructure (IPI) clusters. This is the preferred OpenShift Container Platform installation method. For user-provisioned infrastructure (UPI) clusters, the WMCO supports installing Windows nodes only into a UPI cluster installed with the platform: none field set in the install-config.yaml file (bare-metal or provider-agnostic) and only for the BYOH (Bring Your Own Host) use case. UPI is not supported for any other platform. 2.3.2. WMCO 10.y supported platforms and Windows Server versions The following table lists the Windows Server versions that are supported by WMCO 10.y, based on the applicable platform. Windows Server versions not listed are not supported and attempting to use them will cause errors. To prevent these errors, use only an appropriate version for your platform. 
Platform Supported Windows Server version Amazon Web Services (AWS) Windows Server 2022, OS Build 20348.681 or later [1] Windows Server 2019, version 1809 Microsoft Azure Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 VMware vSphere Windows Server 2022, OS Build 20348.681 or later Google Cloud Platform (GCP) Windows Server 2022, OS Build 20348.681 or later Nutanix Windows Server 2022, OS Build 20348.681 or later Bare metal or provider agnostic Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 For disconnected clusters, the Windows AMI must have the EC2LaunchV2 agent version 2.0.1643 or later installed. For more information, see the Install the latest version of EC2Launch v2 in the AWS documentation. 2.3.3. Supported networking Hybrid networking with OVN-Kubernetes is the only supported networking configuration. See the additional resources below for more information on this functionality. The following tables outline the type of networking configuration and Windows Server versions to use based on your platform. You must specify the network configuration when you install the cluster. Note The WMCO does not support OVN-Kubernetes without hybrid networking or OpenShift SDN. Dual NIC is not supported on WMCO-managed Windows instances. Table 2.1. Platform networking support Platform Supported networking Amazon Web Services (AWS) Hybrid networking with OVN-Kubernetes Microsoft Azure Hybrid networking with OVN-Kubernetes VMware vSphere Hybrid networking with OVN-Kubernetes with a custom VXLAN port Google Cloud Platform (GCP) Hybrid networking with OVN-Kubernetes Nutanix Hybrid networking with OVN-Kubernetes Bare metal or provider agnostic Hybrid networking with OVN-Kubernetes Table 2.2. Hybrid OVN-Kubernetes Windows Server support Hybrid networking with OVN-Kubernetes Supported Windows Server version Default VXLAN port Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 Custom VXLAN port Windows Server 2022, OS Build 20348.681 or later Additional resources Hybrid networking 2.4. Windows Machine Config Operator known limitations Note the following limitations when working with Windows nodes managed by the WMCO (Windows nodes): The following OpenShift Container Platform features are not supported on Windows nodes: Image builds OpenShift Pipelines OpenShift Service Mesh OpenShift monitoring of user-defined projects OpenShift Serverless Horizontal Pod Autoscaling Vertical Pod Autoscaling The following Red Hat features are not supported on Windows nodes: Red Hat Insights cost management Red Hat OpenShift Local Dual NIC is not supported on WMCO-managed Windows instances. Windows nodes do not support workloads created by using deployment configs. You can use a deployment or other method to deploy workloads. Windows nodes are not supported in clusters that are in a disconnected environment. Red Hat OpenShift support for Windows Containers does not support adding Windows nodes to a cluster through a trunk port. The only supported networking configuration for adding Windows nodes is through an access port that carries traffic for the VLAN. Red Hat OpenShift support for Windows Containers does not support any Windows operating system language other than English (United States). Due to a limitation within the Windows operating system, clusterNetwork CIDR addresses of class E, such as 240.0.0.0 , are not compatible with Windows nodes. 
Kubernetes has identified the following node feature limitations : Huge pages are not supported for Windows containers. Privileged containers are not supported for Windows containers. Kubernetes has identified several API compatibility issues . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/windows_container_support_for_openshift/release-notes |
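The supported networking section above notes that hybrid networking with OVN-Kubernetes must be specified when the cluster is installed. A minimal sketch of what such a manifest might look like, typically added as manifests/cluster-network-03-config.yml after running openshift-install create manifests; the CIDR, host prefix, and custom VXLAN port values here are illustrative assumptions, not values taken from this document:

```yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      hybridOverlayConfig:
        # Subnet used for the Windows (hybrid) overlay network; it must not
        # overlap with the existing clusterNetwork CIDR.
        hybridClusterNetwork:
        - cidr: 10.132.0.0/14
          hostPrefix: 23
        # Only needed when a custom VXLAN port is required, for example on vSphere.
        hybridOverlayVXLANPort: 9898
```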
5.190. microcode_ctl | 5.190. microcode_ctl 5.190.1. RHBA-2012:0827 - microcode_ctl bug fix and enhancement update Updated microcode_ctl packages that fix one bug and add two enhancements are now available for Red Hat Enterprise Linux 6. The microcode_ctl packages provide microcode updates for Intel and AMD processors. Bug Fix BZ# 768803 Previously, running the microcode_ctl utility with long arguments for the "-d" or "-f" options led to a buffer overflow. Consequently, microcode_ctl terminated unexpectedly with a segmentation fault and a backtrace was displayed. With this update, microcode_ctl has been modified to handle this situation gracefully. The microcode_ctl utility no longer crashes and displays an error message informing the user that the file name used is too long. Enhancements BZ# 736266 The Intel CPU microcode file has been updated to version 20111110, which is the latest version of the microcode available from Intel. BZ# 787757 The AMD CPU microcode file has been updated to version 20120117, which is the latest version of the microcode available from AMD. All users of microcode_ctl are advised to upgrade to these updated packages, which fix this bug and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/microcode_ctl |
3.3. realmd Commands | 3.3. realmd Commands The realmd system has two major task areas: managing system enrollment in a domain setting which domain users are allowed to access the local system resources The central utility in realmd is called realm . Most realm commands require the user to specify the action that the utility should perform, and the entity, such as a domain or user account, for which to perform the action: For example: Table 3.1. realmd Commands Command Description Realm Commands discover Run a discovery scan for domains on the network. join Add the system to the specified domain. leave Remove the system from the specified domain. list List all configured domains for the system or all discovered and configured domains. Login Commands permit Enable access for specified users or for all users within a configured domain to access the local system. deny Restrict access for specified users or for all users within a configured domain to access the local system. For more information about the realm commands, see the realm (8) man page. | [
"realm command arguments",
"realm join ad.example.com realm permit user_name"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/cmd-realmd |
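A short walkthrough of the commands described in the table above; the domain name ad.example.com and the user name are placeholders, and the join step prompts for the domain administrator password:

```console
# Discover a domain on the network and show what joining it requires
realm discover ad.example.com

# Join the system to the domain, authenticating as a domain administrator
realm join ad.example.com -U Administrator

# Allow one domain user to log in to this system, then review the result
realm permit [email protected]
realm list
```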
Chapter 3. Using node disruption policies to minimize disruption from machine config changes | Chapter 3. Using node disruption policies to minimize disruption from machine config changes By default, when you make certain changes to the fields in a MachineConfig object, the Machine Config Operator (MCO) drains and reboots the nodes associated with that machine config. However, you can create a node disruption policy that defines a set of changes to some Ignition config objects that would require little or no disruption to your workloads. A node disruption policy allows you to define the configuration changes that cause a disruption to your cluster, and which changes do not. This allows you to reduce node downtime when making small machine configuration changes in your cluster. To configure the policy, you modify the MachineConfiguration object, which is in the openshift-machine-config-operator namespace. See the example node disruption policies in the MachineConfiguration objects that follow. Note There are machine configuration changes that always require a reboot, regardless of any node disruption policies. For more information, see About the Machine Config Operator . After you create the node disruption policy, the MCO validates the policy to search for potential issues in the file, such as problems with formatting. The MCO then merges the policy with the cluster defaults and populates the status.nodeDisruptionPolicyStatus fields in the machine config with the actions to be performed upon future changes to the machine config. The configurations in your policy always overwrite the cluster defaults. Important The MCO does not validate whether a change can be successfully applied by your node disruption policy. Therefore, you are responsible to ensure the accuracy of your node disruption policies. For example, you can configure a node disruption policy so that sudo configurations do not require a node drain and reboot. Or, you can configure your cluster so that updates to sshd are applied with only a reload of that one service. You can control the behavior of the MCO when making the changes to the following Ignition configuration objects: configuration files : You add to or update the files in the /var or /etc directory. You can configure a policy for a specific file anywhere in the directory or for a path to a specific directory. For a path, a change or addition to any file in that directory triggers the policy. Note If a file is included in more than one policy, only the policy with the best match to that file is applied. For example, if you have a policy for the /etc/ directory and a policy for the /etc/pki/ directory, a change to the /etc/pki/tls/certs/ca-bundle.crt file would apply the etc/pki policy. systemd units : You create and set the status of a systemd service or modify a systemd service. users and groups : You change SSH keys in the passwd section postinstallation. ICSP , ITMS , IDMS objects: You can remove mirroring rules from an ImageContentSourcePolicy (ICSP), ImageTagMirrorSet (ITMS), and ImageDigestMirrorSet (IDMS) object. When you make any of these changes, the node disruption policy determines which of the following actions are required when the MCO implements the changes: Reboot : The MCO drains and reboots the nodes. This is the default behavior. None : The MCO does not drain or reboot the nodes. The MCO applies the changes with no further action. Drain : The MCO cordons and drains the nodes of their workloads. The workloads restart with the new configurations. 
Reload : For services, the MCO reloads the specified services without restarting the service. Restart : For services, the MCO fully restarts the specified services. DaemonReload : The MCO reloads the systemd manager configuration. Special : This is an internal MCO-only action and cannot be set by the user. Note The Reboot and None actions cannot be used with any other actions, as the Reboot and None actions override the others. Actions are applied in the order that they are set in the node disruption policy list. If you make other machine config changes that do require a reboot or other disruption to the nodes, that reboot supercedes the node disruption policy actions. 3.1. Example node disruption policies The following example MachineConfiguration objects contain a node disruption policy. Tip A MachineConfiguration object and a MachineConfig object are different objects. A MachineConfiguration object is a singleton object in the MCO namespace that contains configuration parameters for the MCO operator. A MachineConfig object defines changes that are applied to a machine config pool. The following example MachineConfiguration object shows no user defined policies. The default node disruption policy values are shown in the status stanza. Default node disruption policy apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster spec: logLevel: Normal managementState: Managed operatorLogLevel: Normal status: nodeDisruptionPolicyStatus: clusterPolicies: files: - actions: - type: None path: /etc/mco/internal-registry-pull-secret.json - actions: - type: None path: /var/lib/kubelet/config.json - actions: - reload: serviceName: crio.service type: Reload path: /etc/machine-config-daemon/no-reboot/containers-gpg.pub - actions: - reload: serviceName: crio.service type: Reload path: /etc/containers/policy.json - actions: - type: Special path: /etc/containers/registries.conf - actions: - reload: serviceName: crio.service type: Reload path: /etc/containers/registries.d - actions: - type: None path: /etc/nmstate/openshift - actions: - restart: serviceName: coreos-update-ca-trust.service type: Restart - restart: serviceName: crio.service type: Restart path: /etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt sshkey: actions: - type: None observedGeneration: 9 In the following example, when changes are made to the SSH keys, the MCO drains the cluster nodes, reloads the crio.service , reloads the systemd configuration, and restarts the crio-service . Example node disruption policy for an SSH key change apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster # ... spec: nodeDisruptionPolicy: sshkey: actions: - type: Drain - reload: serviceName: crio.service type: Reload - type: DaemonReload - restart: serviceName: crio.service type: Restart # ... In the following example, when changes are made to the /etc/chrony.conf file, the MCO restarts the chronyd.service on the cluster nodes. If files are added to or modified in the /var/run directory, the MCO applies the changes with no further action. Example node disruption policy for a configuration file change apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster # ... 
spec: nodeDisruptionPolicy: files: - actions: - restart: serviceName: chronyd.service type: Restart path: /etc/chrony.conf - actions: - type: None path: /var/run In the following example, when changes are made to the auditd.service systemd unit, the MCO drains the cluster nodes, reloads the crio.service , reloads the systemd manager configuration, and restarts the crio.service . Example node disruption policy for a systemd unit change apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster # ... spec: nodeDisruptionPolicy: units: - name: auditd.service actions: - type: Drain - type: Reload reload: serviceName: crio.service - type: DaemonReload - type: Restart restart: serviceName: crio.service In the following example, when changes are made to the registries.conf file, such as by editing an ImageContentSourcePolicy (ICSP) object, the MCO does not drain or reboot the nodes and applies the changes with no further action. Example node disruption policy for a registries.conf file change apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster # ... spec: nodeDisruptionPolicy: files: - actions: - type: None path: /etc/containers/registries.conf 3.2. Configuring node restart behaviors upon machine config changes You can create a node disruption policy to define the machine configuration changes that cause a disruption to your cluster, and which changes do not. You can control how your nodes respond to changes in the files in the /var or /etc directory, the systemd units, the SSH keys, and the registries.conf file. When you make any of these changes, the node disruption policy determines which of the following actions are required when the MCO implements the changes: Reboot : The MCO drains and reboots the nodes. This is the default behavior. None : The MCO does not drain or reboot the nodes. The MCO applies the changes with no further action. Drain : The MCO cordons and drains the nodes of their workloads. The workloads restart with the new configurations. Reload : For services, the MCO reloads the specified services without restarting the service. Restart : For services, the MCO fully restarts the specified services. DaemonReload : The MCO reloads the systemd manager configuration. Special : This is an internal MCO-only action and cannot be set by the user. Note The Reboot and None actions cannot be used with any other actions, as the Reboot and None actions override the others. Actions are applied in the order that they are set in the node disruption policy list. If you make other machine config changes that do require a reboot or other disruption to the nodes, that reboot supercedes the node disruption policy actions. Procedure Edit the machineconfigurations.operator.openshift.io object to define the node disruption policy: USD oc edit MachineConfiguration cluster -n openshift-machine-config-operator Add a node disruption policy similar to the following: apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster # ... 
spec: nodeDisruptionPolicy: 1 files: 2 - actions: 3 - restart: 4 serviceName: chronyd.service 5 type: Restart path: /etc/chrony.conf 6 sshkey: 7 actions: - type: Drain - reload: serviceName: crio.service type: Reload - type: DaemonReload - restart: serviceName: crio.service type: Restart units: 8 - actions: - type: Drain - reload: serviceName: crio.service type: Reload - type: DaemonReload - restart: serviceName: crio.service type: Restart name: test.service 1 Specifies the node disruption policy. 2 Specifies a list of machine config file definitions and actions to take to changes on those paths. This list supports a maximum of 50 entries. 3 Specifies the series of actions to be executed upon changes to the specified files. Actions are applied in the order that they are set in this list. This list supports a maximum of 10 entries. 4 Specifies that the listed service is to be reloaded upon changes to the specified files. 5 Specifies the full name of the service to be acted upon. 6 Specifies the location of a file that is managed by a machine config. The actions in the policy apply when changes are made to the file in path . 7 Specifies a list of service names and actions to take upon changes to the SSH keys in the cluster. 8 Specifies a list of systemd unit names and actions to take upon changes to those units. Verification View the MachineConfiguration object file that you created: Example output apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: labels: machineconfiguration.openshift.io/role: worker name: cluster # ... status: nodeDisruptionPolicyStatus: 1 clusterPolicies: files: # ... - actions: - restart: serviceName: chronyd.service type: Restart path: /etc/chrony.conf sshkey: actions: - type: Drain - reload: serviceName: crio.service type: Reload - type: DaemonReload - restart: serviceName: crio.service type: Restart units: - actions: - type: Drain - reload: serviceName: crio.service type: Reload - type: DaemonReload - restart: serviceName: crio.service type: Restart name: test.se # ... 1 Specifies the current cluster-validated policies. | [
"apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster spec: logLevel: Normal managementState: Managed operatorLogLevel: Normal status: nodeDisruptionPolicyStatus: clusterPolicies: files: - actions: - type: None path: /etc/mco/internal-registry-pull-secret.json - actions: - type: None path: /var/lib/kubelet/config.json - actions: - reload: serviceName: crio.service type: Reload path: /etc/machine-config-daemon/no-reboot/containers-gpg.pub - actions: - reload: serviceName: crio.service type: Reload path: /etc/containers/policy.json - actions: - type: Special path: /etc/containers/registries.conf - actions: - reload: serviceName: crio.service type: Reload path: /etc/containers/registries.d - actions: - type: None path: /etc/nmstate/openshift - actions: - restart: serviceName: coreos-update-ca-trust.service type: Restart - restart: serviceName: crio.service type: Restart path: /etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt sshkey: actions: - type: None observedGeneration: 9",
"apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster spec: nodeDisruptionPolicy: sshkey: actions: - type: Drain - reload: serviceName: crio.service type: Reload - type: DaemonReload - restart: serviceName: crio.service type: Restart",
"apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster spec: nodeDisruptionPolicy: files: - actions: - restart: serviceName: chronyd.service type: Restart path: /etc/chrony.conf - actions: - type: None path: /var/run",
"apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster spec: nodeDisruptionPolicy: units: - name: auditd.service actions: - type: Drain - type: Reload reload: serviceName: crio.service - type: DaemonReload - type: Restart restart: serviceName: crio.service",
"apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster spec: nodeDisruptionPolicy: files: - actions: - type: None path: /etc/containers/registries.conf",
"oc edit MachineConfiguration cluster -n openshift-machine-config-operator",
"apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster spec: nodeDisruptionPolicy: 1 files: 2 - actions: 3 - restart: 4 serviceName: chronyd.service 5 type: Restart path: /etc/chrony.conf 6 sshkey: 7 actions: - type: Drain - reload: serviceName: crio.service type: Reload - type: DaemonReload - restart: serviceName: crio.service type: Restart units: 8 - actions: - type: Drain - reload: serviceName: crio.service type: Reload - type: DaemonReload - restart: serviceName: crio.service type: Restart name: test.service",
"oc get MachineConfiguration/cluster -o yaml",
"apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: labels: machineconfiguration.openshift.io/role: worker name: cluster status: nodeDisruptionPolicyStatus: 1 clusterPolicies: files: - actions: - restart: serviceName: chronyd.service type: Restart path: /etc/chrony.conf sshkey: actions: - type: Drain - reload: serviceName: crio.service type: Reload - type: DaemonReload - restart: serviceName: crio.service type: Restart units: - actions: - type: Drain - reload: serviceName: crio.service type: Reload - type: DaemonReload - restart: serviceName: crio.service type: Restart name: test.se"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/machine_configuration/machine-config-node-disruption_machine-configs-configure |
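To see the chrony.conf policy above in action, you could apply a MachineConfig that writes /etc/chrony.conf; with that node disruption policy in place, only chronyd.service restarts instead of the node rebooting. The following is a minimal sketch; the Ignition version, file mode, and base64 payload are illustrative and must match your cluster and your actual chrony configuration:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-chrony
spec:
  config:
    ignition:
      version: 3.4.0
    storage:
      files:
      - path: /etc/chrony.conf
        mode: 420
        overwrite: true
        contents:
          # Base64-encoded chrony.conf content; replace with your own.
          source: data:text/plain;charset=utf-8;base64,<base64-encoded-chrony-conf>
```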
Chapter 2. Release notes | Chapter 2. Release notes 2.1. OpenShift Virtualization release notes 2.1.1. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 2.1.2. Providing documentation feedback To report an error or to improve our documentation, log in to your Red Hat Jira account and submit a Jira issue . 2.1.3. About Red Hat OpenShift Virtualization With Red Hat OpenShift Virtualization, you can bring traditional virtual machines (VMs) into OpenShift Container Platform and run them alongside containers. In OpenShift Virtualization, VMs are native Kubernetes objects that you can manage by using the OpenShift Container Platform web console or the command line. OpenShift Virtualization is represented by the icon. You can use OpenShift Virtualization with either the OVN-Kubernetes or the OpenShiftSDN default Container Network Interface (CNI) network provider. Learn more about what you can do with OpenShift Virtualization . Learn more about OpenShift Virtualization architecture and deployments . Prepare your cluster for OpenShift Virtualization. 2.1.3.1. OpenShift Virtualization supported cluster version OpenShift Virtualization 4.15 is supported for use on OpenShift Container Platform 4.15 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform. 2.1.3.2. Supported guest operating systems To view the supported guest operating systems for OpenShift Virtualization, see Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization, OpenShift Virtualization and Red Hat Enterprise Linux with KVM . 2.1.3.3. Microsoft Windows SVVP certification OpenShift Virtualization is certified in Microsoft's Windows Server Virtualization Validation Program (SVVP) to run Windows Server workloads. The SVVP certification applies to: Red Hat Enterprise Linux CoreOS workers. In the Microsoft SVVP Catalog, they are named Red Hat OpenShift Container Platform 4 on RHEL CoreOS 9 . Intel and AMD CPUs. 2.1.4. Quick starts Quick start tours are available for several OpenShift Virtualization features. To view the tours, click the Help icon ? in the menu bar on the header of the OpenShift Container Platform web console and then select Quick Starts . You can filter the available tours by entering the keyword virtualization in the Filter field. 2.1.5. New and changed features This release adds new features and enhancements related to the following components and concepts: 2.1.5.1. Installation and update You can now use the kubevirt_vm_created_total metric (Type: Counter) to query the number of VMs created in a specified namespace. 2.1.5.2. Infrastructure The instanceType API now uses a more stable v1beta1 version. 2.1.5.3. Virtualization You can now enable access to the serial console logs of VM guests to facilitate troubleshooting. This feature is disabled by default. Cluster administrators can change the default setting for VMs by using the web console or the CLI. Users can toggle guest log access on individual VMs regardless of the cluster-wide default setting. Free page reporting is enabled by default. 
You can configure OpenShift Virtualization to activate kernel samepage merging (KSM) when a node is overloaded. 2.1.5.4. Networking You can hot plug a secondary network interface to a running virtual machine (VM). Hot plugging and hot unplugging is supported only for VMs created with OpenShift Virtualization 4.14 or later. Hot unplugging is not supported for Single Root I/O Virtualization (SR-IOV) interfaces. OpenShift Virtualization now supports the localnet topology for OVN-Kubernetes secondary networks . A localnet topology connects the secondary network to the physical underlay. This enables both east-west cluster traffic and access to services running outside the cluster, but it requires additional configuration of the underlying Open vSwitch (OVS) system on cluster nodes. An OVN-Kubernetes secondary network is compatible with the multi-network policy API , which provides the MultiNetworkPolicy custom resource definition (CRD) to control traffic flow to and from VMs. You can use the ipBlock attribute to define network policy ingress and egress rules for specific CIDR blocks. Configuring a cluster for DPDK workloads on SR-IOV was previously Technology Preview and is now generally available. 2.1.5.5. Storage When cloning a data volume, the Containerized Data Importer (CDI) chooses an efficient Container Storage Interface (CSI) clone if certain prerequisites are met. Host-assisted cloning, a less efficient method, is used as a fallback. To understand why host-assisted cloning was used, you can check the cdi.kubevirt.io/cloneFallbackReason annotation on the cloned persistent volume claim (PVC). 2.1.5.6. Web console Installing and editing customized instance types and preferences to create a virtual machine (VM) from a volume or persistent volume claim (PVC) was previously Technology Preview and is now generally available. The Preview features tab can now be found under Virtualization Overview Settings . You can configure disk sharing for ordinary virtual machine (VM) or LUN-backed VM disks to allow multiple VMs to share the same underlying storage. Any disk to be shared must be in block mode. To allow a LUN-backed block mode VM disk to be shared among multiple VMs, a cluster administrator must enable the SCSI persistentReservation feature gate. For more information, see Configuring shared volumes for virtual machines . You can now search for VM configuration settings in the Configuration tab of the VirtualMachine details page. You can now configure SSH over NodePort service under Virtualization Overview Settings Cluster General settings SSH configurations . When creating a VM from an instance type, you can now designate favorite bootable volumes by starring them in the volume list of the OpenShift Container Platform web console. You can run a VM latency checkup by using the web console. From the side menu, click Virtualization Checkups Network latency . To run your first checkup, click Install permissions and then click Run checkup . You can run a storage validation checkup by using the web console. From the side menu, click Virtualization Checkups Storage . To run your first checkup, click Install permissions and then click Run checkup . You can enable or disable the kernel samepage merging (KSM) activation feature for all cluster nodes by using the web console . You can now hot plug a Single Root I/O Virtualization (SR-IOV) interface to a running virtual machine (VM) by using the web console. 
You can now use existing secrets from other projects when adding a public SSH key during VM creation or when adding a secret to an existing VM . You can now create a network attachment definition (NAD) for OVN-Kubernetes localnet topology by using the OpenShift Container Platform web console. 2.1.6. Deprecated and removed features 2.1.6.1. Deprecated features Deprecated features are included in the current release and supported. However, they will be removed in a future release and are not recommended for new deployments. The tekton-tasks-operator is deprecated and Tekton tasks and example pipelines are now deployed by the ssp-operator . The copy-template , modify-vm-template , and create-vm-from-template tasks are deprecated. Support for Windows Server 2012 R2 templates is deprecated. 2.1.6.2. Removed features Removed features are not supported in the current release. Support for the legacy HPP custom resource, and the associated storage class, has been removed for all new deployments. In OpenShift Virtualization 4.15, the HPP Operator uses the Kubernetes Container Storage Interface (CSI) driver to configure local storage. A legacy HPP custom resource is supported only if it had been installed on a version of OpenShift Virtualization. CentOS 7 and CentOS Stream 8 are now in the End of Life phase. As a consequence, the container images for these operating systems have been removed from OpenShift Virtualization and are no longer community supported . 2.1.7. Technology Preview features Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features: Technology Preview Features Support Scope You can now configure a VM eviction strategy for the entire cluster . You can now enable nested virtualization on OpenShift Virtualization hosts . Cluster admins can now enable CPU resource limits on a namespace in the OpenShift Container Platform web console under Overview Settings Cluster Preview features . 2.1.8. Bug fixes Previously, the windows-efi-installer pipeline failed when started with a storage class that had the volumeBindingMode set to WaitForFirstConsumer . This fix removes the annotation in the StorageClass object that was causing the pipelines to fail. ( CNV-32287 ) Previously, if you simultaneously cloned approximately 1000 virtual machines (VMs) using the provided data sources in the openshift-virtualization-os-images namespace, not all of the VMs moved to a running state. With this fix, you can clone a large number of VMs concurrently. ( CNV-30083 ) Previously, you could not SSH into a VM by using a NodePort service and its associated fully qualified domain name (FQDN) displayed in the web console when using networkType: OVNKubernetes in your install-config.yaml file. With this update, you can configure the web console so it shows a valid accessible endpoint for SSH NodePort services. ( CNV-24889 ) With this update, live migration no longer fails for a virtual machine instance (VMI) after hot plugging a virtual disk. ( CNV-34761 ) 2.1.9. Known issues Monitoring The Pod Disruption Budget (PDB) prevents pod disruptions for migratable virtual machine images. If the PDB detects pod disruption, then openshift-monitoring sends a PodDisruptionBudgetAtLimit alert every 60 minutes for virtual machine images that use the LiveMigrate eviction strategy. ( CNV-33834 ) As a workaround, silence alerts . 
Networking Nodes Uninstalling OpenShift Virtualization does not remove the feature.node.kubevirt.io node labels created by OpenShift Virtualization. You must remove the labels manually. ( CNV-38543 ) In a heterogeneous cluster with different compute nodes, virtual machines that have HyperV reenlightenment enabled cannot be scheduled on nodes that do not support timestamp-counter scaling (TSC) or have the appropriate TSC frequency. ( BZ#2151169 ) Storage If you use Portworx as your storage solution on AWS and create a VM disk image, the created image might be smaller than expected due to the filesystem overhead being accounted for twice. ( CNV-32695 ) As a workaround, you can manually expand the persistent volume claim (PVC) to increase the available space after the initial provisioning process completes. In some instances, multiple virtual machines can mount the same PVC in read-write mode, which might result in data corruption. ( CNV-13500 ) As a workaround, avoid using a single PVC in read-write mode with multiple VMs. If you clone more than 100 VMs using the csi-clone cloning strategy, then the Ceph CSI might not purge the clones. Manually deleting the clones might also fail. ( CNV-23501 ) As a workaround, you can restart the ceph-mgr to purge the VM clones. Virtualization A critical bug in qemu-kvm causes VMs to hang and experience I/O errors after disk hot plug operations. This issue can also affect the operating system disk and other disks that were not involved in the hot plug operations. If the operating system disk stops working, the root file system shuts down. For more information, see Virtual Machine loses access to its disks after hot-plugging some extra disks in the Red Hat Knowledgebase. Important Due to package versioning, this bug might reappear after updating OpenShift Virtualization from 4.13.z or 4.14.z to 4.15.0. When adding a virtual Trusted Platform Module (vTPM) device to a Windows VM, the BitLocker Drive Encryption system check passes even if the vTPM device is not persistent. This is because a vTPM device that is not persistent stores and recovers encryption keys using ephemeral storage for the lifetime of the virt-launcher pod. When the VM migrates or is shut down and restarts, the vTPM data is lost. ( CNV-36448 ) OpenShift Virtualization links a service account token in use by a pod to that specific pod. OpenShift Virtualization implements a service account volume by creating a disk image that contains a token. If you migrate a VM, then the service account volume becomes invalid. ( CNV-33835 ) As a workaround, use user accounts rather than service accounts because user account tokens are not bound to a specific pod. With the release of the RHSA-2023:3722 advisory, the TLS Extended Master Secret (EMS) extension ( RFC 7627 ) is mandatory for TLS 1.2 connections on FIPS-enabled Red Hat Enterprise Linux (RHEL) 9 systems. This is in accordance with FIPS-140-3 requirements. TLS 1.3 is not affected. Legacy OpenSSL clients that do not support EMS or TLS 1.3 now cannot connect to FIPS servers running on RHEL 9. Similarly, RHEL 9 clients in FIPS mode cannot connect to servers that only support TLS 1.2 without EMS. This in practice means that these clients cannot connect to servers on RHEL 6, RHEL 7 and non-RHEL legacy operating systems. This is because the legacy 1.0.x versions of OpenSSL do not support EMS or TLS 1.3. For more information, see TLS Extension "Extended Master Secret" enforced with Red Hat Enterprise Linux 9.2 . 
As a workaround, update legacy OpenSSL clients to a version that supports TLS 1.3 and configure OpenShift Virtualization to use TLS 1.3, with the Modern TLS security profile type, for FIPS mode. Web console When you first deploy an OpenShift Container Platform cluster, creating VMs from templates or instance types by using the web console fails if you do not have cluster-admin permissions. As a workaround, the cluster administrator must first create a config map to enable other users to use templates and instance types to create VMs. ( CNV-38284 ) When you create a network attachment definition (NAD) for an OVN-Kubernetes localnet topology by using the web console, the invalid annotation k8s.v1.cni.cncf.io/resourceName: openshift.io/ appears. This annotation prevents the VM from starting. As a workaround, remove the annotation. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/virtualization/release-notes |
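For the localnet NAD known issue above, one way to remove the invalid annotation is with oc annotate and a trailing dash, which deletes an annotation key; the NAD name and namespace are placeholders:

```console
oc annotate net-attach-def <nad-name> -n <namespace> k8s.v1.cni.cncf.io/resourceName-
```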
Chapter 40. Endpoint Interface | Chapter 40. Endpoint Interface Abstract This chapter describes how to implement the Endpoint interface, which is an essential step in the implementation of a Apache Camel component. 40.1. The Endpoint Interface Overview An instance of org.apache.camel.Endpoint type encapsulates an endpoint URI, and it also serves as a factory for Consumer , Producer , and Exchange objects. There are three different approaches to implementing an endpoint: Event-driven scheduled poll polling These endpoint implementation patterns complement the corresponding patterns for implementing a consumer - see Section 41.2, "Implementing the Consumer Interface" . Figure 40.1, "Endpoint Inheritance Hierarchy" shows the relevant Java interfaces and classes that make up the Endpoint inheritance hierarchy. Figure 40.1. Endpoint Inheritance Hierarchy The Endpoint interface Example 40.1, "Endpoint Interface" shows the definition of the org.apache.camel.Endpoint interface. Example 40.1. Endpoint Interface Endpoint methods The Endpoint interface defines the following methods: isSingleton() - Returns true , if you want to ensure that each URI maps to a single endpoint within a CamelContext. When this property is true , multiple references to the identical URI within your routes always refer to a single endpoint instance. When this property is false , on the other hand, multiple references to the same URI within your routes refer to distinct endpoint instances. Each time you refer to the URI in a route, a new endpoint instance is created. getEndpointUri() - Returns the endpoint URI of this endpoint. getEndpointKey() - Used by org.apache.camel.spi.LifecycleStrategy when registering the endpoint. getCamelContext() - return a reference to the CamelContext instance to which this endpoint belongs. setCamelContext() - Sets the CamelContext instance to which this endpoint belongs. configureProperties() - Stores a copy of the parameter map that is used to inject parameters when creating a new Consumer instance. isLenientProperties() - Returns true to indicate that the URI is allowed to contain unknown parameters (that is, parameters that cannot be injected on the Endpoint or the Consumer class). Normally, this method should be implemented to return false . createExchange() - An overloaded method with the following variants: Exchange createExchange() - Creates a new exchange instance with a default exchange pattern setting. Exchange createExchange(ExchangePattern pattern) - Creates a new exchange instance with the specified exchange pattern. Exchange createExchange(Exchange exchange) - Converts the given exchange argument to the type of exchange needed for this endpoint. If the given exchange is not already of the correct type, this method copies it into a new instance of the correct type. A default implementation of this method is provided in the DefaultEndpoint class. createProducer() - Factory method used to create new Producer instances. createConsumer() - Factory method to create new event-driven consumer instances. The processor argument is a reference to the first processor in the route. createPollingConsumer() - Factory method to create new polling consumer instances. Endpoint singletons In order to avoid unnecessary overhead, it is a good idea to create a single endpoint instance for all endpoints that have the same URI (within a CamelContext). You can enforce this condition by implementing isSingleton() to return true . 
Note In this context, same URI means that two URIs are the same when compared using string equality. In principle, it is possible to have two URIs that are equivalent, though represented by different strings. In that case, the URIs would not be treated as the same. 40.2. Implementing the Endpoint Interface Alternative ways of implementing an endpoint The following alternative endpoint implementation patterns are supported: Event-driven endpoint implementation Scheduled poll endpoint implementation Polling endpoint implementation Event-driven endpoint implementation If your custom endpoint conforms to the event-driven pattern (see Section 38.1.3, "Consumer Patterns and Threading" ), it is implemented by extending the abstract class, org.apache.camel.impl.DefaultEndpoint , as shown in Example 40.2, "Implementing DefaultEndpoint" . Example 40.2. Implementing DefaultEndpoint 1 Implement an event-driven custom endpoint, CustomEndpoint , by extending the DefaultEndpoint class. 2 You must have at least one constructor that takes the endpoint URI, endpointUri , and the parent component reference, component , as arguments. 3 Implement the createProducer() factory method to create producer endpoints. 4 Implement the createConsumer() factory method to create event-driven consumer instances. 5 In general, it is not necessary to override the createExchange() methods. The implementations inherited from DefaultEndpoint create a DefaultExchange object by default, which can be used in any Apache Camel component. If you need to initialize some exchange properties in the DefaultExchange object, however, it is appropriate to override the createExchange() methods here in order to add the exchange property settings. Important Do not override the createPollingConsumer() method. The DefaultEndpoint class provides default implementations of the following methods, which you might find useful when writing your custom endpoint code: getEndpointUri() - Returns the endpoint URI. getCamelContext() - Returns a reference to the CamelContext . getComponent() - Returns a reference to the parent component. createPollingConsumer() - Creates a polling consumer. The created polling consumer's functionality is based on the event-driven consumer. If you override the event-driven consumer method, createConsumer() , you get a polling consumer implementation. createExchange(Exchange e) - Converts the given exchange object, e , to the type required for this endpoint. This method creates a new endpoint using the overridden createExchange() endpoints. This ensures that the method also works for custom exchange types. Scheduled poll endpoint implementation If your custom endpoint conforms to the scheduled poll pattern (see Section 38.1.3, "Consumer Patterns and Threading" ) it is implemented by inheriting from the abstract class, org.apache.camel.impl.ScheduledPollEndpoint , as shown in Example 40.3, "ScheduledPollEndpoint Implementation" . Example 40.3. ScheduledPollEndpoint Implementation 1 Implement a scheduled poll custom endpoint, CustomEndpoint , by extending the ScheduledPollEndpoint class. 2 You must to have at least one constructor that takes the endpoint URI, endpointUri , and the parent component reference, component , as arguments. 3 Implement the createProducer() factory method to create a producer endpoint. 4 Implement the createConsumer() factory method to create a scheduled poll consumer instance. 
5 The configureConsumer() method, defined in the ScheduledPollEndpoint base class, is responsible for injecting consumer query options into the consumer. See the section called "Consumer parameter injection" . 6 In general, it is not necessary to override the createExchange() methods. The implementations inherited from DefaultEndpoint create a DefaultExchange object by default, which can be used in any Apache Camel component. If you need to initialize some exchange properties in the DefaultExchange object, however, it is appropriate to override the createExchange() methods here in order to add the exchange property settings. Important Do not override the createPollingConsumer() method. Polling endpoint implementation If your custom endpoint conforms to the polling consumer pattern (see Section 38.1.3, "Consumer Patterns and Threading" ), it is implemented by inheriting from the abstract class, org.apache.camel.impl.DefaultPollingEndpoint , as shown in Example 40.4, "DefaultPollingEndpoint Implementation" . Example 40.4. DefaultPollingEndpoint Implementation Because this CustomEndpoint class is a polling endpoint, you must implement the createPollingConsumer() method instead of the createConsumer() method. The consumer instance returned from createPollingConsumer() must inherit from the PollingConsumer interface. For details of how to implement a polling consumer, see the section called "Polling consumer implementation" . Apart from the implementation of the createPollingConsumer() method, the steps for implementing a DefaultPollingEndpoint are similar to the steps for implementing a ScheduledPollEndpoint . See Example 40.3, "ScheduledPollEndpoint Implementation" for details. Implementing the BrowsableEndpoint interface If you want to expose the list of exchange instances that are pending in the current endpoint, you can implement the org.apache.camel.spi.BrowsableEndpoint interface, as shown in Example 40.5, "BrowsableEndpoint Interface" . It makes sense to implement this interface if the endpoint performs some sort of buffering of incoming events. For example, the Apache Camel SEDA endpoint implements the BrowsableEndpoint interface - see Example 40.6, "SedaEndpoint Implementation" . Example 40.5. BrowsableEndpoint Interface Example Example 40.6, "SedaEndpoint Implementation" shows a sample implementation of SedaEndpoint . The SEDA endpoint is an example of an event-driven endpoint . Incoming events are stored in a FIFO queue (an instance of java.util.concurrent.BlockingQueue ) and a SEDA consumer starts up a thread to read and process the events. The events themselves are represented by org.apache.camel.Exchange objects. Example 40.6. SedaEndpoint Implementation 1 The SedaEndpoint class follows the pattern for implementing an event-driven endpoint by extending the DefaultEndpoint class. The SedaEndpoint class also implements the BrowsableEndpoint interface, which provides access to the list of exchange objects in the queue. 2 Following the usual pattern for an event-driven consumer, SedaEndpoint defines a constructor that takes an endpoint argument, endpointUri , and a component reference argument, component . 3 Another constructor is provided, which delegates queue creation to the parent component instance. 4 The createProducer() factory method creates an instance of CollectionProducer , which is a producer implementation that adds events to the queue. 
5 The createConsumer() factory method creates an instance of SedaConsumer , which is responsible for pulling events off the queue and processing them. 6 The getQueue() method returns a reference to the queue. 7 The isSingleton() method returns true , indicating that a single endpoint instance should be created for each unique URI string. 8 The getExchanges() method implements the corresponding abstract method from BrowsableEndpoint. | [
"package org.apache.camel; public interface Endpoint { boolean isSingleton(); String getEndpointUri(); String getEndpointKey(); CamelContext getCamelContext(); void setCamelContext(CamelContext context); void configureProperties(Map options); boolean isLenientProperties(); Exchange createExchange(); Exchange createExchange(ExchangePattern pattern); Exchange createExchange(Exchange exchange); Producer createProducer() throws Exception; Consumer createConsumer(Processor processor) throws Exception; PollingConsumer createPollingConsumer() throws Exception; }",
"import java.util.Map; import java.util.concurrent.BlockingQueue; import org.apache.camel.Component; import org.apache.camel.Consumer; import org.apache.camel.Exchange; import org.apache.camel.Processor; import org.apache.camel.Producer; import org.apache.camel.impl.DefaultEndpoint; import org.apache.camel.impl.DefaultExchange; public class CustomEndpoint extends DefaultEndpoint { 1 public CustomEndpoint (String endpointUri, Component component) { 2 super(endpointUri, component); // Do any other initialization } public Producer createProducer() throws Exception { 3 return new CustomProducer (this); } public Consumer createConsumer(Processor processor) throws Exception { 4 return new CustomConsumer (this, processor); } public boolean isSingleton() { return true; } // Implement the following methods, only if you need to set exchange properties. // public Exchange createExchange() { 5 return this.createExchange(getExchangePattern()); } public Exchange createExchange(ExchangePattern pattern) { Exchange result = new DefaultExchange(getCamelContext(), pattern); // Set exchange properties return result; } }",
"import org.apache.camel.Consumer; import org.apache.camel.Processor; import org.apache.camel.Producer; import org.apache.camel.ExchangePattern; import org.apache.camel.Message; import org.apache.camel.impl.ScheduledPollEndpoint; public class CustomEndpoint extends ScheduledPollEndpoint { 1 protected CustomEndpoint (String endpointUri, CustomComponent component) { 2 super(endpointUri, component); // Do any other initialization } public Producer createProducer() throws Exception { 3 Producer result = new CustomProducer (this); return result; } public Consumer createConsumer(Processor processor) throws Exception { 4 Consumer result = new CustomConsumer (this, processor); configureConsumer(result); 5 return result; } public boolean isSingleton() { return true; } // Implement the following methods, only if you need to set exchange properties. // public Exchange createExchange() { 6 return this.createExchange(getExchangePattern()); } public Exchange createExchange(ExchangePattern pattern) { Exchange result = new DefaultExchange(getCamelContext(), pattern); // Set exchange properties return result; } }",
"import org.apache.camel.Consumer; import org.apache.camel.Processor; import org.apache.camel.Producer; import org.apache.camel.ExchangePattern; import org.apache.camel.Message; import org.apache.camel.impl.DefaultPollingEndpoint; public class CustomEndpoint extends DefaultPollingEndpoint { public PollingConsumer createPollingConsumer() throws Exception { PollingConsumer result = new CustomConsumer (this); configureConsumer(result); return result; } // Do NOT implement createConsumer(). It is already implemented in DefaultPollingEndpoint. }",
"package org.apache.camel.spi; import java.util.List; import org.apache.camel.Endpoint; import org.apache.camel.Exchange; public interface BrowsableEndpoint extends Endpoint { List<Exchange> getExchanges(); }",
"package org.apache.camel.component.seda; import java.util.ArrayList; import java.util.List; import java.util.Map; import java.util.concurrent.BlockingQueue; import org.apache.camel.Component; import org.apache.camel.Consumer; import org.apache.camel.Exchange; import org.apache.camel.Processor; import org.apache.camel.Producer; import org.apache.camel.impl.DefaultEndpoint; import org.apache.camel.spi.BrowsableEndpoint; public class SedaEndpoint extends DefaultEndpoint implements BrowsableEndpoint { 1 private BlockingQueue<Exchange> queue; public SedaEndpoint(String endpointUri, Component component, BlockingQueue<Exchange> queue) { 2 super(endpointUri, component); this.queue = queue; } public SedaEndpoint(String uri, SedaComponent component, Map parameters) { 3 this(uri, component, component.createQueue(uri, parameters)); } public Producer createProducer() throws Exception { 4 return new CollectionProducer(this, getQueue()); } public Consumer createConsumer(Processor processor) throws Exception { 5 return new SedaConsumer(this, processor); } public BlockingQueue<Exchange> getQueue() { 6 return queue; } public boolean isSingleton() { 7 return true; } public List<Exchange> getExchanges() { 8 return new ArrayList<Exchange> getQueue()); } }"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/endpointintf |
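The endpoint examples above reference CustomProducer and CustomConsumer classes without showing them. The following is a minimal sketch only, assuming they extend the DefaultProducer and DefaultConsumer base classes; the onEvent() hook and the logging are illustrative, not part of any Camel API, and in a real component each class would live in its own source file:

```java
import org.apache.camel.Endpoint;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.impl.DefaultConsumer;
import org.apache.camel.impl.DefaultProducer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CustomProducer extends DefaultProducer {
    private static final Logger LOG = LoggerFactory.getLogger(CustomProducer.class);

    public CustomProducer(Endpoint endpoint) {
        super(endpoint);
    }

    public void process(Exchange exchange) throws Exception {
        // Deliver the exchange to the target system; this sketch only logs the body.
        String body = exchange.getIn().getBody(String.class);
        LOG.info("CustomProducer sending body: {}", body);
    }
}

class CustomConsumer extends DefaultConsumer {
    public CustomConsumer(Endpoint endpoint, Processor processor) {
        super(endpoint, processor);
    }

    // Hypothetical callback invoked by the component when an external event arrives.
    void onEvent(Object payload) throws Exception {
        Exchange exchange = getEndpoint().createExchange();
        exchange.getIn().setBody(payload);
        // Hand the exchange to the first processor in the route.
        getProcessor().process(exchange);
    }
}
```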
Chapter 3. Creating a custom Java runtime environment for modular applications | Chapter 3. Creating a custom Java runtime environment for modular applications You can create a custom Java runtime environment from a modular application by using the jlink tool. Prerequisites Install Installing Red Hat build of OpenJDK on RHEL using an archive . Note For best results, use portable Red Hat binaries as a basis for a Jlink runtime, because these binaries contain bundled libraries. Procedure Create a simple Hello World application by using the Logger class. Check the base Red Hat build of OpenJDK 21 binary exists in the jdk-17 folder: Create a directory for your application: Create hello-example/sample/HelloWorld.java file with the following content: package sample; import java.util.logging.Logger; public class HelloWorld { private static final Logger LOG = Logger.getLogger(HelloWorld.class.getName()); public static void main(String[] args) { LOG.info("Hello World!"); } } Create a file called hello-example/module-info.java and include the following code in the file: module sample { requires java.logging; } Compile your application: Run your application without a custom JRE: The example shows the base Red Hat build of OpenJDK requiring 311 MB to run a single class. (Optional) You can inspect the Red Hat build of OpenJDK and see many non-required modules for your application: This sample Hello World application has very few dependencies. You can use jlink to create custom runtime images for your application. With these images you can run your application with only the required Red Hat build of OpenJDK dependencies. Create your application module: Create a custom JRE with the required modules and a custom application launcher for your application: List the modules of the produced custom JRE. Observe that only a fraction of the original Red Hat build of OpenJDK remains. Note Red Hat build of OpenJDK reduces the size of your custom Java runtime image from a 313 M runtime image to a 50 M runtime image. Launch the application using the hello launcher: The generated JRE with your sample application does not have any other dependencies besides java.base , java.logging , and sample module. You can distribute your application that is bundled with the custom runtime in custom-runtime . This custom runtime includes your application. Note You must rebuild the custom Java runtime images for your application with every security update of your base Red Hat build of OpenJDK. Revised on 2024-05-09 14:53:28 UTC | [
"ls jdk-17 bin conf demo include jmods legal lib man NEWS release ./jdk-17/bin/java -version openjdk version \"17.0.10\" 2021-01-19 LTS OpenJDK Runtime Environment 18.9 (build 17.0.10+9-LTS) OpenJDK 64-Bit Server VM 18.9 (build 17.0.10+9-LTS, mixed mode)",
"mkdir -p hello-example/sample",
"package sample; import java.util.logging.Logger; public class HelloWorld { private static final Logger LOG = Logger.getLogger(HelloWorld.class.getName()); public static void main(String[] args) { LOG.info(\"Hello World!\"); } }",
"module sample { requires java.logging; }",
"./jdk-17/bin/javac -d example USD(find hello-example -name \\*.java)",
"./jdk-17/bin/java -cp example sample.HelloWorld Mar 09, 2021 10:48:59 AM sample.HelloWorld main INFO: Hello World!",
"du -sh jdk-17/ 313M jdk-17/",
"./jdk-17/bin/java --list-modules [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected]",
"mkdir sample-module ./jdk-17/bin/jmod create --class-path example/ --main-class sample.HelloWorld --module-version 1.0.0 -p example sample-module/hello.jmod",
"./jdk-17/bin/jlink --launcher hello=sample/sample.HelloWorld --module-path sample-module --add-modules sample --output custom-runtime",
"du -sh custom-runtime 50M custom-runtime/ ./custom-runtime/bin/java --list-modules [email protected] [email protected] [email protected]",
"./custom-runtime/bin/hello Jan 14, 2021 12:13:26 PM HelloWorld main INFO: Hello World!"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/using_jlink_to_customize_java_runtime_environment/creating-custom-jre-modular |
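Two optional refinements to the procedure above, shown as a sketch: jdeps can confirm which modules the compiled class actually needs before you run jlink, and additional jlink flags can shrink the image further. The exact size savings vary by platform and JDK build:

```console
# Confirm the application's module dependencies (expect java.base and java.logging here)
./jdk-17/bin/jdeps --list-deps example/sample/HelloWorld.class

# Build a smaller runtime by stripping debug symbols, header files, and man pages
./jdk-17/bin/jlink --launcher hello=sample/sample.HelloWorld \
    --module-path sample-module --add-modules sample \
    --strip-debug --no-header-files --no-man-pages --compress=2 \
    --output custom-runtime-small
```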
Chapter 1. Data Grid 8 | Chapter 1. Data Grid 8 Start the journey of migration to Data Grid 8 with a brief overview and a look at some of the basics. 1.1. Migration to Data Grid 8 Data Grid 8 introduces significant changes from Data Grid versions, including a whole new architecture for server deployments. While this makes certain aspects of migration more challenging for existing environments, the Data Grid team believe that these changes benefit users by reducing deployment complexity and administrative overhead. In comparison to versions, migration to Data Grid 8 means you gain: Cloud-native design built for container platforms. Lighter memory footprint and less overall resource usage. Faster start times. Increased security through smaller attack surface. Better integration with Red Hat technologies and solutions. And Data Grid 8 continues to give you the best possible in-memory datastorage capabilities built from tried and trusted, open-source technology. 1.2. Migration paths This documentation focuses on Data Grid 7.3 to Data Grid 8 migration but is still applicable for 7.x versions, starting from 7.0.1. If you are planning a migration from Data Grid 6, this document might not capture everything you need. You should contact Red Hat support for advice specific to your deployment before migrating. As always, please let us know if we can help you by improving this documentation. 1.3. Component downloads To start using Data Grid 8, you either: Download components from the Red Hat customer portal if you are installing Data Grid on bare metal or other host environment. Create an Data Grid Operator subscription if you are running on OpenShift. This following information describes the available component downloads for bare metal deployments, which are different to versions of Data Grid. Also see: Data Grid on OpenShift Migration Data Grid 8 Supported Configurations Maven repository Data Grid 8 no longer provides separate downloads from the Red Hat customer portal for the following components: Data Grid core libraries to create embedded caches in custom applications, referred to as "Library Mode" in versions. Hot Rod Java client. Utilities such as StoreMigrator . Instead of making these components available as downloads, Data Grid provides Java artifacts through a Maven repository. This change means that you can use Maven to centrally manage dependencies, which provides better control over dependencies across projects. You can download the Data Grid Maven repository from the customer portal or pull Data Grid dependencies from the public Red Hat Enterprise Maven repository. Instructions for both methods are available in the Data Grid documentation. Configuring the Data Grid Maven Repository Data Grid Server Data Grid Server is distributed as an archive that you can download and extract to host file systems. The archive distribution contains the following top-level folders: ├── bin 1 ├── boot 2 ├── docs 3 ├── lib 4 ├── server 5 └── static 6 1 Scripts to start and manage Data Grid Server as well as the Data Grid Command Line Interface (CLI). 2 Boot libraries. 3 Resources to help you configure and run Data Grid Server. 4 Run-time libraries for Data Grid Server. Note that this folder is intended for internal code only, not custom code libraries. 5 Root directory for Data Grid Server instances. 6 Static resources for Data Grid Console. The server folder is the root directory for Data Grid Server instances and contains subdirectories for custom code libraries, configuration files, and data. 
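After extracting the server archive, you would typically start an instance from the bin directory shown in the layout above. A minimal sketch; the archive and directory names are illustrative and depend on the exact version you download:

```console
# Extract the distribution and start a server with the default configuration
unzip redhat-datagrid-8-server.zip
cd redhat-datagrid-server
./bin/server.sh

# In a second terminal, open the command line interface,
# then run the connect command at the CLI prompt
./bin/cli.sh
```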
You can find more information about the filesystem and contents of the distributions in the Data Grid Server Guide . Data Grid Server Filesystem Data Grid Server README Modules for JBoss EAP You can use the modules for Red Hat JBoss EAP (EAP) to embed Data Grid caching functionality in your EAP applications. Important In EAP 7.4 applications can directly handle the infinispan subsystem without the need to separately install Data Grid modules. After EAP 7.4 GA is released, Data Grid will no longer provide EAP modules for download. Red Hat still offers support if you want to build and use your own Data Grid modules. However, Red Hat recommends that you use Data Grid APIs directly with EAP 7.4 because modules: Cannot use centrally managed Data Grid configuration that is shared across EAP applications. To use modules, you need to store configuration inside the application JAR or WAR. Often result in Java classloading issues that require debugging and additional overhead to implement. You can find more information about the EAP modules that Data Grid provides in the Embedding Data Grid in Java Applications . Data Grid Modules for Red Hat JBoss EAP Tomcat session client The Tomcat session client lets you externalize HTTP sessions from JBoss Web Server (JWS) applications to Data Grid via the Apache Tomcat org.apache.catalina.Manager interface. Hot Rod Node.js client The Hot Rod Node.js client is a reference JavaScript implementation for use with Data Grid Server clusters. Hot Rod Node.js Client API Source code Uncompiled source code for each Data Grid release. | [
"├── bin 1 ├── boot 2 ├── docs 3 ├── lib 4 ├── server 5 └── static 6"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/migrating_to_data_grid_8/rhdg-migration |
Appendix B. Revision History | Appendix B. Revision History Revision History Revision 6-9.3 Thu 25 May 2017 Vladimir Slavik Removal of section on distributed compiling. Revision 6-9.2 Mon 3 April 2017 Robert Kratky Release of the Developer Guide for Red Hat Enterprise Linux 6.9 Revision 2-60 Wed 4 May 2016 Robert Kratky Release of the Developer Guide for Red Hat Enterprise Linux 6.8 Revision 2-56 Tue Jul 6 2015 Robert Kratky Release of the Developer Guide for Red Hat Enterprise Linux 6.7 Revision 2-55 Wed Apr 15 2015 Robert Kratky Release of the Developer Guide for Red Hat Enterprise Linux 6.7 Beta Revision 2-54 Tue Dec 16 2014 Robert Kratky Update to sort order on the Red Hat Customer Portal. Revision 2-52 Wed Nov 11 2014 Robert Kratky Re-release for RHSCL 1.2 and DTS 3.0. Revision 2-51 Fri Oct 10 2014 Robert Kratky Release of the Developer Guide for Red Hat Enterprise Linux 6.6 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/appe-publican-revision_history
Chapter 1. Understanding RHTAP's foundations | Chapter 1. Understanding RHTAP's foundations Discover the robust foundation of Red Hat Trusted Application Pipeline (RHTAP), a framework designed to revolutionize cybersecurity practices across the software development lifecycle (SDLC). With RHTAP, you embark on a journey that transcends traditional security measures, integrating cutting-edge solutions and a DevSecOps CI/CD framework from inception to deployment. This proactive strategy accelerates developer onboarding, process acceleration, and the embedding of security from the beginning. 1.1. Secure CI/CD Framework Central to RHTAP is its pioneering secure CI/CD framework, designed to uphold highest standards in software development. By aligning with the Supply-chain Levels for Software Artifacts (SLSA) level 3 , RHTAP ensures that every line of code contributes to a fortress of security, significantly enhancing early vulnerability detection and mitigation. 1.2. Deep dive into RHTAP's security tools Ensuring the security of software throughout its development is essential for mitigating potential vulnerabilities. The RHTAP leverages a powerful suite of tools designed to bolster your security measures. Let's explore how RHTAP utilizes its components - RHDH, RHTAS, and RHTPA - to provide a robust defense against security threats. Red Hat Developer Hub (RHDH) Red Hat Developer Hub serves as a self-service portal for developers. It streamlines the onboarding process and offers access to a wealth of resources and tools necessary for secure software development. This platform encourages best practices and facilitates the integration of security measures right from the start of the development process. Red Hat Trusted Artifact Signer (RHTAS) Red Hat Trusted Artifact Signer focuses on enhancing software integrity through signature and attestation mechanisms. By ensuring that every piece of code and every artifact is signed and attested, RHTAS provides a verifiable trust chain that confirms the authenticity and security of the software components being used. Red Hat Trusted Profile Analyzer (RHTPA) Red Hat Trusted Profile Analyzer, deals with the generation and management of Software Bills of Materials (SBOMs). SBOMs are critical for maintaining transparency and compliance, as they provide a detailed list of all components, libraries, and dependencies included in a software product. RHTPA automates the creation of SBOMs, ensuring that stakeholders have accurate and up-to-date information on the software's composition. 1.3. Leveraging ready-to-use software templates RHTAP offers ready-to-use software templates, embedding security directly into the development workflow, thus allowing developers to concentrate on innovation while minimizing security related distractions. These ready-to-use software templates are fully customizable, ensuring they meet your organization's unique requirements seamlessly. Benefit from integrated features right out of the box: Red Hat Advanced Cluster Security (RHACS): Strengthens your deployments against vulnerabilities. Quay: Provides a secure repository for your container images. Tekton pipelines: Enables precision in automated deployments. GitOps: Maintains consistency and automated configuration management. 1.4. 
Key security practices RHTAP incorporates these tools to address specific security concerns effectively: Vulnerability Scanning: With each pull request, RHTAP conducts thorough scans with your CVE scanner of choice, such as Advanced Cluster Security, to identify and address vulnerabilities at the earliest possible stage. SBOM Generation: RHTAP's automated generation of SBOMs plays a vital role in maintaining software transparency and compliance. By providing a comprehensive inventory of software components, organizations can better manage and secure their software supply chain. Container Image Security: RHTAP verifies that container images comply with SLSA (Supply-chain Levels for Software Artifacts) guidelines. This is achieved through an enterprise contract that includes over 41 rules, ensuring that the container images used in the development process meet stringent security standards. 1.5. The path forward Embracing a DevSecOps mindset and utilizing RHTAP promotes a secure and efficient development environment. This ongoing journey of assessment and elevation equips organizations to address both current and future cybersecurity challenges effectively. step Your path to secure application development Additional resources For information on Red Hat Developer Hub, see Getting started with Red Hat Developer Hub guide . For information on Red Hat Trusted Artifact Signer, see RHTAS Deployment guide. For information on Red Hat Trusted Profile Analyzer, see Quick Start guide. | null | https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.0/html/getting_started_with_red_hat_trusted_application_pipeline/understanding-rhtap-foundations_default |
Data Grid downloads | Data Grid downloads Access the Data Grid Software Downloads on the Red Hat customer portal. Note You must have a Red Hat account to access and download Data Grid software. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/configuring_data_grid_caches/rhdg-downloads_datagrid |
2.8. Additional Resources | 2.8. Additional Resources man(1) man page - Describes man pages and how to find them. NetworkManager(8) man page - Describes the network management daemon. NetworkManager.conf(5) man page - Describes the NetworkManager configuration file. /usr/share/doc/initscripts- version /sysconfig.txt - Describes ifcfg configuration files and their directives as understood by the legacy network service. /usr/share/doc/initscripts- version /examples/networking/ - A directory containing example configuration files. ifcfg(8) man page - Describes briefly the ifcfg command. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-getting_started_with_networkmanager-additional_resources |
Release notes for Red Hat build of OpenJDK 8.0.342 and 8.0.345 | Release notes for Red Hat build of OpenJDK 8.0.342 and 8.0.345 Red Hat build of OpenJDK 8 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.342_and_8.0.345/index |
Chapter 18. API reference | Chapter 18. API reference 18.1. 5.6 Logging API reference 18.1.1. Logging 5.6 API reference 18.1.1.1. ClusterLogForwarder ClusterLogForwarder is an API to configure forwarding logs. You configure forwarding by specifying a list of pipelines , which forward from a set of named inputs to a set of named outputs. There are built-in input names for common log categories, and you can define custom inputs to do additional filtering. There is a built-in output name for the default openshift log store, but you can define your own outputs with a URL and other connection information to forward logs to other stores or processors, inside or outside the cluster. For more details see the documentation on the API fields. Property Type Description spec object Specification of the desired behavior of ClusterLogForwarder status object Status of the ClusterLogForwarder 18.1.1.1.1. .spec 18.1.1.1.1.1. Description ClusterLogForwarderSpec defines how logs should be forwarded to remote targets. 18.1.1.1.1.1.1. Type object Property Type Description inputs array (optional) Inputs are named filters for log messages to be forwarded. outputDefaults object (optional) DEPRECATED OutputDefaults specify forwarder config explicitly for the default store. outputs array (optional) Outputs are named destinations for log messages. pipelines array Pipelines forward the messages selected by a set of inputs to a set of outputs. 18.1.1.1.2. .spec.inputs[] 18.1.1.1.2.1. Description InputSpec defines a selector of log messages. 18.1.1.1.2.1.1. Type array Property Type Description application object (optional) Application, if present, enables named set of application logs that name string Name used to refer to the input of a pipeline . 18.1.1.1.3. .spec.inputs[].application 18.1.1.1.3.1. Description Application log selector. All conditions in the selector must be satisfied (logical AND) to select logs. 18.1.1.1.3.1.1. Type object Property Type Description namespaces array (optional) Namespaces from which to collect application logs. selector object (optional) Selector for logs from pods with matching labels. 18.1.1.1.4. .spec.inputs[].application.namespaces[] 18.1.1.1.4.1. Description 18.1.1.1.4.1.1. Type array 18.1.1.1.5. .spec.inputs[].application.selector 18.1.1.1.5.1. Description A label selector is a label query over a set of resources. 18.1.1.1.5.1.1. Type object Property Type Description matchLabels object (optional) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels 18.1.1.1.6. .spec.inputs[].application.selector.matchLabels 18.1.1.1.6.1. Description 18.1.1.1.6.1.1. Type object 18.1.1.1.7. .spec.outputDefaults 18.1.1.1.7.1. Description 18.1.1.1.7.1.1. Type object Property Type Description elasticsearch object (optional) Elasticsearch OutputSpec default values 18.1.1.1.8. .spec.outputDefaults.elasticsearch 18.1.1.1.8.1. Description ElasticsearchStructuredSpec is spec related to structured log changes to determine the elasticsearch index 18.1.1.1.8.1.1. Type object Property Type Description enableStructuredContainerLogs bool (optional) EnableStructuredContainerLogs enables multi-container structured logs to allow structuredTypeKey string (optional) StructuredTypeKey specifies the metadata key to be used as name of elasticsearch index structuredTypeName string (optional) StructuredTypeName specifies the name of elasticsearch schema 18.1.1.1.9. .spec.outputs[] 18.1.1.1.9.1. Description Output defines a destination for log messages. 18.1.1.1.9.1.1. 
Type array Property Type Description syslog object (optional) fluentdForward object (optional) elasticsearch object (optional) kafka object (optional) cloudwatch object (optional) loki object (optional) googleCloudLogging object (optional) splunk object (optional) name string Name used to refer to the output from a pipeline . secret object (optional) Secret for authentication. tls object TLS contains settings for controlling options on TLS client connections. type string Type of output plugin. url string (optional) URL to send log records to. 18.1.1.1.10. .spec.outputs[].secret 18.1.1.1.10.1. Description OutputSecretSpec is a secret reference containing name only, no namespace. 18.1.1.1.10.1.1. Type object Property Type Description name string Name of a secret in the namespace configured for log forwarder secrets. 18.1.1.1.11. .spec.outputs[].tls 18.1.1.1.11.1. Description OutputTLSSpec contains options for TLS connections that are agnostic to the output type. 18.1.1.1.11.1.1. Type object Property Type Description insecureSkipVerify bool If InsecureSkipVerify is true, then the TLS client will be configured to ignore errors with certificates. 18.1.1.1.12. .spec.pipelines[] 18.1.1.1.12.1. Description PipelinesSpec link a set of inputs to a set of outputs. 18.1.1.1.12.1.1. Type array Property Type Description detectMultilineErrors bool (optional) DetectMultilineErrors enables multiline error detection of container logs inputRefs array InputRefs lists the names ( input.name ) of inputs to this pipeline. labels object (optional) Labels applied to log records passing through this pipeline. name string (optional) Name is optional, but must be unique in the pipelines list if provided. outputRefs array OutputRefs lists the names ( output.name ) of outputs from this pipeline. parse string (optional) Parse enables parsing of log entries into structured logs 18.1.1.1.13. .spec.pipelines[].inputRefs[] 18.1.1.1.13.1. Description 18.1.1.1.13.1.1. Type array 18.1.1.1.14. .spec.pipelines[].labels 18.1.1.1.14.1. Description 18.1.1.1.14.1.1. Type object 18.1.1.1.15. .spec.pipelines[].outputRefs[] 18.1.1.1.15.1. Description 18.1.1.1.15.1.1. Type array 18.1.1.1.16. .status 18.1.1.1.16.1. Description ClusterLogForwarderStatus defines the observed state of ClusterLogForwarder 18.1.1.1.16.1.1. Type object Property Type Description conditions object Conditions of the log forwarder. inputs Conditions Inputs maps input name to condition of the input. outputs Conditions Outputs maps output name to condition of the output. pipelines Conditions Pipelines maps pipeline name to condition of the pipeline. 18.1.1.1.17. .status.conditions 18.1.1.1.17.1. Description 18.1.1.1.17.1.1. Type object 18.1.1.1.18. .status.inputs 18.1.1.1.18.1. Description 18.1.1.1.18.1.1. Type Conditions 18.1.1.1.19. .status.outputs 18.1.1.1.19.1. Description 18.1.1.1.19.1.1. Type Conditions 18.1.1.1.20. .status.pipelines 18.1.1.1.20.1. Description 18.1.1.1.20.1.1. Type Conditions== ClusterLogging A Red Hat OpenShift Logging instance. ClusterLogging is the Schema for the clusterloggings API Property Type Description spec object Specification of the desired behavior of ClusterLogging status object Status defines the observed state of ClusterLogging 18.1.1.1.21. .spec 18.1.1.1.21.1. Description ClusterLoggingSpec defines the desired state of ClusterLogging 18.1.1.1.21.1.1. Type object Property Type Description collection object Specification of the Collection component for the cluster curation object (DEPRECATED) (optional) Deprecated. 
Specification of the Curation component for the cluster forwarder object (DEPRECATED) (optional) Deprecated. Specification for Forwarder component for the cluster logStore object (optional) Specification of the Log Storage component for the cluster managementState string (optional) Indicator if the resource is 'Managed' or 'Unmanaged' by the operator visualization object (optional) Specification of the Visualization component for the cluster 18.1.1.1.22. .spec.collection 18.1.1.1.22.1. Description This is the struct that will contain information pertinent to Log and event collection 18.1.1.1.22.1.1. Type object Property Type Description resources object (optional) The resource requirements for the collector nodeSelector object (optional) Define which Nodes the Pods are scheduled on. tolerations array (optional) Define the tolerations the Pods will accept fluentd object (optional) Fluentd represents the configuration for forwarders of type fluentd. logs object (DEPRECATED) (optional) Deprecated. Specification of Log Collection for the cluster type string (optional) The type of Log Collection to configure 18.1.1.1.23. .spec.collection.fluentd 18.1.1.1.23.1. Description FluentdForwarderSpec represents the configuration for forwarders of type fluentd. 18.1.1.1.23.1.1. Type object Property Type Description buffer object inFile object 18.1.1.1.24. .spec.collection.fluentd.buffer 18.1.1.1.24.1. Description FluentdBufferSpec represents a subset of fluentd buffer parameters to tune the buffer configuration for all fluentd outputs. It supports a subset of parameters to configure buffer and queue sizing, flush operations and retry flushing. For general parameters refer to: https://docs.fluentd.org/configuration/buffer-section#buffering-parameters For flush parameters refer to: https://docs.fluentd.org/configuration/buffer-section#flushing-parameters For retry parameters refer to: https://docs.fluentd.org/configuration/buffer-section#retries-parameters 18.1.1.1.24.1.1. Type object Property Type Description chunkLimitSize string (optional) ChunkLimitSize represents the maximum size of each chunk. Events will be flushInterval string (optional) FlushInterval represents the time duration to wait between two consecutive flush flushMode string (optional) FlushMode represents the mode of the flushing thread to write chunks. The mode flushThreadCount int (optional) FlushThreadCount reprents the number of threads used by the fluentd buffer overflowAction string (optional) OverflowAction represents the action for the fluentd buffer plugin to retryMaxInterval string (optional) RetryMaxInterval represents the maximum time interval for exponential backoff retryTimeout string (optional) RetryTimeout represents the maximum time interval to attempt retries before giving up retryType string (optional) RetryType represents the type of retrying flush operations. Flush operations can retryWait string (optional) RetryWait represents the time duration between two consecutive retries to flush totalLimitSize string (optional) TotalLimitSize represents the threshold of node space allowed per fluentd 18.1.1.1.25. .spec.collection.fluentd.inFile 18.1.1.1.25.1. Description FluentdInFileSpec represents a subset of fluentd in-tail plugin parameters to tune the configuration for all fluentd in-tail inputs. For general parameters refer to: https://docs.fluentd.org/input/tail#parameters 18.1.1.1.25.1.1. 
Type object Property Type Description readLinesLimit int (optional) ReadLinesLimit represents the number of lines to read with each I/O operation 18.1.1.1.26. .spec.collection.logs 18.1.1.1.26.1. Description 18.1.1.1.26.1.1. Type object Property Type Description fluentd object Specification of the Fluentd Log Collection component type string The type of Log Collection to configure 18.1.1.1.27. .spec.collection.logs.fluentd 18.1.1.1.27.1. Description CollectorSpec is spec to define scheduling and resources for a collector 18.1.1.1.27.1.1. Type object Property Type Description nodeSelector object (optional) Define which Nodes the Pods are scheduled on. resources object (optional) The resource requirements for the collector tolerations array (optional) Define the tolerations the Pods will accept 18.1.1.1.28. .spec.collection.logs.fluentd.nodeSelector 18.1.1.1.28.1. Description 18.1.1.1.28.1.1. Type object 18.1.1.1.29. .spec.collection.logs.fluentd.resources 18.1.1.1.29.1. Description 18.1.1.1.29.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 18.1.1.1.30. .spec.collection.logs.fluentd.resources.limits 18.1.1.1.30.1. Description 18.1.1.1.30.1.1. Type object 18.1.1.1.31. .spec.collection.logs.fluentd.resources.requests 18.1.1.1.31.1. Description 18.1.1.1.31.1.1. Type object 18.1.1.1.32. .spec.collection.logs.fluentd.tolerations[] 18.1.1.1.32.1. Description 18.1.1.1.32.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 18.1.1.1.33. .spec.collection.logs.fluentd.tolerations[].tolerationSeconds 18.1.1.1.33.1. Description 18.1.1.1.33.1.1. Type int 18.1.1.1.34. .spec.curation 18.1.1.1.34.1. Description This is the struct that will contain information pertinent to Log curation (Curator) 18.1.1.1.34.1.1. Type object Property Type Description curator object The specification of curation to configure type string The kind of curation to configure 18.1.1.1.35. .spec.curation.curator 18.1.1.1.35.1. Description 18.1.1.1.35.1.1. Type object Property Type Description nodeSelector object Define which Nodes the Pods are scheduled on. resources object (optional) The resource requirements for Curator schedule string The cron schedule that the Curator job is run. Defaults to "30 3 * * *" tolerations array 18.1.1.1.36. .spec.curation.curator.nodeSelector 18.1.1.1.36.1. Description 18.1.1.1.36.1.1. Type object 18.1.1.1.37. .spec.curation.curator.resources 18.1.1.1.37.1. Description 18.1.1.1.37.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 18.1.1.1.38. .spec.curation.curator.resources.limits 18.1.1.1.38.1. Description 18.1.1.1.38.1.1. Type object 18.1.1.1.39. .spec.curation.curator.resources.requests 18.1.1.1.39.1. Description 18.1.1.1.39.1.1. Type object 18.1.1.1.40. 
.spec.curation.curator.tolerations[] 18.1.1.1.40.1. Description 18.1.1.1.40.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 18.1.1.1.41. .spec.curation.curator.tolerations[].tolerationSeconds 18.1.1.1.41.1. Description 18.1.1.1.41.1.1. Type int 18.1.1.1.42. .spec.forwarder 18.1.1.1.42.1. Description ForwarderSpec contains global tuning parameters for specific forwarder implementations. This field is not required for general use, it allows performance tuning by users familiar with the underlying forwarder technology. Currently supported: fluentd . 18.1.1.1.42.1.1. Type object Property Type Description fluentd object 18.1.1.1.43. .spec.forwarder.fluentd 18.1.1.1.43.1. Description FluentdForwarderSpec represents the configuration for forwarders of type fluentd. 18.1.1.1.43.1.1. Type object Property Type Description buffer object inFile object 18.1.1.1.44. .spec.forwarder.fluentd.buffer 18.1.1.1.44.1. Description FluentdBufferSpec represents a subset of fluentd buffer parameters to tune the buffer configuration for all fluentd outputs. It supports a subset of parameters to configure buffer and queue sizing, flush operations and retry flushing. For general parameters refer to: https://docs.fluentd.org/configuration/buffer-section#buffering-parameters For flush parameters refer to: https://docs.fluentd.org/configuration/buffer-section#flushing-parameters For retry parameters refer to: https://docs.fluentd.org/configuration/buffer-section#retries-parameters 18.1.1.1.44.1.1. Type object Property Type Description chunkLimitSize string (optional) ChunkLimitSize represents the maximum size of each chunk. Events will be flushInterval string (optional) FlushInterval represents the time duration to wait between two consecutive flush flushMode string (optional) FlushMode represents the mode of the flushing thread to write chunks. The mode flushThreadCount int (optional) FlushThreadCount reprents the number of threads used by the fluentd buffer overflowAction string (optional) OverflowAction represents the action for the fluentd buffer plugin to retryMaxInterval string (optional) RetryMaxInterval represents the maximum time interval for exponential backoff retryTimeout string (optional) RetryTimeout represents the maximum time interval to attempt retries before giving up retryType string (optional) RetryType represents the type of retrying flush operations. Flush operations can retryWait string (optional) RetryWait represents the time duration between two consecutive retries to flush totalLimitSize string (optional) TotalLimitSize represents the threshold of node space allowed per fluentd 18.1.1.1.45. .spec.forwarder.fluentd.inFile 18.1.1.1.45.1. Description FluentdInFileSpec represents a subset of fluentd in-tail plugin parameters to tune the configuration for all fluentd in-tail inputs. For general parameters refer to: https://docs.fluentd.org/input/tail#parameters 18.1.1.1.45.1.1. 
Type object Property Type Description readLinesLimit int (optional) ReadLinesLimit represents the number of lines to read with each I/O operation 18.1.1.1.46. .spec.logStore 18.1.1.1.46.1. Description The LogStoreSpec contains information about how logs are stored. 18.1.1.1.46.1.1. Type object Property Type Description elasticsearch object Specification of the Elasticsearch Log Store component lokistack object LokiStack contains information about which LokiStack to use for log storage if Type is set to LogStoreTypeLokiStack. retentionPolicy object (optional) Retention policy defines the maximum age for an index after which it should be deleted type string The Type of Log Storage to configure. The operator currently supports either using ElasticSearch 18.1.1.1.47. .spec.logStore.elasticsearch 18.1.1.1.47.1. Description 18.1.1.1.47.1.1. Type object Property Type Description nodeCount int Number of nodes to deploy for Elasticsearch nodeSelector object Define which Nodes the Pods are scheduled on. proxy object Specification of the Elasticsearch Proxy component redundancyPolicy string (optional) resources object (optional) The resource requirements for Elasticsearch storage object (optional) The storage specification for Elasticsearch data nodes tolerations array 18.1.1.1.48. .spec.logStore.elasticsearch.nodeSelector 18.1.1.1.48.1. Description 18.1.1.1.48.1.1. Type object 18.1.1.1.49. .spec.logStore.elasticsearch.proxy 18.1.1.1.49.1. Description 18.1.1.1.49.1.1. Type object Property Type Description resources object 18.1.1.1.50. .spec.logStore.elasticsearch.proxy.resources 18.1.1.1.50.1. Description 18.1.1.1.50.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 18.1.1.1.51. .spec.logStore.elasticsearch.proxy.resources.limits 18.1.1.1.51.1. Description 18.1.1.1.51.1.1. Type object 18.1.1.1.52. .spec.logStore.elasticsearch.proxy.resources.requests 18.1.1.1.52.1. Description 18.1.1.1.52.1.1. Type object 18.1.1.1.53. .spec.logStore.elasticsearch.resources 18.1.1.1.53.1. Description 18.1.1.1.53.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 18.1.1.1.54. .spec.logStore.elasticsearch.resources.limits 18.1.1.1.54.1. Description 18.1.1.1.54.1.1. Type object 18.1.1.1.55. .spec.logStore.elasticsearch.resources.requests 18.1.1.1.55.1. Description 18.1.1.1.55.1.1. Type object 18.1.1.1.56. .spec.logStore.elasticsearch.storage 18.1.1.1.56.1. Description 18.1.1.1.56.1.1. Type object Property Type Description size object The max storage capacity for the node to provision. storageClassName string (optional) The name of the storage class to use with creating the node's PVC. 18.1.1.1.57. .spec.logStore.elasticsearch.storage.size 18.1.1.1.57.1. Description 18.1.1.1.57.1.1. Type object Property Type Description Format string Change Format at will. See the comment for Canonicalize for d object d is the quantity in inf.Dec form if d.Dec != nil i int i is the quantity in int64 scaled form, if d.Dec == nil s string s is the generated value of this quantity to avoid recalculation 18.1.1.1.58. .spec.logStore.elasticsearch.storage.size.d 18.1.1.1.58.1. Description 18.1.1.1.58.1.1. Type object Property Type Description Dec object 18.1.1.1.59. 
.spec.logStore.elasticsearch.storage.size.d.Dec 18.1.1.1.59.1. Description 18.1.1.1.59.1.1. Type object Property Type Description scale int unscaled object 18.1.1.1.60. .spec.logStore.elasticsearch.storage.size.d.Dec.unscaled 18.1.1.1.60.1. Description 18.1.1.1.60.1.1. Type object Property Type Description abs Word sign neg bool 18.1.1.1.61. .spec.logStore.elasticsearch.storage.size.d.Dec.unscaled.abs 18.1.1.1.61.1. Description 18.1.1.1.61.1.1. Type Word 18.1.1.1.62. .spec.logStore.elasticsearch.storage.size.i 18.1.1.1.62.1. Description 18.1.1.1.62.1.1. Type int Property Type Description scale int value int 18.1.1.1.63. .spec.logStore.elasticsearch.tolerations[] 18.1.1.1.63.1. Description 18.1.1.1.63.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 18.1.1.1.64. .spec.logStore.elasticsearch.tolerations[].tolerationSeconds 18.1.1.1.64.1. Description 18.1.1.1.64.1.1. Type int 18.1.1.1.65. .spec.logStore.lokistack 18.1.1.1.65.1. Description LokiStackStoreSpec is used to set up cluster-logging to use a LokiStack as logging storage. It points to an existing LokiStack in the same namespace. 18.1.1.1.65.1.1. Type object Property Type Description name string Name of the LokiStack resource. 18.1.1.1.66. .spec.logStore.retentionPolicy 18.1.1.1.66.1. Description 18.1.1.1.66.1.1. Type object Property Type Description application object audit object infra object 18.1.1.1.67. .spec.logStore.retentionPolicy.application 18.1.1.1.67.1. Description 18.1.1.1.67.1.1. Type object Property Type Description diskThresholdPercent int (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) maxAge string (optional) namespaceSpec array (optional) The per namespace specification to delete documents older than a given minimum age pruneNamespacesInterval string (optional) How often to run a new prune-namespaces job 18.1.1.1.68. .spec.logStore.retentionPolicy.application.namespaceSpec[] 18.1.1.1.68.1. Description 18.1.1.1.68.1.1. Type array Property Type Description minAge string (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) namespace string Target Namespace to delete logs older than MinAge (defaults to 7d) 18.1.1.1.69. .spec.logStore.retentionPolicy.audit 18.1.1.1.69.1. Description 18.1.1.1.69.1.1. Type object Property Type Description diskThresholdPercent int (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) maxAge string (optional) namespaceSpec array (optional) The per namespace specification to delete documents older than a given minimum age pruneNamespacesInterval string (optional) How often to run a new prune-namespaces job 18.1.1.1.70. .spec.logStore.retentionPolicy.audit.namespaceSpec[] 18.1.1.1.70.1. Description 18.1.1.1.70.1.1. Type array Property Type Description minAge string (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) namespace string Target Namespace to delete logs older than MinAge (defaults to 7d) 18.1.1.1.71. 
.spec.logStore.retentionPolicy.infra 18.1.1.1.71.1. Description 18.1.1.1.71.1.1. Type object Property Type Description diskThresholdPercent int (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) maxAge string (optional) namespaceSpec array (optional) The per namespace specification to delete documents older than a given minimum age pruneNamespacesInterval string (optional) How often to run a new prune-namespaces job 18.1.1.1.72. .spec.logStore.retentionPolicy.infra.namespaceSpec[] 18.1.1.1.72.1. Description 18.1.1.1.72.1.1. Type array Property Type Description minAge string (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) namespace string Target Namespace to delete logs older than MinAge (defaults to 7d) 18.1.1.1.73. .spec.visualization 18.1.1.1.73.1. Description This is the struct that will contain information pertinent to Log visualization (Kibana) 18.1.1.1.73.1.1. Type object Property Type Description kibana object Specification of the Kibana Visualization component type string The type of Visualization to configure 18.1.1.1.74. .spec.visualization.kibana 18.1.1.1.74.1. Description 18.1.1.1.74.1.1. Type object Property Type Description nodeSelector object Define which Nodes the Pods are scheduled on. proxy object Specification of the Kibana Proxy component replicas int Number of instances to deploy for a Kibana deployment resources object (optional) The resource requirements for Kibana tolerations array 18.1.1.1.75. .spec.visualization.kibana.nodeSelector 18.1.1.1.75.1. Description 18.1.1.1.75.1.1. Type object 18.1.1.1.76. .spec.visualization.kibana.proxy 18.1.1.1.76.1. Description 18.1.1.1.76.1.1. Type object Property Type Description resources object 18.1.1.1.77. .spec.visualization.kibana.proxy.resources 18.1.1.1.77.1. Description 18.1.1.1.77.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 18.1.1.1.78. .spec.visualization.kibana.proxy.resources.limits 18.1.1.1.78.1. Description 18.1.1.1.78.1.1. Type object 18.1.1.1.79. .spec.visualization.kibana.proxy.resources.requests 18.1.1.1.79.1. Description 18.1.1.1.79.1.1. Type object 18.1.1.1.80. .spec.visualization.kibana.replicas 18.1.1.1.80.1. Description 18.1.1.1.80.1.1. Type int 18.1.1.1.81. .spec.visualization.kibana.resources 18.1.1.1.81.1. Description 18.1.1.1.81.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 18.1.1.1.82. .spec.visualization.kibana.resources.limits 18.1.1.1.82.1. Description 18.1.1.1.82.1.1. Type object 18.1.1.1.83. .spec.visualization.kibana.resources.requests 18.1.1.1.83.1. Description 18.1.1.1.83.1.1. Type object 18.1.1.1.84. .spec.visualization.kibana.tolerations[] 18.1.1.1.84.1. Description 18.1.1.1.84.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. 
tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 18.1.1.1.85. .spec.visualization.kibana.tolerations[].tolerationSeconds 18.1.1.1.85.1. Description 18.1.1.1.85.1.1. Type int 18.1.1.1.86. .status 18.1.1.1.86.1. Description ClusterLoggingStatus defines the observed state of ClusterLogging 18.1.1.1.86.1.1. Type object Property Type Description collection object (optional) conditions object (optional) curation object (optional) logStore object (optional) visualization object (optional) 18.1.1.1.87. .status.collection 18.1.1.1.87.1. Description 18.1.1.1.87.1.1. Type object Property Type Description logs object (optional) 18.1.1.1.88. .status.collection.logs 18.1.1.1.88.1. Description 18.1.1.1.88.1.1. Type object Property Type Description fluentdStatus object (optional) 18.1.1.1.89. .status.collection.logs.fluentdStatus 18.1.1.1.89.1. Description 18.1.1.1.89.1.1. Type object Property Type Description clusterCondition object (optional) daemonSet string (optional) nodes object (optional) pods string (optional) 18.1.1.1.90. .status.collection.logs.fluentdStatus.clusterCondition 18.1.1.1.90.1. Description operator-sdk generate crds does not allow map-of-slice, must use a named type. 18.1.1.1.90.1.1. Type object 18.1.1.1.91. .status.collection.logs.fluentdStatus.nodes 18.1.1.1.91.1. Description 18.1.1.1.91.1.1. Type object 18.1.1.1.92. .status.conditions 18.1.1.1.92.1. Description 18.1.1.1.92.1.1. Type object 18.1.1.1.93. .status.curation 18.1.1.1.93.1. Description 18.1.1.1.93.1.1. Type object Property Type Description curatorStatus array (optional) 18.1.1.1.94. .status.curation.curatorStatus[] 18.1.1.1.94.1. Description 18.1.1.1.94.1.1. Type array Property Type Description clusterCondition object (optional) cronJobs string (optional) schedules string (optional) suspended bool (optional) 18.1.1.1.95. .status.curation.curatorStatus[].clusterCondition 18.1.1.1.95.1. Description operator-sdk generate crds does not allow map-of-slice, must use a named type. 18.1.1.1.95.1.1. Type object 18.1.1.1.96. .status.logStore 18.1.1.1.96.1. Description 18.1.1.1.96.1.1. Type object Property Type Description elasticsearchStatus array (optional) 18.1.1.1.97. .status.logStore.elasticsearchStatus[] 18.1.1.1.97.1. Description 18.1.1.1.97.1.1. Type array Property Type Description cluster object (optional) clusterConditions object (optional) clusterHealth string (optional) clusterName string (optional) deployments array (optional) nodeConditions object (optional) nodeCount int (optional) pods object (optional) replicaSets array (optional) shardAllocationEnabled string (optional) statefulSets array (optional) 18.1.1.1.98. .status.logStore.elasticsearchStatus[].cluster 18.1.1.1.98.1. Description 18.1.1.1.98.1.1. Type object Property Type Description activePrimaryShards int The number of Active Primary Shards for the Elasticsearch Cluster activeShards int The number of Active Shards for the Elasticsearch Cluster initializingShards int The number of Initializing Shards for the Elasticsearch Cluster numDataNodes int The number of Data Nodes for the Elasticsearch Cluster numNodes int The number of Nodes for the Elasticsearch Cluster pendingTasks int relocatingShards int The number of Relocating Shards for the Elasticsearch Cluster status string The current Status of the Elasticsearch Cluster unassignedShards int The number of Unassigned Shards for the Elasticsearch Cluster 18.1.1.1.99. 
.status.logStore.elasticsearchStatus[].clusterConditions 18.1.1.1.99.1. Description 18.1.1.1.99.1.1. Type object 18.1.1.1.100. .status.logStore.elasticsearchStatus[].deployments[] 18.1.1.1.100.1. Description 18.1.1.1.100.1.1. Type array 18.1.1.1.101. .status.logStore.elasticsearchStatus[].nodeConditions 18.1.1.1.101.1. Description 18.1.1.1.101.1.1. Type object 18.1.1.1.102. .status.logStore.elasticsearchStatus[].pods 18.1.1.1.102.1. Description 18.1.1.1.102.1.1. Type object 18.1.1.1.103. .status.logStore.elasticsearchStatus[].replicaSets[] 18.1.1.1.103.1. Description 18.1.1.1.103.1.1. Type array 18.1.1.1.104. .status.logStore.elasticsearchStatus[].statefulSets[] 18.1.1.1.104.1. Description 18.1.1.1.104.1.1. Type array 18.1.1.1.105. .status.visualization 18.1.1.1.105.1. Description 18.1.1.1.105.1.1. Type object Property Type Description kibanaStatus array (optional) 18.1.1.1.106. .status.visualization.kibanaStatus[] 18.1.1.1.106.1. Description 18.1.1.1.106.1.1. Type array Property Type Description clusterCondition object (optional) deployment string (optional) pods string (optional) The status for each of the Kibana pods for the Visualization component replicaSets array (optional) replicas int (optional) 18.1.1.1.107. .status.visualization.kibanaStatus[].clusterCondition 18.1.1.1.107.1. Description 18.1.1.1.107.1.1. Type object 18.1.1.1.108. .status.visualization.kibanaStatus[].replicaSets[] 18.1.1.1.108.1. Description 18.1.1.1.108.1.1. Type array | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/logging/api-reference |
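To show how the fields above fit together, the following is a minimal ClusterLogForwarder sketch that forwards the built-in application input to a user-defined fluentdForward output alongside the default log store. The output name, URL, and secret name are illustrative assumptions, not values taken from the API reference.

```
cat <<'EOF' | oc apply -f -
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:                          # .spec.outputs[]: named destinations for log messages
    - name: fluentd-remote          # illustrative name
      type: fluentdForward
      url: 'tls://fluentd.example.com:24224'
      secret:
        name: fluentd-remote-secret # .spec.outputs[].secret: name only, no namespace
  pipelines:                        # .spec.pipelines[]: link inputs to outputs
    - name: forward-app-logs
      inputRefs:
        - application               # built-in input name
      outputRefs:
        - fluentd-remote
        - default                   # built-in output name for the default log store
EOF
```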
Chapter 16. Replacing storage devices | Chapter 16. Replacing storage devices 16.1. Replacing operational or failed storage devices on Red Hat OpenStack Platform installer-provisioned infrastructure Use this procedure to replace storage device in OpenShift Data Foundation which is deployed on Red Hat OpenStack Platform. This procedure helps to create a new Persistent Volume Claim (PVC) on a new volume and remove the old object storage device (OSD). Procedure Identify the OSD that needs to be replaced and the OpenShift Container Platform node that has the OSD scheduled on it. Example output: In this example, rook-ceph-osd-0-6d77d6c7c6-m8xj6 needs to be replaced and compute-2 is the OpenShift Container platform node on which the OSD is scheduled. Note If the OSD to be replaced is healthy, the status of the pod will be Running . Scale down the OSD deployment for the OSD to be replaced. where, osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-0 . Example output: Verify that the rook-ceph-osd pod is terminated. Example output: Note If the rook-ceph-osd pod is in terminating state, use the force option to delete the pod. Example output: Incase, the persistent volume associated with the failed OSD fails, get the failed persistent volumes details and delete them using the following commands: Remove the old OSD from the cluster so that a new OSD can be added. Delete any old ocs-osd-removal jobs. Example output: Change to the openshift-storage project. Remove the old OSD from the cluster. You can add comma separated OSD IDs in the command to remove more than one OSD. (For example, FAILED_OSD_IDS=0,1,2). The FORCE_OSD_REMOVAL value must be changed to "true" in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Warning This step results in OSD being completely removed from the cluster. Ensure that the correct value of osd_id_to_remove is provided. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example: If encryption was enabled at the time of install, remove dm-crypt managed device-mapper mapping from the OSD devices that are removed from the respective OpenShift Data Foundation nodes. Get PVC name(s) of the replaced OSD(s) from the logs of ocs-osd-removal-job pod : For example: For each of the nodes identified in step #1, do the following: Create a debug pod and chroot to the host on the storage node. Find relevant device name based on the PVC names identified in the step Remove the mapped device. Note If the above command gets stuck due to insufficient privileges, run the following commands: Press CTRL+Z to exit the above command. Find PID of the process which was stuck. Terminate the process using kill command. Verify that the device name is removed. Delete the ocs-osd-removal job. Example output: Verfication steps Verify that there is a new OSD running. Example output: Verify that there is a new PVC created which is in Bound state. Example output: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. 
Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected host(s). Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s). Log in to the OpenShift Web Console and view the storage dashboard. Figure 16.1. OSD status in OpenShift Container Platform storage dashboard after device replacement | [
"oc get -n openshift-storage pods -l app=rook-ceph-osd -o wide",
"rook-ceph-osd-0-6d77d6c7c6-m8xj6 0/1 CrashLoopBackOff 0 24h 10.129.0.16 compute-2 <none> <none> rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 24h 10.128.2.24 compute-0 <none> <none> rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 24h 10.130.0.18 compute-1 <none> <none>",
"osd_id_to_remove=0 oc scale -n openshift-storage deployment rook-ceph-osd-USD{osd_id_to_remove} --replicas=0",
"deployment.extensions/rook-ceph-osd-0 scaled",
"oc get -n openshift-storage pods -l ceph-osd-id=USD{osd_id_to_remove}",
"No resources found.",
"oc delete pod rook-ceph-osd-0-6d77d6c7c6-m8xj6 --force --grace-period=0",
"warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod \"rook-ceph-osd-0-6d77d6c7c6-m8xj6\" force deleted",
"oc get pv oc delete pv <failed-pv-name>",
"oc delete -n openshift-storage job ocs-osd-removal-USD{osd_id_to_remove}",
"job.batch \"ocs-osd-removal-0\" deleted",
"oc project openshift-storage",
"oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD{osd_id_to_remove} FORCE_OSD_REMOVAL=false |oc create -n openshift-storage -f -",
"oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'",
"2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 |egrep -i 'pvc|deviceset'",
"2021-05-12 14:31:34.666000 I | cephosd: removing the OSD PVC \"ocs-deviceset-xxxx-xxx-xxx-xxx\"",
"oc debug node/<node name> chroot /host",
"sh-4.4# dmsetup ls| grep <pvc name> ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt (253:0)",
"cryptsetup luksClose --debug --verbose ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt",
"ps -ef | grep crypt",
"kill -9 <PID>",
"dmsetup ls",
"oc delete -n openshift-storage job ocs-osd-removal-USD{osd_id_to_remove}",
"job.batch \"ocs-osd-removal-0\" deleted",
"oc get -n openshift-storage pods -l app=rook-ceph-osd",
"rook-ceph-osd-0-5f7f4747d4-snshw 1/1 Running 0 4m47s rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 1d20h rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 1d20h",
"oc get -n openshift-storage pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE db-noobaa-db-0 Bound pvc-b44ebb5e-3c67-4000-998e-304752deb5a7 50Gi RWO ocs-storagecluster-ceph-rbd 6d ocs-deviceset-0-data-0-gwb5l Bound pvc-bea680cd-7278-463d-a4f6-3eb5d3d0defe 512Gi RWO standard 94s ocs-deviceset-1-data-0-w9pjm Bound pvc-01aded83-6ef1-42d1-a32e-6ca0964b96d4 512Gi RWO standard 6d ocs-deviceset-2-data-0-7bxcq Bound pvc-5d07cd6c-23cb-468c-89c1-72d07040e308 512Gi RWO standard 6d",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/_<OSD-pod-name>_",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/<node name> chroot /host",
"lsblk"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/replacing_storage_devices |
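The procedure above sets osd_id_to_remove by hand from the name of the failing pod. As a hypothetical convenience, the same value can be derived in a shell; the pod name below is the example from the procedure, and the sed pattern simply strips the rook-ceph-osd- prefix and the replica-set suffix.

```
# Example pod name taken from the procedure output above.
osd_pod="rook-ceph-osd-0-6d77d6c7c6-m8xj6"

# Pod names follow rook-ceph-osd-<ID>-<replicaset-hash>-<pod-hash>.
osd_id_to_remove=$(echo "${osd_pod}" | sed 's|^rook-ceph-osd-\([0-9]*\)-.*|\1|')
echo "Removing OSD ${osd_id_to_remove}"

# Same scale-down and verification commands as in the procedure.
oc scale -n openshift-storage deployment "rook-ceph-osd-${osd_id_to_remove}" --replicas=0
oc get -n openshift-storage pods -l ceph-osd-id="${osd_id_to_remove}"
```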
F.3. Creating New Logical Volumes for an Existing Cluster | F.3. Creating New Logical Volumes for an Existing Cluster To create new volumes, either volumes need to be added to a managed volume group on the node where it is already activated by the service, or the volume_list must be temporarily bypassed or overridden to allow for creation of the volumes until they can be prepared to be configured by a cluster resource. Note New logical volumes can be added only to existing volume groups managed by a cluster lvm resource if lv_name is not specified. The lvm resource agent allows for only a single logical volume within a volume group if that resource is managing volumes individually, rather than at a volume group level. To create a new logical volume when the service containing the volume group where the new volumes will live is already active, use the following procedure. The volume group should already be tagged on the node owning that service, so simply create the volumes with a standard lvcreate command on that node. Determine the current owner of the relevant service. On the node where the service is started, create the logical volume. Add the volume into the service configuration in whatever way is necessary. To create a new volume group entirely, use the following procedure. Create the volume group on one node using that node's name as a tag, which should be included in the volume_list . Specify any desired settings for this volume group as normal and specify --addtag nodename , as in the following example: Create volumes within this volume group as normal, otherwise perform any necessary administration on the volume group. When the volume group activity is complete, deactivate the volume group and remove the tag. Add the volume into the service configuration in whatever way is necessary. | [
"clustat",
"lvcreate -l 100%FREE -n lv2 myVG",
"vgcreate myNewVG /dev/mapper/mpathb --addtag node1.example.com",
"vgchange -an myNewVg --deltag node1.example.com"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-halvm-newvols-ca |
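Putting the second procedure together, a complete tag-based session might look like the following sketch; the node name, volume group name, and device path are illustrative values reused from the commands above.

```
# Create the volume group tagged with this node's name (the tag should be in volume_list).
vgcreate myNewVG /dev/mapper/mpathb --addtag node1.example.com

# Create volumes and perform any other administration on the volume group as normal.
lvcreate -l 100%FREE -n lv1 myNewVG

# When the volume group activity is complete, deactivate it and remove the tag.
vgchange -an myNewVG --deltag node1.example.com
```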
3.3. Creating A Cluster | 3.3. Creating A Cluster Creating a cluster with luci consists of selecting cluster nodes, entering their passwords, and submitting the request to create a cluster. If the node information and passwords are correct, Conga automatically installs software into the cluster nodes and starts the cluster. Create a cluster as follows: As administrator of luci , select the cluster tab. Click Create a New Cluster . At the Cluster Name text box, enter a cluster name. The cluster name cannot exceed 15 characters. Add the node name and password for each cluster node. Enter the node name for each node in the Node Hostname column; enter the root password for each node in the Root Password column. Check the Enable Shared Storage Support checkbox if clustered storage is required. Click Submit . Clicking Submit causes the Create a new cluster page to be displayed again, showing the parameters entered in the preceding step, and Lock Manager parameters. The Lock Manager parameters consist of the lock manager option buttons, DLM (preferred) and GULM , and Lock Server text boxes in the GULM lock server properties group box. Configure Lock Manager parameters for either DLM or GULM as follows: For DLM - Click DLM (preferred) or confirm that it is set. For GULM - Click GULM or confirm that it is set. At the GULM lock server properties group box, enter the FQDN or the IP address of each lock server in a Lock Server text box. Note You must enter the FQDN or the IP address of one, three, or five GULM lock servers. Re-enter the root password for each node in the Root Password column. Click Submit . Clicking Submit causes the following actions: Cluster software packages to be downloaded onto each cluster node. Cluster software to be installed onto each cluster node. Cluster configuration file to be created and propagated to each node in the cluster. Starting the cluster. A progress page shows the progress of those actions for each node in the cluster. When the process of creating a new cluster is complete, a page is displayed providing a configuration interface for the newly created cluster. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-creating-cluster-conga-ca
Preface | Preface As part of cost management, resource optimization for OpenShift assesses and monitors your usage across clusters to optimize your Red Hat OpenShift resources. | null | https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/getting_started_with_resource_optimization_for_openshift/pr01 |
2.2.3.3. Qt Library Documentation | 2.2.3.3. Qt Library Documentation The qt-doc package provides HTML manuals and references located in /usr/share/doc/qt4/html/ . This package also provides the Qt Reference Documentation , which is an excellent starting point for development within the Qt framework. You can also install further demos and examples from qt-demos and qt-examples . To get an overview of the capabilities of the Qt framework, see /usr/bin/qtdemo-qt4 (provided by qt-demos ). | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/qt-docs |
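For reference, the packages and paths mentioned above can be installed and explored as follows, assuming root privileges and access to the appropriate repositories.

```
yum install qt-doc qt-demos qt-examples

# HTML manuals and references provided by qt-doc.
ls /usr/share/doc/qt4/html/

# Overview of the Qt framework's capabilities (provided by qt-demos).
/usr/bin/qtdemo-qt4
```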
Chapter 2. Red Hat Developer Toolset 12.0 Release | Chapter 2. Red Hat Developer Toolset 12.0 Release 2.1. Features 2.1.1. List of Components Red Hat Developer Toolset 12.0 provides the following components: Development Tools GNU Compiler Collection (GCC) binutils elfutils dwz make annobin Debugging Tools GNU Debugger (GDB) strace ltrace memstomp Performance Monitoring Tools SystemTap Valgrind OProfile Dyninst For details, see the Red Hat Developer Toolset Components table in the Red Hat Developer Toolset User Guide . 2.1.2. Changes in Red Hat Developer Toolset 12.0 All components in Red Hat Developer Toolset 12.0 are distributed with the devtoolset-12- prefix and only for Red Hat Enterprise Linux 7. The following components have been upgraded in Red Hat Developer Toolset 12.0 compared to the release of Red Hat Developer Toolset: GCC to version 12.1.1 annobin to version 10.76 elfutils to version 0.187 GDB to version 11.2 strace to version 5.18 SystemTap to version 4.7 Valgrind to version 3.19.0 Dyninst to version 12.1.0 In addition, a bug fix update is available for binutils . For detailed information on changes in Red Hat Developer Toolset 12.0, see Red Hat Developer Toolset User Guide . 2.1.3. Container Images The following container images are available with Red Hat Developer Toolset: rhscl/devtoolset-12-perftools-rhel7 rhscl/devtoolset-12-toolchain-rhel7 For more information, see the Red Hat Developer Toolset Images chapter in Using Red Hat Software Collections Container Images . Note that only the latest version of each container image is supported. 2.2. Known Issues dyninst component, BZ# 1763157 Dyninst 12 is provided only for the AMD64 and Intel 64 architectures. gcc component, BZ# 1731555 Executable files created with Red Hat Developer Toolset are dynamically linked in a nonstandard way. As a consequence, Fortran code cannot handle input/output (I/O) operations asynchronously even if this functionality is requested. To work around this problem, link the libgfortran library statically with the -static-libgfortran option to enable asynchronous I/O operations in Fortran code. Note that Red Hat discourages static linking for security reasons. gcc component, BZ# 1570853 In Red Hat Developer Toolset, libraries are linked via linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use names of the respective shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files, such as: Such use of a library from Red Hat Developer Toolset results in linker error messages undefined reference to symbol . To enable successful symbol resolution and linking, follow the standard linking practice and specify the option adding the library after the options specifying the object files: Note that this recommendation applies when using the version of GCC available as a part of Red Hat Enterprise Linux, too. gcc component, BZ# 1433946 GCC in Red Hat Developer Toolset 3.x contained the libasan package, which might have conflicted with the system version of libasan . 
As a consequence, depending on which libasan was present in the system, the -fsanitize=address tool worked only either with the system GCC or with the Red Hat Developer Toolset version of GCC , but not with both at the same time. To prevent the described conflict, in Red Hat Developer Toolset 4.x and later versions, the package was renamed to libasan N , where N is a number. However, if the Red Hat Software Collections repository is enabled, the problem can occur after the system update because the system version of libasan is available in an earlier version than the Red Hat Developer Toolset 3.x version, which is still available in the repository. To work around this problem, exclude this package while updating: oprofile component OProfile 1.3.0 and OProfile 1.2.0 shipped in Red Hat Developer Toolset works on all supported architectures, with the exception of IBM Z, where only the ocount tool works on the following models: z196, zEC12, and z13. operf and the other tools, such as oparchive or opannotate , do not work on IBM Z. For profiling purposes, users are recommended to use the Red Hat Enterprise Linux 7 system OProfile 0.9.9 version, which supports opcontrol with TIMER software interrupts. Note that for correct reporting of data collected by OProfile 0.9.9 , the corresponding opreport utility is necessary. Thus opcontrol -based profiling should be performed with Red Hat Developer Toolset disabled because the reporting tools from Red Hat Developer Toolset cannot process data collected within opcontrol legacy mode correctly. valgrind component, BZ# 869184 The default Valgrind gdbserver support ( --vgdb=yes ) can cause certain register and flags values to be not always up-to-date due to optimizations done by the Valgrind core. The GDB utility is therefore unable to show certain parameters or variables of programs running under Valgrind . To work around this problem, use the --vgdb=full parameter. Note that programs might run slower under Valgrind when this parameter is used. multiple components The devtoolset- version - package_name -debuginfo packages can conflict with the corresponding packages from the base Red Hat Enterprise Linux system or from other versions of Red Hat Developer Toolset. This namely applies to devtoolset- version -gcc-debuginfo , devtoolset- version -ltrace-debuginfo , devtoolset- version -valgrind-debuginfo , and might apply to other debuginfo packages, too. A similar conflict can also occur in a multilib environment, where 64-bit debuginfo packages conflict with 32-bit debuginfo packages. For example, on Red Hat Enterprise Linux 7, devtoolset-7-gcc-debuginfo conflicts with three packages: gcc-base-debuginfo , gcc-debuginfo , and gcc-libraries-debuginfo . On Red Hat Enterprise Linux 6, devtoolset-7-gcc-debuginfo conflicts with one package: gcc-libraries-debuginfo . As a consequence, if conflicting debuginfo packages are installed, attempts to install Red Hat Developer Toolset can fail with a transaction check error message similar to the following examples: To work around the problem, manually uninstall the conflicting debuginfo packages prior to installing Red Hat Developer Toolset 12.0. It is advisable to install only the relevant debuginfo packages when necessary and expect such problems to happen. Other Notes Red Hat Developer Toolset primarily aims to provide a compiler for development of user applications for deployment on multiple versions of Red Hat Enterprise Linux. 
Operating system components, kernel modules and device drivers generally correspond to a specific version of Red Hat Enterprise Linux, for which the supplied base OS compiler is recommended. Red Hat Developer Toolset 12.0 supports only C, C++ and Fortran development. For other languages, invoke the system version of GCC available on Red Hat Enterprise Linux. Building an application with Red Hat Developer Toolset 12.0 on Red Hat Enterprise Linux (for example, Red Hat Enterprise Linux 7) and then executing that application on an earlier minor version (such as Red Hat Enterprise Linux 6.7.z) may result in runtime errors due to differences in non-toolchain components between Red Hat Enterprise Linux releases. Users are advised to check compatibility carefully. Red Hat supports only execution of an application built with Red Hat Developer Toolset on the same, or a later, supported release of Red Hat Enterprise Linux than the version used to build that application. Valgrind must be rebuilt without Red Hat Developer Toolset's GCC installed, or it will be used in preference to Red Hat Enterprise Linux system GCC . The binary files shipped by Red Hat are built using the system GCC . For any testing, Red Hat Developer Toolset's GDB should be used. All code in the non-shared library libstdc++_nonshared.a in Red Hat Developer Toolset 12.0 is licensed under the GNU General Public License v3 with additional permissions granted under Section 7, described in the GCC Runtime Library Exception version 3.1, as published by the Free Software Foundation. The compiler included in Red Hat Developer Toolset emits newer DWARF debugging records than compilers available on Red Hat Enterprise Linux. These new debugging records improve the debugging experience in a variety of ways, particularly for C++ and optimized code. However, certain tools are not yet capable of handling the newer DWARF debug records. To generate the older style debugging records, use the options -gdwarf-2 -gstrict-dwarf or -gdwarf-3 -gstrict-dwarf . Some newer library features are statically linked into applications built with Red Hat Developer Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This adds a small additional security risk because regular Red Hat Enterprise Linux errata would not change this code. If the need for developers to rebuild their applications due to such an issue arises, Red Hat will signal this in a security erratum. Developers are strongly advised not to statically link their entire application for the same reasons. Note that error messages related to a missing libitm library when using the -fgnu-tm option require the libitm package to be installed. You can install the package with the following command: To use the ccache utility with GCC included in Red Hat Developer Toolset, set your environment correctly. For example: Alternatively, you can create a shell with the Red Hat Developer Toolset version of GCC as the default compiler: After you have created the shell, run the following two commands: Because the elfutils libraries contained in Red Hat Developer Toolset 12.0 are linked to a client application statically, caution is advised when passing handles to libelf , libdw , and libasm data structures to external code and when passing handles received from external code to libelf , libdw , and libasm . 
Be especially careful when an external library, which is linked dynamically against the system version of elfutils , is passed a pointer to a structure that comes from the Red Hat Developer Toolset 12.0 version of elfutils (or vice versa). Generally, data structures used in the Red Hat Developer Toolset 12.0 version of elfutils are not compatible with the Red Hat Enterprise Linux system versions, and structures coming from one should never be touched by the other. In applications that use the Red Hat Developer Toolset 12.0 libraries, all code that was linked against the system version of the libraries should be recompiled against the libraries included in Red Hat Developer Toolset 12.0. The elfutils EBL library, which is used internally by libdw , was amended not to open back ends dynamically. Instead, a selection of back ends is compiled in the library itself: the 32-bit AMD and Intel architecture, AMD64 and Intel 64 systems, Intel Itanium, IBM Z, 32-bit IBM Power Systems, 64-bit IBM Power Systems, IBM POWER, big endian, and the 64-bit ARM architecture. Some functionality may not be available if the client wishes to work with ELF files from architectures other than those mentioned above. Some packages managed by the scl utility include privileged services that require sudo . The system sudo clears environment variables and so Red Hat Developer Toolset includes its own sudo shell script, wrapping scl enable . This script does not currently parse or pass normal sudo options, only sudo COMMAND ARGS ... . In order to use the system version of sudo from within a Red Hat Developer Toolset-enabled shell, use the /usr/bin/sudo binary. Intel have issued erratum HSW136 concerning TSX (Transactional Synchronization Extensions) instructions. Under certain circumstances, software using the Intel TSX instructions may result in unpredictable behavior. TSX instructions may be executed by applications built with Red Hat Developer Toolset GCC under certain conditions. These include use of GCC 's experimental Transactional Memory support (using the -fgnu-tm option) when executed on hardware with TSX instructions enabled. The users of Red Hat Developer Toolset are advised to exercise further caution when experimenting with Transaction Memory at this time, or to disable TSX instructions by applying an appropriate hardware or firmware update. To use the Memory Protection Extensions (MPX) feature in GCC , the Red Hat Developer Toolset version of the libmpx library is required, otherwise the application might not link properly. The two binutils linkers, gold and ld , have different ways of handling hidden symbols, which leads to incompatibilities in their behavior. Previously, the gold and ld linkers had inconsistent and incorrect behavior with regard to shared libraries and hidden symbols. There were two scenarios: If a shared library referenced a symbol that existed elsewhere in both hidden and non-hidden versions, the gold linker produced a bogus warning message about the hidden version. If a shared library referenced a symbol that existed elsewhere only as a hidden symbol, the gold linker created an executable, even though it could not work. The gold linker has been updated so that it no longer issues bogus warning messages about hidden symbols that also exist in a non-hidden version. The second scenario cannot be solved in the linker. It is up to the programmer to ensure that a non-hidden version of the symbol is available when the application is run. 
As a result, the two linkers' behavior is closer, but they still differ in case of a reference to a hidden symbol that cannot be found elsewhere in a non-hidden version. Unfortunately, there is not a single correct behavior for this situation, so the linkers are allowed to differ. The valgrind-openmpi subpackage is no longer provided with Valgrind in Red Hat Developer Toolset. The devtoolset-<version>-valgrind-openmpi subpackages previously caused incompatibility issues with various Red Hat Enterprise Linux minor releases and problems with rebuilding. Users are recommended to use the latest Red Hat Enterprise Linux system version of the valgrind and valgrind-openmpi packages if they need to run Valgrind against their programs that are built against the openmpi-devel libraries. The stap-server binary is no longer provided with SystemTap since Red Hat Developer Toolset 12. BZ# 2099259 | [
"gcc -lsomelib objfile.o",
"gcc objfile.o -lsomelib",
"~]USD yum update --exclude=libasan",
"file /usr/lib/debug/usr/lib64/libitm.so.1.0.0.debug from install of gcc-base-debuginfo-4.8.5-16.el7.x86_64 conflicts with file from package devtoolset-7-gcc-debuginfo-7.2.1-1.el7.x86_64",
"file /usr/lib/debug/usr/lib64/libtsan.so.0.0.0.debug from install of gcc-debuginfo-4.8.5-16.el7.x86_64 conflicts with file from package devtoolset-7-gcc-debuginfo-7.2.1-1.el7.x86_64",
"file /usr/lib/debug/usr/lib64/libitm.so.1.0.0.debug from install of devtoolset-7-gcc-debuginfo-7.2.1-1.el6.x86_64 conflicts with file from package gcc-libraries-debuginfo-7.1.1-2.3.1.el6_9.x86_64",
"install libitm",
"~]USD scl enable devtoolset-12 '/usr/lib64/ccache/gcc -c foo.c '",
"~]USD scl enable devtoolset-12 'bash'",
"~]USD export PATH=/usr/lib64/ccacheUSD{PATH:+:USD{PATH}}",
"~]USD gcc -c foo.c"
]
| https://docs.redhat.com/en/documentation/red_hat_developer_toolset/12/html/12.0_release_notes/dts12.0_release |
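To show how the build options discussed in the Red Hat Developer Toolset notes above are typically combined, here is a brief hedged sketch. The source file names demo.c and asyncio.f90 are placeholders; only the compiler options and the scl invocation come from the text:
~]$ scl enable devtoolset-12 'gcc --version'                                        # confirm the Developer Toolset compiler is selected
~]$ scl enable devtoolset-12 'gcc -g -gdwarf-2 -gstrict-dwarf -o demo demo.c'       # emit older DWARF records for tools that cannot read the newer format
~]$ scl enable devtoolset-12 'gfortran -static-libgfortran -o asyncio asyncio.f90'  # link libgfortran statically to enable asynchronous Fortran I/O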
Chapter 5. Red Hat Quay build enhancements | Chapter 5. Red Hat Quay build enhancements Red Hat Quay builds can be run on virtualized platforms. Backwards compatibility to run build configurations are also available. 5.1. Red Hat Quay build limitations Running builds in Red Hat Quay in an unprivileged context might cause some commands that were working under the build strategy to fail. Attempts to change the build strategy could potentially cause performance issues and reliability with the build. Running builds directly in a container does not have the same isolation as using virtual machines. Changing the build environment might also caused builds that were previously working to fail. 5.2. Creating a Red Hat Quay builders environment with OpenShift Container Platform The procedures in this section explain how to create a Red Hat Quay virtual builders environment with OpenShift Container Platform. 5.2.1. OpenShift Container Platform TLS component The tls component allows you to control TLS configuration. Note Red Hat Quay 3.10 does not support builders when the TLS component is managed by the Operator. If you set tls to unmanaged , you supply your own ssl.cert and ssl.key files. In this instance, if you want your cluster to support builders, you must add both the Quay route and the builder route name to the SAN list in the cert, or use a wildcard. To add the builder route, use the following format: [quayregistry-cr-name]-quay-builder-[ocp-namespace].[ocp-domain-name]:443 5.2.2. Using OpenShift Container Platform for Red Hat Quay builders Builders require SSL/TLS certificates. For more information about SSL/TLS certificates, see Adding TLS certificates to the Red Hat Quay container . If you are using Amazon Web Service (AWS) S3 storage, you must modify your storage bucket in the AWS console, prior to running builders. See "Modifying your AWS S3 storage bucket" in the following section for the required parameters. 5.2.2.1. Preparing OpenShift Container Platform for virtual builders Use the following procedure to prepare OpenShift Container Platform for Red Hat Quay virtual builders. Note This procedure assumes you already have a cluster provisioned and a Quay Operator running. This procedure is for setting up a virtual namespace on OpenShift Container Platform. Procedure Log in to your Red Hat Quay cluster using a cluster administrator account. Create a new project where your virtual builders will be run, for example, virtual-builders , by running the following command: USD oc new-project virtual-builders Create a ServiceAccount in the project that will be used to run builds by entering the following command: USD oc create sa -n virtual-builders quay-builder Provide the created service account with editing permissions so that it can run the build: USD oc adm policy -n virtual-builders add-role-to-user edit system:serviceaccount:virtual-builders:quay-builder Grant the Quay builder anyuid scc permissions by entering the following command: USD oc adm policy -n virtual-builders add-scc-to-user anyuid -z quay-builder Note This action requires cluster admin privileges. This is required because builders must run as the Podman user for unprivileged or rootless builds to work. Obtain the token for the Quay builder service account. 
If using OpenShift Container Platform 4.10 or an earlier version, enter the following command: oc sa get-token -n virtual-builders quay-builder If using OpenShift Container Platform 4.11 or later, enter the following command: USD oc create token quay-builder -n virtual-builders Note When the token expires you will need to request a new token. Optionally, you can also add a custom expiration. For example, specify --duration 20160m to retain the token for two weeks. Example output eyJhbGciOiJSUzI1NiIsImtpZCI6IldfQUJkaDVmb3ltTHZ0dGZMYjhIWnYxZTQzN2dJVEJxcDJscldSdEUtYWsifQ... Determine the builder route by entering the following command: USD oc get route -n quay-enterprise Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD ... example-registry-quay-builder example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org example-registry-quay-app grpc edge/Redirect None ... Generate a self-signed SSL/TlS certificate with the .crt extension by entering the following command: USD oc extract cm/kube-root-ca.crt -n openshift-apiserver Example output ca.crt Rename the ca.crt file to extra_ca_cert_build_cluster.crt by entering the following command: USD mv ca.crt extra_ca_cert_build_cluster.crt Locate the secret for you configuration bundle in the Console , and select Actions Edit Secret and add the appropriate builder configuration: FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - <superusername> FEATURE_USER_CREATION: false FEATURE_QUOTA_MANAGEMENT: true FEATURE_BUILD_SUPPORT: True BUILDMAN_HOSTNAME: <sample_build_route> 1 BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ JOB_REGISTRATION_TIMEOUT: 3600 2 ORCHESTRATOR: REDIS_HOST: <sample_redis_hostname> 3 REDIS_PASSWORD: "" REDIS_SSL: false REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetesPodman NAME: openshift BUILDER_NAMESPACE: <sample_builder_namespace> 4 SETUP_TIME: 180 MINIMUM_RETRY_THRESHOLD: 0 BUILDER_CONTAINER_IMAGE: <sample_builder_container_image> 5 # Kubernetes resource options K8S_API_SERVER: <sample_k8s_api_server> 6 K8S_API_TLS_CA: <sample_crt_file> 7 VOLUME_SIZE: 8G KUBERNETES_DISTRIBUTION: openshift CONTAINER_MEMORY_LIMITS: 300m 8 CONTAINER_CPU_LIMITS: 1G 9 CONTAINER_MEMORY_REQUEST: 300m 10 CONTAINER_CPU_REQUEST: 1G 11 NODE_SELECTOR_LABEL_KEY: "" NODE_SELECTOR_LABEL_VALUE: "" SERVICE_ACCOUNT_NAME: <sample_service_account_name> SERVICE_ACCOUNT_TOKEN: <sample_account_token> 12 1 The build route is obtained by running oc get route -n with the name of your OpenShift Operator's namespace. A port must be provided at the end of the route, and it should use the following format: [quayregistry-cr-name]-quay-builder-[ocp-namespace].[ocp-domain-name]:443 . 2 If the JOB_REGISTRATION_TIMEOUT parameter is set too low, you might receive the following error: failed to register job to build manager: rpc error: code = Unauthenticated desc = Invalid build token: Signature has expired . It is suggested that this parameter be set to at least 240. 3 If your Redis host has a password or SSL/TLS certificates, you must update accordingly. 4 Set to match the name of your virtual builders namespace, for example, virtual-builders . 5 For early access, the BUILDER_CONTAINER_IMAGE is currently quay.io/projectquay/quay-builder:3.7.0-rc.2 . Note that this might change during the early access window. If this happens, customers are alerted. 6 The K8S_API_SERVER is obtained by running oc cluster-info . 
7 You must manually create and add your custom CA cert, for example, K8S_API_TLS_CA: /conf/stack/extra_ca_certs/build_cluster.crt . 8 Defaults to 5120Mi if left unspecified. 9 For virtual builds, you must ensure that there are enough resources in your cluster. Defaults to 1000m if left unspecified. 10 Defaults to 3968Mi if left unspecified. 11 Defaults to 500m if left unspecified. 12 Obtained when running oc create sa . Sample configuration FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false FEATURE_QUOTA_MANAGEMENT: true FEATURE_BUILD_SUPPORT: True BUILDMAN_HOSTNAME: example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org:443 BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ JOB_REGISTRATION_TIMEOUT: 3600 ORCHESTRATOR: REDIS_HOST: example-registry-quay-redis REDIS_PASSWORD: "" REDIS_SSL: false REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetesPodman NAME: openshift BUILDER_NAMESPACE: virtual-builders SETUP_TIME: 180 MINIMUM_RETRY_THRESHOLD: 0 BUILDER_CONTAINER_IMAGE: quay.io/projectquay/quay-builder:3.7.0-rc.2 # Kubernetes resource options K8S_API_SERVER: api.docs.quayteam.org:6443 K8S_API_TLS_CA: /conf/stack/extra_ca_certs/build_cluster.crt VOLUME_SIZE: 8G KUBERNETES_DISTRIBUTION: openshift CONTAINER_MEMORY_LIMITS: 1G CONTAINER_CPU_LIMITS: 1080m CONTAINER_MEMORY_REQUEST: 1G CONTAINER_CPU_REQUEST: 580m NODE_SELECTOR_LABEL_KEY: "" NODE_SELECTOR_LABEL_VALUE: "" SERVICE_ACCOUNT_NAME: quay-builder SERVICE_ACCOUNT_TOKEN: "eyJhbGciOiJSUzI1NiIsImtpZCI6IldfQUJkaDVmb3ltTHZ0dGZMYjhIWnYxZTQzN2dJVEJxcDJscldSdEUtYWsifQ" 5.2.2.2. Manually adding SSL/TLS certificates Due to a known issue with the configuration tool, you must manually add your custom SSL/TLS certificates to properly run builders. Use the following procedure to manually add custom SSL/TLS certificates. For more information creating SSL/TLS certificates, see Adding TLS certificates to the Red Hat Quay container . 5.2.2.2.1. Creating and signing certificates Use the following procedure to create and sign an SSL/TLS certificate. Procedure Create a certificate authority and sign a certificate. For more information, see Create a Certificate Authority and sign a certificate . openssl.cnf [req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = example-registry-quay-quay-enterprise.apps.docs.quayteam.org 1 DNS.2 = example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org 2 1 An alt_name for the URL of your Red Hat Quay registry must be included. 2 An alt_name for the BUILDMAN_HOSTNAME Sample commands USD openssl genrsa -out rootCA.key 2048 USD openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem USD openssl genrsa -out ssl.key 2048 USD openssl req -new -key ssl.key -out ssl.csr USD openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf 5.2.2.2.2. Setting TLS to unmanaged Use the following procedure to set king:tls to unmanaged. Procedure In your Red Hat Quay Registry YAML, set kind: tls to managed: false : - kind: tls managed: false On the Events page, the change is blocked until you set up the appropriate config.yaml file. 
For example: - lastTransitionTime: '2022-03-28T12:56:49Z' lastUpdateTime: '2022-03-28T12:56:49Z' message: >- required component `tls` marked as unmanaged, but `configBundleSecret` is missing necessary fields reason: ConfigInvalid status: 'True' 5.2.2.2.3. Creating temporary secrets Use the following procedure to create temporary secrets for the CA certificate. Procedure Create a secret in your default namespace for the CA certificate: Create a secret in your default namespace for the ssl.key and ssl.cert files: 5.2.2.2.4. Copying secret data to the configuration YAML Use the following procedure to copy secret data to your config.yaml file. Procedure Locate the new secrets in the console UI at Workloads Secrets . For each secret, locate the YAML view: kind: Secret apiVersion: v1 metadata: name: temp-crt namespace: quay-enterprise uid: a4818adb-8e21-443a-a8db-f334ace9f6d0 resourceVersion: '9087855' creationTimestamp: '2022-03-28T13:05:30Z' ... data: extra_ca_cert_build_cluster.crt: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURNakNDQWhxZ0F3SUJBZ0l.... type: Opaque kind: Secret apiVersion: v1 metadata: name: quay-config-ssl namespace: quay-enterprise uid: 4f5ae352-17d8-4e2d-89a2-143a3280783c resourceVersion: '9090567' creationTimestamp: '2022-03-28T13:10:34Z' ... data: ssl.cert: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVaakNDQTA2Z0F3SUJBZ0lVT... ssl.key: >- LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBc... type: Opaque Locate the secret for your Red Hat Quay registry configuration bundle in the UI, or through the command line by running a command like the following: USD oc get quayregistries.quay.redhat.com -o jsonpath="{.items[0].spec.configBundleSecret}{'\n'}" -n quay-enterprise In the OpenShift Container Platform console, select the YAML tab for your configuration bundle secret, and add the data from the two secrets you created: kind: Secret apiVersion: v1 metadata: name: init-config-bundle-secret namespace: quay-enterprise uid: 4724aca5-bff0-406a-9162-ccb1972a27c1 resourceVersion: '4383160' creationTimestamp: '2022-03-22T12:35:59Z' ... data: config.yaml: >- RkVBVFVSRV9VU0VSX0lOSVRJQUxJWkU6IHRydWUKQlJ... extra_ca_cert_build_cluster.crt: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURNakNDQWhxZ0F3SUJBZ0ldw.... ssl.cert: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVaakNDQTA2Z0F3SUJBZ0lVT... ssl.key: >- LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBc... type: Opaque Click Save . Enter the following command to see if your pods are restarting: USD oc get pods -n quay-enterprise Example output NAME READY STATUS RESTARTS AGE ... 
example-registry-quay-app-6786987b99-vgg2v 0/1 ContainerCreating 0 2s example-registry-quay-app-7975d4889f-q7tvl 1/1 Running 0 5d21h example-registry-quay-app-7975d4889f-zn8bb 1/1 Running 0 5d21h example-registry-quay-app-upgrade-lswsn 0/1 Completed 0 6d1h example-registry-quay-config-editor-77847fc4f5-nsbbv 0/1 ContainerCreating 0 2s example-registry-quay-config-editor-c6c4d9ccd-2mwg2 1/1 Running 0 5d21h example-registry-quay-database-66969cd859-n2ssm 1/1 Running 0 6d1h example-registry-quay-mirror-764d7b68d9-jmlkk 1/1 Terminating 0 5d21h example-registry-quay-mirror-764d7b68d9-jqzwg 1/1 Terminating 0 5d21h example-registry-quay-redis-7cc5f6c977-956g8 1/1 Running 0 5d21h After your Red Hat Quay registry has reconfigured, enter the following command to check if the Red Hat Quay app pods are running: USD oc get pods -n quay-enterprise Example output example-registry-quay-app-6786987b99-sz6kb 1/1 Running 0 7m45s example-registry-quay-app-6786987b99-vgg2v 1/1 Running 0 9m1s example-registry-quay-app-upgrade-lswsn 0/1 Completed 0 6d1h example-registry-quay-config-editor-77847fc4f5-nsbbv 1/1 Running 0 9m1s example-registry-quay-database-66969cd859-n2ssm 1/1 Running 0 6d1h example-registry-quay-mirror-758fc68ff7-5wxlp 1/1 Running 0 8m29s example-registry-quay-mirror-758fc68ff7-lbl82 1/1 Running 0 8m29s example-registry-quay-redis-7cc5f6c977-956g8 1/1 Running 0 5d21h In your browser, access the registry endpoint and validate that the certificate has been updated appropriately. For example: Common Name (CN) example-registry-quay-quay-enterprise.apps.docs.quayteam.org Organisation (O) DOCS Organisational Unit (OU) QUAY 5.2.2.3. Using the UI to create a build trigger Use the following procedure to use the UI to create a build trigger. Procedure Log in to your Red Hat Quay repository. Click Create New Repository and create a new registry, for example, testrepo . On the Repositories page, click the Builds tab on the navigation pane. Alternatively, use the corresponding URL directly: Important In some cases, the builder might have issues resolving hostnames. This issue might be related to the dnsPolicy being set to default on the job object. Currently, there is no workaround for this issue. It will be resolved in a future version of Red Hat Quay. Click Create Build Trigger Custom Git Repository Push . Enter the HTTPS or SSH style URL used to clone your Git repository, then click Continue . For example: Check Tag manifest with the branch or tag name and then click Continue . Enter the location of the Dockerfile to build when the trigger is invoked, for example, /Dockerfile and click Continue . Enter the location of the context for the Docker build, for example, / , and click Continue . If warranted, create a Robot Account. Otherwise, click Continue . Click Continue to verify the parameters. On the Builds page, click Options icon of your Trigger Name, and then click Run Trigger Now . Enter a commit SHA from the Git repository and click Start Build . You can check the status of your build by clicking the commit in the Build History page, or by running oc get pods -n virtual-builders . For example: Example output USD oc get pods -n virtual-builders Example output Example output When the build is finished, you can check the status of the tag under Tags on the navigation pane. Note With early access, full build logs and timestamps of builds are currently unavailable. 5.2.2.4. Modifying your AWS S3 storage bucket Note Currently, modifying your AWS S3 storage bucket is not supported on IBM Power and IBM Z. 
If you are using AWS S3 storage, you must change your storage bucket in the AWS console, prior to running builders. Procedure Log in to your AWS console at s3.console.aws.com . In the search bar, search for S3 and then click S3 . Click the name of your bucket, for example, myawsbucket . Click the Permissions tab. Under Cross-origin resource sharing (CORS) , include the following parameters: [ { "AllowedHeaders": [ "Authorization" ], "AllowedMethods": [ "GET" ], "AllowedOrigins": [ "*" ], "ExposeHeaders": [], "MaxAgeSeconds": 3000 }, { "AllowedHeaders": [ "Content-Type", "x-amz-acl", "origin" ], "AllowedMethods": [ "PUT" ], "AllowedOrigins": [ "*" ], "ExposeHeaders": [], "MaxAgeSeconds": 3000 } ] 5.2.2.5. Modifying your Google Cloud Platform object bucket Note Currently, modifying your Google Cloud Platform object bucket is not supported on IBM Power and IBM Z. Use the following procedure to configure cross-origin resource sharing (CORS) for virtual builders. Note Without CORS configuration, uploading a build Dockerfile fails. Procedure Use the following reference to create a JSON file for your specific CORS needs. For example: USD cat gcp_cors.json Example output [ { "origin": ["*"], "method": ["GET"], "responseHeader": ["Authorization"], "maxAgeSeconds": 3600 }, { "origin": ["*"], "method": ["PUT"], "responseHeader": [ "Content-Type", "x-goog-acl", "origin"], "maxAgeSeconds": 3600 } ] Enter the following command to update your GCP storage bucket: USD gcloud storage buckets update gs://<bucket_name> --cors-file=./gcp_cors.json Example output Updating Completed 1 You can display the updated CORS configuration of your GCP bucket by running the following command: USD gcloud storage buckets describe gs://<bucket_name> --format="default(cors)" Example output cors: - maxAgeSeconds: 3600 method: - GET origin: - '*' responseHeader: - Authorization - maxAgeSeconds: 3600 method: - PUT origin: - '*' responseHeader: - Content-Type - x-goog-acl - origin | [
"[quayregistry-cr-name]-quay-builder-[ocp-namespace].[ocp-domain-name]:443",
"oc new-project virtual-builders",
"oc create sa -n virtual-builders quay-builder",
"oc adm policy -n virtual-builders add-role-to-user edit system:serviceaccount:virtual-builders:quay-builder",
"oc adm policy -n virtual-builders add-scc-to-user anyuid -z quay-builder",
"sa get-token -n virtual-builders quay-builder",
"oc create token quay-builder -n virtual-builders",
"eyJhbGciOiJSUzI1NiIsImtpZCI6IldfQUJkaDVmb3ltTHZ0dGZMYjhIWnYxZTQzN2dJVEJxcDJscldSdEUtYWsifQ",
"oc get route -n quay-enterprise",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD example-registry-quay-builder example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org example-registry-quay-app grpc edge/Redirect None",
"oc extract cm/kube-root-ca.crt -n openshift-apiserver",
"ca.crt",
"mv ca.crt extra_ca_cert_build_cluster.crt",
"FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - <superusername> FEATURE_USER_CREATION: false FEATURE_QUOTA_MANAGEMENT: true FEATURE_BUILD_SUPPORT: True BUILDMAN_HOSTNAME: <sample_build_route> 1 BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ JOB_REGISTRATION_TIMEOUT: 3600 2 ORCHESTRATOR: REDIS_HOST: <sample_redis_hostname> 3 REDIS_PASSWORD: \"\" REDIS_SSL: false REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetesPodman NAME: openshift BUILDER_NAMESPACE: <sample_builder_namespace> 4 SETUP_TIME: 180 MINIMUM_RETRY_THRESHOLD: 0 BUILDER_CONTAINER_IMAGE: <sample_builder_container_image> 5 # Kubernetes resource options K8S_API_SERVER: <sample_k8s_api_server> 6 K8S_API_TLS_CA: <sample_crt_file> 7 VOLUME_SIZE: 8G KUBERNETES_DISTRIBUTION: openshift CONTAINER_MEMORY_LIMITS: 300m 8 CONTAINER_CPU_LIMITS: 1G 9 CONTAINER_MEMORY_REQUEST: 300m 10 CONTAINER_CPU_REQUEST: 1G 11 NODE_SELECTOR_LABEL_KEY: \"\" NODE_SELECTOR_LABEL_VALUE: \"\" SERVICE_ACCOUNT_NAME: <sample_service_account_name> SERVICE_ACCOUNT_TOKEN: <sample_account_token> 12",
"FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false FEATURE_QUOTA_MANAGEMENT: true FEATURE_BUILD_SUPPORT: True BUILDMAN_HOSTNAME: example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org:443 BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ JOB_REGISTRATION_TIMEOUT: 3600 ORCHESTRATOR: REDIS_HOST: example-registry-quay-redis REDIS_PASSWORD: \"\" REDIS_SSL: false REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetesPodman NAME: openshift BUILDER_NAMESPACE: virtual-builders SETUP_TIME: 180 MINIMUM_RETRY_THRESHOLD: 0 BUILDER_CONTAINER_IMAGE: quay.io/projectquay/quay-builder:3.7.0-rc.2 # Kubernetes resource options K8S_API_SERVER: api.docs.quayteam.org:6443 K8S_API_TLS_CA: /conf/stack/extra_ca_certs/build_cluster.crt VOLUME_SIZE: 8G KUBERNETES_DISTRIBUTION: openshift CONTAINER_MEMORY_LIMITS: 1G CONTAINER_CPU_LIMITS: 1080m CONTAINER_MEMORY_REQUEST: 1G CONTAINER_CPU_REQUEST: 580m NODE_SELECTOR_LABEL_KEY: \"\" NODE_SELECTOR_LABEL_VALUE: \"\" SERVICE_ACCOUNT_NAME: quay-builder SERVICE_ACCOUNT_TOKEN: \"eyJhbGciOiJSUzI1NiIsImtpZCI6IldfQUJkaDVmb3ltTHZ0dGZMYjhIWnYxZTQzN2dJVEJxcDJscldSdEUtYWsifQ\"",
"[req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = example-registry-quay-quay-enterprise.apps.docs.quayteam.org 1 DNS.2 = example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org 2",
"openssl genrsa -out rootCA.key 2048 openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem openssl genrsa -out ssl.key 2048 openssl req -new -key ssl.key -out ssl.csr openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf",
"- kind: tls managed: false",
"- lastTransitionTime: '2022-03-28T12:56:49Z' lastUpdateTime: '2022-03-28T12:56:49Z' message: >- required component `tls` marked as unmanaged, but `configBundleSecret` is missing necessary fields reason: ConfigInvalid status: 'True'",
"oc create secret generic -n quay-enterprise temp-crt --from-file extra_ca_cert_build_cluster.crt",
"oc create secret generic -n quay-enterprise quay-config-ssl --from-file ssl.cert --from-file ssl.key",
"kind: Secret apiVersion: v1 metadata: name: temp-crt namespace: quay-enterprise uid: a4818adb-8e21-443a-a8db-f334ace9f6d0 resourceVersion: '9087855' creationTimestamp: '2022-03-28T13:05:30Z' data: extra_ca_cert_build_cluster.crt: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURNakNDQWhxZ0F3SUJBZ0l. type: Opaque",
"kind: Secret apiVersion: v1 metadata: name: quay-config-ssl namespace: quay-enterprise uid: 4f5ae352-17d8-4e2d-89a2-143a3280783c resourceVersion: '9090567' creationTimestamp: '2022-03-28T13:10:34Z' data: ssl.cert: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVaakNDQTA2Z0F3SUJBZ0lVT ssl.key: >- LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBc type: Opaque",
"oc get quayregistries.quay.redhat.com -o jsonpath=\"{.items[0].spec.configBundleSecret}{'\\n'}\" -n quay-enterprise",
"kind: Secret apiVersion: v1 metadata: name: init-config-bundle-secret namespace: quay-enterprise uid: 4724aca5-bff0-406a-9162-ccb1972a27c1 resourceVersion: '4383160' creationTimestamp: '2022-03-22T12:35:59Z' data: config.yaml: >- RkVBVFVSRV9VU0VSX0lOSVRJQUxJWkU6IHRydWUKQlJ extra_ca_cert_build_cluster.crt: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURNakNDQWhxZ0F3SUJBZ0ldw. ssl.cert: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVaakNDQTA2Z0F3SUJBZ0lVT ssl.key: >- LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBc type: Opaque",
"oc get pods -n quay-enterprise",
"NAME READY STATUS RESTARTS AGE example-registry-quay-app-6786987b99-vgg2v 0/1 ContainerCreating 0 2s example-registry-quay-app-7975d4889f-q7tvl 1/1 Running 0 5d21h example-registry-quay-app-7975d4889f-zn8bb 1/1 Running 0 5d21h example-registry-quay-app-upgrade-lswsn 0/1 Completed 0 6d1h example-registry-quay-config-editor-77847fc4f5-nsbbv 0/1 ContainerCreating 0 2s example-registry-quay-config-editor-c6c4d9ccd-2mwg2 1/1 Running 0 5d21h example-registry-quay-database-66969cd859-n2ssm 1/1 Running 0 6d1h example-registry-quay-mirror-764d7b68d9-jmlkk 1/1 Terminating 0 5d21h example-registry-quay-mirror-764d7b68d9-jqzwg 1/1 Terminating 0 5d21h example-registry-quay-redis-7cc5f6c977-956g8 1/1 Running 0 5d21h",
"oc get pods -n quay-enterprise",
"example-registry-quay-app-6786987b99-sz6kb 1/1 Running 0 7m45s example-registry-quay-app-6786987b99-vgg2v 1/1 Running 0 9m1s example-registry-quay-app-upgrade-lswsn 0/1 Completed 0 6d1h example-registry-quay-config-editor-77847fc4f5-nsbbv 1/1 Running 0 9m1s example-registry-quay-database-66969cd859-n2ssm 1/1 Running 0 6d1h example-registry-quay-mirror-758fc68ff7-5wxlp 1/1 Running 0 8m29s example-registry-quay-mirror-758fc68ff7-lbl82 1/1 Running 0 8m29s example-registry-quay-redis-7cc5f6c977-956g8 1/1 Running 0 5d21h",
"Common Name (CN) example-registry-quay-quay-enterprise.apps.docs.quayteam.org Organisation (O) DOCS Organisational Unit (OU) QUAY",
"https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/repository/quayadmin/testrepo?tab=builds",
"https://github.com/gabriel-rh/actions_test.git",
"oc get pods -n virtual-builders",
"NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Running 0 7s",
"oc get pods -n virtual-builders",
"NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Terminating 0 9s",
"oc get pods -n virtual-builders",
"No resources found in virtual-builders namespace.",
"[ { \"AllowedHeaders\": [ \"Authorization\" ], \"AllowedMethods\": [ \"GET\" ], \"AllowedOrigins\": [ \"*\" ], \"ExposeHeaders\": [], \"MaxAgeSeconds\": 3000 }, { \"AllowedHeaders\": [ \"Content-Type\", \"x-amz-acl\", \"origin\" ], \"AllowedMethods\": [ \"PUT\" ], \"AllowedOrigins\": [ \"*\" ], \"ExposeHeaders\": [], \"MaxAgeSeconds\": 3000 } ]",
"cat gcp_cors.json",
"[ { \"origin\": [\"*\"], \"method\": [\"GET\"], \"responseHeader\": [\"Authorization\"], \"maxAgeSeconds\": 3600 }, { \"origin\": [\"*\"], \"method\": [\"PUT\"], \"responseHeader\": [ \"Content-Type\", \"x-goog-acl\", \"origin\"], \"maxAgeSeconds\": 3600 } ]",
"gcloud storage buckets update gs://<bucket_name> --cors-file=./gcp_cors.json",
"Updating Completed 1",
"gcloud storage buckets describe gs://<bucket_name> --format=\"default(cors)\"",
"cors: - maxAgeSeconds: 3600 method: - GET origin: - '*' responseHeader: - Authorization - maxAgeSeconds: 3600 method: - PUT origin: - '*' responseHeader: - Content-Type - x-goog-acl - origin"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/red_hat_quay_operator_features/red-hat-quay-builders-enhancement |
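Because the builder route name must be covered by the SAN list of the certificate served for Red Hat Quay (see the TLS component note above), it can be worth inspecting that certificate before triggering a build. The following is a generic openssl check rather than a Quay-specific tool, and the hostname is taken from the examples above; replace it with your own builder route:
$ openssl s_client -connect example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org:443 -servername example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'   # the builder hostname should appear in the SAN list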
Chapter 96. KafkaConnectStatus schema reference | Chapter 96. KafkaConnectStatus schema reference Used in: KafkaConnect
Property (Property type): Description
conditions (Condition array): List of status conditions.
observedGeneration (integer): The generation of the CRD that was last reconciled by the operator.
url (string): The URL of the REST API endpoint for managing and monitoring Kafka Connect connectors.
connectorPlugins (ConnectorPlugin array): The list of connector plugins available in this Kafka Connect deployment.
labelSelector (string): Label selector for pods providing this resource.
replicas (integer): The current number of pods being used to provide this resource. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaConnectStatus-reference
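The status fields listed above can be read directly from the KafkaConnect custom resource. This is a hedged sketch; the resource name my-connect-cluster and the kafka namespace are assumptions, not values taken from the reference:
$ oc get kafkaconnect my-connect-cluster -n kafka -o jsonpath='{.status.url}{"\n"}{.status.replicas}{"\n"}'   # reported REST endpoint and replica count
$ oc get kafkaconnect my-connect-cluster -n kafka -o yaml | grep -A20 '^status:'                              # conditions, observedGeneration, and connectorPlugins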
Deploying OpenShift Data Foundation on any platform | Deploying OpenShift Data Foundation on any platform Red Hat OpenShift Data Foundation 4.17 Instructions on deploying OpenShift Data Foundation on any platform including virtualized and cloud environments. Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation to use local storage on any platform. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . Preface Red Hat OpenShift Data Foundation supports deployment on any platform that you provision including bare metal, virtualized, and cloud environments. Both internal and external OpenShift Data Foundation clusters are supported on these environments. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, follow the appropriate deployment process based on your requirement: Internal mode Deploy using local storage devices Deploy standalone Multicloud Object Gateway component External mode Chapter 1. Preparing to deploy OpenShift Data Foundation When you deploy OpenShift Data Foundation on OpenShift Container Platform using the local storage devices on any platform, you can create internal cluster resources. This approach internally provisions base services so that all the applications can access additional storage classes. You can also deploy OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster and IBM FlashSystem. For instructions, see Deploying OpenShift Data Foundation in external mode . External mode deployment works on clusters that are detected as non-cloud. If your cluster is not detected correctly, open up a bug in Bugzilla . Before you begin the deployment of Red Hat OpenShift Data Foundation using a local storage, ensure that you meet the resource requirements. See Requirements for installing OpenShift Data Foundation using local storage devices . After completing the preparatory steps, perform the following procedures: Install the Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation cluster on any platform . 1.1. Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker or infrastructure nodes with locally attached-storage devices on each of them. Each of the three selected nodes must have at least one raw block device available. OpenShift Data Foundation uses the one or more available raw block devices. The devices you use must be empty, the disks must not include Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs) remaining on the disk. 
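One way to confirm that a candidate disk is really empty before deployment is to inspect it from a node debug shell. This is a sketch that assumes <node_name> and <device> are replaced with real values; lsblk and pvs are standard Linux utilities, not OpenShift Data Foundation tooling:
$ oc debug node/<node_name> -- chroot /host lsblk -f /dev/<device>   # should show no filesystem, partition, or LVM signatures
$ oc debug node/<node_name> -- chroot /host pvs /dev/<device>        # should report that no physical volume is found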
For more information, see the Resource requirements section in the Planning guide . Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription. A valid Red Hat Advanced Cluster Management (RHACM) for Kubernetes subscription. To know in detail how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed disaster recovery solution requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with a minimum configuration when the resource requirement for a standard deployment is not met. For more information, see the Resource requirements section in the Planning guide . Chapter 2. Deploy OpenShift Data Foundation using local storage devices You can deploy OpenShift Data Foundation on any platform including virtualized and cloud environments where OpenShift Container Platform is already installed. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway . Perform the following steps to deploy OpenShift Data Foundation: Install the Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Create an OpenShift Data Foundation cluster on any platform . 2.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 2.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. 
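The commands referred to in the Important note above are not reproduced in this extract. A typical invocation looks like the following; the node name is a placeholder and the taint key assumes the standard OpenShift Data Foundation infra taint:
$ oc create namespace openshift-storage                                          # create the namespace if it does not exist yet
$ oc annotate namespace openshift-storage openshift.io/node-selector=            # specify a blank node selector for openshift-storage
$ oc adm taint nodes <node_name> node.ocs.openshift.io/storage=true:NoSchedule   # dedicate the node to OpenShift Data Foundation workloads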
For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.3. Creating OpenShift Data Foundation cluster on any platform Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. Ensure that the disk type is SSD, which is the only supported disk type. If you want to use multus networking, you must create network attachment definitions (NADs) before deployment which is later attached to the cluster. For more information, see Multi network plug-in (Multus) support and Creating network attachment definitions . Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, perform the following: Select Full Deployment for the Deployment type option. Select the Create a new StorageClass using the local storage devices option. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . Important You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . The local volume set name appears as the default value for the storage class name. You can change the name. 
Select one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on the selected nodes. Important The flexible scaling feature is enabled only when the storage cluster that you created with three or more nodes are spread across fewer than the minimum requirement of three availability zones. For information about flexible scaling, see knowledgebase article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled . Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed if at least 24 CPUs and 72 GiB of RAM is available. For minimum starting node requirements, see the Resource requirements section in the Planning guide. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Block is selected as the default value. Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of Persistent Volumes (PVs) that you can create on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . 
Select one or both of the following Encryption level : Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Select one of the following: Default (SDN) If you are using a single network. Custom (Multus) If you are using multiple network interfaces. Select a Public Network Interface from the dropdown. Select a Cluster Network Interface from the dropdown. Note If you are using only one additional network interface, select the single NetworkAttachementDefinition , that is, ocs-public-cluster for the Public Network Interface and leave the Cluster Network Interface blank. Click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. 
An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System Click ocs-storagecluster-storagesystem -> Resources . Verify that the Status of the StorageCluster is Ready and has a green tick mark to it. To verify if the flexible scaling is enabled on your storage cluster, perform the following steps (for arbiter mode, flexible scaling is disabled): In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System Click ocs-storagecluster-storagesystem -> Resources -> ocs-storagecluster . In the YAML tab, search for the keys flexibleScaling in the spec section and failureDomain in the status section. If flexible scaling is true and failureDomain is set to host, flexible scaling feature is enabled: To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation installation . To verify the multi networking (Multus), see Verifying the Multus networking . Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide and follow the instructions in the "Scaling storage of bare metal OpenShift Data Foundation cluster" section. 2.4. Verifying OpenShift Data Foundation deployment To verify that OpenShift Data Foundation is deployed correctly: Verify the state of the pods . Verify that the OpenShift Data Foundation cluster is healthy . Verify that the Multicloud Object Gateway is healthy . Verify that the OpenShift Data Foundation specific storage classes exist . Verify the Multus networking . 2.4.1. Verifying the state of the pods Procedure Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. 
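If you prefer a command-line spot check at this point, the following is a minimal sketch; it assumes the default openshift-storage namespace, and the second command only prints pods that have not yet reached the Running or Completed state:

oc get pods -n openshift-storage
oc get pods -n openshift-storage --no-headers | grep -Ev 'Running|Completed'

An empty result from the second command indicates that all pods have reached a healthy state.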
For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) 2.4.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.4.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, it can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 2.4.4. Verifying that the specific storage classes exist Procedure Click Storage -> Storage Classes from the left pane of the OpenShift Web Console.
Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw 2.4.5. Verifying the Multus networking To determine if Multus is working in your cluster, verify the Multus networking. Procedure Based on your Network configuration choices, the OpenShift Data Foundation operator will do one of the following: If only a single NetworkAttachmentDefinition (for example, ocs-public-cluster ) was selected for the Public Network Interface, then the traffic between the application pods and the OpenShift Data Foundation cluster will happen on this network. Additionally, the cluster will be self-configured to also use this network for the replication and rebalancing traffic between OSDs. If both NetworkAttachmentDefinitions (for example, ocs-public and ocs-cluster ) were selected for the Public Network Interface and the Cluster Network Interface respectively during the Storage Cluster installation, then client storage traffic will be on the public network, and the cluster network will be used for the replication and rebalancing traffic between OSDs. To verify the network configuration is correct, complete the following: In the OpenShift console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources -> ocs-storagecluster . In the YAML tab, search for network in the spec section and ensure the configuration is correct for your network interface choices. This example is for separating the client storage traffic from the storage replication traffic. Sample output: To verify the network configuration is correct using the command line interface, run the following commands: Sample output: Confirm that the OSD pods are using the correct network In the openshift-storage namespace use one of the OSD pods to verify the pod has connectivity to the correct networks. This example is for separating the client storage traffic from the storage replication traffic. Note Only the OSD pods will connect to both Multus public and cluster networks if both are created. All other OCS pods will connect to the Multus public network. Sample output: To confirm that the OSD pods are using the correct network using the command line interface, run the following command (requires the jq utility): Sample output: Chapter 3. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with the OpenShift Data Foundation provides flexibility in deployment and helps to reduce resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing the Local Storage Operator. Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, it can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 3.1.
Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 3.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.3. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. 
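Before you begin, you can also confirm from the command line that the operator installation completed; this is a minimal sketch that assumes the default openshift-storage namespace:

oc get csv -n openshift-storage
oc get pods -n openshift-storage

The ClusterServiceVersion for OpenShift Data Foundation should report a Succeeded phase before you continue.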
Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads -> Pods from the OpenShift Web Console. 
Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) Chapter 4. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you to interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage -> Data Foundation -> Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the model's upper left corner to close and return to the view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. Chapter 5. Uninstalling OpenShift Data Foundation 5.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation . | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"spec: flexibleScaling: true [...] status: failureDomain: host",
"[..] spec: [..] network: ipFamily: IPv4 provider: multus selectors: cluster: openshift-storage/ocs-cluster public: openshift-storage/ocs-public [..]",
"oc get storagecluster ocs-storagecluster -n openshift-storage -o=jsonpath='{.spec.network}{\"\\n\"}'",
"{\"ipFamily\":\"IPv4\",\"provider\":\"multus\",\"selectors\":{\"cluster\":\"openshift-storage/ocs-cluster\",\"public\":\"openshift-storage/ocs-public\"}}",
"oc get -n openshift-storage USD(oc get pods -n openshift-storage -o name -l app=rook-ceph-osd | grep 'osd-0') -o=jsonpath='{.metadata.annotations.k8s\\.v1\\.cni\\.cncf\\.io/network-status}{\"\\n\"}'",
"[{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.129.2.30\" ], \"default\": true, \"dns\": {} },{ \"name\": \"openshift-storage/ocs-cluster\", \"interface\": \"net1\", \"ips\": [ \"192.168.2.1\" ], \"mac\": \"e2:04:c6:81:52:f1\", \"dns\": {} },{ \"name\": \"openshift-storage/ocs-public\", \"interface\": \"net2\", \"ips\": [ \"192.168.1.1\" ], \"mac\": \"ee:a0:b6:a4:07:94\", \"dns\": {} }]",
"oc get -n openshift-storage USD(oc get pods -n openshift-storage -o name -l app=rook-ceph-osd | grep 'osd-0') -o=jsonpath='{.metadata.annotations.k8s\\.v1\\.cni\\.cncf\\.io/network-status}{\"\\n\"}' | jq -r '.[].name'",
"openshift-sdn openshift-storage/ocs-cluster openshift-storage/ocs-public",
"oc annotate namespace openshift-storage openshift.io/node-selector="
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html-single/deploying_openshift_data_foundation_on_any_platform/index |
Chapter 4. Updating your Red Hat account information | Chapter 4. Updating your Red Hat account information You use your Red Hat account to log in to the Red Hat Hybrid Cloud Console. The following table lists information that you can update on your Red Hat account: Table 4.1. Red Hat account information Label Description Personal Update personal information including your name, email address, job title, and phone numbers. Login & password Change your password and manage accounts connected to your company's single sign-on (SSO). Note that you cannot change your Red Hat login after it has been created. Postal address Update your mailing address. Language and location Change your preferred language and time zone. Errata notifications Control which errata notifications you receive and when you receive them. Errata notifications are email notifications of security updates, bug fixes, and enhancements. Prerequisites You are logged in to the Hybrid Cloud Console. Procedure To update your Red Hat account information, click your user avatar in the upper right of the Red Hat Hybrid Cloud Console window. A drop-down list appears. Click My profile . Select a label under Your information or Your preferences . Update your information and then click Save . | null | https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/getting_started_with_the_red_hat_hybrid_cloud_console/updating-your-red-hat-account_getting-started |
Chapter 10. Precaching glance images into nova | Chapter 10. Precaching glance images into nova When you configure OpenStack Compute to use local ephemeral storage, glance images are cached to quicken the deployment of instances. If an image that is necessary for an instance is not already cached, it is downloaded to the local disk of the Compute node when you create the instance. The process of downloading a glance image takes a variable amount of time, depending on the image size and network characteristics such as bandwidth and latency. If you attempt to start an instance, and the image is not available on the local Ceph cluster, launching an instance will fail with the following message: You see the following in the Compute service log: The instance fails to start due to a parameter in the nova.conf configuration file called never_download_image_if_on_rbd , which is set to true by default for DCN deployments. You can control this value using the heat parameter NovaDisableImageDownloadToRbd which you can find in the dcn-hci.yaml file. If you set the value of NovaDisableImageDownloadToRbd to false prior to deploying the overcloud, the following occurs: The Compute service (nova) will automatically stream images available at the central location if they are not available locally. You will not be using a COW copy from glance images. The Compute (nova) storage will potentially contain multiple copies of the same image, depending on the number of instances using it. You may saturate both the WAN link to the central location as well as the nova storage pool. Red Hat recommends leaving this value set to true, and ensuring required images are available locally prior to launching an instance. For more information on making images available to the edge, see Section A.1.3, "Copying an image to a new site" . For images that are local, you can speed up the creation of VMs by using the tripleo_nova_image_cache.yml ansible playbook to pre-cache commonly used images or images that are likely to be deployed in the near future. 10.1. Running the tripleo_nova_image_cache.yml ansible playbook Prerequisites Authentication credentials to the correct API in the shell environment. Before running the command provided in each step, you must ensure that the correct authentication file is sourced. Procedure Create an ansible inventory file for the stack. You can specify multiple stacks in a comma-delimited list to cache images at more than one site: Create a list of image IDs that you want to pre-cache: Retrieve a comprehensive list of available images: Create an ansible playbook argument file called nova_cache_args.yml , and add the IDs of the images that you want to pre-cache: Run the tripleo_nova_image_cache.yml ansible playbook: 10.2. Performance considerations You can specify the number of images that you want to download concurrently with the ansible forks parameter, which defaults to a value of 5 . You can reduce the time to distribute this image by increasing the value of the forks parameter; however, you must balance this with the increase in network and glance-api load. Use the --forks parameter to adjust concurrency as shown: 10.3. Optimizing the image distribution to DCN sites You can reduce WAN traffic by using a proxy for glance image distribution. When you configure a proxy: Glance images are downloaded to a single Compute node that acts as the proxy. The proxy redistributes the glance image to other Compute nodes in the inventory.
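For orientation, the end-to-end flow when a proxy is used follows the same commands shown earlier, but with the inventory scoped to a single site so that only that site's proxy node downloads from glance. This is a sketch only; the dcn0 plan name and the dcn0_inventory.yaml file name are examples:

source stackrc
tripleo-ansible-inventory --plan dcn0 --static-yaml-inventory dcn0_inventory.yaml
source centralrc
ansible-playbook -i dcn0_inventory.yaml --extra-vars "@nova_cache_args.yml" /usr/share/ansible/tripleo-playbooks/tripleo_nova_image_cache.yml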
You can place the following parameters in the nova_cache_args.yml ansible argument file to configure a proxy node. Set the tripleo_nova_image_cache_use_proxy parameter to true to enable the image cache proxy. The image proxy uses secure copy (scp) to distribute images to other nodes in the inventory. SCP is inefficient over networks with high latency, such as a WAN between DCN sites. Red Hat recommends that you limit the playbook target to a single DCN location, which correlates to a single stack. Use the tripleo_nova_image_cache_proxy_hostname parameter to select the image cache proxy. The default proxy is the first compute node in the ansible inventory file. Use the tripleo_nova_image_cache_plan parameter to limit the playbook inventory to a single site: 10.4. Configuring the nova-cache cleanup A background process runs periodically to remove images from the nova cache when both of the following conditions are true: The image is not in use by an instance. The age of the image is greater than the value for the nova parameter remove_unused_original_minimum_age_seconds . The default value for the remove_unused_original_minimum_age_seconds parameter is 86400 . The value is expressed in seconds and is equal to 24 hours. You can control this value with the NovaImageCacheTTL tripleo-heat-templates parameter during the initial deployment, or during a stack update of your cloud: When you instruct the playbook to pre-cache an image that already exists on a Compute node, ansible does not report a change, but the age of the image is reset to 0. Run the ansible play more frequently than the value of the NovaImageCacheTTL parameter to maintain a cache of images. | [
"Build of instance 3c04e982-c1d1-4364-b6bd-f876e399325b aborted: Image 20c5ff9d-5f54-4b74-830f-88e78b9999ed is unacceptable: No image locations are accessible",
"'Image %s is not on my ceph and [workarounds]/ never_download_image_if_on_rbd=True; refusing to fetch and upload.',",
"source stackrc tripleo-ansible-inventory --plan central,dcn0,dcn1 --static-yaml-inventory inventory.yaml",
"source centralrc openstack image list +--------------------------------------+---------+--------+ | ID | Name | Status | +--------------------------------------+---------+--------+ | 07bc2424-753b-4f65-9da5-5a99d8383fe6 | image_0 | active | | d5187afa-c821-4f22-aa4b-4e76382bef86 | image_1 | active | +--------------------------------------+---------+--------+",
"--- tripleo_nova_image_cache_images: - id: 07bc2424-753b-4f65-9da5-5a99d8383fe6 - id: d5187afa-c821-4f22-aa4b-4e76382bef86",
"source centralrc ansible-playbook -i inventory.yaml --extra-vars \"@nova_cache_args.yml\" /usr/share/ansible/tripleo-playbooks/tripleo_nova_image_cache.yml",
"ansible-playbook -i inventory.yaml --forks 10 --extra-vars \"@nova_cache_args.yml\" /usr/share/ansible/tripleo-playbooks/tripleo_nova_image_cache.yml",
"tripleo_nova_image_cache_use_proxy: true tripleo_nova_image_cache_proxy_hostname: dcn0-novacompute-1 tripleo_nova_image_cache_plan: dcn0",
"parameter_defaults: NovaImageCacheTTL: 604800 # Default to 7 days for all compute roles Compute2Parameters: NovaImageCacheTTL: 1209600 # Override to 14 days for the Compute2 compute role"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/distributed_compute_node_and_storage_deployment/precaching-glance-images-into-nova |
Appendix A. Installing Metrics Store with Satellite | Appendix A. Installing Metrics Store with Satellite You can use Satellite to install Metrics Store on a disconnected environment. Prerequisites The Satellite server is configured. For more information, see Disconnected installation using Satellite Docker registry Note If you encounter a missing image or a reference to an online image (depending on which applications you are using), consider updating the references in the deployment or build configuration of the application, or re-tag Docker images as a temporary measure (just to rule out that the image is not reachable). The following OpenShift component images are synchronized through Docker on your Satellite server: Two hosts are created on the Satellite server - one for the Metrics Store Installer virtual machine, and one for the OpenShift virtual machine, as follows: Create hosts on Satellite - see Creating a Host . Assign static IP addresses and MAC addresses for the virtual machines. The host for the OpenShift virtual machine should be of the format master-<suffix>0 to match the OpenShift virtual machine hostname. The qcow image is available on the Manager machine. Go to RHEL product software . In the Product Software tab, download the Red Hat Enterprise Linux KVM Guest Image to the Manager machine. Running the Ansible role On the Manager machine, copy /etc/ovirt-engine-metrics/metrics-store-config-satellite.yml.example to metrics-store-config.yml . Update the values of /etc/ovirt-engine-metrics/metrics-store-config.yml to match the details of your specific environment. On the Manager machine, copy /etc/ovirt-engine-metrics/secure_vars_satellite.yaml.example to /etc/ovirt-engine-metrics/secure_vars.yaml . Update the values of /etc/ovirt-engine-metrics/secure_vars.yaml to match the details of your specific environment. Encrypt the secure_vars.yaml file. Go to the ovirt-engine-metrics repo. Run the metrics store installation playbook that creates the metrics store installer virtual machine. Log in to the Administration Portal and review the Metrics Store installer virtual machine creation. Log in to the Metrics Store installer virtual machine. Run the Ansible playbook that deploys OpenShift on the virtual machines that were created. | [
"openshift3/oauth-proxy openshift3/ose-console openshift3/ose-control-plane openshift3/ose-deployer openshift3/ose-docker-registry openshift3/ose-haproxy-router openshift3/ose-logging-auth-proxy openshift3/ose-logging-curator5 openshift3/ose-logging-elasticsearch5 openshift3/ose-logging-fluentd openshift3/ose-logging-kibana5 openshift3/ose-node openshift3/ose-pod openshift3/ose-web-console openshift3/registry-console rhel7/etcd",
"cp /etc/ovirt-engine-metrics/metrics-store-config-satellite.yml.example /etc/ovirt-engine-metrics/config.yml.d/metrics-store-config.yml",
"vi /etc/ovirt-engine-metrics/config.yml.d/metrics-store-config.yml",
"cp /etc/ovirt-engine-metrics/secure_vars_satellite.yaml.example /etc/ovirt-engine-metrics/secure_vars.yaml",
"vi /etc/ovirt-engine-metrics/secure_vars.yaml",
"ansible-vault encrypt /etc/ovirt-engine-metrics/secure_vars.yaml",
"cd /usr/share/ovirt-engine-metrics",
"ANSIBLE_JINJA2_EXTENSIONS=\"jinja2.ext.do\" ./configure_ovirt_machines_for_metrics.sh --playbook=ovirt-metrics-store-installation.yml --ask-vault-pass -vvv",
"ssh root@<metrics-store-installer ip or fqdn>",
"ANSIBLE_CONFIG=\"/usr/share/ansible/openshift-ansible/ansible.cfg\" ANSIBLE_ROLES_PATH=\"/usr/share/ansible/roles/:/usr/share/ansible/openshift-ansible/roles\" ansible-playbook -i integ.ini install_okd.yaml -e @vars.yaml -e @secure_vars.yaml --ask-vault-pass -vvv"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/metrics_store_installation_guide/install_with_satellite |
Chapter 5. Sending traces and metrics to the OpenTelemetry Collector | Chapter 5. Sending traces and metrics to the OpenTelemetry Collector You can set up and use the Red Hat build of OpenTelemetry to send traces to the OpenTelemetry Collector or the TempoStack instance. Sending traces and metrics to the OpenTelemetry Collector is possible with or without sidecar injection. 5.1. Sending traces and metrics to the OpenTelemetry Collector with sidecar injection You can set up sending telemetry data to an OpenTelemetry Collector instance with sidecar injection. The Red Hat build of OpenTelemetry Operator allows sidecar injection into deployment workloads and automatic configuration of your instrumentation to send telemetry data to the OpenTelemetry Collector. Prerequisites The Red Hat OpenShift distributed tracing platform (Tempo) is installed, and a TempoStack instance is deployed. You have access to the cluster through the web console or the OpenShift CLI ( oc ): You are logged in to the web console as a cluster administrator with the cluster-admin role. An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Procedure Create a project for an OpenTelemetry Collector instance. apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability Create a service account. apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar namespace: observability Grant the permissions to the service account for the k8sattributes and resourcedetection processors. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-sidecar namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Deploy the OpenTelemetry Collector as a sidecar. apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: serviceAccount: otel-collector-sidecar mode: sidecar config: serviceAccount: otel-collector-sidecar receivers: otlp: protocols: grpc: {} http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: "tempo-<example>-gateway:8090" 1 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp] 1 This points to the Gateway of the TempoStack instance deployed by using the <example> Tempo Operator. Create your deployment using the otel-collector-sidecar service account. Add the sidecar.opentelemetry.io/inject: "true" annotation to your Deployment object. This will inject all the needed environment variables to send data from your workloads to the OpenTelemetry Collector instance. 5.2. Sending traces and metrics to the OpenTelemetry Collector without sidecar injection You can set up sending telemetry data to an OpenTelemetry Collector instance without sidecar injection, which involves manually setting several environment variables. 
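As a convenience, the sidecar injection annotation described at the end of the previous section can also be applied from the command line instead of editing the Deployment manifest. This is a sketch only; the my-app deployment name and my-app-namespace are placeholders:

oc patch deployment my-app -n my-app-namespace --type merge -p '{"spec":{"template":{"metadata":{"annotations":{"sidecar.opentelemetry.io/inject":"true"}}}}}'

After the patch, the pods are re-created with the OpenTelemetry Collector sidecar injected and the environment variables required to send telemetry data to the Collector instance.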
Prerequisites The Red Hat OpenShift distributed tracing platform (Tempo) is installed, and a TempoStack instance is deployed. You have access to the cluster through the web console or the OpenShift CLI ( oc ): You are logged in to the web console as a cluster administrator with the cluster-admin role. An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Procedure Create a project for an OpenTelemetry Collector instance. apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability Create a service account. apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability Grant the permissions to the service account for the k8sattributes and resourcedetection processors. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Deploy the OpenTelemetry Collector instance with the OpenTelemetryCollector custom resource. apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: "tempo-<example>-distributor:4317" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] 1 This points to the Gateway of the TempoStack instance deployed by using the <example> Tempo Operator. Set the environment variables in the container with your instrumented application. Name Description Default value OTEL_SERVICE_NAME Sets the value of the service.name resource attribute. "" OTEL_EXPORTER_OTLP_ENDPOINT Base endpoint URL for any signal type with an optionally specified port number. https://localhost:4317 OTEL_EXPORTER_OTLP_CERTIFICATE Path to the certificate file for the TLS credentials of the gRPC client. https://localhost:4317 OTEL_TRACES_SAMPLER Sampler to be used for traces. parentbased_always_on OTEL_EXPORTER_OTLP_PROTOCOL Transport protocol for the OTLP exporter. grpc OTEL_EXPORTER_OTLP_TIMEOUT Maximum time interval for the OTLP exporter to wait for each batch export. 10s OTEL_EXPORTER_OTLP_INSECURE Disables client transport security for gRPC requests. An HTTPS schema overrides it. False | [
"apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar namespace: observability",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-sidecar namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: serviceAccount: otel-collector-sidecar mode: sidecar config: serviceAccount: otel-collector-sidecar receivers: otlp: protocols: grpc: {} http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: \"tempo-<example>-gateway:8090\" 1 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp]",
"apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-<example>-distributor:4317\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/red_hat_build_of_opentelemetry/otel-sending-traces-and-metrics-to-otel-collector |
Appendix C. Configuring a Host for PCI Passthrough | Appendix C. Configuring a Host for PCI Passthrough Note This is one in a series of topics that show how to set up and configure SR-IOV on Red Hat Virtualization. For more information, see Setting Up and Configuring SR-IOV . Enabling PCI passthrough allows a virtual machine to use a host device as if the device were directly attached to the virtual machine. To enable the PCI passthrough function, you must enable virtualization extensions and the IOMMU function. The following procedure requires you to reboot the host. If the host is attached to the Manager already, ensure you place the host into maintenance mode first. Prerequisites Ensure that the host hardware meets the requirements for PCI device passthrough and assignment. See PCI Device Requirements for more information. Configuring a Host for PCI Passthrough Enable the virtualization extension and IOMMU extension in the BIOS. See Enabling Intel VT-x and AMD-V virtualization hardware extensions in BIOS in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide for more information. Enable the IOMMU flag in the kernel by selecting the Hostdev Passthrough & SR-IOV check box when adding the host to the Manager or by editing the grub configuration file manually. To enable the IOMMU flag from the Administration Portal, see Adding Standard Hosts to the Red Hat Virtualization Manager and Kernel Settings Explained . To edit the grub configuration file manually, see Enabling IOMMU Manually . For GPU passthrough, you need to run additional configuration steps on both the host and the guest system. See GPU device passthrough: Assigning a host GPU to a single virtual machine in Setting up an NVIDIA GPU for a virtual machine in Red Hat Virtualization for more information. Enabling IOMMU Manually Enable IOMMU by editing the grub configuration file. Note If you are using IBM POWER8 hardware, skip this step as IOMMU is enabled by default. For Intel, boot the machine, and append intel_iommu=on to the end of the GRUB_CMDLINE_LINUX line in the grub configuration file. For AMD, boot the machine, and append amd_iommu=on to the end of the GRUB_CMDLINE_LINUX line in the grub configuration file. Note If intel_iommu=on or amd_iommu=on works, you can try adding iommu=pt or amd_iommu=pt . The pt option only enables IOMMU for devices used in passthrough and provides better host performance. However, the option might not be supported on all hardware. Revert to the previous option if the pt option does not work for your host. If the passthrough fails because the hardware does not support interrupt remapping, you can consider enabling the allow_unsafe_interrupts option if the virtual machines are trusted. The allow_unsafe_interrupts option is not enabled by default because enabling it potentially exposes the host to MSI attacks from virtual machines. To enable the option: Refresh the grub.cfg file and reboot the host for these changes to take effect: To enable SR-IOV and assign dedicated virtual NICs to virtual machines, see https://access.redhat.com/articles/2335291 . | [
"vi /etc/default/grub GRUB_CMDLINE_LINUX=\"nofb splash=quiet console=tty0 ... intel_iommu=on",
"vi /etc/default/grub GRUB_CMDLINE_LINUX=\"nofb splash=quiet console=tty0 ... amd_iommu=on",
"vi /etc/modprobe.d options vfio_iommu_type1 allow_unsafe_interrupts=1",
"grub2-mkconfig -o /boot/grub2/grub.cfg",
"reboot"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_standalone_manager_with_local_databases/configuring_a_host_for_pci_passthrough_sm_localdb_deploy |
Testing guide Camel K | Testing guide Camel K Red Hat build of Apache Camel K 1.10.9 Test your Camel K integration locally and on cloud infrastructure | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/testing_guide_camel_k/index |
1.2. Overview | 1.2. Overview This document contains information about the known issues of Red Hat JBoss Data Grid version 6.6.2. Customers are requested to read this documentation prior to installing this version. Report a bug | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/6.6.2_release_notes/overview34 |
Chapter 4. Plug-in Implemented Server Functionality Reference | Chapter 4. Plug-in Implemented Server Functionality Reference This chapter contains reference information on Red Hat Directory Server plug-ins. The configuration for each part of Directory Server plug-in functionality has its own separate entry and set of attributes under the subtree cn=plugins,cn=config . Some of these attributes are common to all plug-ins while others may be particular to a specific plug-in. Check which attributes are currently being used by a given plug-in by performing an ldapsearch on the cn=config subtree. All plug-ins are instances of the nsSlapdPlugin object class, which in turn inherits from the extensibleObject object class. For plug-in configuration attributes to be taken into account by the server, both of these object classes (in addition to the top object class) must be present in the entry, as shown in the following example: 4.1. Server Plug-in Functionality Reference The following tables provide a quick overview of the plug-ins provided with Directory Server, along with their configurable options, configurable arguments, default setting, dependencies, general performance-related information, and further reading. These tables assist in weighing plug-in performance gains and costs and choose the optimal settings for the deployment. The Further Information section cross-references further reading, where this is available. 4.1.1. 7-bit Check Plug-in Plug-in Parameter Description Plug-in ID NS7bitAtt DN of Configuration Entry cn=7-bit check,cn=plugins,cn=config Description Checks certain attributes are 7-bit clean Type preoperation Configurable Options on off Default Setting on Configurable Arguments List of attributes ( uid mail userpassword ) followed by "," and then suffixes on which the check is to occur. Dependencies Database Performance-Related Information None Further Information 4.1.2. ACL Plug-in Plug-in Parameter Description Plug-in ID acl DN of Configuration Entry cn=ACL Plugin,cn=plugins,cn=config Description ACL access check plug-in Type accesscontrol Configurable Options on off Default Setting on Configurable Arguments None Dependencies Database Performance-Related Information Access control incurs a minimal performance hit. Leave this plug-in enabled since it is the primary means of access control for the server. Further Information 4.1.3. ACL Preoperation Plug-in Plug-in Parameter Description Plug-in ID acl DN of Configuration Entry cn=ACL preoperation,cn=plugins,cn=config Description ACL access check plug-in Type preoperation Configurable Options on off Default Setting on Configurable Arguments None Dependencies Database Performance-Related Information Access control incurs a minimal performance hit. Leave this plug-in enabled since it is the primary means of access control for the server. Further Information 4.1.4. Account Policy Plug-in Plug-in Parameter Description Plug-in ID none DN of Configuration Entry cn=Account Policy Plugin,cn=plugins,cn=config Description Defines a policy to lock user accounts after a certain expiration period or inactivity period. Type object Configurable Options on off Default Setting off Configurable Arguments A pointer to a configuration entry which contains the global account policy settings. Dependencies Database Performance-Related Information None Further Information 4.1.5. 
Account Usability Plug-in Plug-in Parameter Description Plug-in ID acctusability DN of Configuration Entry cn=Account Usability Plugin,cn=plugins,cn=config Description Checks the authentication status, or usability, of an account without actually authenticating as the given user Type preoperation Configurable Options on off Default Setting on Dependencies Database Performance-Related Information 4.1.6. AD DN Plug-in Plug-in Parameter Description Plug-in ID addn DN of Configuration Entry cn=addn,cn=plugins,cn=config Description Enables the usage of Active Directory-formatted user names, such as user_name and user_name @ domain , for bind operations. Type preoperation Configurable Options on off Default Setting off Configurable Arguments addn_default_domain : Sets the default domain that is automatically appended to user names without domain. Dependencies None Performance-Related Information 4.1.7. Attribute Uniqueness Plug-in Plug-in Parameter Description Plug-in ID NSUniqueAttr DN of Configuration Entry cn=Attribute Uniqueness,cn=plugins,cn=config Description Checks that the values of specified attributes are unique each time a modification occurs on an entry. For example, most sites require that a user ID and email address be unique. Type preoperation Configurable Options on off Default Setting off Configurable Arguments To check for UID attribute uniqueness in all listed subtrees, enter uid "DN" "DN"... . However, to check for UID attribute uniqueness when adding or updating entries with the requiredObjectClass , enter attribute="uid" MarkerObjectclass = "ObjectClassName" and, optionally requiredObjectClass = "ObjectClassName" . This starts checking for the required object classes from the parent entry containing the ObjectClass as defined by the MarkerObjectClass attribute. Dependencies Database Performance-Related Information Directory Server provides the UID Uniqueness Plug-in by default. To ensure unique values for other attributes, create instances of the Attribute Uniqueness Plug-in for those attributes. See the "Using the Attribute Uniqueness Plug-in" section in the Red Hat Directory Server Administration Guide for more information about the Attribute Uniqueness Plug-in. The UID Uniqueness Plug-in is off by default due to operation restrictions that need to be addressed before enabling the plug-in in a multi-supplier replication environment. Turning the plug-in on may slow down Directory Server performance. Further Information 4.1.8. Auto Membership Plug-in Plug-in Parameter Description Plug-in ID Auto Membership DN of Configuration Entry cn=Auto Membership,cn=plugins,cn=config Description Container entry for automember definitions. Automember definitions search new entries and, if they match defined LDAP search filters and regular expression conditions, add the entry to a specified group automatically. Type preoperation Configurable Options on off Default Setting off Configurable Arguments None for the main plug-in entry. The definition entry must specify an LDAP scope, LDAP filter, default group, and member attribute format. The optional regular expression child entry can specify inclusive and exclusive expressions and a different target group. Dependencies Database Performance-Related Information None. Further Information 4.1.9. Binary Syntax Plug-in Warning Binary syntax is deprecated. Use Octet String syntax instead. Plug-in Parameter Description Plug-in ID bin-syntax DN of Configuration Entry cn=Binary Syntax,cn=plugins,cn=config Description Syntax for handling binary data. 
Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.10. Bit String Syntax Plug-in Plug-in Parameter Description Plug-in ID bitstring-syntax DN of Configuration Entry cn=Bit String Syntax,cn=plugins,cn=config Description Supports bit string syntax values and related matching rules from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.11. Bitwise Plug-in Plug-in Parameter Description Plug-in ID bitwise DN of Configuration Entry cn=Bitwise Plugin,cn=plugins,cn=config Description Matching rule for performing bitwise operations against the LDAP server Type matchingrule Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.12. Boolean Syntax Plug-in Plug-in Parameter Description Plug-in ID boolean-syntax DN of Configuration Entry cn=Boolean Syntax,cn=plugins,cn=config Description Supports boolean syntax values (TRUE or FALSE) and related matching rules from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.13. Case Exact String Syntax Plug-in Plug-in Parameter Description Plug-in ID ces-syntax DN of Configuration Entry cn=Case Exact String Syntax,cn=plugins,cn=config Description Supports case-sensitive matching or Directory String, IA5 String, and related syntaxes. This is not a case-exact syntax; this plug-in provides case-sensitive matching rules for different string syntaxes. Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.14. Case Ignore String Syntax Plug-in Plug-in Parameter Description Plug-in ID directorystring-syntax DN of Configuration Entry cn=Case Ignore String Syntax,cn=plugins,cn=config Description Supports case-insensitive matching rules for Directory String, IA5 String, and related syntaxes. This is not a case-insensitive syntax; this plug-in provides case-sensitive matching rules for different string syntaxes. Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.15. 
Chaining Database Plug-in Plug-in Parameter Description Plug-in ID chaining database DN of Configuration Entry cn=Chaining database,cn=plugins,cn=config Description Enables back end databases to be linked Type database Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information There are many performance related tuning parameters involved with the chaining database. See the "Maintaining Database Links" section in the Red Hat Directory Server Administration Guide . Further Information 4.1.16. Class of Service Plug-in Plug-in Parameter Description Plug-in ID cos DN of Configuration Entry cn=Class of Service,cn=plugins,cn=config Description Allows for sharing of attributes between entries Type object Configurable Options on off Default Setting on Configurable Arguments None Dependencies * Type: Database * Named: State Change Plug-in * Named: Views Plug-in Performance-Related Information Do not modify the configuration of this plug-in. Leave this plug-in running at all times. Further Information 4.1.17. Content Synchronization Plug-in Plug-in Parameter Description Plug-in ID content-sync-plugin DN of Configuration Entry cn=Content Synchronization,cn=plugins,cn=config Description Enables support for the SyncRepl protocol in Directory Server according to RFC 4533 . Type object Configurable Options on off Default Setting off Configurable Arguments None Dependencies Retro Changelog Plug-in Performance-Related Information If you know which back end or subtree clients access to synchronize data, limit the scope of the Retro Changelog plug-in accordingly. Further Information 4.1.18. Country String Syntax Plug-in Plug-in Parameter Description Plug-in ID countrystring-syntax DN of Configuration Entry cn=Country String Syntax,cn=plugins,cn=config Description Supports country naming syntax values and related matching rules from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.19. Delivery Method Syntax Plug-in Plug-in Parameter Description Plug-in ID delivery-syntax DN of Configuration Entry cn=Delivery Method Syntax,cn=plugins,cn=config Description Supports values that are lists of preferred deliver methods and related matching rules from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.20. deref Plug-in Plug-in Parameter Description Plug-in ID Dereference DN of Configuration Entry cn=deref,cn=plugins,cn=config Description For dereference controls in directory searches Type preoperation Configurable Options on off Default Setting on Configurable Arguments None Dependencies Database Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.21. Distinguished Name Syntax Plug-in Plug-in Parameter Description Plug-in ID dn-syntax DN of Configuration Entry cn=Distinguished Name Syntax,cn=plugins,cn=config Description Supports DN value syntaxes and related matching rules from RFC 4517 . 
Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.22. Distributed Numeric Assignment Plug-in Plug-in Information Description Plug-in ID Distributed Numeric Assignment Configuration Entry DN cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Description Distributed Numeric Assignment plugin Type preoperation Configurable Options on off Default Setting off Configurable Arguments Dependencies Database Performance-Related Information None Further Information 4.1.23. Enhanced Guide Syntax Plug-in Plug-in Parameter Description Plug-in ID enhancedguide-syntax DN of Configuration Entry cn=Enhanced Guide Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for creating complex criteria, based on attributes and filters, to build searches; from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.24. Facsimile Telephone Number Syntax Plug-in Plug-in Parameter Description Plug-in ID facsimile-syntax DN of Configuration Entry cn=Facsimile Telephone Number Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for fax numbers; from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.25. Fax Syntax Plug-in Plug-in Parameter Description Plug-in ID fax-syntax DN of Configuration Entry cn=Fax Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for storing images of faxed objects; from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.26. Generalized Time Syntax Plug-in Plug-in Parameter Description Plug-in ID time-syntax DN of Configuration Entry cn=Generalized Time Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for dealing with dates, times and time zones; from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.27. Guide Syntax Plug-in Warning This syntax is deprecated. Use Enhanced Guide syntax instead. Plug-in Parameter Description Plug-in ID guide-syntax DN of Configuration Entry cn=Guide Syntax,cn=plugins,cn=config Description Syntax for creating complex criteria, based on attributes and filters, to build searches Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.28. 
HTTP Client Plug-in Plug-in Parameter Description Plug-in ID http-client DN of Configuration Entry cn=HTTP Client,cn=plugins,cn=config Description HTTP client plug-in Type preoperation Configurable Options on off Default Setting on Configurable Arguments None Dependencies Database Performance-Related Information Further Information 4.1.29. Integer Syntax Plug-in Plug-in Parameter Description Plug-in ID int-syntax DN of Configuration Entry cn=Integer Syntax,cn=plugins,cn=config Description Supports integer syntaxes and related matching rules from RFC 4517. Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.30. Internationalization Plug-in Plug-in Parameter Description Plug-in ID orderingrule DN of Configuration Entry cn=Internationalization Plugin,cn=plugins,cn=config Description Enables internationalized strings to be ordered in the directory Type matchingrule Configurable Options on off Default Setting on Configurable Arguments The Internationalization Plug-in has one argument, which must not be modified, which specifies the location of the /etc/dirsrv/config/slapd-collations.conf file. This file stores the collation orders and locales used by the Internationalization Plug-in. Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.31. JPEG Syntax Plug-in Plug-in Parameter Description Plug-in ID jpeg-syntax DN of Configuration Entry cn=JPEG Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for JPEG image data; from RFC 4517. Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.32. ldbm database Plug-in Plug-in Parameter Description Plug-in ID ldbm-backend DN of Configuration Entry cn=ldbm database,cn=plugins,cn=config Description Implements local databases Type database Configurable Options Default Setting on Configurable Arguments None Dependencies * Syntax * matchingRule Performance-Related Information See Section 4.4, "Database Plug-in Attributes" for further information on database configuration. Further Information See the "Configuring Directory Databases" chapter in the Red Hat Directory Server Administration Guide . 4.1.33. Linked Attributes Plug-in Plug-in Parameter Description Plug-in ID Linked Attributes DN of Configuration Entry cn=Linked Attributes,cn=plugins,cn=config Description Container entry for linked-managed attribute configuration entries. Each configuration entry under the container links one attribute to another, so that when one entry is updated (such as a manager entry), any associated entries are automatically updated with a user-specified corresponding attribute (such as a custom directReports attribute). Type preoperation Configurable Options on off Default Setting off Configurable Arguments None for the main plug-in entry.
Each plug-in instance has three possible attributes: * linkType, which sets the primary attribute for the plug-in to monitor * managedType, which sets the attribute which will be managed dynamically by the plug-in whenever the attribute in linkType is modified * linkScope, which restricts the plug-in activity to a specific subtree within the directory tree Dependencies Database Performance-Related Information Any attribute set in linkType must only allow values in a DN format. Any attribute set in managedType must be multi-valued. Further Information 4.1.34. Managed Entries Plug-in Plug-in Information Description Plug-in ID Managed Entries Configuration Entry DN cn=Managed Entries,cn=plugins,cn=config Description Container entry for automatically generated directory entries. Each configuration entry defines a target subtree and a template entry. When a matching entry in the target subtree is created, then the plug-in automatically creates a new, related entry based on the template. Type preoperation Configurable Options on off Default Setting off Configurable Arguments None for the main plug-in entry. Each plug-in instance has four possible attributes: * originScope, which sets the search base * originFilter, which sets the search filter that identifies matching entries * managedScope, which sets the subtree under which to create new managed entries * managedTemplate, which is the template entry used to create the managed entries Dependencies Database Performance-Related Information None Further Information 4.1.35. MemberOf Plug-in Plug-in Information Description Plug-in ID memberOf Configuration Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Description Manages the memberOf attribute on user entries, based on the member attributes in the group entry. Type postoperation Configurable Options on off Default Setting off Configurable Arguments * memberOfAttr sets the attribute to generate in people's entries to show their group membership. * memberOfGroupAttr sets the attribute used to identify the DNs of group members. Dependencies Database Performance-Related Information None Further Information 4.1.36. Multi-master Replication Plug-in Plug-in Parameter Description Plug-in ID replication-multimaster DN of Configuration Entry cn=Multimaster Replication plugin,cn=plugins,cn=config Description Enables replication between two current Directory Servers Type object Configurable Options on off Default Setting on Configurable Arguments None Dependencies * Named: ldbm database * Named: DES * Named: Class of Service Performance-Related Information Further Information 4.1.37. Name and Optional UID Syntax Plug-in Plug-in Parameter Description Plug-in ID nameoptuid-syntax DN of Configuration Entry cn=Name And Optional UID Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules to store and search for a DN with an optional unique ID; from RFC 4517. Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.38. Numeric String Syntax Plug-in Plug-in Parameter Description Plug-in ID numstr-syntax DN of Configuration Entry cn=Numeric String Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for strings of numbers and spaces; from RFC 4517.
Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.39. Octet String Syntax Plug-in Note Use the Octet String syntax instead of Binary, which is deprecated. Plug-in Parameter Description Plug-in ID octetstring-syntax DN of Configuration Entry cn=Octet String Syntax,cn=plugins,cn=config Description Supports octet string syntaxes and related matching rules from RFC 4517. Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.40. OID Syntax Plug-in Plug-in Parameter Description Plug-in ID oid-syntax DN of Configuration Entry cn=OID Syntax,cn=plugins,cn=config Description Supports object identifier (OID) syntaxes and related matching rules from RFC 4517. Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.41. PAM Pass Through Auth Plug-in Plug-in Parameter Description Plug-in ID pam_passthruauth DN of Configuration Entry cn=PAM Pass Through Auth,cn=plugins,cn=config Description Enables pass-through authentication for PAM, meaning that a PAM service can use the Directory Server as its user authentication store. Type preoperation Configurable Options on off Default Setting on Configurable Arguments None Dependencies Database Performance-Related Information Further Information 4.1.42. Pass Through Authentication Plug-in Plug-in Parameter Description Plug-in ID passthruauth DN of Configuration Entry cn=Pass Through Authentication,cn=plugins,cn=config Description Enables pass-through authentication, the mechanism which allows one directory to consult another to authenticate bind requests. Type preoperation Configurable Options on off Default Setting off Configurable Arguments ldap://example.com:389/o=example Dependencies Database Performance-Related Information Pass-through authentication slows down bind requests a little because they have to make an extra hop to the remote server. See the "Using Pass-through Authentication" chapter in the Red Hat Directory Server Administration Guide . Further Information 4.1.43. Password Storage Schemes Directory Server implements the password storage schemes as plug-ins. However, the cn=Password Storage Schemes,cn=plugins,cn=config entry itself is just a container, not a plug-in entry. All password storage scheme plug-ins are stored as a subentry of this container. To display all password storage scheme plug-ins, run a subtree search with this container entry as the search base, for example by using ldapsearch. Warning To prevent unpredictable authentication behavior, Red Hat recommends neither disabling the password scheme plug-ins nor changing their configuration. Strong Password Storage Schemes Red Hat recommends using only the following strong password storage schemes (strongest first): PBKDF2_SHA256 (default) The password-based key derivation function 2 (PBKDF2) was designed to expend resources to counter brute force attacks. PBKDF2 supports a variable number of iterations to apply the hashing algorithm. A higher iteration count improves security but requires more hardware resources.
In Directory Server, the PBKDF2_SHA256 scheme is implemented using 30,000 iterations to apply the SHA256 algorithm. This value is hard-coded and will be increased in future versions of Directory Server without requiring interaction by an administrator. Note The network security service (NSS) database in Red Hat Enterprise Linux 6 does not support PBKDF2. Therefore, you cannot use this password scheme in a replication topology with Directory Server 9. SSHA512 The salted secure hashing algorithm (SSHA) implements an enhanced version of the secure hashing algorithm (SHA) that uses a randomly generated salt to increase the security of the hashed password. SSHA512 implements the hashing algorithm using 512 bits. Weak Password Storage Schemes Besides the recommended strong password storage schemes, Directory Server supports the following weak schemes for backward compatibility: AES CLEAR CRYPT CRYPT-MD5 CRYPT-SHA256 CRYPT-SHA512 DES MD5 NS-MTA-MD5 [a] SHA [b] SHA256 SHA384 SHA512 SMD5 SSHA SSHA256 SSHA384 [a] Directory Server only supports authentication using this scheme. You can no longer use it to encrypt passwords. [b] 160 bit Important Only continue using a weak scheme over a short time frame, as it increases security risks. 4.1.44. Posix Winsync API Plug-in Plug-in Parameter Description Plug-in ID posix-winsync-plugin DN of Configuration Entry cn=Posix Winsync API,cn=plugins,cn=config Description Enables and configures Windows synchronization for Posix attributes set on Active Directory user and group entries. Type preoperation Configurable Options * on off * memberUID mapping (groups) * converting and sorting memberUID values in lower case (groups) * memberOf fix-up tasks with sync operations * use Windows 2003 Posix schema Default Setting off Configurable Arguments None Dependencies 4.1.45. Postal Address String Syntax Plug-in Plug-in Parameter Description Plug-in ID postaladdress-syntax DN of Configuration Entry cn=Postal Address Syntax,cn=plugins,cn=config Description Supports postal address syntaxes and related matching rules from RFC 4517. Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.46. Printable String Syntax Plug-in Plug-in Parameter Description Plug-in ID printablestring-syntax DN of Configuration Entry cn=Printable String Syntax,cn=plugins,cn=config Description Supports syntaxes and matching rules for alphanumeric and select punctuation strings (for strings which conform to printable strings as defined in RFC 4517). Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.47. Referential Integrity Postoperation Plug-in Plug-in Parameter Description Plug-in ID referint DN of Configuration Entry cn=Referential Integrity Postoperation,cn=plugins,cn=config Description Enables the server to ensure referential integrity Type postoperation Configurable Options All configuration and on off Default Setting off Configurable Arguments When enabled, the post-operation Referential Integrity Plug-in performs integrity updates on the member, uniquemember, owner, and seeAlso attributes immediately after a delete or rename operation.
The plug-in can be configured to perform integrity checks on all other attributes. For details, see the corresponding section in the Directory Server Administration Guide . Dependencies Database Performance-Related Information The Referential Integrity Plug-in should be enabled only on one supplier in a multi-supplier replication environment to avoid conflict resolution loops. When enabling the plug-in on chained servers, be sure to analyze the performance resource and time needs as well as integrity needs; integrity checks can be time consuming and demanding on memory and CPU. All attributes specified must be indexed for both presence and equality. Further Information 4.1.48. Retro Changelog Plug-in Plug-in Parameter Description Plug-in ID retrocl DN of Configuration Entry cn=Retro Changelog Plugin,cn=plugins,cn=config Description Used by LDAP clients for maintaining application compatibility with Directory Server 4.x versions. Maintains a log of all changes occurring in the Directory Server. The retro changelog offers the same functionality as the changelog in the 4.x versions of Directory Server. This plug-in exposes the cn=changelog suffix to clients, so that clients can use this suffix with or without persistent search for simple sync applications. Type object Configurable Options on off Default Setting off Configurable Arguments See Section 4.16, "Retro Changelog Plug-in Attributes" for further information on the two configuration attributes for this plug-in. Dependencies * Type: Database * Named: Class of Service Performance-Related Information May slow down Directory Server update performance. Further Information 4.1.49. Roles Plug-in Plug-in Parameter Description Plug-in ID roles DN of Configuration Entry cn=Roles Plugin,cn=plugins,cn=config Description Enables the use of roles in the Directory Server Type object Configurable Options on off Default Setting on Configurable Arguments None Dependencies * Type: Database * Named: State Change Plug-in * Named: Views Plug-in Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.50. RootDN Access Control Plug-in Plug-in Parameter Description Plug-in ID rootdn-access-control DN of Configuration Entry cn=RootDN Access Control,cn=plugins,cn=config Description Enables and configures access controls to use for the root DN entry. Type internalpreoperation Configurable Options on off Default Setting off Configurable Attributes * rootdn-open-time and rootdn-close-time for time-based access controls * rootdn-days-allowed for day-based access controls * rootdn-allow-host, rootdn-deny-host, rootdn-allow-ip, and rootdn-deny-ip for host-based access controls Dependencies None Further Information 4.1.51. Schema Reload Plug-in Plug-in Information Description Plug-in ID schemareload Configuration Entry DN cn=Schema Reload,cn=plugins,cn=config Description Task plug-in to reload schema files Type object Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Further Information 4.1.52. 
Space Insensitive String Syntax Plug-in Plug-in Parameter Description Plug-in ID none DN of Configuration Entry cn=Space Insensitive String Syntax,cn=plugins,cn=config Description Syntax for handling space-insensitive values Type syntax Configurable Options on off Default Setting off Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.53. State Change Plug-in Plug-in Parameter Description Plug-in ID statechange DN of Configuration Entry cn=State Change Plugin,cn=plugins,cn=config Description Enables state-change-notification service Type postoperation Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Further Information 4.1.54. Syntax Validation Task Plug-in Plug-in Parameter Description Plug-in ID none DN of Configuration Entry cn=Syntax Validation Task,cn=plugins,cn=config Description Enables syntax validation for attribute values Type object Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Further Information 4.1.55. Telephone Syntax Plug-in Plug-in Parameter Description Plug-in ID tele-syntax DN of Configuration Entry cn=Telephone Syntax,cn=plugins,cn=config Description Supports telephone number syntaxes and related matching rules from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.56. Teletex Terminal Identifier Syntax Plug-in Plug-in Parameter Description Plug-in ID teletextermid-syntax DN of Configuration Entry cn=Teletex Terminal Identifier Syntax,cn=plugins,cn=config Description Supports international telephone number syntaxes and related matching rules from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.57. Telex Number Syntax Plug-in Plug-in Parameter Description Plug-in ID telex-syntax DN of Configuration Entry cn=Telex Number Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for the telex number, country code, and answerback code of a telex terminal; from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.58. URI Syntax Plug-in Plug-in Parameter Description Plug-in ID none DN of Configuration Entry cn=URI Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for unique resource identifiers (URIs), including unique resource locators (URLs); from RFC 4517 . Type syntax Configurable Options on off Default Setting off Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. If enabled, Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.59. 
USN Plug-in Plug-in Parameter Description Plug-in ID USN DN of Configuration Entry cn=USN,cn=plugins,cn=config Description Sets an update sequence number (USN) on an entry, for every entry in the directory, whenever there is a modification, including adding and deleting entries and modifying attribute values. Type object Configurable Options on off Default Setting off Configurable Arguments None Dependencies Database Performance-Related Information For replication, it is recommended that the entryUSN configuration attribute be excluded using fractional replication. Further Information 4.1.60. Views Plug-in Plug-in Parameter Description Plug-in ID views DN of Configuration Entry cn=Views,cn=plugins,cn=config Description Enables the use of views in the Directory Server databases. Type object Configurable Options on off Default Setting on Configurable Arguments None Dependencies * Type: Database * Named: State Change Plug-in Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.2. List of Attributes Common to All Plug-ins This list provides a brief attribute description, the entry DN, valid range, default value, syntax, and an example for each attribute. 4.2.1. nsslapdPlugin (Object Class) Each Directory Server plug-in belongs to the nsslapdPlugin object class. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.41 Table 4.1. Required Attributes Attribute Definition objectClass Gives the object classes assigned to the entry. cn Gives the common name of the entry. Section 4.2.8, "nsslapd-pluginPath" Identifies the plugin library name (without the library suffix). Section 4.2.7, "nsslapd-pluginInitfunc" Identifies an initialization function of the plugin. Section 4.2.10, "nsslapd-pluginType" Identifies the type of plugin. Section 4.2.6, "nsslapd-pluginId" Identifies the plugin ID. Section 4.2.12, "nsslapd-pluginVersion" Identifies the version of plugin. Section 4.2.11, "nsslapd-pluginVendor" Identifies the vendor of plugin. Section 4.2.4, "nsslapd-pluginDescription" Identifies the description of the plugin. Section 4.2.5, "nsslapd-pluginEnabled" Identifies whether or not the plugin is enabled. Section 4.2.9, "nsslapd-pluginPrecedence" Sets the priority for the plug-in in the execution order. 4.2.2. nsslapd-logAccess This attribute enables you to log search operations run by the plug-in to the file set in the nsslapd-accesslog parameter in cn=config . Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-logAccess: Off 4.2.3. nsslapd-logAudit This attribute enables you to log and audit modifications to the database originated from the plug-in. Successful modification events are logged in the audit log, if the nsslapd-auditlog-logging-enabled parameter is enabled in cn=config . To log failed modification database operations by a plug-in, enable the nsslapd-auditfaillog-logging-enabled attribute in cn=config . Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-logAudit: Off 4.2.4. nsslapd-pluginDescription This attribute provides a description of the plug-in. 
Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values Default Value None Syntax DirectoryString Example nsslapd-pluginDescription: acl access check plug-in 4.2.5. nsslapd-pluginEnabled This attribute specifies whether the plug-in is enabled. This attribute can be changed over protocol but will only take effect when the server is restarted. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-pluginEnabled: on 4.2.6. nsslapd-pluginId This attribute specifies the plug-in ID. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values Any valid plug-in ID Default Value None Syntax DirectoryString Example nsslapd-pluginId: chaining database 4.2.7. nsslapd-pluginInitfunc This attribute specifies the plug-in function to be initiated. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values Any valid plug-in function Default Value None Syntax DirectoryString Example nsslapd-pluginInitfunc: NS7bitAttr_Init 4.2.8. nsslapd-pluginPath This attribute specifies the full path to the plug-in. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values Any valid path Default Value None Syntax DirectoryString Example nsslapd-pluginPath: uid-plugin 4.2.9. nsslapd-pluginPrecedence This attribute sets the precedence or priority for the execution order of a plug-in. Precedence defines the execution order of plug-ins, which allows more complex environments or interactions since it can enable a plug-in to wait for a completed operation before being executed. This is more important for pre-operation and post-operation plug-ins. Plug-ins with a value of 1 have the highest priority and are run first; plug-ins with a value of 99 have the lowest priority. The default is 50. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values 1 to 99 Default Value 50 Syntax Integer Example nsslapd-pluginPrecedence: 3 4.2.10. nsslapd-pluginType This attribute specifies the plug-in type. See Section 4.3.5, "nsslapd-plugin-depends-on-type" for further information. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values Any valid plug-in type Default Value None Syntax DirectoryString Example nsslapd-pluginType: preoperation 4.2.11. nsslapd-pluginVendor This attribute specifies the vendor of the plug-in. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values Any approved plug-in vendor Default Value Red Hat, Inc. Syntax DirectoryString Example nsslapd-pluginVendor: Red Hat, Inc. 4.2.12. nsslapd-pluginVersion This attribute specifies the plug-in version. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values Any valid plug-in version Default Value Product version number Syntax DirectoryString Example nsslapd-pluginVersion: 11.3 4.3. Attributes Allowed by Certain Plug-ins 4.3.1. nsslapd-dynamic-plugins Directory Server has dynamic plug-ins that can be enabled without restarting the server. The nsslapd-dynamic-plugins attribute specifies whether the server is configured to allow dynamic plug-ins. By default, dynamic plug-ins are disabled. Warning Directory Server does not support dynamic plug-ins. Use it only for testing and debugging purposes. Some plug-ins cannot be configured as dynamic, and they require the server to be restarted. 
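For testing and debugging purposes only, as the warning above states, the setting can be changed over LDAP with ldapmodify. The following is a minimal sketch; the bind DN and server URL are placeholders and must match your deployment:
# ldapmodify -x -D "cn=Directory Manager" -W -H ldap://server.example.com
dn: cn=config
changetype: modify
replace: nsslapd-dynamic-plugins
nsslapd-dynamic-plugins: on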
Plug-in Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-dynamic-plugins: on 4.3.2. nsslapd-pluginConfigArea Some plug-in entries are container entries, and multiple instances of the plug-in are created beneath this container in cn=plugins,cn=config . However, the cn=plugins,cn=config subtree is not replicated, which means that the plug-in configurations beneath those container entries must be configured manually, in some way, on every Directory Server instance. The nsslapd-pluginConfigArea attribute points to another container entry, in the main database area, which contains the plug-in instance entries. This container entry can be in a replicated database, which allows the plug-in configuration to be replicated. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values Any valid DN Default Value Syntax DN Example nsslapd-pluginConfigArea: cn=managed entries container,ou=containers,dc=example,dc=com 4.3.3. nsslapd-pluginLoadNow This attribute specifies whether to load all of the symbols used by a plug-in immediately ( true ), as well as all symbols referenced by those symbols, or to load the symbol the first time it is used ( false ). Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values true | false Default Value false Syntax DirectoryString Example nsslapd-pluginLoadNow: false 4.3.4. nsslapd-pluginLoadGlobal This attribute specifies whether the symbols in dependent libraries are made visible locally ( false ) or to the executable and to all shared objects ( true ). Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values true | false Default Value false Syntax DirectoryString Example nsslapd-pluginLoadGlobal: false 4.3.5. nsslapd-plugin-depends-on-type Multi-valued attribute used to ensure that plug-ins are called by the server in the correct order. Takes a value which corresponds to the type number of a plug-in, contained in the attribute nsslapd-pluginType . See Section 4.2.10, "nsslapd-pluginType" for further information. All plug-ins with a type value which matches one of the values in the following valid range will be started by the server prior to this plug-in. The following postoperation Referential Integrity Plug-in example shows that the database plug-in will be started prior to the postoperation Referential Integrity Plug-in. Plug-in Parameter Description Entry DN cn=referential integrity postoperation,cn=plugins,cn=config Valid Values database Default Value Syntax DirectoryString Example nsslapd-plugin-depends-on-type: database 4.3.6. nsslapd-plugin-depends-on-named Multi-valued attribute used to ensure that plug-ins are called by the server in the correct order. Takes a value which corresponds to the cn value of a plug-in. The plug-in with a cn value matching one of the following values will be started by the server prior to this plug-in. If the plug-in does not exist, the server fails to start. The following postoperation Referential Integrity Plug-in example shows that the Views plug-in is started before Roles. If Views is missing, the server does not start. Plug-in Parameter Description Entry DN cn=referential integrity postoperation,cn=plugins,cn=config Valid Values Class of Service Default Value Syntax DirectoryString Example * nsslapd-plugin-depends-on-named: Views * nsslapd-pluginId: roles 4.4.
Database Plug-in Attributes The database plug-in is also organized in an information tree, as shown in Figure 4.1, "Database Plug-in" . Figure 4.1. Database Plug-in All plug-in technology used by the database instances is stored in the cn=ldbm database plug-in node. This section presents the additional attribute information for each of the nodes in bold in the cn=ldbm database,cn=plugins,cn=config information tree. 4.4.1. Database Attributes under cn=config,cn=ldbm database,cn=plugins,cn=config This section covers global configuration attributes common to all instances, which are stored in the cn=config,cn=ldbm database,cn=plugins,cn=config tree node. 4.4.1.1. nsslapd-backend-implement The nsslapd-backend-implement parameter defines the database back end Directory Server uses. Important Directory Server currently only supports the Berkeley Database (BDB). Therefore, you cannot set this parameter to a different value. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values bdb Default Value bdb Syntax Directory String Example nsslapd-backend-implement: bdb 4.4.1.2. nsslapd-backend-opt-level This parameter can trigger experimental code to improve write performance. Possible values: 0 : Disables the parameter. 1 : The replication update vector is not written to the database during the transaction. 2 : Changes the order of taking the back end lock and starting the transaction. 4 : Moves code out of the transaction. All values can be combined. For example, 7 enables all optimization features. Warning This parameter is experimental. Never change its value unless you are specifically told to do so by Red Hat support. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values 0 | 1 | 2 | 4 Default Value 0 Syntax Integer Example nsslapd-backend-opt-level: 0 4.4.1.3. nsslapd-directory This attribute specifies the absolute path to the database instance. If the database instance is manually created, then this attribute must be included, something which is set by default (and modifiable) in the Directory Server Console. Once the database instance is created, do not modify this path as any changes risk preventing the server from accessing data. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid absolute path to the database instance Default Value Syntax DirectoryString Example nsslapd-directory: /var/lib/dirsrv/slapd- instance /db 4.4.1.4. nsslapd-exclude-from-export This attribute contains a space-separated list of names of attributes to exclude from an entry when a database is exported. This is mainly used for some configuration and operational attributes which are specific to a server instance. Do not remove any of the default values for this attribute, since that may affect server performance. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid attribute Default Value entrydn entryid dncomp parentid numSubordinates entryusn Syntax DirectoryString Example nsslapd-exclude-from-export: entrydn entryid dncomp parentid numSubordinates entryusn 4.4.1.5. nsslapd-db-transaction-wait If you enable the nsslapd-db-transaction-wait parameter, Directory Server does not start the transaction and waits until lock resources are available. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-db-transaction-wait: off 4.4.1.6.
nsslapd-db-private-import-mem The nsslapd-db-private-import-mem parameter manages whether or not Directory Server uses private memory for allocation of regions and mutexes for a database import. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-db-private-import-mem: on 4.4.1.7. nsslapd-db-deadlock-policy The nsslapd-db-deadlock-policy parameter sets the libdb library-internal deadlock policy. Important Only change this parameter if instructed by Red Hat Support. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values 0-9 Default Value 0 Syntax DirectoryString Example nsslapd-db-deadlock-policy: 9 4.4.1.8. nsslapd-idl-switch The nsslapd-idl-switch parameter sets the IDL format Directory Server uses. Note that Red Hat no longer supports the old IDL format. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values new | old Default Value new Syntax Directory String Example nsslapd-idl-switch: new 4.4.1.9. nsslapd-idlistscanlimit This performance-related attribute, present by default, specifies the number of entry IDs that are searched during a search operation. Attempting to set a value that is not a number or is too big for a 32-bit signed integer returns an LDAP_UNWILLING_TO_PERFORM error message, with additional error information explaining the problem. It is advisable to keep the default value to improve search performance. For further details, see the corresponding sections in the: Directory Server Performance Tuning Guide Directory Server Administration Guide This parameter can be changed while the server is running, and the new value will affect subsequent searches. The corresponding user-level attribute is nsIDListScanLimit . Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 100 to the maximum 32-bit integer value (2147483647) entry IDs Default Value 4000 Syntax Integer Example nsslapd-idlistscanlimit: 4000 4.4.1.10. nsslapd-lookthroughlimit This performance-related attribute specifies the maximum number of entries that the Directory Server will check when examining candidate entries in response to a search request. The Directory Manager DN, however, is, by default, unlimited and overrides any other settings specified here. It is worth noting that binder-based resource limits work for this limit, which means that if a value for the operational attribute nsLookThroughLimit is present in the entry as which a user binds, the default limit will be overridden. Attempting to set a value that is not a number or is too big for a 32-bit signed integer returns an LDAP_UNWILLING_TO_PERFORM error message with additional error information explaining the problem. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range -1 to maximum 32-bit integer in entries (where -1 is unlimited) Default Value 5000 Syntax Integer Example nsslapd-lookthroughlimit: 5000 4.4.1.11. nsslapd-mode This attribute specifies the permissions used for newly created index files. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values Any four-digit octal number. However, mode 0600 is recommended. This allows read and write access for the owner of the index files (which is the user as whom the ns-slapd runs) and no access for other users. Default Value 600 Syntax Integer Example nsslapd-mode: 0600 4.4.1.12. 
nsslapd-pagedidlistscanlimit This performance-related attribute specifies the number of entry IDs that are searched, specifically, for a search operation using the simple paged results control. This attribute works the same as the nsslapd-idlistscanlimit attribute, except that it only applies to searches with the simple paged results control. If this attribute is not present or is set to zero, then the nsslapd-idlistscanlimit is used for paged searches as well as non-paged searches. The corresponding user-level attribute is nsPagedIDListScanLimit . Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range -1 to maximum 32-bit integer in entries (where -1 is unlimited) Default Value 0 Syntax Integer Example nsslapd-pagedidlistscanlimit: 5000 4.4.1.13. nsslapd-pagedlookthroughlimit This performance-related attribute specifies the maximum number of entries that the Directory Server will check when examining candidate entries for a search which uses the simple paged results control. This attribute works the same as the nsslapd-lookthroughlimit attribute, except that it only applies to searches with the simple paged results control. If this attribute is not present or is set to zero, then the nsslapd-lookthroughlimit is used for paged searches as well as non-paged searches. The corresponding user-level attribute is nsPagedLookThroughLimit . Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range -1 to maximum 32-bit integer in entries (where -1 is unlimited) Default Value 0 Syntax Integer Example nsslapd-pagedlookthroughlimit: 25000 4.4.1.14. nsslapd-rangelookthroughlimit This performance-related attribute specifies the maximum number of entries that the Directory Server will check when examining candidate entries in response to a range search request. Range searches use operators to set a bracket to search for and return an entire subset of entries within the directory. For example, a filter such as (modifyTimestamp>=20240101000000Z) matches every entry modified at or after midnight on January 1, 2024. The nature of a range search is that it must evaluate every single entry within the directory to see if it is within the range given. Essentially, a range search is always an all IDs search. For most users, the look-through limit kicks in and prevents range searches from turning into an all IDs search. This improves overall performance and speeds up range search results. However, some clients or administrative users like Directory Manager may not have a look-through limit set. In that case, a range search can take several minutes to complete or even continue indefinitely. The nsslapd-rangelookthroughlimit attribute sets a separate range look-through limit that applies to all users, including Directory Manager. This allows clients and administrative users to have high look-through limits while still allowing a reasonable limit to be set on potentially performance-impaired range searches. Note Unlike other resource limits, this applies to searches by any user, including the Directory Manager, regular users, and other LDAP clients. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range -1 to maximum 32-bit integer in entries (where -1 is unlimited) Default Value 5000 Syntax Integer Example nsslapd-rangelookthroughlimit: 5000 4.4.1.15. nsslapd-search-bypass-filter-test If you enable the nsslapd-search-bypass-filter-test parameter, Directory Server bypasses filter checks when it builds candidate lists during a search.
If you set the parameter to verify , Directory Server evaluates the filter against the search candidate entries. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off | verify Default Value on Syntax Directory String Example nsslapd-search-bypass-filter-test: on 4.4.1.16. nsslapd-search-use-vlv-index The nsslapd-search-use-vlv-index enables and disables virtual list view (VLV) searches. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value on Syntax Directory String Example nsslapd-search-use-vlv-index: on 4.4.1.17. nsslapd-subtree-rename-switch Every directory entry is stored as a key in an entry index file. The index key maps the current entry DN to its meta entry in the index. This mapping is done either by the RDN of the entry or by the full DN of the entry. When a subtree entry is allowed to be renamed (meaning, an entry with children entries, effectively renaming the whole subtree), its entries are stored in the entryrdn.db index, which associates parent and child entries by an assigned ID rather than their DN. If subtree rename operations are not allowed, then the entryrdn.db index is disabled and the entrydn.db index is used, which simply uses full DNs, with the implicit parent-child relationships. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values off | on Default Value on Syntax DirectoryString Example nsslapd-subtree-rename-switch: on 4.4.2. Database Attributes under cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config This section covers global configuration attributes common to all instances, which are stored in the cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config tree node. 4.4.2.1. nsslapd-cache-autosize This performance tuning-related attribute sets the percentage of free memory that is used in total for the database and entry cache. For example, if the value is set to 10 , 10% of the system's free RAM is used for both caches. If this value is set to a value greater than 0 , auto-sizing is enabled for the database and entry cache. For optimized performance, Red Hat recommends not disabling auto-sizing. However, in certain situations it can be necessary to disable auto-sizing. In this case, set the nsslapd-cache-autosize attribute to 0 and manually set the database cache in the nsslapd-dbcachesize attribute and the entry cache in the nsslapd-cachememsize attribute. For further details about auto-sizing, see the corresponding section in the Red Hat Directory Server Performance Tuning Guide . Note If the nsslapd-cache-autosize and nsslapd-cache-autosize-split attributes are both set to high values, such as 100 , Directory Server fails to start. To fix the problem, set both parameters to more reasonable values, for example, nsslapd-cache-autosize: 10 and nsslapd-cache-autosize-split: 40. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 to 100. If 0 is set, the default value is used instead. Default Value 10 Syntax Integer Example nsslapd-cache-autosize: 10 4.4.2.2. nsslapd-cache-autosize-split This performance tuning-related attribute sets the percentage of RAM that is used for the database cache. The remaining percentage is used for the entry cache. For example, if the value is set to 40 , the database cache uses 40%, and the entry cache uses the remaining 60% of the free RAM reserved in the nsslapd-cache-autosize attribute. For further details about auto-sizing, see the corresponding section in the Red Hat Directory Server Performance Tuning Guide .
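As an illustration, the following ldapmodify sketch applies the default auto-sizing values shown in the tables in this section; the bind DN and server URL are placeholders, and a server restart is generally required before cache changes take effect:
# ldapmodify -x -D "cn=Directory Manager" -W -H ldap://server.example.com
dn: cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-cache-autosize
nsslapd-cache-autosize: 10
-
replace: nsslapd-cache-autosize-split
nsslapd-cache-autosize-split: 40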
Note If the nsslapd-cache-autosize and nsslapd-cache-autosize-split attributes are both set to high values, such as 100 , Directory Server fails to start. To fix the problem, set both parameters to more reasonable values, for example, nsslapd-cache-autosize: 10 and nsslapd-cache-autosize-split: 40. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 to 99. If 0 is set, the default value is used instead. Default Value 40 Syntax Integer Example nsslapd-cache-autosize-split: 40 4.4.2.3. nsslapd-db-checkpoint-interval This sets the amount of time in seconds after which the Directory Server sends a checkpoint entry to the database transaction log. The database transaction log contains a sequential listing of all recent database operations and is used for database recovery only. A checkpoint entry indicates which database operations have been physically written to the directory database. The checkpoint entries are used to determine where in the database transaction log to begin recovery after a system failure. The nsslapd-db-checkpoint-interval attribute is absent from dse.ldif . To change the checkpoint interval, add the attribute to dse.ldif . This attribute can be dynamically modified using ldapmodify . For further information on modifying this attribute, see the "Tuning Directory Server Performance" chapter in the Red Hat Directory Server Administration Guide . This attribute is provided only for system modification/diagnostics and should be changed only with the guidance of Red Hat Technical Support or Red Hat Consulting. Inconsistent settings of this attribute and other configuration attributes may cause the Directory Server to be unstable. For more information on database transaction logging, see the "Monitoring Server and Database Activity" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 10 to 300 seconds Default Value 60 Syntax Integer Example nsslapd-db-checkpoint-interval: 120 4.4.2.4. nsslapd-db-circular-logging This attribute specifies circular logging for the transaction log files. If this attribute is switched off, old transaction log files are not removed; they are renamed and kept as old transaction log files. Turning circular logging off can severely degrade server performance and, as such, should only be modified with the guidance of Red Hat Technical Support or Red Hat Consulting. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-db-circular-logging: on 4.4.2.5. nsslapd-db-compactdb-interval The nsslapd-db-compactdb-interval attribute defines the interval in seconds when Directory Server compacts the databases and replication changelogs. The compact operation returns the unused pages to the file system and the database file size shrinks. Note that compacting the database is resource-intensive and should not be done too often. You do not have to restart the server for this setting to take effect. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values 0 (no compaction) to 2147483647 seconds Default Value 2592000 (30 days) Syntax Integer Example nsslapd-db-compactdb-interval: 2592000 4.4.2.6. nsslapd-db-compactdb-time The nsslapd-db-compactdb-time attribute sets the time of the day when Directory Server compacts all databases and their replication changelogs.
The compaction task runs after the compaction interval ( nsslapd-db-compactdb-interval ) has been exceeded. You do not have to restart the server for this setting to take effect. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values HH:MM. Time is set in 24-hour format Default Value 23:59 Syntax DirectoryString Example nsslapd-db-compactdb-time: 23:59 4.4.2.7. nsslapd-db-debug This attribute specifies whether additional error information is to be reported to Directory Server. To report error information, set the parameter to on . This parameter is meant for troubleshooting; enabling the parameter may slow down the Directory Server. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-db-debug: off 4.4.2.8. nsslapd-db-durable-transactions This attribute sets whether database transaction log entries are immediately written to the disk. The database transaction log contains a sequential listing of all recent database operations and is used for database recovery only. With durable transactions enabled, every directory change will always be physically recorded in the log file and, therefore, able to be recovered in the event of a system failure. However, the durable transactions feature may also slow the performance of the Directory Server. When durable transactions are disabled, all transactions are logically written to the database transaction log but may not be physically written to disk immediately. If there were a system failure before a directory change was physically written to disk, that change would not be recoverable. The nsslapd-db-durable-transactions attribute is absent from dse.ldif . To disable durable transactions, add the attribute to dse.ldif . This attribute is provided only for system modification/diagnostics and should be changed only with the guidance of Red Hat Technical Support or Red Hat Consulting. Inconsistent settings of this attribute and other configuration attributes may cause the Directory Server to be unstable. For more information on database transaction logging, see the "Monitoring Server and Database Activity" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-db-durable-transactions: on 4.4.2.9. nsslapd-db-home-directory To move the database to another physical location for performance reasons, use this parameter to specify the home directory. This situation will occur only for certain combinations of the database cache size, the size of physical memory, and kernel tuning attributes. In particular, this situation should not occur if the database cache size is less than 100 megabytes. This situation can occur when the disk is heavily used (more than 1 megabyte per second of data transfer), there is a long service time (more than 100ms), and there is mostly write activity. If these are all true, use the nsslapd-db-home-directory attribute to specify a subdirectory of a tempfs type filesystem. The directory referenced by the nsslapd-db-home-directory attribute must be a subdirectory of a filesystem of type tempfs (such as /tmp ). However, Directory Server does not create the subdirectory referenced by this attribute. This directory must be created either manually or by using a script.
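For example, a minimal sketch of creating such a subdirectory by hand, assuming the default dirsrv user and group and the example path used in the table below; adjust the path, ownership, and mode to your deployment:
# mkdir /tmp/slapd-phonebook
# chown dirsrv:dirsrv /tmp/slapd-phonebook
# chmod 0770 /tmp/slapd-phonebook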
Failure to create the directory referenced by the nsslapd-db-home-directory attribute will result in Directory Server being unable to start. Also, if there are multiple Directory Servers on the same machine, their nsslapd-db-home-directory attributes must be configured with different directories. Failure to do so will result in the databases for both directories becoming corrupted. The use of this attribute causes internal Directory Server database files to be moved to the directory referenced by the attribute. It is possible, but unlikely, that the server will no longer start after the files have been moved because not enough memory can be allocated. This is a symptom of an overly large database cache size being configured for the server. If this happens, reduce the size of the database cache size to a value where the server will start again. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid directory name in a tempfs filesystem, such as /tmp Default Value Syntax DirectoryString Example nsslapd-db-home-directory: /tmp/slapd-phonebook 4.4.2.10. nsslapd-db-idl-divisor This attribute specifies the index block size in terms of the number of blocks per database page. The block size is calculated by dividing the database page size by the value of this attribute. A value of 1 makes the block size exactly equal to the page size. The default value of 0 sets the block size to the page size minus an estimated allowance for internal database overhead. For the majority of installations, the default value should not be changed unless there are specific tuning needs. Before modifying the value of this attribute, export all databases using the db2ldif script. Once the modification has been made, reload the databases using the ldif2db script. Warning This parameter should only be used by very advanced users. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 to 8 Default Value 0 Syntax Integer Example nsslapd-db-idl-divisor: 2 4.4.2.11. nsslapd-db-locks Lock mechanisms in Directory Server control how many copies of Directory Server processes can run at the same time. The nsslapd-db-locks parameter sets the maximum number of locks. Only set this parameter to a higher value if Directory Server runs out of locks and logs libdb: Lock table is out of available locks error messages. If you set a higher value without a need, this increases the size of the /var/lib/dirsrv/slapd- instance_name /db__db.* files without any benefit. For more information about monitoring the logs and determining a realistic value, see the corresponding section in the Directory Server Performance Tuning Guide . The service must be restarted for changes to this attribute to take effect. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 - 2147483647 Default Value 10000 Syntax Integer Example nsslapd-db-locks: 10000 4.4.2.12. nsslapd-db-locks-monitoring-enable Running out of database locks can lead to data corruption. With the nsslapd-db-locks-monitoring-enable parameter, you can enable or disable database lock monitoring. If the parameter is enabled, which is the default, Directory Server terminates all searches if the number of active database locks is higher than the percentage threshold configured in nsslapd-db-locks-monitoring-threshold . If an issue occurs, the administrator can increase the number of database locks in the nsslapd-db-locks parameter. 
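For instance, a sketch of raising the lock limit over LDAP; the value shown is illustrative, the connection options are placeholders, and, as noted in the nsslapd-db-locks description above, the service must be restarted afterwards:
# ldapmodify -x -D "cn=Directory Manager" -W -H ldap://server.example.com
dn: cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-db-locks
nsslapd-db-locks: 20000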
Restart the service for changes to this attribute to take effect. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-db-locks-monitoring-enable: on 4.4.2.13. nsslapd-db-locks-monitoring-pause If monitoring of database locks is enabled in the nsslapd-db-locks-monitoring-enable parameter, nsslapd-db-locks-monitoring-pause defines the interval in milliseconds that the monitoring thread sleeps between the checks. If you set this parameter to a too high value, the server can run out of database locks before the monitoring check happens. However, setting a too low value can slow down the server. You do not have to restart the server for this setting to take effect. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values 0 - 2147483647 (value in milliseconds) Default Value 500 Syntax DirectoryString Example nsslapd-db-locks-monitoring-pause: 500 4.4.2.14. nsslapd-db-locks-monitoring-threshold If monitoring of database locks is enabled in the nsslapd-db-locks-monitoring-enable parameter, nsslapd-db-locks-monitoring-threshold sets the maximum percentage of used database locks before Directory Server terminates searches to avoid further lock exhaustion. Restart the service for changes to this attribute to take effect. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values 70 - 95 Default Value 90 Syntax DirectoryString Example nsslapd-db-locks-monitoring-threshold: 90 4.4.2.15. nsslapd-db-logbuf-size This attribute specifies the log information buffer size. Log information is stored in memory until the buffer fills up or the transaction commit forces the buffer to be written to disk. Larger buffer sizes can significantly increase throughput in the presence of long running transactions, highly concurrent applications, or transactions producing large amounts of data. The log information buffer size is the transaction log size divided by four. The nsslapd-db-logbuf-size attribute is only valid if the nsslapd-db-durable-transactions attribute is set to on . Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 32K to maximum 32-bit integer (limited to the amount of memory available on the machine) Default Value 32K Syntax Integer Example nsslapd-db-logbuf-size: 32K 4.4.2.16. nsslapd-db-logdirectory This attribute specifies the path to the directory that contains the database transaction log. The database transaction log contains a sequential listing of all recent database operations. Directory Server uses this information to recover the database after an instance shut down unexpectedly. By default, the database transaction log is stored in the same directory as the directory database. To update this parameter, you must manually update the /etc/dirsrv/slapd- instance_name /dse.ldif file. For details, see the Changing the Transaction Log Directory section in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid path Default Value Syntax DirectoryString Example nsslapd-db-logdirectory: /var/lib/dirsrv/slapd- instance_name /db/ 4.4.2.17. nsslapd-db-logfile-size This attribute specifies the maximum size of a single file in the log in bytes. By default, or if the value is set to 0 , a maximum size of 10 megabytes is used. 
The maximum size is an unsigned 4-byte value. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 to unsigned 4-byte integer Default Value 10MB Syntax Integer Example nsslapd-db-logfile-size: 10MB 4.4.2.18. nsslapd-db-page-size This attribute specifies the size of the pages used to hold items in the database in bytes. The minimum size is 512 bytes, and the maximum size is 64 kilobytes. If the page size is not explicitly set, Directory Server defaults to a page size of 8 kilobytes. Changing this default value can have a significant performance impact. If the page size is too small, it results in extensive page splitting and copying, whereas if the page size is too large it can waste disk space. Before modifying the value of this attribute, export all databases using the db2ldif script. Once the modification has been made, reload the databases using the ldif2db script. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 512 bytes to 64 kilobytes Default Value 8KB Syntax Integer Example nsslapd-db-page-size: 8KB 4.4.2.19. nsslapd-db-spin-count This attribute specifies the number of times that test-and-set mutexes should spin without blocking. Warning Never touch this value unless you are very familiar with the inner workings of Berkeley DB or are specifically told to do so by Red Hat support. The default value of 0 causes BDB to calculate the actual value by multiplying the number of available CPU cores (as reported by the nproc utility or the sysconf(_SC_NPROCESSORS_ONLN) call) by 50 . For example, on a processor with 8 logical cores, leaving this attribute set to 0 is equivalent to setting it to 400 . It is not possible to turn spinning off entirely; to minimize the number of times test-and-set mutexes spin without blocking, set this attribute to 1 . Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 to 2147483647 (2^31-1) Default Value 0 Syntax Integer Example nsslapd-db-spin-count: 0 4.4.2.20. nsslapd-db-transaction-batch-max-wait If Section 4.4.2.22, "nsslapd-db-transaction-batch-val" is set, the flushing of transactions is done by a separate thread when the set batch value is reached. However, if there are only a few updates, this process might take too long. This parameter controls the latest time at which transactions are flushed, independently of the batch count. The value is defined in milliseconds. Warning This parameter is experimental. Never change its value unless you are specifically told to do so by Red Hat support. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 - 2147483647 (value in milliseconds) Default Value 50 Syntax Integer Example nsslapd-db-transaction-batch-max-wait: 50 4.4.2.21. nsslapd-db-transaction-batch-min-wait If Section 4.4.2.22, "nsslapd-db-transaction-batch-val" is set, the flushing of transactions is done by a separate thread when the set batch value is reached. However, if there are only a few updates, this process might take too long. This parameter controls the earliest time at which transactions are flushed, independently of the batch count. The value is defined in milliseconds. Warning This parameter is experimental. Never change its value unless you are specifically told to do so by Red Hat support.
Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 - 2147483647 (value in milliseconds) Default Value 50 Syntax Integer Example nsslapd-db-transaction-batch-min-wait: 50 4.4.2.22. nsslapd-db-transaction-batch-val This attribute specifies how many transactions will be batched before being committed. This attribute can improve update performance when full transaction durability is not required. This attribute can be dynamically modified using ldapmodify . For further information on modifying this attribute, see the "Tuning Directory Server Performance" chapter in the Red Hat Directory Server Administration Guide . Warning Setting this value will reduce data consistency and may lead to loss of data. This is because if there is a power outage before the server can flush the batched transactions, those transactions in the batch will be lost. Do not set this value unless specifically requested to do so by Red Hat support. If this attribute is not defined or is set to a value of 0 , transaction batching will be turned off, and it will be impossible to make remote modifications to this attribute using LDAP. However, setting this attribute to a value greater than 0 causes the server to delay committing transactions until the number of queued transactions is equal to the attribute value. A value greater than 0 also allows modifications to this attribute remotely using LDAP. A value of 1 for this attribute allows modifications to the attribute setting remotely using LDAP, but results in no batching behavior. A value of 1 at server startup is therefore useful for maintaining normal durability while also allowing transaction batching to be turned on and off remotely when required. Remember that the value for this attribute may require modifying the nsslapd-db-logbuf-size attribute to ensure sufficient log buffer size for accommodating the batched transactions. Note The nsslapd-db-transaction-batch-val attribute is only valid if the nsslapd-db-durable-transaction attribute is set to on . For more information on database transaction logging, see the "Monitoring Server and Database Activity" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 to 30 Default Value 0 (or turned off) Syntax Integer Example nsslapd-db-transaction-batch-val: 5 4.4.2.23. nsslapd-db-trickle-percentage This attribute sets that at least the specified percentage of pages in the shared-memory pool are clean by writing dirty pages to their backing files. This is to ensure that a page is always available for reading in new information without having to wait for a write. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 to 100 Default Value 40 Syntax Integer Example nsslapd-db-trickle-percentage: 40 4.4.2.24. nsslapd-db-verbose This attribute specifies whether to record additional informational and debugging messages when searching the log for checkpoints, doing deadlock detection, and performing recovery. This parameter is meant for troubleshooting, and enabling the parameter may slow down the Directory Server. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-db-verbose: off 4.4.2.25. 
nsslapd-import-cache-autosize This performance tuning-related attribute automatically sets the size of the import cache ( importCache ) to be used during the command-line-based import process of LDIF files to the database (the ldif2db operation). In Directory Server, the import operation can be run as a server task or exclusively on the command-line. In the task mode, the import operation runs as a general Directory Server operation. The nsslapd-import-cache-autosize attribute enables the import cache to be set automatically to a predetermined size when the import operation is run on the command-line. The attribute can also be used by Directory Server during the task mode import for allocating a specified percentage of free memory for import cache. By default, the nsslapd-import-cache-autosize attribute is enabled and is set to a value of -1 . This value autosizes the import cache for the ldif2db operation only, automatically allocating fifty percent (50%) of the free physical memory for the import cache. The percentage value (50%) is hard-coded and cannot be changed. Setting the attribute value to 50 ( nsslapd-import-cache-autosize: 50 ) has the same effect on performance during an ldif2db operation. However, such a setting will have the same effect on performance when the import operation is run as a Directory Server task. The -1 value autosizes the import cache just for the ldif2db operation and not for any, including import, general Directory Server tasks. Note The purpose of a -1 setting is to enable the ldif2db operation to benefit from free physical memory but, at the same time, not compete for valuable memory with the entry cache, which is used for general operations of the Directory Server. Setting the nsslapd-import-cache-autosize attribute value to 0 turns off the import cache autosizing feature - that is, no autosizing occurs during either mode of the import operation. Instead, Directory Server uses the nsslapd-import-cachesize attribute for import cache size, with a default value of 20000000 . There are three caches in the context of Directory Server: database cache, entry cache, and import cache. The import cache is only used during the import operation. The nsslapd-cache-autosize attribute, which is used for autosizing the entry cache and database cache, is used during the Directory Server operations only and not during the ldif2db command-line operation; the attribute value is the percentage of free physical memory to be allocated for the entry cache and database cache. If both the autosizing attributes, nsslapd-cache-autosize and nsslapd-import-cache-autosize , are enabled, ensure that their sum is less than 100. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range -1, 0 (turns import cache autosizing off) to 100 Default Value -1 (turns import cache autosizing on for ldif2db only and allocates 50% of the free physical memory to import cache) Syntax Integer Example nsslapd-import-cache-autosize: -1 4.4.2.26. nsslapd-dbcachesize This performance tuning-related attribute specifies the database index cache size, in bytes. This is one of the most important values for controlling how much physical RAM the directory server uses. This is not the entry cache. This is the amount of memory the Berkeley database back end will use to cache the indexes (the .db files) and other files. This value is passed to the Berkeley DB API function set_cachesize . 
If automatic cache resizing is activated, this attribute is overridden when the server replaces these values with its own guessed values at a later stage of the server startup. For more technical information on this attribute, see the cache size section of the Berkeley DB reference guide at https://docs.oracle.com/cd/E17076_04/html/programmer_reference/general_am_conf.html#am_conf_cachesize . Attempting to set a value that is not a number or is too big for a 32-bit signed integer returns an LDAP_UNWILLING_TO_PERFORM error message with additional error information explaining the problem. Note Do not set the database cache size manually. Red Hat recommends to use the database cache auto-sizing feature for optimized performance. For further see the corresponding section in the Red Hat Directory Server Performance Tuning Guide . The server has to be restarted for changes to this attribute to go into effect. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 500 kilobytes to 4 gigabytes for 32-bit platforms and 500 kilobytes to 2^64-1 for 64-bit platforms Default Value Syntax Integer Example nsslapd-dbcachesize: 10000000 4.4.2.27. nsslapd-dbncache This attribute can split the LDBM cache into equally sized separate pieces of memory. It is possible to specify caches that are large enough so that they cannot be allocated contiguously on some architectures; for example, some systems limit the amount of memory that may be allocated contiguously by a process. If nsslapd-dbncache is 0 or 1 , the cache will be allocated contiguously in memory. If it is greater than 1 , the cache will be broken up into ncache , equally sized separate pieces of memory. To configure a dbcache size larger than 4 gigabytes, add the nsslapd-dbncache attribute to cn=config,cn=ldbm database,cn=plugins,cn=config between the nsslapd-dbcachesize and nsslapd-db-logdirectory attribute lines. Set this value to an integer that is one-quarter (1/4) the amount of memory in gigabytes. For example, for a 12 gigabyte system, set the nsslapd-dbncache value to 3 ; for an 8 gigabyte system, set it to 2 . This attribute is provided only for system modification/diagnostics and should be changed only with the guidance of Red Hat technical support or Red Hat professional services. Inconsistent settings of this attribute and other configuration attributes may cause the Directory Server to be unstable. The server has to be restarted for changes to this attribute to go into effect. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values 1 to 4 Default Value 1 Syntax Integer Example nsslapd-dbncache: 1 4.4.2.28. nsslapd-search-bypass-filter-test If you enable the nsslapd-search-bypass-filter-test parameter, Directory Server bypasses filter checks when it builds candidate lists during a search. If you set the parameter to verify , Directory Server evaluates the filter against the search candidate entries. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off | verify Default Value on Syntax Directory String Example nsslapd-search-bypass-filter-test: on 4.4.3. Database Attributes under cn=monitor,cn=ldbm database,cn=plugins,cn=config Global read-only attributes containing database statistics for monitoring activity on the databases are stored in the cn=monitor,cn=ldbm database,cn=plugins,cn=config tree node. 
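For example, the current values of these attributes can be displayed with an ldapsearch command such as the following; the bind DN and server URL are placeholders:
ldapsearch -D "cn=Directory Manager" -W -H ldap://server.example.com -x \
    -b "cn=monitor,cn=ldbm database,cn=plugins,cn=config" -s base "(objectclass=*)"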
For more information on these entries, see the "Monitoring Server and Database Activity" chapter in the Red Hat Directory Server Administration Guide . dbcachehits This attribute shows the requested pages found in the database. dbcachetries This attribute shows the total cache lookups. dbcachehitratio This attribute shows the percentage of requested pages found in the database cache (hits/tries). dbcachepagein This attribute shows the pages read into the database cache. dbcachepageout This attribute shows the pages written from the database cache to the backing file. dbcacheroevict This attribute shows the clean pages forced from the cache. dbcacherwevict This attribute shows the dirty pages forced from the cache. normalizedDNcachetries Total number of cache lookups since the instance was started. normalizedDNcachehits Normalized DNs found within the cache. normalizedDNcachemisses Normalized DNs not found within the cache. normalizedDNcachehitratio Percentage of the normalized DNs found in the cache. currentNormalizedDNcachesize Current size of the normalized DN cache in bytes. maxNormalizedDNcachesize Current value of the nsslapd-ndn-cache-max-size parameter. For details how to update this setting, see Section 3.1.1.130, "nsslapd-ndn-cache-max-size" . currentNormalizedDNcachecount Number of normalized cached DNs. 4.4.4. Database Attributes under cn= database_name ,cn=ldbm database,cn=plugins,cn=config The cn= database_name subtree contains all the configuration data for the user-defined database. The cn=userRoot subtree is called userRoot by default. However, this is not hard-coded and, given the fact that there are going to be multiple database instances, this name is changed and defined by the user as and when new databases are added. The cn=userRoot database referenced can be any user database. The following attributes are common to databases, such as cn=userRoot . 4.4.4.1. nsslapd-cachesize This attribute has been deprecated. To resize the entry cache, use nsslapd-cachememsize. This performance tuning-related attribute specifies the cache size in terms of the number of entries it can hold. However, this attribute is deprecated in favor of the nsslapd-cachememsize attribute, which sets an absolute allocation of RAM for the entry cache size, as described in Section 4.4.4.2, "nsslapd-cachememsize" . Attempting to set a value that is not a number or is too big for a 32-bit signed integer (on 32-bit systems) returns an LDAP_UNWILLING_TO_PERFORM error message with additional error information explaining the problem. The server has to be restarted for changes to this attribute to go into effect. Note The performance counter for this setting goes to the highest 64-bit integer, even on 32-bit systems, but the setting itself is limited on 32-bit systems to the highest 32-bit integer because of how the system addresses memory. Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Range 1 to 2 32 -1 on 32-bit systems or 2 63 -1 on 64-bit systems or -1, which means limitless Default Value -1 Syntax Integer Example nsslapd-cachesize: -1 4.4.4.2. nsslapd-cachememsize This performance tuning-related attribute specifies the size, in bytes, for the available memory space for the entry cache. The simplest method is limiting cache size in terms of memory occupied. Activating automatic cache resizing overrides this attribute, replacing these values with its own guessed values at a later stage of the server startup. 
Attempting to set a value that is not a number or is too big for a 32-bit signed integer (on 32-bit systems) returns an LDAP_UNWILLING_TO_PERFORM error message with additional error information explaining the problem. The performance counter for this setting goes to the highest 64-bit integer, even on 32-bit systems, but the setting itself is limited on 32-bit systems to the highest 32-bit integer because of how the system addresses memory. Note Do not set the database cache size manually. Red Hat recommends to use the entry cache auto-sizing feature for optimized performance. For further see the corresponding section in the Red Hat Directory Server Performance Tuning Guide . Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Range 500 kilobytes to 2 64 -1 on 64-bit systems Default Value 209715200 (200 MiB) Syntax Integer Example nsslapd-cachememsize: 209715200 4.4.4.3. nsslapd-directory This attribute specifies the path to the database instance. If it is a relative path, it starts from the path specified by nsslapd-directory in the global database entry cn=config,cn=ldbm database,cn=plugins,cn=config . The database instance directory is named after the instance name and located in the global database directory, by default. After the database instance has been created, do not modify this path, because any changes risk preventing the server from accessing data. Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid path to the database instance Default Value Syntax DirectoryString Example nsslapd-directory: /var/lib/dirsrv/slapd- instance /db/userRoot 4.4.4.4. nsslapd-dncachememsize This performance tuning-related attribute specifies the size, in bytes, for the available memory space for the DN cache. The DN cache is similar to the entry cache for a database, only its table stores only the entry ID and the entry DN. This allows faster lookups for rename and moddn operations. The simplest method is limiting cache size in terms of memory occupied. Attempting to set a value that is not a number or is too big for a 32-bit signed integer (on 32-bit systems) returns an LDAP_UNWILLING_TO_PERFORM error message with additional error information explaining the problem. Note The performance counter for this setting goes to the highest 64-bit integer, even on 32-bit systems, but the setting itself is limited on 32-bit systems to the highest 32-bit integer because of how the system addresses memory. Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Range 500 kilobytes to 2 32 -1 on 32-bit systems and to 2 64 -1 on 64-bit systems Default Value 10,485,760 (10 megabytes) Syntax Integer Example nsslapd-dncachememsize: 10485760 4.4.4.5. nsslapd-readonly This attribute specifies read-only mode for a single back-end instance. If this attribute has a value of off , then users have all read, write, and execute permissions allowed by their access permissions. Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-readonly: off 4.4.4.6. nsslapd-require-index When switched to on , this attribute allows one to refuse unindexed searches. This performance-related attribute avoids saturating the server with erroneous searches. 
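For example, unindexed searches against the userRoot database could be refused with a change such as the following; the database name and connection details are illustrations:
ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com -x
dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-require-index
nsslapd-require-index: on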
Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-require-index: off 4.4.4.7. nsslapd-require-internalop-index When a plug-in modifies data, it has a write lock on the database. On large databases, if a plug-in then executes an unindexed search, the plug-in can use all database locks and corrupt the database or the server becomes unresponsive. To avoid this problem, you can reject internal unindexed searches by enabling the nsslapd-require-internalop-index parameter. Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-require-internalop-index: off 4.4.4.8. nsslapd-suffix This attribute specifies the suffix of the database link . This is a single-valued attribute because each database instance can have only one suffix. Previously, it was possible to have more than one suffix on a single database instance, but this is no longer the case. As a result, this attribute is single-valued to enforce the fact that each database instance can only have one suffix entry. Any changes made to this attribute after the entry has been created take effect only after the server containing the database link is restarted. Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid DN Default Value Syntax DirectoryString Example nsslapd-suffix: o=Example 4.4.4.9. vlvBase This attribute sets the base DN for which the browsing or virtual list view (VLV) index is created. For more information on VLV indexes, see the indexing chapter in the Administration Guide . Parameter Description Entry DN cn= index_name ,cn=userRoot,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid DN Default Value Syntax DirectoryString Example vlvBase: ou=People,dc=example,dc=com 4.4.4.10. vlvEnabled The vlvEnabled attribute provides status information about a specific VLV index, and Directory Server sets this attribute at run time. Although vlvEnabled is shown in the configuration, you cannot modify this attribute. For more information on VLV indexes, see the indexing chapter in the Administration Guide . Parameter Description Entry DN cn= index_name ,cn=userRoot,cn=ldbm database,cn=plugins,cn=config Valid Values 0 (disabled) | 1 (enabled) Default Value 1 Syntax DirectoryString Example vlvEnbled: 0 4.4.4.11. vlvFilter The browsing or virtual list view (VLV) index is created by running a search according to a filter and including entries which match that filter in the index. The filter is specified in the vlvFilter attribute. For more information on VLV indexes, see the indexing chapter in the Administration Guide . Parameter Description Entry DN cn= index_name ,cn=userRoot,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid LDAP filter Default Value Syntax DirectoryString Example vlvFilter: ( 4.4.4.12. vlvIndex (Object Class) A browsing index or virtual list view (VLV) index dynamically generates an abbreviated index of entry headers that makes it much faster to visually browse large indexes. A VLV index definition has two parts: one which defines the index and one which defines the search used to identify entries to add to the index. The vlvIndex object class defines the index entry. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.42 Table 4.2. 
Required Attributes Attribute Definition objectClass Defines the object classes for the entry. cn Gives the common name of the entry. Section 4.4.4.15, "vlvSort" Identifies the attribute list that the browsing index (virtual list view index) is sorted on. Table 4.3. Allowed Attributes Attribute Definition Section 4.4.4.10, "vlvEnabled" Stores the availability of the browsing index. Section 4.4.4.16, "vlvUses" Contains the count the browsing index is used. 4.4.4.13. vlvScope This attribute sets the scope of the search to run for entries in the browsing or virtual list view (VLV) index. For more information on VLV indexes, see the indexing chapter in the Administration Guide . Parameter Description Entry DN cn= index_name ,cn=userRoot,cn=ldbm database,cn=plugins,cn=config Valid Values * 1 (one-level or children search) * 2 (subtree search) Default Value Syntax Integer Example vlvScope: 2 4.4.4.14. vlvSearch (Object Class) A browsing index or virtual list view (VLV) index dynamically generates an abbreviated index of entry headers that makes it much faster to visually browse large indexes. A VLV index definition has two parts: one which defines the index and one which defines the search used to identify entries to add to the index. The vlvSearch object class defines the search filter entry. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.38 Table 4.4. Required Attributes Attribute Definition objectClass Defines the object classes for the entry. Section 4.4.4.9, "vlvBase" Identifies base DN the browsing index is created. Section 4.4.4.13, "vlvScope" Identifies the scope to define the browsing index. Section 4.4.4.11, "vlvFilter" Identifies the filter string to define the browsing index. Table 4.5. Allowed Attributes Attribute Definition multiLineDescription Gives a text description of the entry. 4.4.4.15. vlvSort This attribute sets the sort order for returned entries in the browsing or virtual list view (VLV) index. Note The entry for this attribute is a vlvIndex entry beneath the vlvSearch entry. For more information on VLV indexes, see the indexing chapter in the Administration Guide . Parameter Description Entry DN cn= index_name ,cn= index_name ,cn=userRoot,cn=ldbm database,cn=plugins,cn=config Valid Values Any Directory Server attributes, in a space-separated list Default Value Syntax DirectoryString Example vlvSort: cn givenName o ou sn 4.4.4.16. vlvUses The vlvUses attribute contains the count the browsing index uses, and Directory Server sets this attribute at run time. Although vlvUses is shown in the configuration, you cannot modify this attribute. For more information on VLV indexes, see the indexing chapter in the Administration Guide . Parameter Description Entry DN cn= index_name ,cn=userRoot,cn=ldbm database,cn=plugins,cn=config Valid Values N/A Default Value Syntax DirectoryString Example vlvUses: 800 4.4.5. Database Attributes under cn=database,cn=monitor,cn=ldbm database,cn=plugins,cn=config The attributes in this tree node entry are all read-only, database performance counters. All of the values for these attributes are 32-bit integers, except for entrycachehits and entrycachetries . If the nsslapd-counters attribute in cn=config is set to on , then some of the counters kept by the Directory Server instance increment using 64-bit integers, even on 32-bit machines or with a 32-bit version of Directory Server. For the database monitoring, the entrycachehits and entrycachetries counters use 64-bit integers. 
Note The nsslapd-counters attribute enables 64-bit support for these specific database and server counters. The counters which use 64-bit integers are not configurable; the 64-bit integers are either enabled for all the allowed counters or disabled for all allowed counters. nsslapd-db-abort-rate This attribute shows the number of transactions that have been aborted. nsslapd-db-active-txns This attribute shows the number of transactions that are currently active. nsslapd-db-cache-hit This attribute shows the requested pages found in the cache. nsslapd-db-cache-try This attribute shows the total cache lookups. nsslapd-db-cache-region-wait-rate This attribute shows the number of times that a thread of control was forced to wait before obtaining the region lock. nsslapd-db-cache-size-bytes This attribute shows the total cache size in bytes. nsslapd-db-clean-pages This attribute shows the clean pages currently in the cache. nsslapd-db-commit-rate This attribute shows the number of transactions that have been committed. nsslapd-db-deadlock-rate This attribute shows the number of deadlocks detected. nsslapd-db-dirty-pages This attribute shows the dirty pages currently in the cache. nsslapd-db-hash-buckets This attribute shows the number of hash buckets in buffer hash table. nsslapd-db-hash-elements-examine-rate This attribute shows the total number of hash elements traversed during hash table lookups. nsslapd-db-hash-search-rate This attribute shows the total number of buffer hash table lookups. nsslapd-db-lock-conflicts This attribute shows the total number of locks not immediately available due to conflicts. nsslapd-db-lock-region-wait-rate This attribute shows the number of times that a thread of control was forced to wait before obtaining the region lock. nsslapd-db-lock-request-rate This attribute shows the total number of locks requested. nsslapd-db-lockers This attribute shows the number of current lockers. nsslapd-db-log-bytes-since-checkpoint This attribute shows the number of bytes written to this log since the last checkpoint. nsslapd-db-log-region-wait-rate This attribute shows the number of times that a thread of control was forced to wait before obtaining the region lock. nsslapd-db-log-write-rate This attribute shows the number of megabytes and bytes written to this log. nsslapd-db-longest-chain-length This attribute shows the longest chain ever encountered in buffer hash table lookups. nsslapd-db-page-create-rate This attribute shows the pages created in the cache. nsslapd-db-page-read-rate This attribute shows the pages read into the cache. nsslapd-db-page-ro-evict-rate This attribute shows the clean pages forced from the cache. nsslapd-db-page-rw-evict-rate This attribute shows the dirty pages forced from the cache. nsslapd-db-page-trickle-rate This attribute shows the dirty pages written using the memp_trickle interface. nsslapd-db-page-write-rate This attribute shows the pages read into the cache. nsslapd-db-pages-in-use This attribute shows all pages, clean or dirty, currently in use. nsslapd-db-txn-region-wait-rate This attribute shows the number of times that a thread of control was force to wait before obtaining the region lock. currentdncachecount This attribute shows the number of DNs currently present in the DN cache. currentdncachesize This attribute shows the total size, in bytes, of DNs currently present in the DN cache. maxdncachesize This attribute shows the maximum size, in bytes, of DNs that can be maintained in the database DN cache. 4.4.6. 
Database Attributes under cn=monitor,cn=userRoot,cn=ldbm database,cn=plugins,cn=config The attributes in this tree node entry are all read-only, database performance counters. If the nsslapd-counters attribute in cn=config is set to on , then some of the counters kept by the Directory Server instance increment using 64-bit integers, even on 32-bit machines or with a 32-bit version of Directory Server. For database monitoring, the entrycachehits and entrycachetries counters use 64-bit integers. Note The nsslapd-counters attribute enables 64-bit support for these specific database and server counters. The counters which use 64-bit integers are not configurable; the 64-bit integers are either enabled for all the allowed counters or disabled for all allowed counters. dbfilename- number This attribute gives the name of the file and provides a sequential integer identifier (starting at 0) for the file. All associated statistics for the file are given this same numerical identifier. dbfilecachehit- number This attribute gives the number of times that a search requiring data from this file was performed and that the data were successfully obtained from the cache. The number in this attributes name corresponds to the one in dbfilename . dbfilecachemiss- number This attribute gives the number of times that a search requiring data from this file was performed and that the data could not be obtained from the cache. The number in this attributes name corresponds to the one in dbfilename . dbfilepagein- number This attribute gives the number of pages brought to the cache from this file. The number in this attributes name corresponds to the one in dbfilename . dbfilepageout- number This attribute gives the number of pages for this file written from cache to disk. The number in this attributes name corresponds to the one in dbfilename . currentDNcachecount Number of cached DNs. currentDNcachesize Current size of the DN cache in bytes. DNcachehitratio Percentage of the DNs found in the cache. DNcachehits DNs found within the cache. DNcachemisses DNs not found within the cache. DNcachetries Total number of cache lookups since the instance was started. maxDNcachesize Current value of the nsslapd-ndn-cache-max-size parameter. For details how to update this setting, see Section 3.1.1.130, "nsslapd-ndn-cache-max-size" . 4.4.7. Database Attributes under cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config The set of default indexes is stored here. Default indexes are configured per back end in order to optimize Directory Server functionality for the majority of setup scenarios. All indexes, except system-essential ones, can be removed, but care should be taken so as not to cause unnecessary disruptions. For further information on indexes, see the "Managing Indexes" chapter in the Red Hat Directory Server Administration Guide . 4.4.7.1. cn This attribute provides the name of the attribute to index. Parameter Description Entry DN cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid index cn Default Value None Syntax DirectoryString Example cn: aci 4.4.7.2. nsIndex This object class defines an index in the back end database. This object is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.44 Table 4.6. Required Attributes Attribute Definition objectClass Defines the object classes for the entry. cn Gives the common name of the entry. Section 4.4.7.5, "nsSystemIndex" Identify whether or not the index is a system defined index. Table 4.7. 
Allowed Attributes Attribute Definition description Gives a text description of the entry. Section 4.4.7.3, "nsIndexType" Identifies the index type. Section 4.4.7.4, "nsMatchingRule" Identifies the matching rule. 4.4.7.3. nsIndexType This optional, multi-valued attribute specifies the type of index for Directory Server operations and takes the values of the attributes to be indexed. Each required index type has to be entered on a separate line. Parameter Description Entry DN cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values * pres = presence index * eq = equality index * approx = approximate index * sub = substring index * matching rule = international index * index browse = browsing index Default Value Syntax DirectoryString Example nsIndexType: eq 4.4.7.4. nsMatchingRule This optional, multi-valued attribute specifies the ordering matching rule name or OID used to match values and to generate index keys for the attribute. This is most commonly used to ensure that equality and range searches work correctly for languages other than English (7-bit ASCII). This is also used to allow range searches to work correctly for integer syntax attributes that do not specify an ordering matching rule in their schema definition. uidNumber and gidNumber are two commonly used attributes that fall into this category. For example, for a uidNumber that uses integer syntax, the rule attribute could be nsMatchingRule: integerOrderingMatch . Note Any change to this attribute will not take effect until the change is saved and the index is rebuilt using db2index , which is described in more detail in the "Managing Indexes" chapter of the Red Hat Directory Server Administration Guide ). Parameter Description Entry DN cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid collation order object identifier (OID) Default Value None Syntax DirectoryString Example nsMatchingRule: 2.16.840.1.113730.3.3.2.3.1 (For Bulgarian) 4.4.7.5. nsSystemIndex This mandatory attribute specifies whether the index is a system index , an index which is vital for Directory Server operations. If this attribute has a value of true , then it is system-essential. System indexes should not be removed, as this will seriously disrupt server functionality. Parameter Description Entry DN cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values true | false Default Value Syntax DirectoryString Example nssystemindex: true 4.4.8. Database Attributes under cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config In addition to the set of default indexes that are stored under cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config , custom indexes can be created for user-defined back end instances; these are stored under cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config . Each indexed attribute represents a subentry under the cn=config information tree nodes, as shown in the following diagram: Figure 4.2. Indexed Attribute Representing a Subentry For example, the index file for the aci attribute under o=UserRoot appears in the Directory Server as follows: These entries share all of the indexing attributes listed for the default indexes in Section 4.4.7, "Database Attributes under cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config" . For further information about indexes, see the "Managing Indexes" chapter in the Red Hat Directory Server Administration Guide . 4.4.8.1. 
nsIndexIDListScanLimit This multi-valued parameter defines a search limit for certain indices or to use no ID list. For further information, see the corresponding section in the Directory Server Performance Tuning Guide . Parameter Description Entry DN cn= attribute_name ,cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values See the corresponding section in the Directory Server Performance Tuning Guide . Default Value Syntax DirectoryString Example nsIndexIDListScanLimit: limit=0 type=eq values=inetorgperson 4.4.8.2. nsSubStrBegin By default, for a search to be indexed, the search string must be at least three characters long, without counting any wildcard characters. For example, the string abc would be an indexed search while ab* would not be. Indexed searches are significantly faster than unindexed searches, so changing the minimum length of the search key is helpful to increase the number of indexed searches. This substring length can be edited based on the position of any wildcard characters. The nsSubStrBegin attribute sets the required number of characters for an indexed search for the beginning of a search string, before the wildcard. For example: If the value of this attribute is changed, then the index must be regenerated using db2index . Parameter Description Entry DN cn= attribute_name ,cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values Any integer Default Value 3 Syntax Integer Example nsSubStrBegin: 2 4.4.8.3. nsSubStrEnd By default, for a search to be indexed, the search string must be at least three characters long, without counting any wildcard characters. For example, the string abc would be an indexed search while ab* would not be. Indexed searches are significantly faster than unindexed searches, so changing the minimum length of the search key is helpful to increase the number of indexed searches. This substring length can be edited based on the position of any wildcard characters. The nsSubStrEnd attribute sets the required number of characters for an indexed search for the end of a search string, after the wildcard. For example: If the value of this attribute is changed, then the index must be regenerated using db2index . Parameter Description Entry DN cn= attribute_name ,cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values Any integer Default Value 3 Syntax Integer Example nsSubStrEnd: 2 4.4.8.4. nsSubStrMiddle By default, for a search to be indexed, the search string must be at least three characters long, without counting any wildcard characters. For example, the string abc would be an indexed search while ab* would not be. Indexed searches are significantly faster than unindexed searches, so changing the minimum length of the search key is helpful to increase the number of indexed searches. This substring length can be edited based on the position of any wildcard characters. The nsSubStrMiddle attribute sets the required number of characters for an indexed search where a wildcard is used in the middle of a search string. For example: If the value of this attribute is changed, then the index must be regenerated using db2index . Parameter Description Entry DN cn= attribute_name ,cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values Any integer Default Value 3 Syntax Integer Example nsSubStrMiddle: 3 4.4.9. 
Database Attributes under cn=attributeName,cn=encrypted attributes,cn=database_name,cn=ldbm database,cn=plugins,cn=config The nsAttributeEncryption object class allows selective encryption of attributes within a database. Extremely sensitive information such as credit card numbers and government identification numbers may not be protected enough by routine access control measures. Normally, these attribute values are stored in CLEAR within the database; encrypting them while they are stored adds another layer of protection. This object class has one attribute, nsEncryptionAlgorithm , which sets the encryption cipher used per attribute. Each encrypted attribute represents a subentry under the above cn=config information tree nodes, as shown in the following diagram: Figure 4.3. Encrypted Attributes under the cn=config Node For example, the database encryption file for the userPassword attribute under o=UserRoot appears in the Directory Server as follows: To configure database encryption, see the "Database Encryption" section of the "Configuring Directory Databases" chapter in the Red Hat Directory Server Administration Guide . For more information about indexes, see the "Managing Indexes" chapter in the Red Hat Directory Server Administration Guide . 4.4.9.1. nsAttributeEncryption (Object Class) This object class is used for core configuration entries which identify and encrypt selected attributes within a Directory Server database. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.316 Table 4.8. Required Attributes objectClass Defines the object classes for the entry. cn Specifies the attribute being encrypted using its common name. Section 4.4.9.2, "nsEncryptionAlgorithm" The encryption cipher used. 4.4.9.2. nsEncryptionAlgorithm nsEncryptionAlgorithm selects the cipher used by nsAttributeEncryption . The algorithm can be set per encrypted attribute. Parameter Description Entry DN cn=attributeName,cn=encrypted attributes,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values The following are supported ciphers: * Advanced Encryption Standard Block Cipher (AES) * Triple Data Encryption Standard Block Cipher (3DES) Default Value Syntax DirectoryString Example nsEncryptionAlgorithm: AES 4.5. Database Link Plug-in Attributes (Chaining Attributes) The database link plug-in attributes are also organized in an information tree, as shown in the following diagram: Figure 4.4. Database Link Plug-in All plug-in technology used by the database link instances is stored in the cn=chaining database plug-in node. This section presents the additional attribute information for the three nodes marked in bold in the cn=chaining database,cn=plugins,cn=config information tree in Figure 4.4, "Database Link Plug-in" . 4.5.1. Database Link Attributes under cn=config,cn=chaining database,cn=plugins,cn=config This section covers global configuration attributes common to all instances are stored in the cn=config,cn=chaining database,cn=plugins,cn=config tree node. 4.5.1.1. nsActiveChainingComponents This attribute lists the components using chaining. A component is any functional unit in the server. The value of this attribute overrides the value in the global configuration attribute. To disable chaining on a particular database instance, use the value None . This attribute also allows the components used to chain to be altered. 
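For example, chaining could be allowed for the uid uniqueness component with an ldapmodify command such as the following; the connection details are placeholders, and the component DN must match the component that needs to chain:
ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com -x
dn: cn=config,cn=chaining database,cn=plugins,cn=config
changetype: modify
add: nsActiveChainingComponents
nsActiveChainingComponents: cn=uid uniqueness,cn=plugins,cn=config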
By default, no components are allowed to chain, which explains why this attribute will probably not appear in a list of cn=config,cn=chaining database,cn=config attributes, as LDAP considers empty attributes to be non-existent. Parameter Description Entry DN cn=config,cn=chaining database,cn=plugins,cn=config Valid Values Any valid component entry Default Value None Syntax DirectoryString Example nsActiveChainingComponents: cn=uid uniqueness,cn=plugins,cn=config 4.5.1.2. nsMaxResponseDelay This error detection, performance-related attribute specifies the maximum amount of time it can take a remote server to respond to an LDAP operation request made by a database link before an error is suspected. Once this delay period has been met, the database link tests the connection with the remote server. Parameter Description Entry DN cn=config,cn=chaining database,cn=plugins,cn=config Valid Values Any valid delay period in seconds Default Value 60 seconds Syntax Integer Example nsMaxResponseDelay: 60 4.5.1.3. nsMaxTestResponseDelay This error detection, performance-related attribute specifies the duration of the test issued by the database link to check whether the remote server is responding. If a response from the remote server is not returned before this period has passed, the database link assumes the remote server is down, and the connection is not used for subsequent operations. Parameter Description Entry DN cn=config,cn=chaining database,cn=plugins,cn=config Valid Values Any valid delay period in seconds Default Value 15 seconds Syntax Integer Example nsMaxTestResponseDelay: 15 4.5.1.4. nsTransmittedControls This attribute, which can be both a global (and thus dynamic) configuration or an instance (that is, cn= database link instance , cn=chaining database,cn=plugins,cn=config ) configuration attribute, allows the controls the database link forwards to be altered. The following controls are forwarded by default by the database link: Managed DSA (OID: 2.16.840.1.113730.3.4.2) Virtual list view (VLV) (OID: 2.16.840.1.113730.3.4.9) Server side sorting (OID: 1.2.840.113556.1.4.473) Loop detection (OID: 1.3.6.1.4.1.1466.29539.12) Other controls, such as dereferencing and simple paged results for searches, can be added to the list of controls to forward. Parameter Description Entry DN cn=config,cn=chaining database,cn=plugins,cn=config Valid Values Any valid OID or the above listed controls forwarded by the database link Default Value None Syntax Integer Example nsTransmittedControls: 1.2.840.113556.1.4.473 4.5.2. Database Link Attributes under cn=default instance config,cn=chaining database,cn=plugins,cn=config Default instance configuration attributes for instances are housed in the cn=default instance config,cn=chaining database,cn=plugins,cn=config tree node. 4.5.2.1. nsAbandonedSearchCheckInterval This attribute shows the number of seconds that pass before the server checks for abandoned operations. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 0 to maximum 32-bit integer (2147483647) seconds Default Value 1 Syntax Integer Example nsAbandonedSearchCheckInterval: 10 4.5.2.2. nsBindConnectionsLimit This attribute shows the maximum number of TCP connections the database link establishes with the remote server. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 1 to 50 connections Default Value 3 Syntax Integer Example nsBindConnectionsLimit: 3 4.5.2.3. 
nsBindRetryLimit Contrary to what the name suggests, this attribute does not specify the number of times a database link re tries to bind with the remote server but the number of times it tries to bind with the remote server. A value of 1 here indicates that the database link only attempts to bind once. Note Retries only occur for connection failures and not for other types of errors, such as invalid bind DNs or bad passwords. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 0 to 5 Default Value 3 Syntax Integer Example nsBindRetryLimit: 3 4.5.2.4. nsBindTimeout This attribute shows the amount of time before the bind attempt times out. There is no real valid range for this attribute, except reasonable patience limits. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 0 to 60 seconds Default Value 15 Syntax Integer Example nsBindTimeout: 15 4.5.2.5. nsCheckLocalACI Reserved for advanced use only. This attribute controls whether ACIs are evaluated on the database link as well as the remote data server. Changes to this attribute only take effect once the server has been restarted. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsCheckLocalACI: on 4.5.2.6. nsConcurrentBindLimit This attribute shows the maximum number of concurrent bind operations per TCP connection. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 1 to 25 binds Default Value 10 Syntax Integer Example nsConcurrentBindLimit: 10 4.5.2.7. nsConcurrentOperationsLimit This attribute specifies the maximum number of concurrent operations allowed. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 1 to 50 operations Default Value 2 Syntax Integer Example nsConcurrentOperationsLimit: 5 4.5.2.8. nsConnectionLife This attribute specifies connection lifetime. Connections between the database link and the remote server can be kept open for an unspecified time or closed after a specific period of time. It is faster to keep the connections open, but it uses more resources. When the value is 0 and a list of failover servers is provided in the nsFarmServerURL attribute, the main server is never contacted after failover to the alternate server. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 0 to limitless seconds (where 0 means forever) Default Value 0 Syntax Integer Example nsConnectionLife: 0 4.5.2.9. nsOperationConnectionsLimit This attribute shows the maximum number of LDAP connections the database link establishes with the remote server. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 1 to n connections Default Value 20 Syntax Integer Example nsOperationConnectionsLimit: 10 4.5.2.10. nsProxiedAuthorization Reserved for advanced use only. If you disable proxied authorization, binds for chained operations are executed as the user set in the nsMultiplexorBindDn attribute. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsProxiedAuthorization: on 4.5.2.11. 
nsReferralOnScopedSearch This attribute controls whether referrals are returned by scoped searches. This attribute can be used to optimize the directory because returning referrals in response to scoped searches is more efficient. A referral is returned to all the configured farm servers. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsReferralOnScopedSearch: off 4.5.2.12. nsSizeLimit This attribute shows the default size limit, in entries, for the database link. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range -1 (no limit) to maximum 32-bit integer (2147483647) entries Default Value 2000 Syntax Integer Example nsSizeLimit: 2000 4.5.2.13. nsTimeLimit This attribute shows the default search time limit for the database link. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range -1 to maximum 32-bit integer (2147483647) seconds Default Value 3600 Syntax Integer Example nsTimeLimit: 3600 4.5.3. Database Link Attributes under cn=database_link_name,cn=chaining database,cn=plugins,cn=config This information node stores the attributes concerning the remote server that contains the data. A farm server is a server which contains data in one or more databases. 4.5.3.1. nsBindMechanism This attribute sets a bind mechanism for the farm server to connect to the remote server. A farm server is a server containing data in one or more databases. This attribute configures the connection type, either standard, TLS, or SASL. * empty. This performs simple authentication and requires the nsMultiplexorBindDn and nsMultiplexorCredentials attributes to give the bind information. * EXTERNAL. This uses a TLS certificate to authenticate the farm server to the remote server. Either the farm server URL must be set to the secure URL ( ldaps ) or the nsUseStartTLS attribute must be set to on . Additionally, the remote server must be configured to map the farm server's certificate to its bind identity. Certificate mapping is described in the Administration Guide . * DIGEST-MD5. This uses SASL with DIGEST-MD5 encryption. As with simple authentication, this requires the nsMultiplexorBindDn and nsMultiplexorCredentials attributes to give the bind information. * GSSAPI. This uses Kerberos-based authentication over SASL. The farm server must be connected over the standard port, meaning the URL uses ldap , because Directory Server does not support SASL/GSSAPI over TLS. The farm server must be configured with a Kerberos keytab, and the remote server must have a defined SASL mapping for the farm server's bind identity. Setting up Kerberos keytabs and SASL mappings is described in the Administration Guide . Parameter Description Entry DN cn= database_link_name ,cn=chaining database,cn=plugins,cn=config Valid Values * empty * EXTERNAL * DIGEST-MD5 * GSSAPI Default Value empty Syntax DirectoryString Example nsBindMechanism: GSSAPI 4.5.3.2. nsFarmServerURL This attribute gives the LDAP URL of the remote server. A farm server is a server containing data in one or more databases. This attribute can contain optional servers for failover, separated by spaces. If using cascading chaining, this URL can point to another database link.
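For example, a failover list of two farm servers might be set on an existing database link with a change such as the following; the link name and host names are illustrations:
ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com -x
dn: cn=examplelink,cn=chaining database,cn=plugins,cn=config
changetype: modify
replace: nsFarmServerURL
nsFarmServerURL: ldap://farm1.example.com farm2.example.com:389/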
4.5.3.2. nsFarmServerURL This attribute gives the LDAP URL of the remote server. A farm server is a server containing data in one or more databases. This attribute can contain optional servers for failover, separated by spaces. If using cascading chaining, this URL can point to another database link. Parameter Description Entry DN cn= database_link_name ,cn=chaining database,cn=plugins,cn=config Valid Values Any valid remote server LDAP URL Default Value Syntax DirectoryString Example nsFarmServerURL: ldap://farm1.example.com farm2.example.com:389 farm3.example.com:1389/
4.5.3.3. nsMultiplexorBindDN This attribute gives the DN of the administrative entry used to communicate with the remote server. The multiplexor is the server that contains the database link and communicates with the farm server. This bind DN cannot be the Directory Manager, and, if this attribute is not specified, the database link binds as anonymous . Parameter Description Entry DN cn= database_link_name ,cn=chaining database,cn=plugins,cn=config Valid Values Default Value DN of the multiplexor Syntax DirectoryString Example nsMultiplexorBindDN: cn=proxy manager
4.5.3.4. nsMultiplexorCredentials Password for the administrative user, given in plain text. If no password is provided, it means that users can bind as anonymous . The password is encrypted in the configuration file. The example below is what is shown, not what is typed. Parameter Description Entry DN cn= database_link_name ,cn=chaining database,cn=plugins,cn=config Valid Values Any valid password, which will then be encrypted using the DES reversible password encryption schema Default Value Syntax DirectoryString Example nsMultiplexorCredentials: {DES} 9Eko69APCJfF
4.5.3.5. nshoplimit This attribute specifies the maximum number of times a database is allowed to chain; that is, the number of times a request can be forwarded from one database link to another. Parameter Description Entry DN cn= database_link_name ,cn=chaining database,cn=plugins,cn=config Valid Range 1 to an appropriate upper limit for the deployment Default Value 10 Syntax Integer Example nsHopLimit: 3
4.5.3.6. nsUseStartTLS This attribute sets whether to use Start TLS to initiate a secure, encrypted connection over an insecure port. This attribute can be used if the nsBindMechanism attribute is set to EXTERNAL but the farm server URL is set to the standard URL ( ldap ) or if the nsBindMechanism attribute is left empty. Parameter Description Entry DN cn= database_link_name ,cn=chaining database,cn=plugins,cn=config Valid Values off | on Default Value off Syntax DirectoryString Example nsUseStartTLS: on
4.5.4. Database Link Attributes under cn=monitor,cn=database instance name,cn=chaining database,cn=plugins,cn=config Attributes used for monitoring activity on the instances are stored in the cn=monitor,cn=database instance name,cn=chaining database,cn=plugins,cn=config information tree. nsAddCount This attribute gives the number of add operations received. nsDeleteCount This attribute gives the number of delete operations received. nsModifyCount This attribute gives the number of modify operations received. nsRenameCount This attribute gives the number of rename operations received. nsSearchBaseCount This attribute gives the number of base level searches received. nsSearchOneLevelCount This attribute gives the number of one-level searches received. nsSearchSubtreeCount This attribute gives the number of subtree searches received. nsAbandonCount This attribute gives the number of abandon operations received. nsBindCount This attribute gives the number of bind requests received. nsUnbindCount This attribute gives the number of unbinds received. nsCompareCount This attribute gives the number of compare operations received.
nsOperationConnectionCount This attribute gives the number of open connections for normal operations. nsOpenBindConnectionCount This attribute gives the number of open connections for bind operations.
4.6. PAM Pass Through Auth Plug-in Attributes Local PAM configurations on Unix systems can leverage an external authentication store for LDAP users. This is a form of pass-through authentication which allows the Directory Server to use the externally-stored user credentials for directory access. PAM pass-through authentication is configured in child entries beneath the PAM Pass Through Auth Plug-in container entry. All of the possible configuration attributes for PAM authentication (defined in the 60pam-plugin.ldif schema file) are available to a child entry; the child entry must be an instance of the PAM configuration object class. Example 4.1. Example PAM Pass Through Auth Configuration Entries The PAM configuration, at a minimum, must define a mapping method (a way to identify what the PAM user ID is from the Directory Server entry), the PAM server to use, and whether to use a secure connection to the service, as shown in the sketch later in this section. The configuration can be expanded for special settings, such as to exclude or specifically include subtrees or to map a specific attribute value to the PAM user ID.
4.6.1. pamConfig (Object Class) This object class is used to define the PAM configuration to interact with the directory service. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.318 Allowed Attributes Section 4.6.2, "pamExcludeSuffix" Section 4.6.7, "pamIncludeSuffix" Section 4.6.8, "pamMissingSuffix" Section 4.6.4, "pamFilter" Section 4.6.5, "pamIDAttr" Section 4.6.6, "pamIDMapMethod" Section 4.6.3, "pamFallback" Section 4.6.10, "pamSecure" Section 4.6.11, "pamService" nsslapd-pluginConfigArea
4.6.2. pamExcludeSuffix This attribute specifies a suffix to exclude from PAM authentication. OID 2.16.840.1.113730.3.1.2068 Syntax DN Multi- or Single-Valued Multi-valued Defined in Directory Server
4.6.3. pamFallback Sets whether to fall back to regular LDAP authentication if PAM authentication fails. OID 2.16.840.1.113730.3.1.2072 Syntax Boolean Multi- or Single-Valued Single-valued Defined in Directory Server
4.6.4. pamFilter Sets an LDAP filter to use to identify specific entries within the included suffixes for which to use PAM pass-through authentication. If not set, all entries within the suffix are targeted by the configuration entry. OID 2.16.840.1.113730.3.1.2131 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server
4.6.5. pamIDAttr This attribute contains the attribute name which is used to hold the PAM user ID. OID 2.16.840.1.113730.3.1.2071 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server
4.6.6. pamIDMapMethod Gives the method to use to map the LDAP bind DN to a PAM identity. Note Directory Server user account inactivation is only validated using the ENTRY mapping method. With RDN or DN, a Directory Server user whose account is inactivated can still bind to the server successfully. OID 2.16.840.1.113730.3.1.2070 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server
4.6.7. pamIncludeSuffix This attribute sets a suffix to include for PAM authentication. OID 2.16.840.1.113730.3.1.2067 Syntax DN Multi- or Single-Valued Multi-valued Defined in Directory Server
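As a rough reconstruction of the kind of child entry that Example 4.1 refers to, the sketch below combines the attributes described in this section; the entry name, service name, and ID attribute are illustrative assumptions rather than required values:

dn: cn=Example PAM Config,cn=PAM Pass Through Auth,cn=plugins,cn=config
objectclass: top
objectclass: extensibleObject
objectclass: pamConfig
cn: Example PAM Config
pamMissingSuffix: ALLOW
pamExcludeSuffix: cn=config
pamIDMapMethod: RDN
pamIDAttr: customPamUid
pamFallback: FALSE
pamSecure: TRUE
pamService: ldapserver

The pamService value must correspond to a PAM service file under /etc/pam.d/, and pamSecure: TRUE requires clients to bind over a secure connection.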
4.6.8. pamMissingSuffix Identifies how to handle missing include or exclude suffixes. The options are ERROR (which causes the bind operation to fail); ALLOW, which logs an error but allows the operation to proceed; and IGNORE, which allows the operation and does not log any errors. OID 2.16.840.1.113730.3.1.2069 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server
4.6.9. pamModuleIsThreadSafe By default, Directory Server serializes the Pluggable Authentication Module (PAM) authentications. If you set the pamModuleIsThreadSafe attribute to on , Directory Server starts to perform PAM authentications in parallel. However, ensure that the PAM module you are using is a thread safe module. Currently, you can use the ldapmodify utility to configure the pamModuleIsThreadSafe attribute. To apply the change, restart the server. OID 2.16.840.1.113730.3.1.2399 Syntax Boolean Multi- or Single-Valued Single-valued Defined in Directory Server
4.6.10. pamSecure Requires secure TLS connection for PAM authentication. OID 2.16.840.1.113730.3.1.2073 Syntax Boolean Multi- or Single-Valued Single-valued Defined in Directory Server
4.6.11. pamService Contains the service name to pass to PAM. This assumes that the service specified has a configuration file in the /etc/pam.d/ directory. Important The pam_fprintd.so module cannot be in the configuration file referenced by the pamService attribute of the PAM Pass-Through Authentication Plug-in configuration. Using the PAM pam_fprintd.so module causes the Directory Server to hit the max file descriptor limit and can cause the Directory Server process to abort. OID 2.16.840.1.113730.3.1.2074 Syntax IA5String Multi- or Single-Valued Single-valued Defined in Directory Server
4.7. Account Policy Plug-in Attributes Account policies can be set that automatically lock an account after a certain amount of time has elapsed. This can be used to create temporary accounts that are only valid for a preset amount of time or to lock users who have been inactive for a certain amount of time. The Account Policy Plug-in itself only accepts one argument, which points to a plug-in configuration entry. The account policy configuration entry defines, for the entire server, what attributes to use for account policies. Most of the configuration defines attributes to use to evaluate account policies and expiration times, but the configuration also defines what object class to use to identify subtree-level account policy definitions. Once the plug-in is configured globally, account policy entries can be created within the user subtrees, and then these policies can be applied to users and to roles through classes of service. Example 4.2. Account Policy Definition Any entry, both individual users and roles or CoS templates, can be an account policy subentry. Every account policy subentry has its creation and login times tracked against any expiration policy. Example 4.3. User Account with Account Policy
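Examples 4.2 and 4.3 can be sketched roughly as follows, based on the attribute descriptions in this section; the DNs and the inactivity limit (in seconds) are illustrative assumptions. An account policy subentry:

dn: cn=Account Inactivation Policy,dc=example,dc=com
objectclass: top
objectclass: ldapsubentry
objectclass: extensibleObject
objectclass: accountpolicy
accountInactivityLimit: 2592000
cn: Account Inactivation Policy

A user entry is then pointed at the policy through the acctPolicySubentry attribute, for example with ldapmodify:

dn: uid=jsmith,ou=people,dc=example,dc=com
changetype: modify
add: objectclass
objectclass: extensibleObject
-
add: acctPolicySubentry
acctPolicySubentry: cn=Account Inactivation Policy,dc=example,dc=com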
4.7.1. altstateattrname Account expiration policies are based on some timed criteria for the account. For example, for an inactivity policy, the primary criteria may be the last login time, lastLoginTime . However, there may be instances where that attribute does not exist on an entry, such as a user who never logged into his account. The altstateattrname attribute provides a backup attribute for the server to reference to evaluate the expiration time. Parameter Description Entry DN cn=config,cn=Account Policy Plugin,cn=plugins,cn=config Valid Range Any time-based entry attribute Default Value None Syntax DirectoryString Example altstateattrname: createTimeStamp
4.7.2. alwaysRecordLogin By default, only entries which have an account policy directly applied to them - meaning, entries with the acctPolicySubentry attribute - have their login times tracked. If account policies are applied through classes of service or roles, then the acctPolicySubentry attribute is on the template or container entry, not the user entries themselves. The alwaysRecordLogin attribute sets whether every entry records its last login time. This allows CoS and roles to be used to apply account policies. Parameter Description Entry DN cn=config,cn=Account Policy Plugin,cn=plugins,cn=config Valid Range yes | no Default Value no Syntax DirectoryString Example alwaysRecordLogin: no
4.7.3. alwaysRecordLoginAttr The Account Policy plug-in uses the attribute name set in the alwaysRecordLoginAttr parameter to store the time of the last successful login in the user's directory entry. For further information, see the corresponding section in the Directory Server Administration Guide . Parameter Description Entry DN cn=config,cn=Account Policy Plugin,cn=plugins,cn=config Valid Range Any valid attribute name Default Value stateAttrName Syntax DirectoryString Example alwaysRecordLoginAttr: lastLoginTime
4.7.4. limitattrname The account policy entry in the user directory defines the time limit for the account lockout policy. This time limit can be set in any time-based attribute, and a policy entry could have multiple time-based attributes in it. The attribute within the policy to use for the account inactivation limit is defined in the limitattrname attribute in the Account Policy Plug-in, and it is applied globally to all account policies. Parameter Description Entry DN cn=config,cn=Account Policy Plugin,cn=plugins,cn=config Valid Range Any time-based entry attribute Default Value None Syntax DirectoryString Example limitattrname: accountInactivityLimit
4.7.5. specattrname There are really two configuration entries for an account policy: the global settings in the plug-in configuration entry and then user- or subtree-level settings in an entry within the user directory. An account policy can be set directly on a user entry or it can be set as part of a CoS or role configuration. The way that the plug-in identifies which entries are account policy configuration entries is by identifying a specific attribute on the entry which flags it as an account policy. This attribute in the plug-in configuration is specattrname ; it will usually be set to acctPolicySubentry . Parameter Description Entry DN cn=config,cn=Account Policy Plugin,cn=plugins,cn=config Valid Range Any time-based entry attribute Default Value None Syntax DirectoryString Example specattrname: acctPolicySubentry
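Taken together, these global settings live in the single configuration entry that the plug-in points to. A minimal sketch, using the attribute names shown in the examples in this section (stateattrname is described in the next subsection); the object classes used here are assumptions for illustration:

dn: cn=config,cn=Account Policy Plugin,cn=plugins,cn=config
objectclass: top
objectclass: extensibleObject
cn: config
alwaysRecordLogin: yes
stateattrname: lastLoginTime
altstateattrname: createTimestamp
specattrname: acctPolicySubentry
limitattrname: accountInactivityLimit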
4.7.6. stateattrname Account expiration policies are based on some timed criteria for the account. For example, for an inactivity policy, the primary criteria may be the last login time, lastLoginTime . The primary time attribute used to evaluate an account policy is set in the stateattrname attribute. Parameter Description Entry DN cn=config,cn=Account Policy Plugin,cn=plugins,cn=config Valid Range Any time-based entry attribute Default Value None Syntax DirectoryString Example stateattrname: lastLoginTime
4.8. AD DN Plug-in Attributes The AD DN plug-in supports multiple domain configurations. Create one configuration entry for each domain. For details, see the corresponding section in the Red Hat Directory Server Administration Guide .
4.8.1. cn Sets the domain name of the configuration entry. The plug-in uses the domain name from the authenticating user name to select the corresponding configuration entry. Parameter Description Entry DN cn= domain_name ,cn=addn,cn=plugins,cn=config Valid Entry Any string Default Value None Syntax DirectoryString Example cn: example.com
4.8.2. addn_base Sets the base DN under which Directory Server searches for the user's DN. Parameter Description Entry DN cn= domain_name ,cn=addn,cn=plugins,cn=config Valid Entry Any valid DN Default Value None Syntax DirectoryString Example addn_base: ou=People,dc=example,dc=com
4.8.3. addn_filter Sets the search filter. Directory Server replaces the %s variable automatically with the non-domain part of the authenticating user. For example, if the user name in the bind is user_name@example.com , the plug-in searches for the corresponding DN using the filter (&(objectClass=account)(uid=user_name)) . Parameter Description Entry DN cn= domain_name ,cn=addn,cn=plugins,cn=config Valid Entry Any valid LDAP search filter Default Value None Syntax DirectoryString Example addn_filter: (&(objectClass=account)(uid=%s))
4.9. Auto Membership Plug-in Attributes Automembership essentially allows a static group to act like a dynamic group. Different automembership definitions create searches that are automatically run on all new directory entries. The automembership rules search for and identify matching entries - much like the dynamic search filters - and then explicitly add those entries as members to the specified static group. The Auto Membership Plug-in itself is a container entry. Each automember definition is a child of the Auto Membership Plug-in. The automember definition defines the LDAP search base and filter to identify entries and a default group to add them to. Each automember definition can have its own child entry that defines additional conditions for assigning the entry to a group. Regular expressions can be used to include or exclude entries and assign them to specific groups based on those conditions. If the entry matches the main definition and not any of the regular expression conditions, then it uses the group in the main definition. If it matches a regular expression condition, then it is added to the regular expression condition group.
4.9.1. autoMemberDefaultGroup This attribute sets a default or fallback group to add the entry to as a member. If only the definition entry is used, then this is the group to which all matching entries are added. If regular expression conditions are used, then this group is used as a fallback if an entry which matches the LDAP search filter does not match any of the regular expressions. Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Range Any existing Directory Server group Default Value None Single- or Multi-Valued Single Syntax DirectoryString Example autoMemberDefaultGroup: cn=hostgroups,ou=groups,dc=example,dc=com
4.9.2. autoMemberDefinition (Object Class) This object class identifies the entry as an automember definition.
This entry must be a child of the Auto Membership Plug-in, cn=Auto Membership Plugin,cn=plugins,cn=config . Allowed Attributes autoMemberScope autoMemberFilter autoMemberDefaultGroup autoMemberGroupingAttr
4.9.3. autoMemberExclusiveRegex This attribute sets a single regular expression to use to identify entries to exclude . If an entry matches the exclusion condition, then it is not included in the group. Multiple regular expressions could be used, and if an entry matches any one of those expressions, it is excluded from the group. The format of the expression is a Perl-compatible regular expression (PCRE). For more information on PCRE patterns, see the pcresyntax(3) man page . Note Exclude conditions are evaluated first and take precedence over include conditions. Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Range Any regular expression Default Value None Single- or Multi-Valued Multi-valued Syntax DirectoryString Example autoMemberExclusiveRegex: fqdn=^www\.web[0-9]+\.example\.com
4.9.4. autoMemberFilter This attribute sets a standard LDAP search filter to use to search for matching entries. Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Range Any valid LDAP search filter Default Value None Single- or Multi-Valued Single Syntax DirectoryString Example autoMemberFilter: objectclass=ntUser
4.9.5. autoMemberGroupingAttr This attribute gives the name of the member attribute in the group entry and the attribute in the object entry that supplies the member attribute value, in the format group_member_attr:entry_attr . This structures how the Automembership Plug-in adds a member to the group, depending on the group configuration. For example, for a groupOfUniqueNames user group, each member is added as a uniqueMember attribute. The value of uniqueMember is the DN of the user entry. In essence, each group member is identified by the attribute-value pair of uniqueMember: user_entry_DN . The member entry format, then, is uniqueMember:dn . Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Range Any Directory Server attribute Default Value None Single- or Multi-Valued Single Syntax DirectoryString Example autoMemberGroupingAttr: member:dn
4.9.6. autoMemberInclusiveRegex This attribute sets a single regular expression to use to identify entries to include . Multiple regular expressions could be used, and if an entry matches any one of those expressions, it is included in the group (assuming it does not match an exclude expression). The format of the expression is a Perl-compatible regular expression (PCRE). For more information on PCRE patterns, see the pcresyntax(3) man page . Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Range Any regular expression Default Value None Single- or Multi-Valued Multi-valued Syntax DirectoryString Example autoMemberInclusiveRegex: fqdn=^www\.web[0-9]+\.example\.com
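The attributes above are combined in a definition entry, optionally with a regular-expression child entry; autoMemberTargetGroup and the autoMemberRegexRule object class used below are described in the following subsections. The group names and host-naming convention in this sketch are illustrative assumptions:

dn: cn=Hostgroups,cn=Auto Membership Plugin,cn=plugins,cn=config
objectclass: top
objectclass: autoMemberDefinition
cn: Hostgroups
autoMemberScope: dc=example,dc=com
autoMemberFilter: objectclass=ipHost
autoMemberDefaultGroup: cn=systems,cn=hostgroups,ou=groups,dc=example,dc=com
autoMemberGroupingAttr: member:dn

dn: cn=webservers,cn=Hostgroups,cn=Auto Membership Plugin,cn=plugins,cn=config
objectclass: top
objectclass: autoMemberRegexRule
cn: webservers
autoMemberTargetGroup: cn=webservers,cn=hostgroups,ou=groups,dc=example,dc=com
autoMemberInclusiveRegex: fqdn=^www\.web[0-9]+\.example\.com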
4.9.7. autoMemberProcessModifyOps By default, the Directory Server invokes the Automembership plug-in for both add and modify operations, so the plug-in updates group membership when a matching entry is added and when it is later modified. If you set autoMemberProcessModifyOps to off , Directory Server invokes the Automembership plug-in only when an entry is added. In this case, if an administrator changes a user entry, and that change impacts what Automembership groups the user belongs to, the plug-in does not remove the user from the old group and only adds the new group. To update the old group, you must then manually run a fix-up task. Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Values on | off Default Value on Single- or Multi-Valued Single Syntax DirectoryString Example autoMemberProcessModifyOps: on
4.9.8. autoMemberRegexRule (Object Class) This object class identifies the entry as a regular expression rule. This entry must be a child of an automember definition ( objectclass: autoMemberDefinition ). Allowed Attributes autoMemberInclusiveRegex autoMemberExclusiveRegex autoMemberTargetGroup
4.9.9. autoMemberScope This attribute sets the subtree DN to search for entries. This is the search base. Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Range Any Directory Server subtree Default Value None Single- or Multi-Valued Single Syntax DirectoryString Example autoMemberScope: dc=example,dc=com
4.9.10. autoMemberTargetGroup This attribute sets which group to add the entry to as a member, if it meets the regular expression conditions. Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Range Any Directory Server group Default Value None Single- or Multi-Valued Single Syntax DirectoryString Example autoMemberTargetGroup: cn=webservers,cn=hostgroups,ou=groups,dc=example,dc=com
4.10. Distributed Numeric Assignment Plug-in Attributes The Distributed Numeric Assignment Plug-in manages ranges of numbers and assigns unique numbers within that range to entries. By breaking number assignments into ranges, the Distributed Numeric Assignment Plug-in allows multiple servers to assign numbers without conflict. The plug-in also manages the ranges assigned to servers, so that if one instance runs through its range quickly, it can request additional ranges from the other servers. Distributed numeric assignment can be configured to work with single attribute types or multiple attribute types. It is handled per attribute and is only applied to specific suffixes and specific entries within the subtree.
4.10.1. dnaPluginConfig (Object Class) This object class is used for entries which configure the DNA Plug-in and numeric ranges to assign to entries. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.324 Allowed Attributes dnaType dnaPrefix dnaNextValue dnaMaxValue dnaInterval dnaMagicRegen dnaFilter dnaScope dnaSharedCfgDN dnaThreshold dnaNextRange dnaRangeRequestTimeout cn
4.10.2. dnaFilter This attribute sets an LDAP filter to use to search for and identify the entries to which to apply the distributed numeric assignment range. The dnaFilter attribute is required to set up distributed numeric assignment for an attribute. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range Any valid LDAP filter Default Value None Syntax DirectoryString Example dnaFilter: (objectclass=person)
4.10.3. dnaInterval This attribute sets an interval to use to increment through numbers in a range. Essentially, this skips numbers at a predefined rate.
If the interval is 3 and the first number in the range is 1 , the next numbers used in the range are 4 , then 7 , then 10 , incrementing by three for every new number assignment. In a replication environment, the dnaInterval enables multiple servers to share the same range. However, when you configure different servers that share the same range, set the dnaInterval and dnaNextVal parameters accordingly so that the different servers do not generate the same values. You must also consider this if you add new servers to the replication topology. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range Any integer Default Value 1 Syntax Integer Example dnaInterval: 1
4.10.4. dnaMagicRegen This attribute sets a user-defined value that instructs the plug-in to assign a new value for the entry. The magic value can be used to assign new unique numbers to existing entries or as a standard setting when adding new entries. The magic entry should be outside of the defined range for the server so that it cannot be triggered by accident. Note that this attribute does not have to be a number when used on a DirectoryString or other character type. However, in most cases the DNA plug-in is used on attributes which only accept integer values, and in such cases the dnaMagicRegen value must also be an integer. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range Any string Default Value None Syntax DirectoryString Example dnaMagicRegen: -1
4.10.5. dnaMaxValue This attribute sets the maximum value that can be assigned for the range. The default is -1 , which is the same as setting the highest 64-bit integer. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range 1 to the maximum 32-bit integer on 32-bit systems and to the maximum 64-bit integer on 64-bit systems; -1 is unlimited Default Value -1 Syntax Integer Example dnaMaxValue: 1000
4.10.6. dnaNextRange This attribute defines the range to use when the current range is exhausted. This value is automatically set when a range is transferred between servers, but it can also be manually set to add a range to a server if range requests are not used. The dnaNextRange attribute should be set explicitly only if a separate, specific range has to be assigned to other servers. Any range set in the dnaNextRange attribute must be unique from the available range for the other servers to avoid duplication. If there is no request from the other servers and the server where dnaNextRange is set explicitly has reached its set dnaMaxValue , the set of values (part of the dnaNextRange ) is allocated from this deck. The dnaNextRange allocation is also limited by the dnaThreshold attribute that is set in the DNA configuration. Any range allocated to another server for dnaNextRange cannot violate the threshold for the server, even if the range is available on the deck of dnaNextRange . Note The dnaNextRange attribute is handled internally if it is not set explicitly. When it is handled automatically, the dnaMaxValue attribute serves as the upper limit for the range. The attribute sets the range in the format lower_range-upper_range .
Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range 1 to the maximum 32-bit integer on 32-bit systems and to the maximum 64-bit integer on 64-bit systems for the lower and upper ranges Default Value None Syntax DirectoryString Example dnaNextRange: 100-500 4.10.7. dnaNextValue This attribute gives the available number which can be assigned. After being initially set in the configuration entry, this attribute is managed by the Distributed Numeric Assignment Plug-in. The dnaNextValue attribute is required to set up distributed numeric assignment for an attribute. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range 1 to the maximum 32-bit integer on 32-bit systems and to the maximum 64-bit integer on 64-bit systems Default Value -1 Syntax Integer Example dnaNextValue: 1 4.10.8. dnaPrefix This attribute defines a prefix that can be prepended to the generated number values for the attribute. For example, to generate a user ID such as user1000 , the dnaPrefix setting would be user . dnaPrefix can hold any kind of string. However, some possible values for dnaType (such as uidNumber and gidNumber ) require only integer values. To use a prefix string, consider using a custom attribute for dnaType which allows strings. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range Any string Default Value None Example dnaPrefix: id 4.10.9. dnaRangeRequestTimeout One potential situation with the Distributed Numeric Assignment Plug-in is that one server begins to run out of numbers to assign. The dnaThreshold attribute sets a threshold of available numbers in the range, so that the server can request an additional range from the other servers before it is unable to perform number assignments. The dnaRangeRequestTimeout attribute sets a timeout period, in seconds, for range requests so that the server does not stall waiting on a new range from one server and can request a range from a new server. For range requests to be performed, the dnaSharedCfgDN attribute must be set. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range 1 to the maximum 32-bit integer on 32-bit systems and to the maximum 64-bit integer on 64-bit systems Default Value 10 Syntax Integer Example dnaRangeRequestTimeout: 15 4.10.10. dnaScope This attribute sets the base DN to search for entries to which to apply the distributed numeric assignment. This is analogous to the base DN in an ldapsearch . Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range Any Directory Server entry Default Value None Syntax DirectoryString Example dnaScope: ou=people,dc=example,dc=com 4.10.11. dnaSharedCfgDN This attribute defines a shared identity that the servers can use to transfer ranges to one another. This entry is replicated between servers and is managed by the plug-in to let the other servers know what ranges are available. This attribute must be set for range transfers to be enabled. Note The shared configuration entry must be configured in the replicated subtree, so that the entry can be replicated to the servers. 
For example, if the ou=People,dc=example,dc=com subtree is replicated, then the configuration entry must be in that subtree, such as ou=UID Number Ranges , ou=People,dc=example,dc=com . The entry identified by this setting must be manually created by the administrator. The server automatically creates sub-entries beneath it to manage range transfers. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range Any DN Default Value None Syntax DN Example dnaSharedCfgDN: cn=range transfer user,cn=config
4.10.12. dnaThreshold One potential situation with the Distributed Numeric Assignment Plug-in is that one server begins to run out of numbers to assign, which can cause problems. The Distributed Numeric Assignment Plug-in allows the server to request a new range from the available ranges on other servers. So that the server can recognize when it is reaching the end of its assigned range, the dnaThreshold attribute sets a threshold of remaining available numbers in the range. When the server hits the threshold, it sends a request for a new range. For range requests to be performed, the dnaSharedCfgDN attribute must be set. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range 1 to the maximum 32-bit integer on 32-bit systems and to the maximum 64-bit integer on 64-bit systems Default Value 100 Syntax Integer Example dnaThreshold: 100
4.10.13. dnaType This attribute sets which attributes have unique numbers being generated for them. In this case, whenever the attribute is added to the entry with the magic number, an assigned value is automatically supplied. This attribute is required to set a distributed numeric assignment for an attribute. If the dnaPrefix attribute is set, then the prefix value is prepended to whatever value is generated by dnaType . The dnaPrefix value can be any kind of string, but some reasonable values for dnaType (such as uidNumber and gidNumber ) require only integer values. To use a prefix string, consider using a custom attribute for dnaType which allows strings. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range Any Directory Server attribute Default Value None Example dnaType: uidNumber
4.10.14. dnaSharedConfig (Object Class) This object class is used to configure the shared configuration entry that is replicated between suppliers that are all using the same DNA Plug-in configuration for numeric assignments. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.325 Allowed Attributes dnaHostname dnaPortNum dnaSecurePortNum dnaRemainingValues
4.10.15. dnaHostname This attribute identifies the host name of a server in a shared range, as part of the DNA range configuration for that specific host in multi-supplier replication. Available ranges are tracked by host and the range information is replicated among all suppliers so that if any supplier runs low on available numbers, it can use the host information to contact another supplier and request a new range. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Syntax DirectoryString Valid Range Any valid host name Default Value None Example dnahostname: ldap1.example.com
4.10.16. dnaPortNum This attribute gives the standard port number to use to connect to the host identified in dnaHostname .
Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Syntax Integer Valid Range 0 to 65535 Default Value 389 Example dnaPortNum: 389
4.10.17. dnaRemainingValues This attribute contains the number of values that are remaining and available to a server to assign to entries. Parameter Description Entry DN dnaHostname= host_name +dnaPortNum= port_number ,ou=ranges,dc=example,dc=com Syntax Integer Valid Range Any integer Default Value None Example dnaRemainingValues: 1000
4.10.18. dnaRemoteBindCred Specifies the Replication Manager's password. If you set a bind method in the dnaRemoteBindMethod attribute that requires authentication, additionally set the dnaRemoteBindDN and dnaRemoteBindCred parameter for every server in the replication deployment in the plug-in configuration entry under the cn=config entry. Set the parameter in plain text. The value is automatically AES-encrypted before it is stored. A server restart is required for the change to take effect. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Syntax DirectoryString {AES} encrypted_password Valid Values Any valid AES-encrypted password. Default Value Example dnaRemoteBindCred: {AES-TUhNR0NTcUdTSWIzRFFFRkRUQm1NRVVHQ1NxR1NJYjNEUUVGRERBNEJDUmxObUk0WXpjM1l5MHdaVE5rTXpZNA0KTnkxaE9XSmhORGRoT0MwMk1ESmpNV014TUFBQ0FRSUNBU0F3Q2dZSUtvWklodmNOQWdjd0hRWUpZSVpJQVdVRA0KQkFFcUJCQk5KbUFDUWFOMHlITWdsUVp3QjBJOQ==}bBR3On6cBmw0DdhcRx826g==
4.10.19. dnaRemoteBindDN Specifies the Replication Manager DN. If you set a bind method in the dnaRemoteBindMethod attribute that requires authentication, additionally set the dnaRemoteBindDN and dnaRemoteBindCred parameter for every server in the replication deployment in the plug-in configuration under the cn=config entry. A server restart is required for the change to take effect. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Syntax DirectoryString Valid Values Any valid Replication Manager DN. Default Value Example dnaRemoteBindDN: cn=replication manager,cn=config
4.10.20. dnaRemoteBindMethod Specifies the remote bind method. If you set a bind method in this attribute that requires authentication, additionally set the dnaRemoteBindDN and dnaRemoteBindCred parameter for every server in the replication deployment in the plug-in configuration entry under the cn=config entry. A server restart is required for the change to take effect. Parameter Description Entry DN dnaHostname= host_name +dnaPortNum= port_number ,ou=ranges,dc=example,dc=com Syntax DirectoryString Valid Values SIMPLE | SSL | SASL/GSSAPI | SASL/DIGEST-MD5 Default Value Example dnaRemoteBindMethod: SIMPLE
4.10.21. dnaRemoteConnProtocol Specifies the remote connection protocol. A server restart is required for the change to take effect. Parameter Description Entry DN dnaHostname= host_name +dnaPortNum= port_number ,ou=ranges,dc=example,dc=com Syntax DirectoryString Valid Values LDAP , SSL , or TLS Default Value Example dnaRemoteConnProtocol: LDAP
4.10.22. dnaSecurePortNum This attribute gives the secure (TLS) port number to use to connect to the host identified in dnaHostname . Parameter Description Entry DN dnaHostname= host_name +dnaPortNum= port_number ,ou=ranges,dc=example,dc=com Syntax Integer Valid Range 0 to 65535 Default Value 636 Example dnaSecurePortNum: 636
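Putting the range-definition attributes together, a DNA configuration entry for assigning POSIX UID numbers might look like the following sketch; the container name, range boundaries, and shared configuration DN are illustrative assumptions:

dn: cn=UID numbers,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config
objectclass: top
objectclass: dnaPluginConfig
cn: UID numbers
dnaType: uidNumber
dnaFilter: (objectclass=posixAccount)
dnaScope: ou=people,dc=example,dc=com
dnaNextValue: 1
dnaMaxValue: 1300
dnaMagicRegen: -1
dnaThreshold: 100
dnaSharedCfgDN: cn=Account UIDs,ou=Ranges,dc=example,dc=com

The dnaSharedCfgDN entry itself must be created by the administrator in the replicated subtree, as described under dnaSharedCfgDN above.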
4.11. Linked Attributes Plug-in Attributes Many times, entries have inherent relationships to each other (such as managers and employees, document entries and their authors, or special groups and group members). While attributes exist that reflect these relationships, these attributes have to be added and updated on each entry manually. That can lead to an inconsistent set of directory data, where these entry relationships are unclear, outdated, or missing. The Linked Attributes Plug-in allows one attribute, set in one entry, to update another attribute in another entry automatically. The first attribute has a DN value, which points to the entry to update; the second entry attribute also has a DN value which is a back-pointer to the first entry. The link attribute which is set by users and the dynamically-updated "managed" attribute in the affected entries are both defined by administrators in the Linked Attributes Plug-in instance. Conceptually, this is similar to the way that the MemberOf Plug-in uses the member attribute in group entries to set the memberOf attribute in user entries. Only with the Linked Attributes Plug-in, all of the link/managed attributes are user-defined and there can be multiple instances of the plug-in, each reflecting different link-managed relationships. There are a couple of caveats for linking attributes: Both the link attribute and the managed attribute must have DNs as values. The DN in the link attribute points to the entry to add the managed attribute to. The managed attribute contains the linked entry DN as its value. The managed attribute must be multi-valued. Otherwise, if multiple link attributes point to the same managed entry, the managed attribute value would not be updated accurately.
4.11.1. linkScope This restricts the scope of the plug-in, so it operates only in a specific subtree or suffix. If no scope is given, then the plug-in will update any part of the directory tree. Parameter Description Entry DN cn= plugin_instance ,cn=Linked Attributes,cn=plugins,cn=config Valid Range Any DN Default Value None Syntax DN Example linkScope: ou=People,dc=example,dc=com
4.11.2. linkType This sets the user-managed attribute. This attribute is modified and maintained by users, and then when this attribute value changes, the linked attribute is automatically updated in the targeted entries. Parameter Description Entry DN cn= plugin_instance ,cn=Linked Attributes,cn=plugins,cn=config Valid Range Any Directory Server attribute Default Value None Syntax DirectoryString Example linkType: directReport
4.11.3. managedType This sets the managed, or plug-in maintained, attribute. This attribute is managed dynamically by the Linked Attributes Plug-in instance. Whenever a change is made to the managed attribute, then the plug-in updates all of the linked attributes on the targeted entries. Parameter Description Entry DN cn= plugin_instance ,cn=Linked Attributes,cn=plugins,cn=config Valid Range Any Directory Server attribute Default Value None Syntax DirectoryString Example managedType: manager
4.12. Managed Entries Plug-in Attributes In some unique circumstances, it is useful to have an entry created automatically when another entry is created. For example, this can be part of Posix integration by creating a specific group entry when a new user is created.
Each instance of the Managed Entries Plug-in identifies two areas: The scope of the plug-in, meaning the subtree and the search filter to use to identify entries which require a corresponding managed entry A template entry that defines what the managed entry should look like 4.12.1. managedBase This attribute sets the subtree under which to create the managed entries. This can be any entry in the directory tree. Parameter Description Entry DN cn= instance_name ,cn=Managed Entries Plugin,cn=plugins,cn=config Valid Values Any Directory Server subtree Default Value None Syntax DirectoryString Example managedBase: ou=groups,dc=example,dc=com 4.12.2. managedTemplate This attribute identifies the template entry to use to create the managed entry. This entry can be located anywhere in the directory tree; however, it is recommended that this entry is in a replicated suffix so that all suppliers and consumers in replication are using the same template. The attributes used to create the managed entry template are described in the Red Hat Directory Server Configuration, Command, and File Reference . Parameter Description Entry DN cn= instance_name ,cn=Managed Entries Plugin,cn=plugins,cn=config Valid Values Any Directory Server entry of the mepTemplateEntry object class Default Value None Syntax DirectoryString Example managedTemplate: cn=My Template,ou=Templates,dc=example,dc=com 4.12.3. originFilter This attribute sets the search filter to use to search for and identify the entries within the subtree which require a managed entry. The filter allows the managed entries behavior to be limited to a specific type of entry or subset of entries. The syntax is the same as a regular search filter. Parameter Description Entry DN cn= instance_name ,cn=Managed Entries Plugin,cn=plugins,cn=config Valid Values Any valid LDAP filter Default Value None Syntax DirectoryString Example originFilter: objectclass=posixAccount 4.12.4. originScope This attribute sets the scope of the search to use to see which entries the plug-in monitors. If a new entry is created within the scope subtree, then the Managed Entries Plug-in creates a new managed entry that corresponds to it. Parameter Description Entry DN cn= instance_name ,cn=Managed Entries Plugin,cn=plugins,cn=config Valid Values Any Directory Server subtree Default Value None Syntax DirectoryString Example originScope: ou=people,dc=example,dc=com 4.13. MemberOf Plug-in Attributes Group membership is defined within group entries using attributes such as member . Searching for the member attribute makes it easy to list all of the members for the group. However, group membership is not reflected in the member's user entry, so it is impossible to tell to what groups a person belongs by looking at the user's entry. The MemberOf Plug-in synchronizes the group membership in group members with the members' individual directory entries by identifying changes to a specific member attribute (such as member ) in the group entry and then working back to write the membership changes over to a specific attribute in the members' user entries. 4.13.1. cn Sets the name of the plug-in instance. Parameter Description Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Valid Values Any valid string Default Value Syntax DirectoryString Example cn: Example MemberOf Plugin Instance 4.13.2. memberOfAllBackends This attribute specifies whether to search the local suffix for user entries or all available suffixes. 
This can be desirable in directory trees where users may be distributed across multiple databases so that group membership is evaluated comprehensively and consistently. Parameter Description Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example memberOfAllBackends: on 4.13.3. memberOfAttr This attribute specifies the attribute in the user entry for the Directory Server to manage to reflect group membership. The MemberOf Plug-in generates the value of the attribute specified here in the directory entry for the member. There is a separate attribute for every group to which the user belongs. Parameter Description Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Valid Range Any Directory Server attribute Default Value memberOf Syntax DirectoryString Example memberOfAttr: memberOf 4.13.4. memberOfAutoAddOC To enable the memberOf plug-in to add the memberOf attribute to a user, the user object must contain an object class that allows this attribute. If an entry does not have an object class that allows the memberOf attribute then the memberOf plugin will automatically add the object class listed in the memberOfAutoAddOC parameter. This setting does not require restarting the server to take effect. Parameter Description Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Valid Values Any Directory Server object class Default Value nsMemberOf Syntax DirectoryString Example memberOfAutoAddOC: nsMemberOf 4.13.5. memberOfEntryScope If you configured several back ends or multiple-nested suffixes, the multi-valued memberOfEntryScope parameter enables you to set what suffixes the MemberOf plug-in works on. If the parameter is not set, the plug-in works on all suffixes. The value set in the memberOfEntryScopeExcludeSubtree parameter has a higher priority than values set in memberOfEntryScope . For further details, see the corresponding section in the Directory Server Administration Guide . This setting does not require restarting the server to take effect. Parameter Description Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Valid Range Any Directory Server entry DN. Default Value Syntax DirectoryString Example memberOfEntryScope: ou=people,dc=example,dc=com 4.13.6. memberOfEntryScopeExcludeSubtree If you configured several back ends or multiple-nested suffixes, the multi-valued memberOfEntryScopeExcludeSubtree parameter enables you to set what suffixes the MemberOf plug-in excludes. The value set in the memberOfEntryScopeExcludeSubtree parameter has a higher priority than values set in memberOfEntryScope . If the scopes set in both parameters overlap, the MemberOf plug-in only works on the non-overlapping directory entries. For further details, see the corresponding section in the Directory Server Administration Guide . This setting does not require restarting the server to take effect. Parameter Description Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Valid Range Any Directory Server entry DN. Default Value Syntax DirectoryString Example memberOfEntryScopeExcludeSubtree: ou=sample,dc=example,dc=com 4.13.7. memberOfGroupAttr This attribute specifies the attribute in the group entry to use to identify the DNs of group members. By default, this is the member attribute, but it can be any membership-related attribute that contains a DN value, such as uniquemember or member . 
Note Any attribute can be used for the memberOfGroupAttr value, but the MemberOf Plug-in only works if the value of the target attribute contains the DN of the member entry. For example, the member attribute contains the DN of the member's user entry. Some member-related attributes do not contain a DN, like the memberURL attribute. That attribute will not work as a value for memberOfGroupAttr . The memberURL value is a URL, and a non-DN value cannot work with the MemberOf Plug-in. Parameter Description Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Valid Range Any Directory Server attribute Default Value member Syntax DirectoryString Example memberOfGroupAttr: member
4.14. Attribute Uniqueness Plug-in Attributes The Attribute Uniqueness plug-in ensures that the value of an attribute is unique across the directory or subtree.
4.14.1. cn Sets the name of the Attribute Uniqueness plug-in configuration record. You can use any string, but Red Hat recommends naming the configuration record attribute_name Attribute Uniqueness . Parameter Description Entry DN cn= attribute_uniqueness_configuration_record_name ,cn=plugins,cn=config Valid Values Any valid string Default Value None Syntax DirectoryString Example cn: mail Attribute Uniqueness
4.14.2. uniqueness-attribute-name Sets the name of the attribute whose values must be unique. This attribute is multi-valued. Parameter Description Entry DN cn= attribute_uniqueness_configuration_record_name ,cn=plugins,cn=config Valid Values Any valid attribute name Default Value None Syntax DirectoryString Example uniqueness-attribute-name: mail
4.14.3. uniqueness-subtrees Sets the DN under which the plug-in checks for uniqueness of the attribute's value. This attribute is multi-valued. Parameter Description Entry DN cn= attribute_uniqueness_configuration_record_name ,cn=plugins,cn=config Valid Values Any valid subtree DN Default Value None Syntax DirectoryString Example uniqueness-subtrees: ou=Sales,dc=example,dc=com
4.14.4. uniqueness-across-all-subtrees If enabled ( on ), the plug-in checks that the attribute is unique across all subtrees set. If you set the attribute to off , uniqueness is only enforced within the subtree of the updated entry. Parameter Description Entry DN cn= attribute_uniqueness_configuration_record_name ,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example uniqueness-across-all-subtrees: off
4.14.5. uniqueness-top-entry-oc Directory Server searches for this object class in the parent entry of the updated object. If it is not found, the search continues at the next higher entry up to the root of the directory tree. If the object class is found, Directory Server verifies that the value of the attribute set in uniqueness-attribute-name is unique in this subtree. Parameter Description Entry DN cn= attribute_uniqueness_configuration_record_name ,cn=plugins,cn=config Valid Values Any valid object class Default Value None Syntax DirectoryString Example uniqueness-top-entry-oc: nsContainer
4.14.6. uniqueness-subtree-entries-oc Optionally, when using the uniqueness-top-entry-oc parameter, you can configure the Attribute Uniqueness plug-in to verify that an attribute is unique only if the entry contains the object class set in this parameter. Parameter Description Entry DN cn= attribute_uniqueness_configuration_record_name ,cn=plugins,cn=config Valid Values Any valid object class Default Value None Syntax DirectoryString Example uniqueness-subtree-entries-oc: inetOrgPerson
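A uniqueness configuration record combining these attributes might look like the following sketch. The record name, attribute, and subtrees are illustrative, and the usual nsslapd-plugin bootstrap attributes (plug-in path, initialization function, and plug-in type) are omitted here; in practice they are copied from an existing Attribute Uniqueness configuration record:

dn: cn=mail Attribute Uniqueness,cn=plugins,cn=config
objectclass: top
objectclass: nsSlapdPlugin
objectclass: extensibleObject
cn: mail Attribute Uniqueness
nsslapd-pluginEnabled: on
uniqueness-attribute-name: mail
uniqueness-subtrees: ou=Sales,dc=example,dc=com
uniqueness-subtrees: ou=Engineering,dc=example,dc=com
uniqueness-across-all-subtrees: on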
4.15. Posix Winsync API Plug-in Attributes By default, Posix-related attributes are not synchronized between Active Directory and Red Hat Directory Server. On Linux systems, system users and groups are identified as Posix entries, and LDAP Posix attributes contain that required information. However, when Windows users are synced over, they have ntUser and ntGroup attributes automatically added which identify them as Windows accounts, but no Posix attributes are synced over (even if they exist on the Active Directory entry) and no Posix attributes are added on the Directory Server side. The Posix Winsync API Plug-in synchronizes POSIX attributes between Active Directory and Directory Server entries. Note All POSIX attributes (such as uidNumber , gidNumber , and homeDirectory ) are synchronized between Active Directory and Directory Server entries. However, if a new POSIX entry or POSIX attributes are added to an existing entry in the Directory Server, only the POSIX attributes are synchronized over to the Active Directory corresponding entry . The POSIX object class ( posixAccount for users and posixGroup for groups) is not added to the Active Directory entry. This plug-in is disabled by default and must be enabled before any Posix attributes will be synchronized from the Active Directory entry to the Directory Server entry.
4.15.1. posixWinsyncCreateMemberOfTask This attribute sets whether to run the memberOf fix-up task immediately after a sync run in order to update group memberships for synced users. This is disabled by default because the memberOf fix-up task can be resource-intensive and cause performance issues if it is run too frequently. Parameter Description Entry DN cn=Posix Winsync API Plugin,cn=plugins,cn=config Valid Range true | false Default Value false Example posixWinsyncCreateMemberOfTask: false
4.15.2. posixWinsyncLowerCaseUID This attribute sets whether to store (and, if necessary, convert) the UID value in the memberUID attribute in lower case. Parameter Description Entry DN cn=Posix Winsync API Plugin,cn=plugins,cn=config Valid Range true | false Default Value false Example posixWinsyncLowerCaseUID: false
4.15.3. posixWinsyncMapMemberUID This attribute sets whether to map the memberUID attribute in an Active Directory group to the uniqueMember attribute in a Directory Server group. Parameter Description Entry DN cn=Posix Winsync API Plugin,cn=plugins,cn=config Valid Range true | false Default Value true Example posixWinsyncMapMemberUID: false
4.15.4. posixWinsyncMapNestedGrouping The posixWinsyncMapNestedGrouping parameter controls whether nested groups are updated when memberUID attributes in an Active Directory POSIX group change. Updating nested groups is supported up to a depth of five levels. Parameter Description Entry DN cn=Posix Winsync API Plugin,cn=plugins,cn=config Valid Range true | false Default Value false Example posixWinsyncMapNestedGrouping: false
4.15.5. posixWinsyncMsSFUSchema This attribute sets whether to use the older Microsoft Services for Unix 3.0 (msSFU30) schema when syncing Posix attributes from Active Directory. By default, the Posix Winsync API Plug-in uses the Posix schema for modern Active Directory servers: 2005, 2008, and later versions. There are slight differences between the modern Active Directory Posix schema and the Posix schema used by Windows Server 2003 and older Windows servers. If an Active Directory domain is using the older-style schema, then the older-style schema can be used instead.
Parameter Description Entry DN cn=Posix Winsync API Plugin,cn=plugins,cn=config Valid Range true | false Default Value false Example posixWinsyncMsSFUSchema: true 4.16. Retro Changelog Plug-in Attributes Two different types of changelogs are maintained by Directory Server. The first type, referred to as simply a changelog , is used by multi-supplier replication, and the second changelog, a plug-in referred to as the retro changelog , is intended for use by LDAP clients for maintaining application compatibility with Directory Server 4.x versions. This Retro Changelog Plug-in is used to record modifications made to a supplier server. When the supplier server's directory is modified, an entry is written to the Retro Changelog that contains both of the following: A number that uniquely identifies the modification. This number is sequential with respect to other entries in the changelog. The modification action; that is, exactly how the directory was modified. It is through the Retro Changelog Plug-in that the changes performed to the Directory Server are accessed using searches to cn=changelog suffix. 4.16.1. isReplicated This optional attribute sets a flag to indicate on a change in the changelog whether the change is newly made on that server or whether it was replicated over from another server. Parameter Description OID 2.16.840.1.113730.3.1.2085 Entry DN cn=Retro Changelog Plugin,cn=plugins,cn=config Valid Values true | false Default Value None Syntax Boolean Example isReplicated: true 4.16.2. nsslapd-attribute This attribute explicitly specifies another Directory Server attribute which must be included in the retro changelog entries. Many operational attributes and other types of attributes are commonly excluded from the retro changelog, but these attributes may need to be present for a third-party application to use the changelog data. This is done by listing the attribute in the retro changelog plug-in configuration using the nsslapd-attribute parameter. It is also possible to specify an optional alias for the specified attribute within the nsslapd-attribute value. Using an alias for the attribute can help avoid conflicts with other attributes in an external server or application which may use the retro changelog records. Note Setting the value of the nsslapd-attribute attribute to isReplicated is a way of indicating, in the retro changelog entry itself, whether the modification was done on the local server (that is, whether the change is an original change) or whether the change was replicated over to the server. Parameter Description Entry DN cn=Retro Changelog Plugin,cn=plugins,cn=config Valid Values Any valid directory attribute (standard or custom) Default Value None Syntax DirectoryString Example nsslapd-attribute: nsUniqueId: uniqueID 4.16.3. nsslapd-changelogdir This attribute specifies the name of the directory in which the changelog database is created the first time the plug-in is run. By default, the database is stored with all the other databases under /var/lib/dirsrv/slapd- instance /changelogdb . Note For performance reasons, store this database on a different physical disk. The server has to be restarted for changes to this attribute to go into effect. Parameter Description Entry DN cn=Retro Changelog Plugin,cn=plugins,cn=config Valid Values Any valid path to the directory Default Value None Syntax DirectoryString Example nsslapd-changelogdir: /var/lib/dirsrv/slapd- instance /changelogdb 4.16.4. 
nsslapd-changelogmaxage (Max Changelog Age) This attribute specifies the maximum age of any entry in the changelog. The changelog contains a record for each directory modification and is used when synchronizing consumer servers. Each record contains a timestamp. Any record with a timestamp that is older than the value specified in this attribute is removed. If the nsslapd-changelogmaxage attribute is absent, there is no age limit on changelog records. Note Expired changelog records are not removed if there is an agreement that has fallen behind further than the maximum age. Parameter Description Entry DN cn=Retro Changelog Plugin,cn=plugins,cn=config Valid Range 0 (meaning that entries are not removed according to their age) to the maximum 32 bit integer value (2147483647) Default Value 7d Syntax DirectoryString Integer AgeID AgeID is s (S) for seconds, m (M) for minutes, h (H) for hours, d (D) for days, w (W) for weeks. Example nsslapd-changelogmaxage: 30d 4.16.5. nsslapd-exclude-attrs The nsslapd-exclude-attrs parameter stores an attribute name to exclude from the retro changelog database. To exclude multiple attributes, add one nsslapd-exclude-attrs parameter for each attribute to exclude. Parameter Description Entry DN cn=Retro Changelog Plugin,cn=plugins,cn=config Valid Values Any valid attribute name Default Value None Syntax DirectoryString Example nsslapd-exclude-attrs: example 4.16.6. nsslapd-exclude-suffix The nsslapd-exclude-suffix parameter stores a suffix to exclude from the retro changelog database. You can add the parameter multiple times to exclude multiple suffixes. Parameter Description Entry DN cn=Retro Changelog Plugin,cn=plugins,cn=config Valid Values Any valid suffix Default Value None Syntax DirectoryString Example nsslapd-exclude-suffix: ou=demo,dc=example,dc=com 4.17. RootDN Access Control Plug-in Attributes The root DN, cn=Directory Manager, is a special user entry that is defined outside the normal user database. Normal access control rules are not applied to the root DN, but because of the powerful nature of the root user, it can be beneficial to apply some kind of access control rules to the root user. The RootDN Access Control Plug-in sets normal access controls - host and IP address restrictions, time-of-day restrictions, and day of week restrictions - on the root user. This plug-in is disabled by default. 4.17.1. rootdn-allow-host This sets what hosts, by fully-qualified domain name, the root user is allowed to use to access the Directory Server. Any hosts not listed are implicitly denied. Wild cards are allowed. This attribute can be used multiple times to specify multiple hosts, domains, or subdomains. Parameter Description Entry DN cn=RootDN Access Control Plugin,cn=plugins,cn=config Valid Range Any valid host name or domain, including asterisks (*) for wildcards Default Value None Syntax DirectoryString Example rootdn-allow-host: *.example.com 4.17.2. rootdn-allow-ip This sets the IP addresses, either IPv4 or IPv6, of the machines that the root user is allowed to use to access the Directory Server. Any IP addresses not listed are implicitly denied. Wild cards are allowed. This attribute can be used multiple times to specify multiple addresses, domains, or subnets. Parameter Description Entry DN cn=RootDN Access Control Plugin,cn=plugins,cn=config Valid Range Any valid IPv4 or IPv6 address, including asterisks (*) for wildcards Default Value None Syntax DirectoryString Example rootdn-allow-ip: 192.168.*.* 4.17.3.
rootdn-close-time This sets part of a time period or range when the root user is allowed to access the Directory Server. This sets when the time-based access ends , when the root user is no longer allowed to access the Directory Server. This is used in conjunction with the rootdn-open-time attribute. Parameter Description Entry DN cn=RootDN Access Control Plugin,cn=plugins,cn=config Valid Range Any valid time, in a 24-hour format Default Value None Syntax Integer Example rootdn-close-time: 1700 4.17.4. rootdn-days-allowed This gives a comma-separated list of the days on which the root user is allowed to access the Directory Server. Any days not listed are implicitly denied. This can be used with rootdn-close-time and rootdn-open-time to combine time-based access and day-of-week restrictions, or it can be used by itself (with all hours allowed on allowed days). Parameter Description Entry DN cn=RootDN Access Control Plugin,cn=plugins,cn=config Valid Values * Sun * Mon * Tue * Wed * Thu * Fri * Sat Default Value None Syntax DirectoryString Example rootdn-days-allowed: Mon, Tue, Wed, Thu, Fri 4.17.5. rootdn-deny-ip This sets the IP addresses, either IPv4 or IPv6, of the machines that the root user is not allowed to use to access the Directory Server. Any IP addresses not listed are implicitly allowed. Note Deny rules supersede allow rules, so if an IP address is listed in both the rootdn-allow-ip and rootdn-deny-ip attributes, it is denied access. Wild cards are allowed. This attribute can be used multiple times to specify multiple addresses, domains, or subnets. Parameter Description Entry DN cn=RootDN Access Control Plugin,cn=plugins,cn=config Valid Range Any valid IPv4 or IPv6 address, including asterisks (*) for wildcards Default Value None Syntax DirectoryString Example rootdn-deny-ip: 192.168.0.0 4.17.6. rootdn-open-time This sets part of a time period or range when the root user is allowed to access the Directory Server. This sets when the time-based access begins . This is used in conjunction with the rootdn-close-time attribute. Parameter Description Entry DN cn=RootDN Access Control Plugin,cn=plugins,cn=config Valid Range Any valid time, in a 24-hour format Default Value None Syntax Integer Example rootdn-open-time: 0800 4.18. Referential Integrity Plug-in Attributes Referential Integrity ensures that when you perform an update or remove operation on an entry in the directory, the server also updates information for entries that reference the removed or updated entry. For example, if a user's entry is removed from the directory and Referential Integrity is enabled, the server also removes the user from any groups where the user is a member. 4.18.1. nsslapd-pluginAllowReplUpdates Referential Integrity can be a very resource-demanding procedure. Therefore, if you configured multi-supplier replication, the Referential Integrity plug-in ignores replicated updates by default. However, sometimes it is not possible to enable the Referential Integrity plug-in, or the plug-in is not available. For example, one of the suppliers in your replication topology might be Active Directory, which does not support Referential Integrity (see the Windows Synchronization chapter for more details). In cases like this, you can allow the Referential Integrity plug-in on another supplier to process replicated updates by using the nsslapd-pluginAllowReplUpdates attribute. Important Only one supplier must have the nsslapd-pluginAllowReplUpdates attribute set to on in a multi-supplier replication topology.
Otherwise, it can lead to replication errors and requires a full initialization to fix the problem. On the other hand, the Referential Integrity plug-in must be enabled on all suppliers where possible. Parameter Description Entry DN cn=referential integrity postoperation,cn=plugins,cn=config Valid Values on/off Default Value off Syntax Boolean Example nsslapd-pluginAllowReplUpdates: on | [
"dn: cn=Telephone Syntax,cn=plugins,cn=config objectclass: top objectclass: nsSlapdPlugin objectclass: extensibleObject cn: Telephone Syntax nsslapd-pluginPath: libsyntax-plugin nsslapd-pluginInitfunc: tel_init nsslapd-pluginType: syntax nsslapd-pluginEnabled: on",
"dn:cn=ACL Plugin,cn=plugins,cn=config objectclass:top objectclass:nsSlapdPlugin objectclass:extensibleObject",
"ldapsearch -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x -b \"cn=Password Storage Schemes,cn=plugins,cn=config\" -s sub \"(objectclass=*)\" dn",
"(modifyTimestamp>=20200101010101Z)",
"nsslapd-cache-autosize: 10 nsslapd-cache-autosize-split: 40",
"nsslapd-cache-autosize: 10 nsslapd-cache-autosize-split: 40",
"dn:cn=aci,cn=index,cn=UserRoot,cn=ldbm database,cn=plugins,cn=config objectclass:top objectclass:nsIndex cn:aci nsSystemIndex:true nsIndexType:pres",
"abc*",
"*xyz",
"ab*z",
"dn:cn=userPassword,cn=encrypted attributes,o=UserRoot,cn=ldbm database, cn=plugins,cn=config objectclass:top objectclass:nsAttributeEncryption cn:userPassword nsEncryptionAlgorithm:AES",
"dn: cn=PAM Pass Through Auth,cn=plugins,cn=config objectClass: top objectClass: nsSlapdPlugin objectClass: extensibleObject objectClass: pamConfig cn: PAM Pass Through Auth nsslapd-pluginPath: libpam-passthru-plugin nsslapd-pluginInitfunc: pam_passthruauth_init nsslapd-pluginType: preoperation nsslapd-pluginEnabled: on nsslapd-pluginLoadGlobal: true nsslapd-plugin-depends-on-type: database nsslapd-pluginId: pam_passthruauth nsslapd-pluginVersion: 9.0.0 nsslapd-pluginVendor: Red Hat nsslapd-pluginDescription: PAM pass through authentication plugin dn: cn=Example PAM Config,cn=PAM Pass Through Auth,cn=plugins,cn=config objectClass: top objectClass: nsSlapdPlugin objectClass: extensibleObject objectClass: pamConfig cn: Example PAM Config pamMissingSuffix: ALLOW pamExcludeSuffix: cn=config pamIDMapMethod: RDN ou=people,dc=example,dc=com pamIDMapMethod: ENTRY ou=engineering,dc=example,dc=com pamIDAttr: customPamUid pamFilter: (manager=uid=bjensen,ou=people,dc=example,dc=com) pamFallback: FALSE pamSecure: TRUE pamService: ldapserver",
"pamIDMapMethod: RDN pamSecure: FALSE pamService: ldapserver",
"ldapmodify -D \"cn=Directory Manager\" -W -H ldap:// server.example.com -x dn: cn=Example PAM config entry,cn=PAM Pass Through Auth,cn=plugins,cn=config changetype: modify add: pamModuleIsThreadSafe pamModuleIsThreadSafe: on",
"dn: cn=Account Policy Plugin,cn=plugins,cn=config nsslapd-pluginarg0: cn=config,cn=Account Policy Plugin,cn=plugins,cn=config",
"dn: cn=config,cn=Account Policy Plugin,cn=plugins,cn=config objectClass: top objectClass: extensibleObject cn: config ... attributes for evaluating accounts alwaysRecordLogin: yes stateattrname: lastLoginTime altstateattrname: createTimestamp ... attributes for account policy entries specattrname: acctPolicySubentry limitattrname: accountInactivityLimit",
"dn: cn=AccountPolicy,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectClass: extensibleObject objectClass: accountpolicy 86400 seconds per day * 30 days = 2592000 seconds accountInactivityLimit: 2592000 cn: AccountPolicy",
"dn: uid=scarter,ou=people,dc=example,dc=com lastLoginTime: 20060527001051Z acctPolicySubentry: cn=AccountPolicy,dc=example,dc=com",
"dn: cn=Hostgroups,cn=Auto Membership Plugin,cn=plugins,cn=config objectclass: autoMemberDefinition cn: Hostgroups autoMemberScope: dc=example,dc=com autoMemberFilter: objectclass=ipHost autoMemberDefaultGroup: cn=systems,cn=hostgroups,ou=groups,dc=example,dc=com autoMemberGroupingAttr: member:dn",
"dn: cn=webservers,cn=Hostgroups,cn=Auto Membership Plugin,cn=plugins,cn=config objectclass: autoMemberRegexRule description: Group for webservers cn: webservers autoMemberTargetGroup: cn=webservers,cn=hostgroups,dc=example,dc=com autoMemberInclusiveRegex: fqdn=^www\\.web[0-9]+\\.example\\.com",
"member: uid=jsmith,ou=People,dc=example,dc=com",
"nsslapd-attribute: attribute : alias"
]
| https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/configuration_command_and_file_reference/plug_in_implemented_server_functionality_reference |
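For example, to apply the RootDN Access Control attributes described above, the following ldapmodify sketch enables the plug-in and restricts the root DN to business hours on hosts in one domain. This is a minimal example only: the host name and time values are placeholders to adapt to your environment, and enabling a plug-in normally requires a restart of the Directory Server instance to take effect.
# ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com -x
dn: cn=RootDN Access Control Plugin,cn=plugins,cn=config
changetype: modify
replace: nsslapd-pluginEnabled
nsslapd-pluginEnabled: on
-
add: rootdn-open-time
rootdn-open-time: 0800
-
add: rootdn-close-time
rootdn-close-time: 1700
-
add: rootdn-allow-host
rootdn-allow-host: *.example.com
After the restart, the root DN can bind only from hosts matching *.example.com and only between 08:00 and 17:00.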
Chapter 22. KafkaAuthorizationOpa schema reference | Chapter 22. KafkaAuthorizationOpa schema reference Used in: KafkaClusterSpec Full list of KafkaAuthorizationOpa schema properties To use Open Policy Agent authorization, set the type property in the authorization section to the value opa , and configure OPA properties as required. AMQ Streams uses the Open Policy Agent plugin for Kafka authorization as the authorizer. For more information about the format of the input data and policy examples, see Open Policy Agent plugin for Kafka authorization . 22.1. url The URL used to connect to the Open Policy Agent server. The URL has to include the policy which will be queried by the authorizer. Required. 22.2. allowOnError Defines whether a Kafka client should be allowed or denied by default when the authorizer fails to query the Open Policy Agent, for example, when it is temporarily unavailable. Defaults to false - all actions will be denied. 22.3. initialCacheCapacity Initial capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 5000 . 22.4. maximumCacheSize Maximum capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 50000 . 22.5. expireAfterMs The expiration of the records kept in the local cache to avoid querying the Open Policy Agent for every request. Defines how often the cached authorization decisions are reloaded from the Open Policy Agent server. In milliseconds. Defaults to 3600000 milliseconds (1 hour). 22.6. tlsTrustedCertificates Trusted certificates for TLS connection to the OPA server. 22.7. superUsers A list of user principals treated as super users, so that they are always allowed without querying the Open Policy Agent policy. An example of Open Policy Agent authorizer configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # ... authorization: type: opa url: http://opa:8181/v1/data/kafka/allow allowOnError: false initialCacheCapacity: 1000 maximumCacheSize: 10000 expireAfterMs: 60000 superUsers: - CN=fred - sam - CN=edward # ... 22.8. KafkaAuthorizationOpa schema properties The type property is a discriminator that distinguishes use of the KafkaAuthorizationOpa type from KafkaAuthorizationSimple , KafkaAuthorizationKeycloak , KafkaAuthorizationCustom . It must have the value opa for the type KafkaAuthorizationOpa . Property Description type Must be opa . string url The URL used to connect to the Open Policy Agent server. The URL has to include the policy which will be queried by the authorizer. This option is required. string allowOnError Defines whether a Kafka client should be allowed or denied by default when the authorizer fails to query the Open Policy Agent, for example, when it is temporarily unavailable. Defaults to false - all actions will be denied. boolean initialCacheCapacity Initial capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 5000 . integer maximumCacheSize Maximum capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 50000 . integer expireAfterMs The expiration of the records kept in the local cache to avoid querying the Open Policy Agent for every request. Defines how often the cached authorization decisions are reloaded from the Open Policy Agent server. In milliseconds. Defaults to 3600000 .
integer tlsTrustedCertificates Trusted certificates for TLS connection to the OPA server. CertSecretSource array superUsers List of super users, which is specifically a list of user principals that have unlimited access rights. string array enableMetrics Defines whether the Open Policy Agent authorizer plugin should provide metrics. Defaults to false . boolean | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # authorization: type: opa url: http://opa:8181/v1/data/kafka/allow allowOnError: false initialCacheCapacity: 1000 maximumCacheSize: 10000 expireAfterMs: 60000 superUsers: - CN=fred - sam - CN=edward #"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaAuthorizationOpa-reference |
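If you edit the Kafka custom resource as shown in the example above, a quick way to confirm that the Cluster Operator has picked up the new authorization settings is to apply the change and inspect the resource afterwards. This is a sketch only: the file name kafka-opa-authz.yaml is an assumption, while the cluster name my-cluster and namespace myproject come from the example configuration.
oc apply -f kafka-opa-authz.yaml -n myproject
oc wait kafka/my-cluster --for=condition=Ready --timeout=300s -n myproject
oc get kafka my-cluster -n myproject -o jsonpath='{.spec.kafka.authorization}'
The last command prints the authorization block back, which makes it easy to spot a mistyped url or cache setting before the brokers roll.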
Chapter 27. Performing advanced container image management | Chapter 27. Performing advanced container image management The default container image configuration suits most environments. In some situations, your container image configuration might require some customization, such as version pinning. 27.1. Pinning container images for the undercloud In certain circumstances, you might require a set of specific container image versions for your undercloud. In this situation, you must pin the images to a specific version. To pin your images, you must generate and modify a container configuration file, and then combine the undercloud roles data with the container configuration file to generate an environment file that contains a mapping of services to container images. Then include this environment file in the custom_env_files parameter in the undercloud.conf file. Procedure Log in to the undercloud host as the stack user. Run the openstack tripleo container image prepare default command with the --output-env-file option to generate a file that contains the default image configuration: Modify the undercloud-container-image-prepare.yaml file according to the requirements of your environment. Remove the tag: parameter so that director can use the tag_from_label: parameter. Director uses this parameter to identify the latest version of each container image, pull each image, and tag each image on the container registry in director. Remove the Ceph labels for the undercloud. Ensure that the neutron_driver: parameter is empty. Do not set this parameter to OVN because OVN is not supported on the undercloud. Include your container image registry credentials: Note You cannot push container images to the undercloud registry on new underclouds because the image-serve registry is not installed yet. You must set the push_destination value to false , or use a custom value, to pull images directly from source. For more information, see Container image preparation parameters . Generate a new container image configuration file that uses the undercloud roles file combined with your custom undercloud-container-image-prepare.yaml file: The undercloud-container-images.yaml file is an environment file that contains a mapping of service parameters to container images. For example, OpenStack Identity (keystone) uses the ContainerKeystoneImage parameter to define its container image: Note that the container image tag matches the {version}-{release} format. Include the undercloud-container-images.yaml file in the custom_env_files parameter in the undercloud.conf file. When you run the undercloud installation, the undercloud services use the pinned container image mapping from this file. 27.2. Pinning container images for the overcloud In certain circumstances, you might require a set of specific container image versions for your overcloud. In this situation, you must pin the images to a specific version. To pin your images, you must create the containers-prepare-parameter.yaml file, use this file to pull your container images to the undercloud registry, and generate an environment file that contains a pinned image list. For example, your containers-prepare-parameter.yaml file might contain the following content: The ContainerImagePrepare parameter contains a single rule set . This rule set must not include the tag parameter and must rely on the tag_from_label parameter to identify the latest version and release of each container image. 
Director uses this rule set to identify the latest version of each container image, pull each image, and tag each image on the container registry in director. Procedure Run the openstack tripleo container image prepare command, which pulls all images from the source defined in the containers-prepare-parameter.yaml file. Include the --output-env-file option to specify the output file that contains the list of pinned container images: The overcloud-images.yaml file is an environment file that contains a mapping of service parameters to container images. For example, OpenStack Identity (keystone) uses the ContainerKeystoneImage parameter to define its container image: ContainerKeystoneImage: undercloud.ctlplane.localdomain:8787/rhosp-rhel8/openstack-keystone:16.2.4-5 Note that the container image tag matches the {version}-{release} format. Include the containers-prepare-parameter.yaml and overcloud-images.yaml files in that specific order with your environment file collection when you run the openstack overcloud deploy command: The overcloud services use the pinned images listed in the overcloud-images.yaml file.
"sudo openstack tripleo container image prepare default --output-env-file undercloud-container-image-prepare.yaml",
"ContainerImageRegistryCredentials: registry.redhat.io myser: 'p@55w0rd!'",
"sudo openstack tripleo container image prepare -r /usr/share/openstack-tripleo-heat-templates/roles_data_undercloud.yaml -e undercloud-container-image-prepare.yaml --output-env-file undercloud-container-images.yaml",
"ContainerKeystoneImage: undercloud.ctlplane.localdomain:8787/rhosp-rhel8/openstack-keystone:16.2.4-5",
"parameter_defaults: ContainerImagePrepare: - push_destination: true set: name_prefix: openstack- name_suffix: '' namespace: registry.redhat.io/rhosp-rhel8 neutron_driver: ovn tag_from_label: '{version}-{release}' ContainerImageRegistryCredentials: registry.redhat.io: myuser: 'p@55w0rd!'",
"sudo openstack tripleo container image prepare -e /home/stack/templates/containers-prepare-parameter.yaml --output-env-file overcloud-images.yaml",
"ContainerKeystoneImage: undercloud.ctlplane.localdomain:8787/rhosp-rhel8/openstack-keystone:16.2.4-5",
"openstack overcloud deploy --templates -e /home/stack/containers-prepare-parameter.yaml -e /home/stack/overcloud-images.yaml"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/director_installation_and_usage/assembly_performing-advanced-container-image-management |
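After director generates the overcloud-images.yaml file, it can be useful to spot-check that a given service really resolved to the tag you expect before you run the deployment. The following is only a quick sanity check, using the keystone parameter from the example above; any other service parameter name works the same way, and the version prefix shown is an assumption that should match your target release.
grep ContainerKeystoneImage /home/stack/overcloud-images.yaml
grep -c ':16.2' /home/stack/overcloud-images.yaml
The first command shows the fully qualified image reference with its pinned {version}-{release} tag, and the second counts how many entries carry the expected version prefix.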
Chapter 10. Integrating a Camel application with the A-MQ broker | Chapter 10. Integrating a Camel application with the A-MQ broker This tutorial shows how to deploy a quickstart using the A-MQ image. 10.1. Building and deploying a Spring Boot Camel A-MQ quickstart This quickstart demonstrates how to connect a Spring Boot application to AMQ Broker and use JMS messaging between two Camel routes using Fuse on OpenShift. Prerequisites Ensure that AMQ Broker is installed and running as described in Deploying AMQ Broker on OpenShift . Ensure that OpenShift is running correctly and the Fuse image streams are already installed in OpenShift. See Getting Started for Administrators . Ensure that Maven repositories are configured for Fuse; see Configuring Maven Repositories . Procedure Log in to the OpenShift server as a developer. Create a new project for the quickstart, for example: Retrieve the quickstart project by using the Maven archetype: Navigate to the quickstart directory fuse713-spring-boot-camel-amq . Run the following commands to apply configuration files to AMQ Broker. These configuration files create the AMQ Broker user and the queue, both with admin privileges. Create the ConfigMap for the application, for example: Run the mvn command to deploy the quickstart to the OpenShift server by using the ImageStream from Step 3: To verify that the quickstart is running successfully: Navigate to the OpenShift web console in your browser ( https://OPENSHIFT_IP_ADDR , replace OPENSHIFT_IP_ADDR with the IP address of the cluster) and log in to the console with your credentials (for example, with the username developer and the password developer). In the left-hand side panel, expand Home and then click Status to view the Project Status page for the openshift project. Click fuse713-spring-boot-camel-amq to view the Overview information page for the quickstart. In the left-hand side panel, expand Workloads . Click Pods and then click fuse713-spring-boot-camel-amq-xxxxx . The pod details for the quickstart are displayed. Click Logs to view the logs for the application. The output shows that the messages are sent successfully. To view the routes on the web interface, click Open Java Console and check the messages in the AMQ queue.
"login -u developer -p developer",
"new-project quickstart",
"mvn org.apache.maven.plugins:maven-archetype-plugin:2.4:generate -DarchetypeCatalog=https://maven.repository.redhat.com/ga/io/fabric8/archetypes/archetypes-catalog/2.2.0.fuse-sb2-790047-redhat-00004/archetypes-catalog-2.2.0.fuse-sb2-790047-redhat-00004-archetype-catalog.xml -DarchetypeGroupId=org.jboss.fuse.fis.archetypes -DarchetypeArtifactId=spring-boot-camel-amq-archetype -DarchetypeVersion=2.2.0.fuse-sb2-790047-redhat-00004",
"cd fuse713-spring-boot-camel-amq",
"login -u admin -p admin apply -f src/main/resources/k8s",
"kind: ConfigMap apiVersion: v1 metadata: name: spring-boot-camel-amq-config namespace: quickstarts data: service.host: 'fuse-broker-amqps-0-svc' service.port.amqp: '5672' service.port.amqps: '5671'",
"mvn oc:deploy -Popenshift -Djkube.generator.fromMode=istag -Djkube.generator.from=openshift/fuse-java-openshift:1.13",
"10:17:59.825 [Camel (camel) thread #10 - timer://order] INFO generate-order-route - Generating order order1379.xml 10:17:59.829 [Camel (camel) thread #8 - JmsConsumer[incomingOrders]] INFO jms-cbr-route - Sending order order1379.xml to the UK 10:17:59.829 [Camel (camel) thread #8 - JmsConsumer[incomingOrders]] INFO jms-cbr-route - Done processing order1379.xml 10:18:02.825 [Camel (camel) thread #10 - timer://order] INFO generate-order-route - Generating order order1380.xml 10:18:02.829 [Camel (camel) thread #7 - JmsConsumer[incomingOrders]] INFO jms-cbr-route - Sending order order1380.xml to another country 10:18:02.829 [Camel (camel) thread #7 - JmsConsumer[incomingOrders]] INFO jms-cbr-route - Done processing order1380.xml"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/fuse_on_openshift_guide/integrate-camel-application-with-amq |
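If you prefer the command line to the web console for the verification step, you can follow the application logs directly with oc. The pod name below is a placeholder; list the pods first and substitute the generated name, assuming the quickstart project created earlier in the procedure.
oc project quickstart
oc get pods
oc logs -f fuse713-spring-boot-camel-amq-1-xxxxx
The same order-generation and routing messages shown in the console log viewer appear in the terminal.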
Appendix A. Configuring a Local Repository for Offline Red Hat Virtualization Manager Installation | Appendix A. Configuring a Local Repository for Offline Red Hat Virtualization Manager Installation To install Red Hat Virtualization Manager on a system that does not have a direct connection to the Content Delivery Network, download the required packages on a system that has internet access, then create a repository that can be shared with the offline Manager machine. The system hosting the repository must be connected to the same network as the client systems where the packages are to be installed. Prerequisites A Red Hat Enterprise Linux 8 Server installed on a system that has access to the Content Delivery Network. This system downloads all the required packages, and distributes them to your offline systems. A large amount of free disk space available. This procedure downloads a large number of packages, and requires up to 50GB of free disk space. Begin by enabling the Red Hat Virtualization Manager repositories on the online system: Enabling the Red Hat Virtualization Manager Repositories You need to log in and register the online machine with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable the Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: # subscription-manager register Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: # subscription-manager list --available Use the pool ID to attach the subscription to the system: # subscription-manager attach --pool= pool_id Note To view currently attached subscriptions: # subscription-manager list --consumed To list all enabled repositories: # dnf repolist Configure the repositories: # subscription-manager repos \ --disable='*' \ --enable=rhel-8-for-x86_64-baseos-eus-rpms \ --enable=rhel-8-for-x86_64-appstream-eus-rpms \ --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \ --enable=fast-datapath-for-rhel-8-x86_64-rpms \ --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms \ --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \ --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-tus-rpms \ --enable=rhel-8-for-x86_64-baseos-tus-rpms Set the RHEL version to 8.6: # subscription-manager release --set=8.6 Enable the pki-deps module. # dnf module -y enable pki-deps Enable version 12 of the postgresql module. # dnf module -y enable postgresql:12 Enable version 14 of the nodejs module: # dnf module -y enable nodejs:14 Synchronize installed packages to update them to the latest available versions. # dnf distro-sync --nobest Additional resources For information on modules and module streams, see the following sections in Installing, managing, and removing user-space components Module streams Selecting a stream before installation of packages Resetting module streams Switching to a later stream Configuring the Offline Repository Servers that are not connected to the Internet can access software repositories on other systems using File Transfer Protocol (FTP). 
To create the FTP repository, install and configure vsftpd on the intended Manager machine: Install the vsftpd package: # dnf install vsftpd Enable FTP access for an anonymous user to have access to the RPM files from the intended Manager machine and, to keep it secure, disable write access on the FTP server. Edit the /etc/vsftpd/vsftpd.conf file and change the values for anonymous_enable and write_enable as follows: anonymous_enable=YES write_enable=NO Start the vsftpd service, and ensure the service starts on boot: # systemctl start vsftpd.service # systemctl enable vsftpd.service Create a firewall rule to allow the FTP service and reload the firewalld service to apply the changes: # firewall-cmd --permanent --add-service=ftp # firewall-cmd --reload Red Hat Enterprise Linux 8 enforces SELinux by default, so configure SELinux to allow FTP access: # setsebool -P allow_ftpd_full_access=1 Create a sub-directory inside the /var/ftp/pub/ directory, where the downloaded packages are made available: # mkdir /var/ftp/pub/rhvrepo Download packages from all configured software repositories to the rhvrepo directory. This includes repositories for all Content Delivery Network subscription pools attached to the system, and any locally configured repositories: # reposync -p /var/ftp/pub/rhvrepo --download-metadata This command downloads a large number of packages and their metadata, and takes a long time to complete. Create a repository file, and copy it to the /etc/yum.repos.d/ directory on the intended Manager machine. You can create the configuration file manually or with a script. Run the script below on the machine hosting the repository, replacing ADDRESS in the baseurl with the IP address or FQDN of the machine hosting the repository: #!/bin/sh REPOFILE="/etc/yum.repos.d/rhev.repo" echo -e " " > USDREPOFILE for DIR in USD(find /var/ftp/pub/rhvrepo -maxdepth 1 -mindepth 1 -type d); do echo -e "[USD(basename USDDIR)]" >> USDREPOFILE echo -e "name=USD(basename USDDIR)" >> USDREPOFILE echo -e "baseurl=ftp://__ADDRESS__/pub/rhvrepo/`basename USDDIR`" >> USDREPOFILE echo -e "enabled=1" >> USDREPOFILE echo -e "gpgcheck=0" >> USDREPOFILE echo -e "\n" >> USDREPOFILE done Return to Configuring the Manager . Packages are installed from the local repository, instead of from the Content Delivery Network. Troubleshooting When running reposync , the following error message appears: No available modular metadata for modular package "package_name_from_module" it cannot be installed on the system Solution Ensure you have yum-utils-4.0.8-3.el8.noarch or higher installed so reposync correctly downloads all the packages. For more information, see Create a local repo with Red Hat Enterprise Linux 8 .
"subscription-manager register",
"subscription-manager list --available",
"subscription-manager attach --pool= pool_id",
"subscription-manager list --consumed",
"dnf repolist",
"subscription-manager repos --disable='*' --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-tus-rpms --enable=rhel-8-for-x86_64-baseos-tus-rpms",
"subscription-manager release --set=8.6",
"dnf module -y enable pki-deps",
"dnf module -y enable postgresql:12",
"dnf module -y enable nodejs:14",
"dnf distro-sync --nobest",
"dnf install vsftpd",
"anonymous_enable=YES write_enable=NO",
"systemctl start vsftpd.service systemctl enable vsftpd.service",
"firewall-cmd --permanent --add-service=ftp firewall-cmd --reload",
"setsebool -P allow_ftpd_full_access=1",
"mkdir /var/ftp/pub/rhvrepo",
"reposync -p /var/ftp/pub/rhvrepo --download-metadata",
"#!/bin/sh REPOFILE=\"/etc/yum.repos.d/rhev.repo\" echo -e \" \" > USDREPOFILE for DIR in USD(find /var/ftp/pub/rhvrepo -maxdepth 1 -mindepth 1 -type d); do echo -e \"[USD(basename USDDIR)]\" >> USDREPOFILE echo -e \"name=USD(basename USDDIR)\" >> USDREPOFILE echo -e \"baseurl=ftp://__ADDRESS__/pub/rhvrepo/`basename USDDIR`\" >> USDREPOFILE echo -e \"enabled=1\" >> USDREPOFILE echo -e \"gpgcheck=0\" >> USDREPOFILE echo -e \"\\n\" >> USDREPOFILE done"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_standalone_manager_with_remote_databases/configuring_an_offline_repository_for_red_hat_virtualization_manager_installation_sm_remotedb_deploy |
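Before starting the Manager installation, it is worth confirming from the offline Manager machine that the FTP repository is reachable and that dnf can read its metadata. This is a minimal check; replace ADDRESS with the IP address or FQDN used in the generated rhev.repo file.
# curl ftp://ADDRESS/pub/rhvrepo/
# dnf clean all
# dnf repolist
If the repository IDs generated by the script appear in the dnf repolist output, the Manager machine can install packages from the local repository instead of the Content Delivery Network.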
1.2. The Identity Management Domain | 1.2. The Identity Management Domain The Identity Management (IdM) domain consists of a group of machines that share the same configuration, policies, and identity stores. The shared properties allow the machines within the domain to be aware of each other and operate together. From the perspective of IdM, the domain includes the following types of machines: IdM servers, which work as domain controllers IdM clients, which are enrolled with the servers IdM servers are also IdM clients enrolled with themselves: server machines provide the same functionality as clients. IdM supports Red Hat Enterprise Linux machines as the IdM servers and clients. Note This guide describes using IdM in Linux environments. For more information on integration with Active Directory, see the Windows Integration Guide . 1.2.1. Identity Management Servers The IdM servers act as central repositories for identity and policy information. They also host the services used by domain members. IdM provides a set of management tools to manage all the IdM-associated services centrally: the IdM web UI and command-line utilities. For information on installing IdM servers, see Chapter 2, Installing and Uninstalling an Identity Management Server . To support redundancy and load balancing, the data and configuration can be replicated from one IdM server to another: a replica of the initial server. You can configure servers and their replicas to provide different services to clients. For more details on IdM replicas, see Chapter 4, Installing and Uninstalling Identity Management Replicas . 1.2.1.1. Services Hosted by IdM Servers Most of the following services are not strictly required to be installed on the IdM server. For example, services such as a certificate authority (CA), a DNS server, or a Network Time Protocol (NTP) server can be installed on an external server outside the IdM domain. Kerberos: krb5kdc and kadmin IdM uses the Kerberos protocol to support single sign-on. With Kerberos, users only need to present the correct username and password once and can access IdM services without the system prompting for credentials again. Kerberos is divided into two parts: The krb5kdc service is the Kerberos Authentication service and Key Distribution Center (KDC) daemon. The kadmin service is the Kerberos database administration program. For details on how Kerberos works, see the Using Kerberos in the System-Level Authentication Guide . For information on how to authenticate using Kerberos in IdM, see Section 5.2, "Logging into IdM Using Kerberos" . For information on managing Kerberos in IdM, see Chapter 29, Managing the Kerberos Domain . LDAP directory server: dirsrv The IdM internal LDAP directory server instance stores all IdM information, such as information related to Kerberos, user accounts, host entries, services, policies, DNS, and others. The LDAP directory server instance is based on the same technology as Red Hat Directory Server . However, it is tuned to IdM-specific tasks. Note This guide refers to this component as Directory Server. Certificate Authority: pki-tomcatd The integrated Certificate Authority (CA) is based on the same technology as Red Hat Certificate System . pki is the Command-Line Interface for accessing Certificate System services. For more details on installing an IdM server with different CA configurations, see Section 2.3.2, "Determining What CA Configuration to Use" . 
Note This guide refers to this component as Certificate System when addressing the implementation and as certificate authority when addressing the services provided by the implementation. For information relating to Red Hat Certificate System, a standalone Red Hat product, see Product Documentation for Red Hat Certificate System . Domain Name System (DNS): named IdM uses DNS for dynamic service discovery. The IdM client installation utility can use information from DNS to automatically configure the client machine. After the client is enrolled in the IdM domain, it uses DNS to locate IdM servers and services within the domain. The BIND (Berkeley Internet Name Domain) implementation of the DNS (Domain Name System) protocols in Red Hat Enterprise Linux includes the named DNS server. named-pkcs11 is a version of the BIND DNS server built with native support for the PKCS#11 cryptographic standard. For more information about service discovery, see Configuring DNS Service Discovery in the System-Level Authentication Guide . For more information on the DNS server, see BIND in the Red Hat Enterprise Linux Networking Guide . For information on using DNS with IdM and important prerequisites, see Section 2.1.5, "Host Name and DNS Configuration" . For details on installing an IdM server with or without integrated DNS, see Section 2.3.1, "Determining Whether to Use Integrated DNS" . Network Time Protocol: ntpd Many services require that servers and clients have the same system time, within a certain variance. For example, Kerberos tickets use time stamps to determine their validity and to prevent replay attacks. If the times between the server and client skew outside the allowed range, the Kerberos tickets are invalidated. By default, IdM uses the Network Time Protocol (NTP) to synchronize clocks over a network via the ntpd service. With NTP, a central server acts as an authoritative clock and the clients synchronize their times to match the server clock. The IdM server is configured as the NTP server for the IdM domain during the server installation process. Note Running an NTP server on an IdM server installed on a virtual machine can lead to inaccurate time synchronization in some environments. To avoid potential problems, do not run NTP on IdM servers installed on virtual machines. For more information on the reliability of an NTP server on a virtual machine, see this Knowledgebase solution . Apache HTTP Server: httpd The Apache HTTP web server provides the IdM Web UI, and also manages communication between the Certificate Authority and other IdM services. For more information, see The Apache HTTP Server in the System Administrator's Guide . Samba / Winbind: smb , winbind Samba implements the Server Message Block (SMB) protocol, also known as the Common Internet File System (CIFS) protocol, in Red Hat Enterprise Linux. Via the smb service, the SMB protocol enables you to access resources on a server, such as file shares and shared printers. If you have configured a Trust with an Active Directory (AD) environment, the Winbind service manages communication between IdM servers and AD servers. For more information, see Samba in the System Administrator's Guide . For more information, see Winbind in the System-Level Authentication Guide . One-time password (OTP) authentication: ipa-otpd One-time passwords (OTP) are passwords that are generated by an authentication token for only one session, as part of two-factor authentication.
OTP authentication is implemented in Red Hat Enterprise Linux via the ipa-otpd service. For more information about OTP authentication, see Section 22.3, "One-Time Passwords" . Custodia: ipa-custodia Custodia is a Secrets Service provider; it stores and shares access to secret material, such as passwords, keys, tokens, and certificates. OpenDNSSEC: ipa-dnskeysyncd OpenDNSSEC is a DNS manager that automates the process of keeping track of DNS security extensions (DNSSEC) keys and the signing of zones. The ipa-dnskeysyncd service manages synchronization between the IdM Directory Server and OpenDNSSEC. Figure 1.1. The Identity Management Server: Unifying Services 1.2.2. Identity Management Clients IdM clients are machines configured to operate within the IdM domain. They interact with the IdM servers to access domain resources. For example, they belong to the Kerberos domains configured on the servers, receive certificates and tickets issued by the servers, and use other centralized services for authentication and authorization. An IdM client does not require dedicated client software to interact as a part of the domain. It only requires proper system configuration of certain services and libraries, such as Kerberos or DNS. This configuration directs the client machine to use IdM services. For information on installing IdM clients, see Chapter 3, Installing and Uninstalling Identity Management Clients . 1.2.2.1. Services Hosted by IdM Clients System Security Services Daemon: sssd The System Security Services Daemon (SSSD) is the client-side application that manages user authentication and caches credentials. Caching enables the local system to continue normal authentication operations if the IdM server becomes unavailable or if the client goes offline. For more information, see Configuring SSSD in the System-Level Authentication Guide . SSSD also supports Windows Active Directory (AD). For more information about using SSSD with AD, see Using Active Directory as an Identity Provider for SSSD in the Windows Integration Guide . certmonger The certmonger service monitors and renews the certificates on the client. It can request new certificates for the services on the system. For more information, see Working with certmonger in the System-Level Authentication Guide . Figure 1.2. Interactions Between IdM Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/idm-domain |
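On a running IdM server, you can see most of the services described in this section in one place with the ipactl utility, and drill into an individual unit with systemctl. The following is only a sketch; the Directory Server instance name is a placeholder derived from the Kerberos realm, so adjust it to your own realm.
# ipactl status
# systemctl status krb5kdc kadmin httpd
# systemctl status dirsrv@EXAMPLE-COM.service
ipactl reports the state of the Directory Server, KDC, kadmin, httpd, and the other IdM services, which maps directly onto the list of server-hosted services above.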
Chapter 1. Deployments | Chapter 1. Deployments 1.1. Custom domains for applications Note Starting with Red Hat OpenShift Service on AWS 4.14, the Custom Domain Operator is deprecated. To manage Ingress in Red Hat OpenShift Service on AWS 4.14, use the Ingress Operator. The functionality is unchanged for Red Hat OpenShift Service on AWS 4.13 and earlier versions. You can configure a custom domain for your applications. Custom domains are specific wildcard domains that can be used with Red Hat OpenShift Service on AWS applications. 1.1.1. Configuring custom domains for applications The top-level domains (TLDs) are owned by the customer that is operating the Red Hat OpenShift Service on AWS cluster. The Custom Domains Operator sets up a new ingress controller with a custom certificate as a second day operation. The public DNS record for this ingress controller can then be used by an external DNS to create a wildcard CNAME record for use with a custom domain. Note Custom API domains are not supported because Red Hat controls the API domain. However, customers can change their application domains. For private custom domains with a private IngressController , set .spec.scope to Internal in the CustomDomain CR. Prerequisites A user account with dedicated-admin privileges A unique domain or wildcard domain, such as *.apps.<company_name>.io A custom certificate or wildcard custom certificate, such as CN=*.apps.<company_name>.io Access to a cluster with the latest version of the oc CLI installed Important Do not use the reserved names default or apps* , such as apps or apps2 , in the metadata/name: section of the CustomDomain CR. Procedure Create a new TLS secret from a private key and a public certificate, where fullchain.pem and privkey.pem are your public or private wildcard certificates. Example USD oc create secret tls <name>-tls --cert=fullchain.pem --key=privkey.pem -n <my_project> Create a new CustomDomain custom resource (CR): Example <company_name>-custom-domain.yaml apiVersion: managed.openshift.io/v1alpha1 kind: CustomDomain metadata: name: <company_name> spec: domain: apps.<company_name>.io 1 scope: External loadBalancerType: Classic 2 certificate: name: <name>-tls 3 namespace: <my_project> routeSelector: 4 matchLabels: route: acme namespaceSelector: 5 matchLabels: type: sharded 1 The custom domain. 2 The type of load balancer for your custom domain. This type can be the default classic or NLB if you use a network load balancer. 3 The secret created in the step. 4 Optional: Filters the set of routes serviced by the CustomDomain ingress. If no value is provided, the default is no filtering. 5 Optional: Filters the set of namespaces serviced by the CustomDomain ingress. If no value is provided, the default is no filtering. Apply the CR: Example USD oc apply -f <company_name>-custom-domain.yaml Get the status of your newly created CR: USD oc get customdomains Example output NAME ENDPOINT DOMAIN STATUS <company_name> xxrywp.<company_name>.cluster-01.opln.s1.openshiftapps.com *.apps.<company_name>.io Ready Using the endpoint value, add a new wildcard CNAME recordset to your managed DNS provider, such as Route53. 
Example *.apps.<company_name>.io -> xxrywp.<company_name>.cluster-01.opln.s1.openshiftapps.com Create a new application and expose it: Example USD oc new-app --docker-image=docker.io/openshift/hello-openshift -n my-project USD oc create route <route_name> --service=hello-openshift hello-openshift-tls --hostname hello-openshift-tls-my-project.apps.<company_name>.io -n my-project USD oc get route -n my-project USD curl https://hello-openshift-tls-my-project.apps.<company_name>.io Hello OpenShift! Troubleshooting Error creating TLS secret Troubleshooting: CustomDomain in NotReady state 1.1.2. Renewing a certificate for custom domains You can renew certificates with the Custom Domains Operator (CDO) by using the oc CLI tool. Prerequisites You have the latest version of the oc CLI tool installed. Procedure Create a new secret: USD oc create secret tls <secret-new> --cert=fullchain.pem --key=privkey.pem -n <my_project> Patch the CustomDomain CR: USD oc patch customdomain <company_name> --type='merge' -p '{"spec":{"certificate":{"name":"<secret-new>"}}}' Delete the old secret: USD oc delete secret <secret-old> -n <my_project> Troubleshooting Error creating TLS secret
"oc create secret tls <name>-tls --cert=fullchain.pem --key=privkey.pem -n <my_project>",
"apiVersion: managed.openshift.io/v1alpha1 kind: CustomDomain metadata: name: <company_name> spec: domain: apps.<company_name>.io 1 scope: External loadBalancerType: Classic 2 certificate: name: <name>-tls 3 namespace: <my_project> routeSelector: 4 matchLabels: route: acme namespaceSelector: 5 matchLabels: type: sharded",
"oc apply -f <company_name>-custom-domain.yaml",
"oc get customdomains",
"NAME ENDPOINT DOMAIN STATUS <company_name> xxrywp.<company_name>.cluster-01.opln.s1.openshiftapps.com *.apps.<company_name>.io Ready",
"*.apps.<company_name>.io -> xxrywp.<company_name>.cluster-01.opln.s1.openshiftapps.com",
"oc new-app --docker-image=docker.io/openshift/hello-openshift -n my-project",
"oc create route <route_name> --service=hello-openshift hello-openshift-tls --hostname hello-openshift-tls-my-project.apps.<company_name>.io -n my-project",
"oc get route -n my-project",
"curl https://hello-openshift-tls-my-project.apps.<company_name>.io Hello OpenShift!",
"oc create secret tls <secret-new> --cert=fullchain.pem --key=privkey.pem -n <my_project>",
"oc patch customdomain <company_name> --type='merge' -p '{\"spec\":{\"certificate\":{\"name\":\"<secret-new>\"}}}'",
"oc delete secret <secret-old> -n <my_project>"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/applications/deployments |
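Once the wildcard CNAME recordset has propagated, two quick checks help confirm that traffic for the custom domain will reach the new ingress controller: resolve a host under the custom domain and inspect the CustomDomain resource conditions. These commands are a sketch and reuse the placeholder names from the procedure above.
dig +short hello-openshift-tls-my-project.apps.<company_name>.io CNAME
oc describe customdomain <company_name>
The dig output should point at the endpoint recorded in the CustomDomain status, and the describe output shows the conditions that move the resource to the Ready state.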
Chapter 4. Specifics of Individual Software Collections | Chapter 4. Specifics of Individual Software Collections This chapter is focused on the specifics of certain Software Collections and provides additional details concerning these components. 4.1. Red Hat Developer Toolset Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. Red Hat Developer Toolset provides current versions of the GNU Compiler Collection , GNU Debugger , and other development, debugging, and performance monitoring tools. Similarly to other Software Collections, an additional set of tools is installed into the /opt/ directory. These tools are enabled by the user on demand using the supplied scl utility. Similarly to other Software Collections, these do not replace the Red Hat Enterprise Linux system versions of these tools, nor will they be used in preference to those system versions unless explicitly invoked using the scl utility. For an overview of features, refer to the Features section of the Red Hat Developer Toolset Release Notes . 4.2. Ruby on Rails 5.0 Red Hat Software Collections 3.4 provides the rh-ruby24 Software Collection together with the rh-ror50 Collection. To install Ruby on Rails 5.0 , type the following command as root : yum install rh-ror50 Installing any package from the rh-ror50 Software Collection automatically pulls in rh-ruby24 and rh-nodejs6 as dependencies. The rh-nodejs6 Collection is used by certain gems in an asset pipeline to post-process web resources, for example, sass or coffee-script source files. Additionally, the Action Cable framework uses rh-nodejs6 for handling WebSockets in Rails. To run the rails s command without requiring rh-nodejs6 , disable the coffee-rails and uglifier gems in the Gemfile . To run Ruby on Rails without Node.js , run the following command, which will automatically enable rh-ruby24 : scl enable rh-ror50 bash To run Ruby on Rails with all features, enable also the rh-nodejs6 Software Collection: scl enable rh-ror50 rh-nodejs6 bash The rh-ror50 Software Collection is supported together with the rh-ruby24 and rh-nodejs6 components. 4.3. MongoDB 3.6 The rh-mongodb36 Software Collection is available only for Red Hat Enterprise Linux 7. See Section 4.4, "MongoDB 3.4" for instructions on how to use MongoDB 3.4 on Red Hat Enterprise Linux 6. To install the rh-mongodb36 collection, type the following command as root : yum install rh-mongodb36 To run the MongoDB shell utility, type the following command: scl enable rh-mongodb36 'mongo' Note The rh-mongodb36-mongo-cxx-driver package has been built with the -std=gnu++14 option using GCC from Red Hat Developer Toolset 6. Binaries using the shared library for the MongoDB C++ Driver that use C++11 (or later) features have to be built also with Red Hat Developer Toolset 6 or later. See C++ compatibility details in the Red Hat Developer Toolset 6 User Guide . To start the MongoDB daemon, type the following command as root : systemctl start rh-mongodb36-mongod.service To start the MongoDB daemon on boot, type this command as root : systemctl enable rh-mongodb36-mongod.service To start the MongoDB sharding server, type the following command as root : systemctl start rh-mongodb36-mongos.service To start the MongoDB sharding server on boot, type this command as root : systemctl enable rh-mongodb36-mongos.service Note that the MongoDB sharding server does not work unless the user starts at least one configuration server and specifies it in the mongos.conf file. 4.4. 
MongoDB 3.4 To install the rh-mongodb34 collection, type the following command as root : yum install rh-mongodb34 To run the MongoDB shell utility, type the following command: scl enable rh-mongodb34 'mongo' Note The rh-mongodb34-mongo-cxx-driver package has been built with the -std=gnu++14 option using GCC from Red Hat Developer Toolset 6. Binaries using the shared library for the MongoDB C++ Driver that use C++11 (or later) features have to be built also with Red Hat Developer Toolset 6. See C++ compatibility details in the Red Hat Developer Toolset 6 User Guide . MongoDB 3.4 on Red Hat Enterprise Linux 6 If you are using Red Hat Enterprise Linux 6, the following instructions apply to your system. To start the MongoDB daemon, type the following command as root : service rh-mongodb34-mongod start To start the MongoDB daemon on boot, type this command as root : chkconfig rh-mongodb34-mongod on To start the MongoDB sharding server, type this command as root : service rh-mongodb34-mongos start To start the MongoDB sharding server on boot, type the following command as root : chkconfig rh-mongodb34-mongos on Note that the MongoDB sharding server does not work unless the user starts at least one configuration server and specifies it in the mongos.conf file. MongoDB 3.4 on Red Hat Enterprise Linux 7 When using Red Hat Enterprise Linux 7, the following commands are applicable. To start the MongoDB daemon, type the following command as root : systemctl start rh-mongodb34-mongod.service To start the MongoDB daemon on boot, type this command as root : systemctl enable rh-mongodb34-mongod.service To start the MongoDB sharding server, type the following command as root : systemctl start rh-mongodb34-mongos.service To start the MongoDB sharding server on boot, type this command as root : systemctl enable rh-mongodb34-mongos.service Note that the MongoDB sharding server does not work unless the user starts at least one configuration server and specifies it in the mongos.conf file. 4.5. Maven The rh-maven35 Software Collection, available only for Red Hat Enterprise Linux 7, provides a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting, and documentation from a central piece of information. To install the rh-maven36 Collection, type the following command as root : yum install rh-maven36 To enable this collection, type the following command at a shell prompt: scl enable rh-maven36 bash Global Maven settings, such as remote repositories or mirrors, can be customized by editing the /opt/rh/rh-maven36/root/etc/maven/settings.xml file. For more information about using Maven, refer to the Maven documentation . Usage of plug-ins is described in this section ; to find documentation regarding individual plug-ins, see the index of plug-ins . 4.6. Passenger The rh-passenger40 Software Collection provides Phusion Passenger , a web and application server designed to be fast, robust and lightweight. The rh-passenger40 Collection supports multiple versions of Ruby , particularly the ruby193 , ruby200 , and rh-ruby22 Software Collections together with Ruby on Rails using the ror40 or rh-ror41 Collections. Prior to using Passenger with any of the Ruby Software Collections, install the corresponding package from the rh-passenger40 Collection: the rh-passenger-ruby193 , rh-passenger-ruby200 , or rh-passenger-ruby22 package. 
The rh-passenger40 Software Collection can also be used with Apache httpd from the httpd24 Software Collection. To do so, install the rh-passenger40-mod_passenger package. Refer to the default configuration file /opt/rh/httpd24/root/etc/httpd/conf.d/passenger.conf for an example of Apache httpd configuration, which shows how to use multiple Ruby versions in a single Apache httpd instance. Additionally, the rh-passenger40 Software Collection can be used with the nginx 1.6 web server from the nginx16 Software Collection. To use nginx 1.6 with rh-passenger40 , you can run Passenger in Standalone mode using the following command in the web application's directory: scl enable nginx16 rh-passenger40 'passenger start' Alternatively, edit the nginx16 configuration files as described in the upstream Passenger documentation . 4.7. Database Connectors Database connector packages provide the database client functionality, which is necessary for local or remote connection to a database server. Table 4.1, "Interoperability Between Languages and Databases" lists Software Collections with language runtimes that include connectors for certain database servers: yes - the combination is supported no - the combination is not supported Table 4.1. Interoperability Between Languages and Databases Database Language (Software Collection) MariaDB MongoDB MySQL PostgreSQL Redis rh-nodejs4 no no no no no rh-nodejs6 no no no no no rh-nodejs8 no no no no no rh-nodejs10 no no no no no rh-nodejs12 no no no no no rh-perl520 yes no yes yes no rh-perl524 yes no yes yes no rh-perl526 yes no yes yes no rh-php56 yes yes yes yes no rh-php70 yes no yes yes no rh-php71 yes no yes yes no rh-php72 yes no yes yes no rh-php73 yes no yes yes no python27 yes yes yes yes no rh-python34 no yes no yes no rh-python35 yes yes yes yes no rh-python36 yes yes yes yes no rh-ror41 yes yes yes yes no rh-ror42 yes yes yes yes no rh-ror50 yes yes yes yes no rh-ruby25 yes yes yes yes no rh-ruby26 yes yes yes yes no | null | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.4_release_notes/chap-Individual_Collections |
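A quick way to see which of the collections discussed in this chapter are installed on a system, and to run a single command inside one of them without opening a new shell, is the scl utility itself. The collection names below are only examples, and depending on the scl-utils version the listing subcommand may be scl list-collections rather than scl --list.
scl --list
scl enable rh-mongodb36 'mongo --version'
scl enable rh-ror50 rh-nodejs6 'ruby --version'
The quoted command runs with the collection's environment enabled and then returns to the original shell, which is handy for spot checks without a persistent scl enable ... bash session.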
Chapter 1. Red Hat Software Collections 3.6 | Chapter 1. Red Hat Software Collections 3.6 This chapter serves as an overview of the Red Hat Software Collections 3.6 content set. It provides a list of components and their descriptions, sums up changes in this version, documents relevant compatibility information, and lists known issues. 1.1. About Red Hat Software Collections For certain applications, more recent versions of some software components are often needed in order to use their latest new features. Red Hat Software Collections is a Red Hat offering that provides a set of dynamic programming languages, database servers, and various related packages that are either more recent than their equivalent versions included in the base Red Hat Enterprise Linux system, or are available for this system for the first time. Red Hat Software Collections 3.6 is available for Red Hat Enterprise Linux 7; selected previously released components also for Red Hat Enterprise Linux 6. For a complete list of components that are distributed as part of Red Hat Software Collections and a brief summary of their features, see Section 1.2, "Main Features" . Red Hat Software Collections does not replace the default system tools provided with Red Hat Enterprise Linux 6 or Red Hat Enterprise Linux 7. Instead, a parallel set of tools is installed in the /opt/ directory and can be optionally enabled per application by the user using the supplied scl utility. The default versions of Perl or PostgreSQL, for example, remain those provided by the base Red Hat Enterprise Linux system. Note In Red Hat Enterprise Linux 8, similar components are provided as Application Streams . All Red Hat Software Collections components are fully supported under Red Hat Enterprise Linux Subscription Level Agreements, are functionally complete, and are intended for production use. Important bug fix and security errata are issued to Red Hat Software Collections subscribers in a similar manner to Red Hat Enterprise Linux for at least two years from the release of each major version. In each major release stream, each version of a selected component remains backward compatible. For detailed information about length of support for individual components, refer to the Red Hat Software Collections Product Life Cycle document. 1.1.1. Red Hat Developer Toolset Red Hat Developer Toolset is a part of Red Hat Software Collections, included as a separate Software Collection. For more information about Red Hat Developer Toolset, refer to the Red Hat Developer Toolset Release Notes and the Red Hat Developer Toolset User Guide . 1.2. Main Features Table 1.1, "Red Hat Software Collections Components" lists components that are supported at the time of the Red Hat Software Collections 3.6 release. Table 1.1. Red Hat Software Collections Components Component Software Collection Description Red Hat Developer Toolset 10.0 devtoolset-10 Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. It provides current versions of the GNU Compiler Collection , GNU Debugger , and other development, debugging, and performance monitoring tools. For a complete list of components, see the Red Hat Developer Toolset Components table in the Red Hat Developer Toolset User Guide . Perl 5.26.3 [a] rh-perl526 A release of Perl, a high-level programming language that is commonly used for system administration utilities and web programming. 
The rh-perl526 Software Collection provides additional utilities, scripts, and database connectors for MySQL and PostgreSQL . It includes the DateTime Perl module and the mod_perl Apache httpd module, which is supported only with the httpd24 Software Collection. Additionally, it provides the cpanm utility for easy installation of CPAN modules. The rh-perl526 packaging is aligned with upstream; the perl526-perl package also installs core modules, while the interpreter is provided by the perl-interpreter package. Perl 5.30.1 [a] rh-perl530 A release of Perl, a high-level programming language that is commonly used for system administration utilities and web programming. The rh-perl530 Software Collection provides additional utilities, scripts, and database connectors for MySQL , PostgreSQL , and SQLite . It includes the DateTime Perl module and the mod_perl Apache httpd module, which is supported only with the httpd24 Software Collection. Additionally, it provides the cpanm utility for easy installation of CPAN modules, the LWP::UserAgent module for communicating with HTTP servers, and the LWP::Protocol::https module for securing the communication. The rh-perl530 packaging is aligned with upstream; the perl530-perl package also installs core modules, while the interpreter is provided by the perl-interpreter package. PHP 7.3.20 [a] rh-php73 A release of PHP 7.3 with PEAR 1.10.9, APCu 5.1.17, and the Xdebug extension. Python 2.7.18 python27 A release of Python 2.7 with a number of additional utilities. This Python version provides various features and enhancements, including an ordered dictionary type, faster I/O operations, and improved forward compatibility with Python 3. The python27 Software Collection contains the Python 2.7.18 interpreter , a set of extension libraries useful for programming web applications and mod_wsgi (only supported with the httpd24 Software Collection), MySQL and PostgreSQL database connectors, and numpy and scipy . Python 3.8.6 [a] rh-python38 The rh-python38 Software Collection contains Python 3.8, which introduces new Python modules, such as contextvars , dataclasses , or importlib.resources , new language features, improved developer experience, and performance improvements . In addition, a set of popular extension libraries is provided, including mod_wsgi (supported only together with the httpd24 Software Collection), numpy , scipy , and the psycopg2 PostgreSQL database connector. Ruby 2.5.5 [a] rh-ruby25 A release of Ruby 2.5. This version provides multiple performance improvements and new features, for example, simplified usage of blocks with the rescue , else , and ensure keywords, a new yield_self method, support for branch coverage and method coverage measurement, new Hash#slice and Hash#transform_keys methods . Ruby 2.5.0 maintains source-level backward compatibility with Ruby 2.4. Ruby 2.6.2 [a] rh-ruby26 A release of Ruby 2.6. This version provides multiple performance improvements and new features, such as endless ranges, the Binding#source_location method, and the $SAFE process global state . Ruby 2.6.0 maintains source-level backward compatibility with Ruby 2.5. Ruby 2.7.1 [a] rh-ruby27 A release of Ruby 2.7. This version provides multiple performance improvements and new features, such as Compaction GC or command-line interface for the LALR(1) parser generator, and an enhancement to REPL. Ruby 2.7 maintains source-level backward compatibility with Ruby 2.6.
MariaDB 10.3.27 [a] rh-mariadb103 A release of MariaDB, an alternative to MySQL for users of Red Hat Enterprise Linux. For all practical purposes, MySQL is binary compatible with MariaDB and can be replaced with it without any data conversions. This version introduces system-versioned tables, invisible columns, a new instant ADD COLUMN operation for InnoDB , and a JDBC connector for MariaDB and MySQL . MongoDB 3.6.3 [a] rh-mongodb36 A release of MongoDB, a cross-platform document-oriented database system classified as a NoSQL database. This release introduces change streams, retryable writes, and JSON Schema , as well as other features. MySQL 8.0.21 [a] rh-mysql80 A release of the MySQL server, which introduces a number of new security and account management features and enhancements. PostgreSQL 10.15 [a] rh-postgresql10 A release of PostgreSQL, which includes a significant performance improvement and a number of new features, such as logical replication using the publish and subscribe keywords, or stronger password authentication based on the SCRAM-SHA-256 mechanism . PostgreSQL 12.5 [a] rh-postgresql12 A release of PostgreSQL, which provides the pgaudit extension, various enhancements to partitioning and parallelism, support for the SQL/JSON path language, and performance improvements. Node.js 10.21.0 [a] rh-nodejs10 A release of Node.js, which provides multiple API enhancements and new features, including V8 engine version 6.6, full N-API support , and stability improvements. Node.js 12.19.1 [a] rh-nodejs12 A release of Node.js with V8 engine version 7.6, support for ES6 modules, and improved support for native modules. Node.js 14.15.0 [a] rh-nodejs14 A release of Node.js with V8 version 8.3, a new experimental WebAssembly System Interface (WASI), and a new experimental Async Local Storage API. nginx 1.16.1 [a] rh-nginx116 A release of nginx, a web and proxy server with a focus on high concurrency, performance, and low memory usage. This version introduces numerous updates related to SSL, several new directives and parameters, and various enhancements. nginx 1.18.0 [a] rh-nginx118 A release of nginx, a web and proxy server with a focus on high concurrency, performance, and low memory usage. This version introduces enhancements to HTTP request rate and connection limiting, and a new auth_delay directive . In addition, support for new variables has been added to multiple directives. Apache httpd 2.4.34 [a] httpd24 A release of the Apache HTTP Server (httpd), including a high performance event-based processing model, enhanced SSL module and FastCGI support . The mod_auth_kerb , mod_auth_mellon , and ModSecurity modules are also included. Varnish Cache 5.2.1 [a] rh-varnish5 A release of Varnish Cache, a high-performance HTTP reverse proxy. This version includes the shard director, experimental HTTP/2 support, and improvements to Varnish configuration through separate VCL files and VCL labels. Varnish Cache 6.0.6 [a] rh-varnish6 A release of Varnish Cache, a high-performance HTTP reverse proxy. This version includes support for Unix Domain Sockets (both for clients and for back-end servers), new level of the VCL language ( vcl 4.1 ), and improved HTTP/2 support . Maven 3.6.1 [a] rh-maven36 A release of Maven, a software project management and comprehension tool. This release provides various enhancements and bug fixes. Git 2.18.4 [a] rh-git218 A release of Git, a distributed revision control system with a decentralized architecture. 
As opposed to centralized version control systems with a client-server model, Git ensures that each working copy of a Git repository is its exact copy with complete revision history. This version includes the Large File Storage (LFS) extension . Git 2.27.0 [a] rh-git227 A release of Git, a distributed revision control system with a decentralized architecture. This version introduces numerous enhancements; for example, the git checkout command split into git switch and git restore , and changed behavior of the git rebase command . In addition, Git Large File Storage (LFS) has been updated to version 2.11.0. Redis 5.0.5 [a] rh-redis5 A release of Redis 5.0, a persistent key-value database . Redis now provides redis-trib , a cluster management tool . HAProxy 1.8.24 [a] rh-haproxy18 A release of HAProxy 1.8, a reliable, high-performance network load balancer for TCP and HTTP-based applications. JDK Mission Control 7.1.1 [a] rh-jmc This Software Collection includes JDK Mission Control (JMC) , a powerful profiler for HotSpot JVMs. JMC provides an advanced set of tools for efficient and detailed analysis of extensive data collected by the JDK Flight Recorder. JMC requires JDK version 8 or later to run. Target Java applications must run with at least OpenJDK version 11 so that JMC can access JDK Flight Recorder features. The rh-jmc Software Collection requires the rh-maven35 Software Collection. [a] This Software Collection is available only for Red Hat Enterprise Linux 7 Previously released Software Collections remain available in the same distribution channels. All Software Collections, including retired components, are listed in the Table 1.2, "All Available Software Collections" . Software Collections that are no longer supported are marked with an asterisk ( * ). See the Red Hat Software Collections Product Life Cycle document for information on the length of support for individual components. For detailed information regarding previously released components, refer to the Release Notes for earlier versions of Red Hat Software Collections. Table 1.2. All Available Software Collections Component Software Collection Availability Architectures supported on RHEL7 Components New in Red Hat Software Collections 3.6 Red Hat Developer Toolset 10.0 devtoolset-10 RHEL7 x86_64, s390x, ppc64, ppc64le Git 2.27.0 rh-git227 RHEL7 x86_64, s390x, ppc64le nginx 1.18.0 rh-nginx118 RHEL7 x86_64, s390x, ppc64le Node.js 14.15.0 rh-nodejs14 RHEL7 x86_64, s390x, ppc64le Table 1.2. All Available Software Collections Components Updated in Red Hat Software Collections 3.6 Apache httpd 2.4.34 httpd24 RHEL7 x86_64, s390x, aarch64, ppc64le PHP 7.3.20 rh-php73 RHEL7 x86_64, s390x, aarch64, ppc64le HAProxy 1.8.24 rh-haproxy18 RHEL7 x86_64 Perl 5.30.1 rh-perl530 RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.5.5 rh-ruby25 RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.5 Red Hat Developer Toolset 9.1 devtoolset-9 RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le Python 3.8.6 rh-python38 RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.7.1 rh-ruby27 RHEL7 x86_64, s390x, aarch64, ppc64le JDK Mission Control 7.1.1 rh-jmc RHEL7 x86_64 Varnish Cache 6.0.6 rh-varnish6 RHEL7 x86_64, s390x, aarch64, ppc64le Apache httpd 2.4.34 (the last update for RHEL6) httpd24 (RHEL6)* RHEL6 x86_64 Table 1.2. 
All Available Software Collections Components Last Updated in Red Hat Software Collections 3.4 Node.js 12.19.1 rh-nodejs12 RHEL7 x86_64, s390x, aarch64, ppc64le nginx 1.16.1 rh-nginx116 RHEL7 x86_64, s390x, aarch64, ppc64le PostgreSQL 12.5 rh-postgresql12 RHEL7 x86_64, s390x, aarch64, ppc64le Maven 3.6.1 rh-maven36 RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.3 Red Hat Developer Toolset 8.1 devtoolset-8 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le MariaDB 10.3.27 rh-mariadb103 RHEL7 x86_64, s390x, aarch64, ppc64le Redis 5.0.5 rh-redis5 RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.6.2 rh-ruby26 RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.2 PHP 7.2.24 rh-php72 * RHEL7 x86_64, s390x, aarch64, ppc64le MySQL 8.0.21 rh-mysql80 RHEL7 x86_64, s390x, aarch64, ppc64le Node.js 10.21.0 rh-nodejs10 RHEL7 x86_64, s390x, aarch64, ppc64le nginx 1.14.1 rh-nginx114 * RHEL7 x86_64, s390x, aarch64, ppc64le Git 2.18.4 rh-git218 RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.1 Red Hat Developer Toolset 7.1 devtoolset-7 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le Perl 5.26.3 rh-perl526 RHEL7 x86_64, s390x, aarch64, ppc64le MongoDB 3.6.3 rh-mongodb36 RHEL7 x86_64, s390x, aarch64, ppc64le Varnish Cache 5.2.1 rh-varnish5 RHEL7 x86_64, s390x, aarch64, ppc64le PostgreSQL 10.15 rh-postgresql10 RHEL7 x86_64, s390x, aarch64, ppc64le PHP 7.0.27 rh-php70 * RHEL6, RHEL7 x86_64 MySQL 5.7.24 rh-mysql57 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.0 PHP 7.1.8 rh-php71 * RHEL7 x86_64, s390x, aarch64, ppc64le nginx 1.12.1 rh-nginx112 * RHEL7 x86_64, s390x, aarch64, ppc64le Python 3.6.12 rh-python36 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Maven 3.5.0 rh-maven35 * RHEL7 x86_64, s390x, aarch64, ppc64le MariaDB 10.2.22 rh-mariadb102 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le PostgreSQL 9.6.19 rh-postgresql96 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le MongoDB 3.4.9 rh-mongodb34 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Node.js 8.11.4 rh-nodejs8 * RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.4 Red Hat Developer Toolset 6.1 devtoolset-6 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le Scala 2.10.6 rh-scala210 * RHEL7 x86_64 nginx 1.10.2 rh-nginx110 * RHEL6, RHEL7 x86_64 Node.js 6.11.3 rh-nodejs6 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.4.6 rh-ruby24 * RHEL6, RHEL7 x86_64 Ruby on Rails 5.0.1 rh-ror50 * RHEL6, RHEL7 x86_64 Eclipse 4.6.3 rh-eclipse46 * RHEL7 x86_64 Python 2.7.18 python27 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Thermostat 1.6.6 rh-thermostat16 * RHEL6, RHEL7 x86_64 Maven 3.3.9 rh-maven33 * RHEL6, RHEL7 x86_64 Common Java Packages rh-java-common * RHEL6, RHEL7 x86_64 Table 1.2. 
All Available Software Collections Components Last Updated in Red Hat Software Collections 2.3 Git 2.9.3 rh-git29 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Redis 3.2.4 rh-redis32 * RHEL6, RHEL7 x86_64 Perl 5.24.0 rh-perl524 * RHEL6, RHEL7 x86_64 Python 3.5.1 rh-python35 * RHEL6, RHEL7 x86_64 MongoDB 3.2.10 rh-mongodb32 * RHEL6, RHEL7 x86_64 Ruby 2.3.8 rh-ruby23 * RHEL6, RHEL7 x86_64 PHP 5.6.25 rh-php56 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.2 Red Hat Developer Toolset 4.1 devtoolset-4 * RHEL6, RHEL7 x86_64 MariaDB 10.1.29 rh-mariadb101 * RHEL6, RHEL7 x86_64 MongoDB 3.0.11 upgrade collection rh-mongodb30upg * RHEL6, RHEL7 x86_64 Node.js 4.6.2 rh-nodejs4 * RHEL6, RHEL7 x86_64 PostgreSQL 9.5.14 rh-postgresql95 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.2.6 rh-ror42 * RHEL6, RHEL7 x86_64 MongoDB 2.6.9 rh-mongodb26 * RHEL6, RHEL7 x86_64 Thermostat 1.4.4 thermostat1 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.1 Varnish Cache 4.0.3 rh-varnish4 * RHEL6, RHEL7 x86_64 nginx 1.8.1 rh-nginx18 * RHEL6, RHEL7 x86_64 Node.js 0.10 nodejs010 * RHEL6, RHEL7 x86_64 Maven 3.0.5 maven30 * RHEL6, RHEL7 x86_64 V8 3.14.5.10 v8314 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.0 Red Hat Developer Toolset 3.1 devtoolset-3 * RHEL6, RHEL7 x86_64 Perl 5.20.1 rh-perl520 * RHEL6, RHEL7 x86_64 Python 3.4.2 rh-python34 * RHEL6, RHEL7 x86_64 Ruby 2.2.9 rh-ruby22 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.1.5 rh-ror41 * RHEL6, RHEL7 x86_64 MariaDB 10.0.33 rh-mariadb100 * RHEL6, RHEL7 x86_64 MySQL 5.6.40 rh-mysql56 * RHEL6, RHEL7 x86_64 PostgreSQL 9.4.14 rh-postgresql94 * RHEL6, RHEL7 x86_64 Passenger 4.0.50 rh-passenger40 * RHEL6, RHEL7 x86_64 PHP 5.4.40 php54 * RHEL6, RHEL7 x86_64 PHP 5.5.21 php55 * RHEL6, RHEL7 x86_64 nginx 1.6.2 nginx16 * RHEL6, RHEL7 x86_64 DevAssistant 0.9.3 devassist09 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 1 Git 1.9.4 git19 * RHEL6, RHEL7 x86_64 Perl 5.16.3 perl516 * RHEL6, RHEL7 x86_64 Python 3.3.2 python33 * RHEL6, RHEL7 x86_64 Ruby 1.9.3 ruby193 * RHEL6, RHEL7 x86_64 Ruby 2.0.0 ruby200 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.0.2 ror40 * RHEL6, RHEL7 x86_64 MariaDB 5.5.53 mariadb55 * RHEL6, RHEL7 x86_64 MongoDB 2.4.9 mongodb24 * RHEL6, RHEL7 x86_64 MySQL 5.5.52 mysql55 * RHEL6, RHEL7 x86_64 PostgreSQL 9.2.18 postgresql92 * RHEL6, RHEL7 x86_64 Legend: RHEL6 - Red Hat Enterprise Linux 6 RHEL7 - Red Hat Enterprise Linux 7 x86_64 - AMD64 and Intel 64 architectures s390x - 64-bit IBM Z aarch64 - The 64-bit ARM architecture ppc64 - IBM POWER, big endian ppc64le - IBM POWER, little endian * - Retired component; this Software Collection is no longer supported The tables above list the latest versions available through asynchronous updates. Note that Software Collections released in Red Hat Software Collections 2.0 and later include a rh- prefix in their names. Eclipse is available as a part of the Red Hat Developer Tools offering. 1.3. Changes in Red Hat Software Collections 3.6 1.3.1. Overview Architectures The Red Hat Software Collections offering contains packages for Red Hat Enterprise Linux 7 running on AMD64 and Intel 64 architectures; certain earlier Software Collections are available also for Red Hat Enterprise Linux 6. 
In addition, Red Hat Software Collections 3.6 supports the following architectures on Red Hat Enterprise Linux 7: 64-bit IBM Z IBM POWER, little endian For a full list of components and their availability, see Table 1.2, "All Available Software Collections" . New Software Collections Red Hat Software Collections 3.6 adds the following new Software Collections: devtoolset-10 - see Section 1.3.2, "Changes in Red Hat Developer Toolset" rh-git227 - see Section 1.3.3, "Changes in Git" rh-nginx118 - see Section 1.3.4, "Changes in nginx" rh-nodejs14 - see Section 1.3.5, "Changes in Node.js" All new Software Collections are available only for Red Hat Enterprise Linux 7. Updated Software Collections The following components have been updated in Red Hat Software Collections 3.6: httpd24 - see Section 1.3.6, "Changes in Apache httpd" rh-perl530 - see Section 1.3.7, "Changes in Perl" rh-php73 - see Section 1.3.8, "Changes in PHP" rh-haproxy18 - see Section 1.3.9, "Changes in HAProxy" rh-ruby25 - see Section 1.3.10, "Changes in Ruby" Red Hat Software Collections Container Images The following container images are new in Red Hat Software Collections 3.6: rhscl/devtoolset-10-toolchain-rhel7 rhscl/devtoolset-10-perftools-rhel7 rhscl/nginx-118-rhel7 rhscl/nodejs-14-rhel7 The following container images have been updated in Red Hat Software Collections 3.6: rhscl/httpd-24-rhel7 rhscl/php-73-rhel7 rhscl/perl-530-rhel7 rhscl/ruby-25-rhel7 For more information about Red Hat Software Collections container images, see Section 3.4, "Red Hat Software Collections Container Images" . 1.3.2. Changes in Red Hat Developer Toolset The following components have been upgraded in Red Hat Developer Toolset 10.0 compared to the previous release of Red Hat Developer Toolset: GCC to version 10.2.1 binutils to version 2.35 GDB to version 9.2 strace to version 5.7 SystemTap to version 4.3 OProfile to version 1.4.0 Valgrind to version 3.16.1 elfutils to version 0.180 annobin to version 9.23 For detailed information on changes in 10.0, see the Red Hat Developer Toolset User Guide . 1.3.3. Changes in Git The new rh-git227 Software Collection includes Git 2.27.0 , which provides numerous bug fixes and new features compared to the rh-git218 Collection released with Red Hat Software Collections 3.2. Notable changes in this release include: The git checkout command has been split into two separate commands: git switch for managing branches git restore for managing changes within the directory tree The behavior of the git rebase command is now based on the merge workflow by default rather than the patch+apply workflow. To preserve the previous behavior, set the rebase.backend configuration variable to apply . The git difftool command can now also be used outside a repository. Four new configuration variables, {author,committer}.{name,email} , have been introduced to override user.{name,email} in more specific cases. Several new options have been added that enable users to configure SSL for communication with proxies. Handling of commits with log messages in non-UTF-8 character encoding has been improved in the git fast-export and git fast-import utilities. Git Large File Storage (LFS) has been updated to version 2.11.0. For a detailed list of further enhancements, bug fixes, and backward compatibility notes related to Git 2.27.0 , see the upstream release notes . See also the Git manual page for version 2.27.0. 1.3.4.
Changes in nginx The new rh-nginx118 Software Collection introduces nginx 1.18.0 , which provides a number of bug and security fixes, new features and enhancements over version 1.16. Notable changes include: Enhancements to HTTP request rate and connection limiting have been implemented. For example, the limit_rate and limit_rate_after directives now support variables, including new USDlimit_req_status and USDlimit_conn_status variables. In addition, dry-run mode has been added for the limit_conn_dry_run and limit_req_dry_run directives. A new auth_delay directive has been added, which enables delayed processing of unauthorized requests. The following directives now support variables: grpc_pass , proxy_upload_rate , and proxy_download_rate . Additional PROXY protocol variables have been added, namely USDproxy_protocol_server_addr and USDproxy_protocol_server_port . rh-nginx118 uses the rh-perl530 Software Collection for Perl integration. For more information regarding changes in nginx , refer to the upstream release notes . For migration instructions, see Section 5.5, "Migrating to nginx 1.18" . 1.3.5. Changes in Node.js The new rh-nodejs14 Software Collection provides Node.js 14.15.0 , which is the most recent Long Term Support (LTS) version. Notable enhancements in this release include: The V8 engine has been upgraded to version 8.3. A new experimental WebAssembly System Interface (WASI) has been implemented. A new experimental Async Local Storage API has been introduced. The diagnostic report feature is now stable. The streams APIs have been hardened. Experimental modules warnings have been removed. Stability has been improved. For detailed changes in Node.js 14.15.0, see the upstream release notes and upstream documentation . 1.3.6. Changes in Apache httpd The httpd24 Software Collection has been updated to provide multiple security and bug fixes. In addition, the ProxyRemote configuration directive has been enhanced to optionally take username and password credentials, which are used to authenticate to the remote proxy using HTTP Basic authentication. This feature has been backported from httpd 2.5 . For details, see the upstream documentation . 1.3.7. Changes in Perl The rh-perl530-perl-CGI package has been added to the rh-perl530 Software Collection. The rh-perl530-perl-CGI package provides a Perl module that implements Common Gateway Interface (CGI) for running scripts written in the Perl language. 1.3.8. Changes in PHP The rh-php73 Software Collection has been updated to version 7.3.20, which provides multiple security and bug fixes. 1.3.9. Changes in HAProxy The rh-haproxy18 Software Collection has been updated with a bug fix. 1.3.10. Changes in Ruby The rh-ruby25 Software Collection has been updated with a bug fix. 1.4. Compatibility Information Red Hat Software Collections 3.6 is available for all supported releases of Red Hat Enterprise Linux 7 on AMD64 and Intel 64 architectures, 64-bit IBM Z, and IBM POWER, little endian. Certain previously released components are available also for the 64-bit ARM architecture. For a full list of available components, see Table 1.2, "All Available Software Collections" . 1.5. Known Issues rh-ruby27 component, BZ# 1836201 When a custom script requires the Psych YAML parser and afterwards uses the Gem.load_yaml method, running the script fails with the following error message: To work around this problem, add the gem 'psych' line to the script somewhere above the require 'psych' line: ... gem 'psych' ... 
require 'psych' Gem.load_yaml multiple components, BZ# 1716378 Certain files provided by the Software Collections debuginfo packages might conflict with the corresponding debuginfo package files from the base Red Hat Enterprise Linux system or from other versions of Red Hat Software Collections components. For example, the python27-python-debuginfo package files might conflict with the corresponding files from the python-debuginfo package installed on the core system. Similarly, files from the httpd24-mod_auth_mellon-debuginfo package might conflict with similar files provided by the base system mod_auth_mellon-debuginfo package. To work around this problem, uninstall the base system debuginfo package prior to installing the Software Collection debuginfo package. rh-mysql80 component, BZ# 1646363 The mysql-connector-java database connector does not work with the MySQL 8.0 server. To work around this problem, use the mariadb-java-client database connector from the rh-mariadb103 Software Collection. rh-mysql80 component, BZ# 1646158 The default character set has been changed to utf8mb4 in MySQL 8.0 but this character set is unsupported by the php-mysqlnd database connector. Consequently, php-mysqlnd fails to connect in the default configuration. To work around this problem, specify a known character set as a parameter of the MySQL server configuration. For example, modify the /etc/opt/rh/rh-mysql80/my.cnf.d/mysql-server.cnf file to read: httpd24 component, BZ# 1429006 Since httpd 2.4.27 , the mod_http2 module is no longer supported with the default prefork Multi-Processing Module (MPM). To enable HTTP/2 support, edit the configuration file at /opt/rh/httpd24/root/etc/httpd/conf.modules.d/00-mpm.conf and switch to the event or worker MPM. Note that the HTTP/2 server-push feature does not work on the 64-bit ARM architecture, 64-bit IBM Z, and IBM POWER, little endian. httpd24 component, BZ# 1327548 The mod_ssl module does not support the ALPN protocol on Red Hat Enterprise Linux 6, or on Red Hat Enterprise Linux 7.3 and earlier. Consequently, clients that support upgrading TLS connections to HTTP/2 only using ALPN are limited to HTTP/1.1 support. httpd24 component, BZ# 1224763 When using the mod_proxy_fcgi module with FastCGI Process Manager (PHP-FPM), httpd uses port 8000 for the FastCGI protocol by default instead of the correct port 9000 . To work around this problem, specify the correct port explicitly in configuration. httpd24 component, BZ# 1382706 When SELinux is enabled, the LD_LIBRARY_PATH environment variable is not passed through to CGI scripts invoked by httpd . As a consequence, in some cases it is impossible to invoke executables from Software Collections enabled in the /opt/rh/httpd24/service-environment file from CGI scripts run by httpd . To work around this problem, set LD_LIBRARY_PATH as desired from within the CGI script. httpd24 component Compiling external applications against the Apache Portable Runtime (APR) and APR-util libraries from the httpd24 Software Collection is not supported. The LD_LIBRARY_PATH environment variable is not set in httpd24 because it is not required by any application in this Software Collection. python27 component, BZ# 1330489 The python27-python-pymongo package has been updated to version 3.2.1. Note that this version is not fully compatible with the previously shipped version 2.5.2. 
scl-utils component In Red Hat Enterprise Linux 7.5 and earlier, due to an architecture-specific macro bug in the scl-utils package, the <collection>/root/usr/lib64/ directory does not have the correct package ownership on the 64-bit ARM architecture and on IBM POWER, little endian. As a consequence, this directory is not removed when a Software Collection is uninstalled. To work around this problem, manually delete <collection>/root/usr/lib64/ when removing a Software Collection. maven component When the user has installed both the Red Hat Enterprise Linux system version of maven-local package and the rh-maven*-maven-local package, XMvn , a tool used for building Java RPM packages, run from the Maven Software Collection tries to read the configuration file from the base system and fails. To work around this problem, uninstall the maven-local package from the base Red Hat Enterprise Linux system. perl component It is impossible to install more than one mod_perl.so library. As a consequence, it is not possible to use the mod_perl module from more than one Perl Software Collection. postgresql component The rh-postgresql9* packages for Red Hat Enterprise Linux 6 do not provide the sepgsql module as this feature requires installation of libselinux version 2.0.99, which is not available in Red Hat Enterprise Linux 6. httpd , mariadb , mongodb , mysql , nodejs , perl , php , python , ruby , and ror components, BZ# 1072319 When uninstalling the httpd24 , rh-mariadb* , rh-mongodb* , rh-mysql* , rh-nodejs* , rh-perl* , rh-php* , python27 , rh-python* , rh-ruby* , or rh-ror* packages, the order of uninstalling can be relevant due to ownership of dependent packages. As a consequence, some directories and files might not be removed properly and might remain on the system. mariadb , mysql components, BZ# 1194611 Since MariaDB 10 and MySQL 5.6 , the rh-mariadb*-mariadb-server and rh-mysql*-mysql-server packages no longer provide the test database by default. Although this database is not created during initialization, the grant tables are prefilled with the same values as when test was created by default. As a consequence, upon a later creation of the test or test_* databases, these databases have less restricted access rights than is default for new databases. Additionally, when running benchmarks, the run-all-tests script no longer works out of the box with example parameters. You need to create a test database before running the tests and specify the database name in the --database parameter. If the parameter is not specified, test is taken by default but you need to make sure the test database exist. mariadb , mysql , postgresql , mongodb components Red Hat Software Collections contains the MySQL 5.7 , MySQL 8.0 , MariaDB 10.2 , MariaDB 10.3 , PostgreSQL 9.6 , PostgreSQL 10 , PostgreSQL 12 , MongoDB 3.4 , and MongoDB 3.6 databases. The core Red Hat Enterprise Linux 6 provides earlier versions of the MySQL and PostgreSQL databases (client library and daemon). The core Red Hat Enterprise Linux 7 provides earlier versions of the MariaDB and PostgreSQL databases (client library and daemon). Client libraries are also used in database connectors for dynamic languages, libraries, and so on. The client library packaged in the Red Hat Software Collections database packages in the PostgreSQL component is not supposed to be used, as it is included only for purposes of server utilities and the daemon. Users are instead expected to use the system library and the database connectors provided with the core system. 
A protocol, which is used between the client library and the daemon, is stable across database versions, so, for example, using the PostgreSQL 9.2 client library with the PostgreSQL 9.4 or 9.5 daemon works as expected. The core Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 do not include the client library for MongoDB . In order to use this client library for your application, you should use the client library from Red Hat Software Collections and always use the scl enable ... call every time you run an application linked against this MongoDB client library. mariadb , mysql , mongodb components MariaDB, MySQL, and MongoDB do not make use of the /opt/ provider / collection /root prefix when creating log files. Note that log files are saved in the /var/opt/ provider / collection /log/ directory, not in /opt/ provider / collection /root/var/log/ . Other Notes rh-ruby* , rh-python* , rh-php* components Using Software Collections on a read-only NFS has several limitations. Ruby gems cannot be installed while the rh-ruby* Software Collection is on a read-only NFS. Consequently, for example, when the user tries to install the ab gem using the gem install ab command, an error message is displayed, for example: The same problem occurs when the user tries to update or install gems from an external source by running the bundle update or bundle install commands. When installing Python packages on a read-only NFS using the Python Package Index (PyPI), running the pip command fails with an error message similar to this: Installing packages from PHP Extension and Application Repository (PEAR) on a read-only NFS using the pear command fails with the error message: This is an expected behavior. httpd component Language modules for Apache are supported only with the Red Hat Software Collections version of Apache httpd and not with the Red Hat Enterprise Linux system versions of httpd . For example, the mod_wsgi module from the rh-python35 Collection can be used only with the httpd24 Collection. all components Since Red Hat Software Collections 2.0, configuration files, variable data, and runtime data of individual Collections are stored in different directories than in versions of Red Hat Software Collections. coreutils , util-linux , screen components Some utilities, for example, su , login , or screen , do not export environment settings in all cases, which can lead to unexpected results. It is therefore recommended to use sudo instead of su and set the env_keep environment variable in the /etc/sudoers file. Alternatively, you can run commands in a reverse order; for example: instead of When using tools like screen or login , you can use the following command to preserve the environment settings: source /opt/rh/<collection_name>/enable python component When the user tries to install more than one scldevel package from the python27 and rh-python* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_python , %scl_ prefix _python ). php component When the user tries to install more than one scldevel package from the rh-php* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_php , %scl_ prefix _php ). 
ruby component When the user tries to install more than one scldevel package from the rh-ruby* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_ruby , %scl_ prefix _ruby ). perl component When the user tries to install more than one scldevel package from the rh-perl* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_perl , %scl_ prefix _perl ). nginx component When the user tries to install more than one scldevel package from the rh-nginx* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_nginx , %scl_ prefix _nginx ). 1.6. Deprecated Functionality httpd24 component, BZ# 1434053 Previously, in an SSL/TLS configuration requiring name-based SSL virtual host selection, the mod_ssl module rejected requests with a 400 Bad Request error, if the host name provided in the Host: header did not match the host name provided in a Server Name Indication (SNI) header. Such requests are no longer rejected if the configured SSL/TLS security parameters are identical between the selected virtual hosts, in-line with the behavior of upstream mod_ssl . | [
"superclass mismatch for class Mark (TypeError)",
"gem 'psych' require 'psych' Gem.load_yaml",
"[mysqld] character-set-server=utf8",
"ERROR: While executing gem ... (Errno::EROFS) Read-only file system @ dir_s_mkdir - /opt/rh/rh-ruby22/root/usr/local/share/gems",
"Read-only file system: '/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/ipython-3.1.0.dist-info'",
"Cannot install, php_dir for channel \"pear.php.net\" is not writeable by the current user",
"su -l postgres -c \"scl enable rh-postgresql94 psql\"",
"scl enable rh-postgresql94 bash su -l postgres -c psql"
]
| https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.6_release_notes/chap-RHSCL |
Chapter 7. Ceph performance benchmark | Chapter 7. Ceph performance benchmark As a storage administrator, you can benchmark the performance of the Red Hat Ceph Storage cluster. The purpose of this section is to give Ceph administrators a basic understanding of Ceph's native benchmarking tools. These tools will provide some insight into how the Ceph storage cluster is performing. This is not the definitive guide to Ceph performance benchmarking, nor is it a guide on how to tune Ceph accordingly. 7.1. Prerequisites A running Red Hat Ceph Storage cluster. 7.2. Performance baseline The OSDs (including their journals), the disks, and the network throughput should each have a performance baseline to compare against. You can identify potential tuning opportunities by comparing the baseline performance data with the data from Ceph's native tools. Red Hat Enterprise Linux has many built-in tools, along with a plethora of open source community tools, available to help accomplish these tasks. Additional Resources For more details about some of the available tools, see this Knowledgebase article . 7.3. Benchmarking Ceph performance Ceph includes the rados bench command to do performance benchmarking on a RADOS storage cluster. The command will execute a write test and two types of read tests. The --no-cleanup option is important to use when testing both read and write performance. By default the rados bench command will delete the objects it has written to the storage pool. Leaving behind these objects allows the two read tests to measure sequential and random read performance. Note Before running these performance tests, drop all the file system caches by running the following: Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Create a new storage pool: Execute a write test for 10 seconds to the newly created storage pool: Example Output Execute a sequential read test for 10 seconds to the storage pool: Example Output Execute a random read test for 10 seconds to the storage pool: Example Output To increase the number of concurrent reads and writes, use the -t option, which defaults to 16 threads. Also, the -b parameter can adjust the size of the object being written. The default object size is 4 MB. A safe maximum object size is 16 MB. Red Hat recommends running multiple copies of these benchmark tests to different pools. Doing this shows the changes in performance from multiple clients. Add the --run-name <label> option to control the names of the objects that get written during the benchmark test. Multiple rados bench commands may be run simultaneously by changing the --run-name label for each running command instance. This prevents potential I/O errors that can occur when multiple clients are trying to access the same object and allows for different clients to access different objects. The --run-name option is also useful when trying to simulate a real-world workload. For example: Example Output Remove the data created by the rados bench command: 7.4. Benchmarking Ceph block performance Ceph includes the rbd bench-write command to test sequential writes to the block device, measuring throughput and latency. The default byte size is 4096, the default number of I/O threads is 16, and the default total number of bytes to write is 1 GB. These defaults can be modified by the --io-size , --io-threads and --io-total options respectively. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node.
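For reference, these defaults can be overridden directly on the command line. As a sketch only, a run against the testbench pool and image01 image created in the procedure below, using a larger I/O size and total (and assuming the usual size-suffix syntax is accepted for these options), might look like: rbd bench --io-type write image01 --pool=testbench --io-size 8192 --io-total 2G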
Procedure Load the rbd kernel module, if not already loaded: Create a 1 GB rbd image file in the testbench pool: Map the image file to a device file: Create an ext4 file system on the block device: Create a new directory: Mount the block device under /mnt/ceph-block-device/ : Execute the write performance test against the block device Example Additional Resources See the Block Device Commands section in the Red Hat Ceph Storage Block Device Guide for more information on the rbd command. | [
"echo 3 | sudo tee /proc/sys/vm/drop_caches && sudo sync",
"ceph osd pool create testbench 100 100",
"rados bench -p testbench 10 write --no-cleanup",
"Maintaining 16 concurrent writes of 4194304 bytes for up to 10 seconds or 0 objects Object prefix: benchmark_data_cephn1.home.network_10510 sec Cur ops started finished avg MB/s cur MB/s last lat avg lat 0 0 0 0 0 0 - 0 1 16 16 0 0 0 - 0 2 16 16 0 0 0 - 0 3 16 16 0 0 0 - 0 4 16 17 1 0.998879 1 3.19824 3.19824 5 16 18 2 1.59849 4 4.56163 3.87993 6 16 18 2 1.33222 0 - 3.87993 7 16 19 3 1.71239 2 6.90712 4.889 8 16 25 9 4.49551 24 7.75362 6.71216 9 16 25 9 3.99636 0 - 6.71216 10 16 27 11 4.39632 4 9.65085 7.18999 11 16 27 11 3.99685 0 - 7.18999 12 16 27 11 3.66397 0 - 7.18999 13 16 28 12 3.68975 1.33333 12.8124 7.65853 14 16 28 12 3.42617 0 - 7.65853 15 16 28 12 3.19785 0 - 7.65853 16 11 28 17 4.24726 6.66667 12.5302 9.27548 17 11 28 17 3.99751 0 - 9.27548 18 11 28 17 3.77546 0 - 9.27548 19 11 28 17 3.57683 0 - 9.27548 Total time run: 19.505620 Total writes made: 28 Write size: 4194304 Bandwidth (MB/sec): 5.742 Stddev Bandwidth: 5.4617 Max bandwidth (MB/sec): 24 Min bandwidth (MB/sec): 0 Average Latency: 10.4064 Stddev Latency: 3.80038 Max latency: 19.503 Min latency: 3.19824",
"# rados bench -p testbench 10 seq",
"sec Cur ops started finished avg MB/s cur MB/s last lat avg lat 0 0 0 0 0 0 - 0 Total time run: 0.804869 Total reads made: 28 Read size: 4194304 Bandwidth (MB/sec): 139.153 Average Latency: 0.420841 Max latency: 0.706133 Min latency: 0.0816332",
"rados bench -p testbench 10 rand",
"sec Cur ops started finished avg MB/s cur MB/s last lat avg lat 0 0 0 0 0 0 - 0 1 16 46 30 119.801 120 0.440184 0.388125 2 16 81 65 129.408 140 0.577359 0.417461 3 16 120 104 138.175 156 0.597435 0.409318 4 15 157 142 141.485 152 0.683111 0.419964 5 16 206 190 151.553 192 0.310578 0.408343 6 16 253 237 157.608 188 0.0745175 0.387207 7 16 287 271 154.412 136 0.792774 0.39043 8 16 325 309 154.044 152 0.314254 0.39876 9 16 362 346 153.245 148 0.355576 0.406032 10 16 405 389 155.092 172 0.64734 0.398372 Total time run: 10.302229 Total reads made: 405 Read size: 4194304 Bandwidth (MB/sec): 157.248 Average Latency: 0.405976 Max latency: 1.00869 Min latency: 0.0378431",
"rados bench -p testbench 10 write -t 4 --run-name client1",
"Maintaining 4 concurrent writes of 4194304 bytes for up to 10 seconds or 0 objects Object prefix: benchmark_data_node1_12631 sec Cur ops started finished avg MB/s cur MB/s last lat avg lat 0 0 0 0 0 0 - 0 1 4 4 0 0 0 - 0 2 4 6 2 3.99099 4 1.94755 1.93361 3 4 8 4 5.32498 8 2.978 2.44034 4 4 8 4 3.99504 0 - 2.44034 5 4 10 6 4.79504 4 2.92419 2.4629 6 3 10 7 4.64471 4 3.02498 2.5432 7 4 12 8 4.55287 4 3.12204 2.61555 8 4 14 10 4.9821 8 2.55901 2.68396 9 4 16 12 5.31621 8 2.68769 2.68081 10 4 17 13 5.18488 4 2.11937 2.63763 11 4 17 13 4.71431 0 - 2.63763 12 4 18 14 4.65486 2 2.4836 2.62662 13 4 18 14 4.29757 0 - 2.62662 Total time run: 13.123548 Total writes made: 18 Write size: 4194304 Bandwidth (MB/sec): 5.486 Stddev Bandwidth: 3.0991 Max bandwidth (MB/sec): 8 Min bandwidth (MB/sec): 0 Average Latency: 2.91578 Stddev Latency: 0.956993 Max latency: 5.72685 Min latency: 1.91967",
"rados -p testbench cleanup",
"modprobe rbd",
"rbd create image01 --size 1024 --pool testbench",
"rbd map image01 --pool testbench --name client.admin",
"mkfs.ext4 /dev/rbd/testbench/image01",
"mkdir /mnt/ceph-block-device",
"mount /dev/rbd/testbench/image01 /mnt/ceph-block-device",
"rbd bench --io-type write image01 --pool=testbench",
"bench-write io_size 4096 io_threads 16 bytes 1073741824 pattern seq SEC OPS OPS/SEC BYTES/SEC 2 11127 5479.59 22444382.79 3 11692 3901.91 15982220.33 4 12372 2953.34 12096895.42 5 12580 2300.05 9421008.60 6 13141 2101.80 8608975.15 7 13195 356.07 1458459.94 8 13820 390.35 1598876.60 9 14124 325.46 1333066.62 .."
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/administration_guide/ceph-performance-benchmarking |
4.143. libsemanage | 4.143. libsemanage 4.143.1. RHBA-2011:1770 - libsemanage bug fix update Updated libsemanage packages that fix file creation when umask is changed. The libsemanage library provides an API for the manipulation of SELinux binary policies. It is used by checkpolicy (the policy compiler) and similar tools, as well as by programs such as load_policy, which must perform specific transformations on binary policies (for example, customizing policy boolean settings). Bug Fix BZ# 747345 When running semanage commands while umask is set to 027 (or to a similar value that restricts a non-privileged user from reading files created with such a file-creation mask), semanage changed the permissions of certain files such as the /etc/selinux/mls/contexts/files/file_contexts file. As a consequence, non-privileged processes were not able to read such files and certain commands such as the restorecon command failed to run on these files. To solve this problem, libsemanage has been modified to save and clear umask before libsemanage creates context files and then restore it after the files are created so that the files are readable by non-privileged processes. Operations on these context files now work as expected. All users of libsemanage are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/libsemanage |
Chapter 4. Configuration information for Red Hat Quay | Chapter 4. Configuration information for Red Hat Quay Checking a configuration YAML can help identify and resolve various issues related to the configuration of Red Hat Quay. Checking the configuration YAML can help you address the following issues: Incorrect Configuration Parameters : If the database is not functioning as expected or is experiencing performance issues, your configuration parameters could be at fault. By checking the configuration YAML, administrators can ensure that all the required parameters are set correctly and match the intended settings for the database. Resource Limitations : The configuration YAML might specify resource limits for the database, such as memory and CPU limits. If the database is running into resource constraints or experiencing contention with other services, adjusting these limits can help optimize resource allocation and improve overall performance. Connectivity Issues : Incorrect network settings in the configuration YAML can lead to connectivity problems between the application and the database. Ensuring that the correct network configurations are in place can resolve issues related to connectivity and communication. Data Storage and Paths : The configuration YAML may include paths for storing data and logs. If the paths are misconfigured or inaccessible, the database may encounter errors while reading or writing data, leading to operational issues. Authentication and Security : The configuration YAML may contain authentication settings, including usernames, passwords, and access controls. Verifying these settings is crucial for maintaining the security of the database and ensuring only authorized users have access. Plugin and Extension Settings : Some databases support extensions or plugins that enhance functionality. Issues may arise if these plugins are misconfigured or not loaded correctly. Checking the configuration YAML can help identify any problems with plugin settings. Replication and High Availability Settings : In clustered or replicated database setups, the configuration YAML may define replication settings and high availability configurations. Incorrect settings can lead to data inconsistency and system instability. Backup and Recovery Options : The configuration YAML might include backup and recovery options, specifying how data backups are performed and how data can be recovered in case of failures. Validating these settings can ensure data safety and successful recovery processes. By checking your configuration YAML, Red Hat Quay administrators can detect and resolve these issues before they cause significant disruptions to the application or service relying on the database. 4.1. Obtaining configuration information for Red Hat Quay Configuration information can be obtained for all types of Red Hat Quay deployments, include standalone, Operator, and geo-replication deployments. Obtaining configuration information can help you resolve issues with authentication and authorization, your database, object storage, and repository mirroring. After you have obtained the necessary configuration information, you can update your config.yaml file, search the Red Hat Knowledgebase for a solution, or file a support ticket with the Red Hat Support team. Procedure To obtain configuration information on Red Hat Quay Operator deployments, you can use oc exec , oc cp , or oc rsync . 
To use the oc exec command, enter the following command: USD oc exec -it <quay_pod_name> -- cat /conf/stack/config.yaml This command returns your config.yaml file directly to your terminal. To use the oc copy command, enter the following commands: USD oc cp <quay_pod_name>:/conf/stack/config.yaml /tmp/config.yaml To display this information in your terminal, enter the following command: USD cat /tmp/config.yaml To use the oc rsync command, enter the following commands: oc rsync <quay_pod_name>:/conf/stack/ /tmp/local_directory/ To display this information in your terminal, enter the following command: USD cat /tmp/local_directory/config.yaml Example output DISTRIBUTED_STORAGE_CONFIG: local_us: - RHOCSStorage - access_key: redacted bucket_name: lht-quay-datastore-68fff7b8-1b5e-46aa-8110-c4b7ead781f5 hostname: s3.openshift-storage.svc.cluster.local is_secure: true port: 443 secret_key: redacted storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - local_us DISTRIBUTED_STORAGE_PREFERENCE: - local_us To obtain configuration information on standalone Red Hat Quay deployments, you can use podman cp or podman exec . To use the podman copy command, enter the following commands: USD podman cp <quay_container_id>:/conf/stack/config.yaml /tmp/local_directory/ To display this information in your terminal, enter the following command: USD cat /tmp/local_directory/config.yaml To use podman exec , enter the following commands: USD podman exec -it <quay_container_id> cat /conf/stack/config.yaml Example output BROWSER_API_CALLS_XHR_ONLY: false ALLOWED_OCI_ARTIFACT_TYPES: application/vnd.oci.image.config.v1+json: - application/vnd.oci.image.layer.v1.tar+zstd application/vnd.sylabs.sif.config.v1+json: - application/vnd.sylabs.sif.layer.v1+tar AUTHENTICATION_TYPE: Database AVATAR_KIND: local BUILDLOGS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 DATABASE_SECRET_KEY: 05ee6382-24a6-43c0-b30f-849c8a0f7260 DB_CONNECTION_ARGS: {} --- 4.2. Obtaining database configuration information You can obtain configuration information about your database by using the following procedure. Warning Interacting with the PostgreSQL database is potentially destructive. It is highly recommended that you perform the following procedure with the help of a Red Hat Quay Support Specialist. Procedure If you are using the Red Hat Quay Operator on OpenShift Container Platform, enter the following command: USD oc exec -it <database_pod> -- cat /var/lib/pgsql/data/userdata/postgresql.conf If you are using a standalone deployment of Red Hat Quay, enter the following command: USD podman exec -it <database_container> cat /var/lib/pgsql/data/userdata/postgresql.conf | [
"oc exec -it <quay_pod_name> -- cat /conf/stack/config.yaml",
"oc cp <quay_pod_name>:/conf/stack/config.yaml /tmp/config.yaml",
"cat /tmp/config.yaml",
"rsync <quay_pod_name>:/conf/stack/ /tmp/local_directory/",
"cat /tmp/local_directory/config.yaml",
"DISTRIBUTED_STORAGE_CONFIG: local_us: - RHOCSStorage - access_key: redacted bucket_name: lht-quay-datastore-68fff7b8-1b5e-46aa-8110-c4b7ead781f5 hostname: s3.openshift-storage.svc.cluster.local is_secure: true port: 443 secret_key: redacted storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - local_us DISTRIBUTED_STORAGE_PREFERENCE: - local_us",
"podman cp <quay_container_id>:/conf/stack/config.yaml /tmp/local_directory/",
"cat /tmp/local_directory/config.yaml",
"podman exec -it <quay_container_id> cat /conf/stack/config.yaml",
"BROWSER_API_CALLS_XHR_ONLY: false ALLOWED_OCI_ARTIFACT_TYPES: application/vnd.oci.image.config.v1+json: - application/vnd.oci.image.layer.v1.tar+zstd application/vnd.sylabs.sif.config.v1+json: - application/vnd.sylabs.sif.layer.v1+tar AUTHENTICATION_TYPE: Database AVATAR_KIND: local BUILDLOGS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 DATABASE_SECRET_KEY: 05ee6382-24a6-43c0-b30f-849c8a0f7260 DB_CONNECTION_ARGS: {} ---",
"oc exec -it <database_pod> -- cat /var/lib/pgsql/data/userdata/postgresql.conf",
"podman exec -it <database_container> cat /var/lib/pgsql/data/userdata/postgresql.conf"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/troubleshooting_red_hat_quay/obtaining-quay-config-information |
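The retrieval steps above can be collected into one small helper when the same configuration has to be pulled repeatedly or attached to a support case. The sketch below is only an illustration, not part of the Red Hat Quay documentation: the <quay_namespace> placeholder, the assumption that the application pod name contains "quay-app", and the list of masked fields are all assumptions to adjust for your own deployment.

#!/bin/bash
# Sketch: pull config.yaml from an Operator-managed Quay pod and mask obvious secrets.
# Assumptions: <quay_namespace> is your Quay namespace; the app pod name contains "quay-app".
NAMESPACE="<quay_namespace>"

# Pick the first pod whose name contains "quay-app"; adjust the filter if your pods differ.
POD="$(oc get pods -n "$NAMESPACE" -o name | grep quay-app | head -n 1)"

# Copy the configuration bundle out of the pod (no TTY needed when redirecting output).
oc -n "$NAMESPACE" exec "${POD#pod/}" -- cat /conf/stack/config.yaml > /tmp/config.yaml

# Mask common secret fields before sharing the file.
sed -E 's/^([[:space:]]*-?[[:space:]]*(secret_key|access_key|password|DATABASE_SECRET_KEY|SECRET_KEY):).*/\1 redacted/' \
    /tmp/config.yaml > /tmp/config.redacted.yaml

cat /tmp/config.redacted.yaml

The masking expression is deliberately conservative; review the redacted file before sharing it, because a deployment may store credentials under field names the pattern does not cover.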
Chapter 3. Understanding persistent storage | Chapter 3. Understanding persistent storage 3.1. Persistent storage overview Managing storage is a distinct problem from managing compute resources. OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use persistent volume claims (PVCs) to request PV resources without having specific knowledge of the underlying storage infrastructure. PVCs are specific to a project, and are created and used by developers as a means to use a PV. PV resources on their own are not scoped to any single project; they can be shared across the entire OpenShift Container Platform cluster and claimed from any project. After a PV is bound to a PVC, that PV can not then be bound to additional PVCs. This has the effect of scoping a bound PV to a single namespace, that of the binding project. PVs are defined by a PersistentVolume API object, which represents a piece of existing storage in the cluster that was either statically provisioned by the cluster administrator or dynamically provisioned using a StorageClass object. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes but have a lifecycle that is independent of any individual pod that uses the PV. PV objects capture the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system. Important High availability of storage in the infrastructure is left to the underlying storage provider. PVCs are defined by a PersistentVolumeClaim API object, which represents a request for storage by a developer. It is similar to a pod in that pods consume node resources and PVCs consume PV resources. For example, pods can request specific levels of resources, such as CPU and memory, while PVCs can request specific storage capacity and access modes. For example, they can be mounted once read-write or many times read-only. 3.2. Lifecycle of a volume and claim PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks to the resource. The interaction between PVs and PVCs have the following lifecycle. 3.2.1. Provision storage In response to requests from a developer defined in a PVC, a cluster administrator configures one or more dynamic provisioners that provision storage and a matching PV. Alternatively, a cluster administrator can create a number of PVs in advance that carry the details of the real storage that is available for use. PVs exist in the API and are available for use. 3.2.2. Bind claims When you create a PVC, you request a specific amount of storage, specify the required access mode, and create a storage class to describe and classify the storage. The control loop in the master watches for new PVCs and binds the new PVC to an appropriate PV. If an appropriate PV does not exist, a provisioner for the storage class creates one. The size of all PVs might exceed your PVC size. This is especially true with manually provisioned PVs. To minimize the excess, OpenShift Container Platform binds to the smallest PV that matches all other criteria. Claims remain unbound indefinitely if a matching volume does not exist or can not be created with any available provisioner servicing a storage class. Claims are bound as matching volumes become available. For example, a cluster with many manually provisioned 50Gi volumes would not match a PVC requesting 100Gi. 
The PVC can be bound when a 100Gi PV is added to the cluster. 3.2.3. Use pods and claimed PVs Pods use claims as volumes. The cluster inspects the claim to find the bound volume and mounts that volume for a pod. For those volumes that support multiple access modes, you must specify which mode applies when you use the claim as a volume in a pod. Once you have a claim and that claim is bound, the bound PV belongs to you for as long as you need it. You can schedule pods and access claimed PVs by including persistentVolumeClaim in the pod's volumes block. Note If you attach persistent volumes that have high file counts to pods, those pods can fail or can take a long time to start. For more information, see When using Persistent Volumes with high file counts in OpenShift, why do pods fail to start or take an excessive amount of time to achieve "Ready" state? . 3.2.4. Storage Object in Use Protection The Storage Object in Use Protection feature ensures that PVCs in active use by a pod and PVs that are bound to PVCs are not removed from the system, as this can result in data loss. Storage Object in Use Protection is enabled by default. Note A PVC is in active use by a pod when a Pod object exists that uses the PVC. If a user deletes a PVC that is in active use by a pod, the PVC is not removed immediately. PVC removal is postponed until the PVC is no longer actively used by any pods. Also, if a cluster admin deletes a PV that is bound to a PVC, the PV is not removed immediately. PV removal is postponed until the PV is no longer bound to a PVC. 3.2.5. Release a persistent volume When you are finished with a volume, you can delete the PVC object from the API, which allows reclamation of the resource. The volume is considered released when the claim is deleted, but it is not yet available for another claim. The claimant's data remains on the volume and must be handled according to policy. 3.2.6. Reclaim policy for persistent volumes The reclaim policy of a persistent volume tells the cluster what to do with the volume after it is released. A volume's reclaim policy can be Retain , Recycle , or Delete . Retain reclaim policy allows manual reclamation of the resource for those volume plugins that support it. Recycle reclaim policy recycles the volume back into the pool of unbound persistent volumes once it is released from its claim. Important The Recycle reclaim policy is deprecated in OpenShift Container Platform 4. Dynamic provisioning is recommended for equivalent and better functionality. Delete reclaim policy deletes both the PersistentVolume object from OpenShift Container Platform and the associated storage asset in external infrastructure, such as AWS EBS or VMware vSphere. Note Dynamically provisioned volumes are always deleted. 3.2.7. Reclaiming a persistent volume manually When a persistent volume claim (PVC) is deleted, the persistent volume (PV) still exists and is considered "released". However, the PV is not yet available for another claim because the data of the claimant remains on the volume. Procedure To manually reclaim the PV as a cluster administrator: Delete the PV. USD oc delete pv <pv-name> The associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume, still exists after the PV is deleted. Clean up the data on the associated storage asset. Delete the associated storage asset. Alternately, to reuse the same storage asset, create a new PV with the storage asset definition. 
The reclaimed PV is now available for use by another PVC. 3.2.8. Changing the reclaim policy of a persistent volume To change the reclaim policy of a persistent volume: List the persistent volumes in your cluster: USD oc get pv Example output NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s Choose one of your persistent volumes and change its reclaim policy: USD oc patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' Verify that your chosen persistent volume has the right policy: USD oc get pv Example output NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 3s In the preceding output, the volume bound to claim default/claim3 now has a Retain reclaim policy. The volume will not be automatically deleted when a user deletes claim default/claim3 . 3.3. Persistent volumes Each PV contains a spec and status , which is the specification and status of the volume, for example: PersistentVolume object definition example apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain 4 ... status: ... 1 Name of the persistent volume. 2 The amount of storage available to the volume. 3 The access mode, defining the read-write and mount permissions. 4 The reclaim policy, indicating how the resource should be handled once it is released. 3.3.1. Types of PVs OpenShift Container Platform supports the following persistent volume plugins: AliCloud Disk AWS Elastic Block Store (EBS) AWS Elastic File Store (EFS) Azure Disk Azure File Cinder Fibre Channel GCE Persistent Disk IBM VPC Block HostPath iSCSI Local volume NFS OpenStack Manila Red Hat OpenShift Data Foundation VMware vSphere 3.3.2. Capacity Generally, a persistent volume (PV) has a specific storage capacity. This is set by using the capacity attribute of the PV. Currently, storage capacity is the only resource that can be set or requested. Future attributes may include IOPS, throughput, and so on. 3.3.3. Access modes A persistent volume can be mounted on a host in any way supported by the resource provider. Providers have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read-write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV's capabilities. Claims are matched to volumes with similar access modes. The only two matching criteria are access modes and size. A claim's access modes represent a request. Therefore, you might be granted more, but never less. For example, if a claim requests RWO, but the only volume available is an NFS PV (RWO+ROX+RWX), the claim would then match NFS because it supports RWO. Direct matches are always attempted first. The volume's modes must match or contain more modes than you requested. 
The size must be greater than or equal to what is expected. If two types of volumes, such as NFS and iSCSI, have the same set of access modes, either of them can match a claim with those modes. There is no ordering between types of volumes and no way to choose one type over another. All volumes with the same modes are grouped, and then sorted by size, smallest to largest. The binder gets the group with matching modes and iterates over each, in size order, until one size matches. The following table lists the access modes: Table 3.1. Access modes Access Mode CLI abbreviation Description ReadWriteOnce RWO The volume can be mounted as read-write by a single node. ReadOnlyMany ROX The volume can be mounted as read-only by many nodes. ReadWriteMany RWX The volume can be mounted as read-write by many nodes. Important Volume access modes are descriptors of volume capabilities. They are not enforced constraints. The storage provider is responsible for runtime errors resulting from invalid use of the resource. For example, NFS offers ReadWriteOnce access mode. You must mark the claims as read-only if you want to use the volume's ROX capability. Errors in the provider show up at runtime as mount errors. iSCSI and Fibre Channel volumes do not currently have any fencing mechanisms. You must ensure the volumes are only used by one node at a time. In certain situations, such as draining a node, the volumes can be used simultaneously by two nodes. Before draining the node, first ensure the pods that use these volumes are deleted. Table 3.2. Supported access modes for PVs Volume plugin ReadWriteOnce [1] ReadOnlyMany ReadWriteMany AliCloud Disk ✅ - - AWS EBS [2] ✅ - - AWS EFS ✅ ✅ ✅ Azure File ✅ ✅ ✅ Azure Disk ✅ - - Cinder ✅ - - Fibre Channel ✅ ✅ ✅ [3] GCE Persistent Disk ✅ - - HostPath ✅ - - IBM VPC Disk ✅ - - iSCSI ✅ ✅ ✅ [3] Local volume ✅ - - NFS ✅ ✅ ✅ OpenStack Manila - - ✅ Red Hat OpenShift Data Foundation ✅ - ✅ VMware vSphere ✅ - ✅ [4] ReadWriteOnce (RWO) volumes cannot be mounted on multiple nodes. If a node fails, the system does not allow the attached RWO volume to be mounted on a new node because it is already assigned to the failed node. If you encounter a multi-attach error message as a result, force delete the pod on a shutdown or crashed node to avoid data loss in critical workloads, such as when dynamic persistent volumes are attached. Use a recreate deployment strategy for pods that rely on AWS EBS. Only raw block volumes support the ReadWriteMany (RWX) access mode for Fibre Channel and iSCSI. For more information, see "Block volume support". If the underlying vSphere environment supports the vSAN file service, then the vSphere Container Storage Interface (CSI) Driver Operator installed by OpenShift Container Platform supports provisioning of ReadWriteMany (RWX) volumes. If you do not have vSAN file service configured, and you request RWX, the volume fails to get created and an error is logged. For more information, see "Using Container Storage Interface" "VMware vSphere CSI Driver Operator". 3.3.4. Phase Volumes can be found in one of the following phases: Table 3.3. Volume phases Phase Description Available A free resource not yet bound to a claim. Bound The volume is bound to a claim. Released The claim was deleted, but the resource is not yet reclaimed by the cluster. Failed The volume has failed its automatic reclamation. You can view the name of the PVC bound to the PV by running: USD oc get pv <pv-claim> 3.3.4.1. 
Mount options You can specify mount options while mounting a PV by using the attribute mountOptions . For example: Mount options example apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce mountOptions: 1 - nfsvers=4.1 nfs: path: /tmp server: 172.17.0.2 persistentVolumeReclaimPolicy: Retain claimRef: name: claim1 namespace: default 1 Specified mount options are used while mounting the PV to the disk. The following PV types support mount options: AWS Elastic Block Store (EBS) Azure Disk Azure File Cinder GCE Persistent Disk iSCSI Local volume NFS Red Hat OpenShift Data Foundation (Ceph RBD only) VMware vSphere Note Fibre Channel and HostPath PVs do not support mount options. Additional resources ReadWriteMany vSphere volume support 3.4. Persistent volume claims Each PersistentVolumeClaim object contains a spec and status , which is the specification and status of the persistent volume claim (PVC), for example: PersistentVolumeClaim object definition example kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 8Gi 3 storageClassName: gold 4 status: ... 1 Name of the PVC 2 The access mode, defining the read-write and mount permissions 3 The amount of storage available to the PVC 4 Name of the StorageClass required by the claim 3.4.1. Storage classes Claims can optionally request a specific storage class by specifying the storage class's name in the storageClassName attribute. Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC. The cluster administrator can configure dynamic provisioners to service one or more storage classes. The cluster administrator can create a PV on demand that matches the specifications in the PVC. Important The Cluster Storage Operator might install a default storage class depending on the platform in use. This storage class is owned and controlled by the operator. It cannot be deleted or modified beyond defining annotations and labels. If different behavior is desired, you must define a custom storage class. The cluster administrator can also set a default storage class for all PVCs. When a default storage class is configured, the PVC must explicitly ask for StorageClass or storageClassName annotations set to "" to be bound to a PV without a storage class. Note If more than one storage class is marked as default, a PVC can only be created if the storageClassName is explicitly specified. Therefore, only one storage class should be set as the default. 3.4.2. Access modes Claims use the same conventions as volumes when requesting storage with specific access modes. 3.4.3. Resources Claims, such as pods, can request specific quantities of a resource. In this case, the request is for storage. The same resource model applies to volumes and claims. 3.4.4. Claims as volumes Pods access storage by using the claim as a volume. Claims must exist in the same namespace as the pod using the claim. The cluster finds the claim in the pod's namespace and uses it to get the PersistentVolume backing the claim. 
The volume is mounted to the host and into the pod, for example: Mount volume to the host and into the pod example kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: "/var/www/html" 1 name: mypd 2 volumes: - name: mypd persistentVolumeClaim: claimName: myclaim 3 1 Path to mount the volume inside the pod. 2 Name of the volume to mount. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 3 Name of the PVC, that exists in the same namespace, to use. 3.5. Block volume support OpenShift Container Platform can statically provision raw block volumes. These volumes do not have a file system, and can provide performance benefits for applications that either write to the disk directly or implement their own storage service. Raw block volumes are provisioned by specifying volumeMode: Block in the PV and PVC specification. Important Pods using raw block volumes must be configured to allow privileged containers. The following table displays which volume plugins support block volumes. Table 3.4. Block volume support Volume Plugin Manually provisioned Dynamically provisioned Fully supported AliCloud Disk ✅ ✅ ✅ AWS EBS ✅ ✅ ✅ AWS EFS Azure Disk ✅ ✅ ✅ Azure File Cinder ✅ ✅ ✅ Fibre Channel ✅ ✅ GCP ✅ ✅ ✅ HostPath IBM VPC Disk ✅ ✅ ✅ iSCSI ✅ ✅ Local volume ✅ ✅ NFS Red Hat OpenShift Data Foundation ✅ ✅ ✅ VMware vSphere ✅ ✅ ✅ Important Using any of the block volumes that can be provisioned manually, but are not provided as fully supported, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 3.5.1. Block volume examples PV example apiVersion: v1 kind: PersistentVolume metadata: name: block-pv spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce volumeMode: Block 1 persistentVolumeReclaimPolicy: Retain fc: targetWWNs: ["50060e801049cfd1"] lun: 0 readOnly: false 1 volumeMode must be set to Block to indicate that this PV is a raw block volume. PVC example apiVersion: v1 kind: PersistentVolumeClaim metadata: name: block-pvc spec: accessModes: - ReadWriteOnce volumeMode: Block 1 resources: requests: storage: 10Gi 1 volumeMode must be set to Block to indicate that a raw block PVC is requested. Pod specification example apiVersion: v1 kind: Pod metadata: name: pod-with-block-volume spec: containers: - name: fc-container image: fedora:26 command: ["/bin/sh", "-c"] args: [ "tail -f /dev/null" ] volumeDevices: 1 - name: data devicePath: /dev/xvda 2 volumes: - name: data persistentVolumeClaim: claimName: block-pvc 3 1 volumeDevices , instead of volumeMounts , is used for block devices. Only PersistentVolumeClaim sources can be used with raw block volumes. 2 devicePath , instead of mountPath , represents the path to the physical device where the raw block is mapped to the system. 
3 The volume source must be of type persistentVolumeClaim and must match the name of the PVC as expected. Table 3.5. Accepted values for volumeMode Value Default Filesystem Yes Block No Table 3.6. Binding scenarios for block volumes PV volumeMode PVC volumeMode Binding result Filesystem Filesystem Bind Unspecified Unspecified Bind Filesystem Unspecified Bind Unspecified Filesystem Bind Block Block Bind Unspecified Block No Bind Block Unspecified No Bind Filesystem Block No Bind Block Filesystem No Bind Important Unspecified values result in the default value of Filesystem . 3.6. Using fsGroup to reduce pod timeouts If a storage volume contains many files (~1,000,000 or greater), you may experience pod timeouts. This can occur because, by default, OpenShift Container Platform recursively changes ownership and permissions for the contents of each volume to match the fsGroup specified in a pod's securityContext when that volume is mounted. For large volumes, checking and changing ownership and permissions can be time consuming, slowing pod startup. You can use the fsGroupChangePolicy field inside a securityContext to control the way that OpenShift Container Platform checks and manages ownership and permissions for a volume. fsGroupChangePolicy defines behavior for changing ownership and permission of the volume before being exposed inside a pod. This field only applies to volume types that support fsGroup -controlled ownership and permissions. This field has two possible values: OnRootMismatch : Only change permissions and ownership if permission and ownership of root directory does not match with expected permissions of the volume. This can help shorten the time it takes to change ownership and permission of a volume to reduce pod timeouts. Always : Always change permission and ownership of the volume when a volume is mounted. fsGroupChangePolicy example securityContext: runAsUser: 1000 runAsGroup: 3000 fsGroup: 2000 fsGroupChangePolicy: "OnRootMismatch" 1 ... 1 OnRootMismatch specifies skipping recursive permission change, thus helping to avoid pod timeout problems. Note The fsGroupChangePolicyfield has no effect on ephemeral volume types, such as secret, configMap, and emptydir. | [
"oc delete pv <pv-name>",
"oc get pv",
"NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s",
"oc patch pv <your-pv-name> -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'",
"oc get pv",
"NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 3s",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain 4 status:",
"oc get pv <pv-claim>",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce mountOptions: 1 - nfsvers=4.1 nfs: path: /tmp server: 172.17.0.2 persistentVolumeReclaimPolicy: Retain claimRef: name: claim1 namespace: default",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 8Gi 3 storageClassName: gold 4 status:",
"kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: \"/var/www/html\" 1 name: mypd 2 volumes: - name: mypd persistentVolumeClaim: claimName: myclaim 3",
"apiVersion: v1 kind: PersistentVolume metadata: name: block-pv spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce volumeMode: Block 1 persistentVolumeReclaimPolicy: Retain fc: targetWWNs: [\"50060e801049cfd1\"] lun: 0 readOnly: false",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: block-pvc spec: accessModes: - ReadWriteOnce volumeMode: Block 1 resources: requests: storage: 10Gi",
"apiVersion: v1 kind: Pod metadata: name: pod-with-block-volume spec: containers: - name: fc-container image: fedora:26 command: [\"/bin/sh\", \"-c\"] args: [ \"tail -f /dev/null\" ] volumeDevices: 1 - name: data devicePath: /dev/xvda 2 volumes: - name: data persistentVolumeClaim: claimName: block-pvc 3",
"securityContext: runAsUser: 1000 runAsGroup: 3000 fsGroup: 2000 fsGroupChangePolicy: \"OnRootMismatch\" 1"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/storage/understanding-persistent-storage |
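Section 3.4.1 above explains that a claim selects a storage class through storageClassName, but no StorageClass manifest appears in the chapter. The following is a minimal sketch rather than an excerpt from the documentation: it reuses the gold and myclaim names from the claim example above, while the provisioner (the AWS EBS CSI driver) and its parameters are assumptions that must be replaced with whatever your cluster actually offers.

# Sketch: an example StorageClass plus a claim that requests it by name.
# Assumptions: the AWS EBS CSI driver is installed; names and parameters are placeholders.
cat <<'EOF' | oc apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold                           # matches the storageClassName used by claims
provisioner: ebs.csi.aws.com           # example provisioner; use one available in your cluster
parameters:
  type: gp3                            # provisioner-specific parameter
reclaimPolicy: Delete                  # dynamically provisioned PVs are deleted with their claims
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: gold               # binds only to PVs provisioned from the "gold" class
EOF

# Watch the claim until the dynamic provisioner creates and binds a matching PV.
oc get pvc myclaim -w

With WaitForFirstConsumer, the claim stays Pending until a pod that uses it is scheduled, which suits topology-aware storage; switch to Immediate if the PV should be created as soon as the claim exists.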
probe::sunrpc.clnt.shutdown_client | probe::sunrpc.clnt.shutdown_client Name probe::sunrpc.clnt.shutdown_client - Shutdown an RPC client Synopsis sunrpc.clnt.shutdown_client Values om_queue the jiffies queued for xmit clones the number of clones vers the RPC program version number om_rtt the RPC RTT jiffies om_execute the RPC execution jiffies rpccnt the count of RPC calls progname the RPC program name authflavor the authentication flavor prot the IP protocol number prog the RPC program number om_bytes_recv the count of bytes in om_bytes_sent the count of bytes out port the port number om_ntrans the count of RPC transmissions netreconn the count of reconnections om_ops the count of operations tasks the number of references servername the server machine name | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-sunrpc-clnt-shutdown-client |
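The entry above lists the values this probe exports but does not show the probe in use. The one-liner below is an illustrative sketch, not part of the tapset reference: it simply prints a few of the documented variables every time an RPC client is shut down, and it must be run as root on a host with the matching kernel debuginfo installed.

# Sketch: trace RPC client shutdowns and print a few of the documented probe variables.
stap -e 'probe sunrpc.clnt.shutdown_client {
  printf("shutdown: server=%s prog=%s vers=%d port=%d rpccnt=%d tasks=%d\n",
         servername, progname, vers, port, rpccnt, tasks)
}'

Stop the trace with Ctrl+C; any of the other values listed above (for example om_rtt or netreconn) can be added to the printf in the same way.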
Chapter 3. Optimizing Virtualization Performance with virt-manager | Chapter 3. Optimizing Virtualization Performance with virt-manager This chapter covers performance tuning options available in virt-manager , a desktop tool for managing guest virtual machines. 3.1. Operating System Details and Devices 3.1.1. Specifying Guest Virtual Machine Details The virt-manager tool provides different profiles depending on what operating system type and version are selected for a new guest virtual machine. When creating a guest, you should provide as many details as possible; this can improve performance by enabling features available for your specific type of guest. See the following example screen capture of the virt-manager tool. When creating a new guest virtual machine, always specify your intended OS type and Version : Figure 3.1. Provide the OS type and Version 3.1.2. Remove Unused Devices Removing unused or unnecessary devices can improve performance. For instance, a guest tasked as a web server is unlikely to require audio features or an attached tablet. See the following example screen capture of the virt-manager tool. Click the Remove button to remove unnecessary devices: Figure 3.2. Remove unused devices | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_tuning_and_optimization_guide/chap-virtualization_tuning_optimization_guide-virt_manager |
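The procedures above use the virt-manager GUI; when the same clean-up has to be repeated across many guests, the device can also be dropped from the command line. The sketch below is an assumption-heavy illustration, not part of the guide: the guest name webserver is a placeholder, and the device XML must match what virsh dumpxml reports for your own guest.

# Sketch: remove an unused sound device from a guest definition without the GUI.
# Assumptions: the guest is named "webserver" and carries an ich6 sound device.

# Inspect the guest definition to find the device block you want to drop.
virsh dumpxml webserver | grep -A 3 "<sound"

# Save the matching device definition to a file (example content for an ich6 sound card).
cat > /tmp/sound.xml <<'EOF'
<sound model='ich6'/>
EOF

# Detach it from the persistent configuration; the change applies at the next guest start.
virsh detach-device webserver /tmp/sound.xml --config

As in the GUI, only remove devices the workload genuinely does not need, and keep a copy of the original XML so the device can be re-attached with virsh attach-device if something depends on it.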
Chapter 1. About CI/CD | Chapter 1. About CI/CD Red Hat OpenShift Service on AWS is an enterprise-ready Kubernetes platform for developers, which enables organizations to automate the application delivery process through DevOps practices, such as continuous integration (CI) and continuous delivery (CD). To meet your organizational needs, the Red Hat OpenShift Service on AWS provides the following CI/CD solutions: OpenShift Builds OpenShift Pipelines OpenShift GitOps Jenkins 1.1. OpenShift Builds OpenShift Builds provides you the following options to configure and run a build: Builds using Shipwright is an extensible build framework based on the Shipwright project. You can use it to build container images on an Red Hat OpenShift Service on AWS cluster. You can build container images from source code and Dockerfile by using image build tools, such as Source-to-Image (S2I) and Buildah. For more information, see builds for Red Hat OpenShift . Builds using BuildConfig objects is a declarative build process to create cloud-native apps. You can define the build process in a YAML file that you use to create a BuildConfig object. This definition includes attributes such as build triggers, input parameters, and source code. When deployed, the BuildConfig object builds a runnable image and pushes the image to a container image registry. With the BuildConfig object, you can create a Docker, Source-to-image (S2I), or custom build. For more information, see Understanding image builds . 1.2. OpenShift Pipelines OpenShift Pipelines provides a Kubernetes-native CI/CD framework to design and run each step of the CI/CD pipeline in its own container. It can scale independently to meet the on-demand pipelines with predictable outcomes. For more information, see Red Hat OpenShift Pipelines . 1.3. OpenShift GitOps OpenShift GitOps is an Operator that uses Argo CD as the declarative GitOps engine. It enables GitOps workflows across multicluster OpenShift and Kubernetes infrastructure. Using OpenShift GitOps, administrators can consistently configure and deploy Kubernetes-based infrastructure and applications across clusters and development lifecycles. For more information, see Red Hat OpenShift GitOps . 1.4. Jenkins Jenkins automates the process of building, testing, and deploying applications and projects. OpenShift Developer Tools provides a Jenkins image that integrates directly with the Red Hat OpenShift Service on AWS. Jenkins can be deployed on OpenShift by using the Samples Operator templates or certified Helm chart. For more information, see Configuring Jenkins images . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/cicd_overview/ci-cd-overview |
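The overview above notes that a BuildConfig definition captures triggers, parameters, and source in a YAML file but does not include one. The following is a minimal sketch under stated assumptions rather than a definitive example: the Git URL is a placeholder, the nodejs:latest builder image is assumed to exist in the openshift namespace, and an ImageStream named my-app is assumed to exist for the build output.

# Sketch: a minimal Source-to-Image BuildConfig (placeholder names and URLs).
cat <<'EOF' | oc apply -f -
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-app
spec:
  source:
    type: Git
    git:
      uri: https://example.com/my-org/my-app.git   # placeholder repository
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:latest                        # assumed builder image
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: my-app:latest                          # assumes an ImageStream named my-app exists
  triggers:
    - type: ConfigChange                           # rebuild when this BuildConfig changes
EOF

# Start a build by hand and stream its logs.
oc start-build my-app --follow

Webhook or image-change triggers can be added to the triggers list later; the ConfigChange trigger alone is enough to exercise the build loop while evaluating the other CI/CD options described above.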
33.4. Resolving Problems in System Recovery Modes | 33.4. Resolving Problems in System Recovery Modes This section provides several procedures that explain how to resolve some of the most common problems that needs to be addressed in some of the system recovery modes. The following procedure shows how to reset a root password: Procedure 33.4. Resetting a Root Password Boot to single-user mode as described in Procedure 33.2, "Booting into Single-User Mode" . Run the passwd command from the maintenance shell command line. One of the most common causes for an unbootable system is overwriting of the Master Boot Record (MBR) that originally contained the GRUB boot loader. If the boot loader is overwritten, you cannot boot Red Hat Enterprise Linux unless you reconfigure the boot loader in rescue mode . To reinstall GRUB on the MBR of your hard drive, proceed with the following procedure: Procedure 33.5. Reinstalling the GRUB Boot Loader Boot to rescue mode as described in Procedure 33.1, "Booting into Rescue Mode" . Ensure that you mount the system's root partition in read-write mode. Execute the following command to change the root partition: Run the following command to reinstall the GRUB boot loader: where boot_part is your boot partition (typically, /dev/sda ). Review the /boot/grub/grub.conf file, as additional entries may be needed for GRUB to control additional operating systems. Reboot the system. Another common problem that would render your system unbootable is a change of your root partition number. This can usually happen when resizing a partition or creating a new partition after installation. If the partition number of your root partition changes, the GRUB boot loader might not be able to find it to mount the partition. To fix this problem,boot into rescue mode and modify the /boot/grub/grub.conf file. A malfunctioning or missing driver can prevent a system from booting normally. You can use the RPM package manager to remove malfunctioning drivers or to add updated or missing drivers in rescue mode . If you cannot remove a malfunctioning driver for some reason, you can instead blacklist the driver so that it does not load at boot time. Note When you install a driver from a driver disc, the driver disc updates all initramfs images on the system to use this driver. If a problem with a driver prevents a system from booting, you cannot rely on booting the system from another initramfs image. To remove a malfunctioning driver that prevents the system from booting, follow this procedure: Procedure 33.6. Remove a Driver in Rescue Mode Boot to rescue mode as described in Procedure 33.1, "Booting into Rescue Mode" . Ensure that you mount the system's root partition in read-write mode. Change the root directory to /mnt/sysimage/ : Run the following command to remove the driver package: Exit the chroot environment: Reboot the system. To install a missing driver that prevents the system from booting, follow this procedure: Procedure 33.7. Installing a Driver in Rescue Mode Boot to rescue mode as described in Procedure 33.1, "Booting into Rescue Mode" . Ensure that you mount the system's root partition in read-write mode. Mount a media with an RPM package that contains the driver and copy the package to a location of your choice under the /mnt/sysimage/ directory, for example: /mnt/sysimage/root/drivers/ . 
Change the root directory to /mnt/sysimage/ : Run the following command to install the driver package: Note that /root/drivers/ in this chroot environment is /mnt/sysimage/root/drivers/ in the original rescue environment. Exit the chroot environment: Reboot the system. To blacklist a driver that prevents the system from booting and to ensure that this driver cannot be loaded after the root device is mounted, follow this procedure: Procedure 33.8. Blacklisting a Driver in Rescue Mode Boot to rescue mode with the command linux rescue rdblacklist= driver_name , where driver_name is the driver that you need to blacklist. Follow the instructions in Procedure 33.1, "Booting into Rescue Mode" and ensure that you mount the system's root partition in read-write mode. Open the /boot/grub/grub.conf file in the vi editor: Identify the default kernel used to boot the system. Each kernel is specified in the grub.conf file with a group of lines that begins title . The default kernel is specified by the default parameter near the start of the file. A value of 0 refers to the kernel described in the first group of lines, a value of 1 refers to the kernel described in the second group, and higher values refer to subsequent kernels in turn. Edit the kernel line of the group to include the option rdblacklist= driver_name , where driver_name is the driver that you need to blacklist. For example: Save the file and exit the vi editor by typing: :wq Run the following command to create a new file /etc/modprobe.d/ driver_name .conf that will ensure blacklisting of the driver after the root partition is mounted: Reboot the system. | [
"sh-3.00b# chroot /mnt/sysimage",
"sh-3.00b# /sbin/grub-install boot_part",
"sh-3.00b# chroot /mnt/sysimage",
"sh-3.00b# rpm -e driver_name",
"sh-3.00b# exit",
"sh-3.00b# chroot /mnt/sysimage",
"sh-3.00b# rpm -ihv /root/drivers/ package_name",
"sh-3.00b# exit",
"sh-3.00b# vi /boot/grub/grub.conf",
"kernel /vmlinuz-2.6.32-71.18-2.el6.i686 ro root=/dev/sda1 rhgb quiet rdblacklist= driver_name",
":wq",
"echo \"install driver_name \" > /mnt/sysimage/etc/modprobe.d/ driver_name .conf"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Resolving_Problems_in_System_Recovery_Modes |
Appendix A. Getting involved in the Migration Toolkit for Applications development | Appendix A. Getting involved in the Migration Toolkit for Applications development A.1. Contributing to the project To help the Migration Toolkit for Applications cover most application constructs and server configurations, including yours, you can contribute in any of the following ways: Send an email to [email protected] and let us know what MTA migration rules must cover. Provide example applications to test migration rules. Identify application components and problem areas that might be difficult to migrate: Write a short description of these problem areas. Write a brief overview describing how to solve the problem in these areas. Try Migration Toolkit for Applications on your application. Make sure to report any issues you encounter. Contribute to the Migration Toolkit for Applications rules repository: Write a Migration Toolkit for Applications rule to identify or automate a migration process. Create a test for the new rule. For more information, see Rule Development Guide . Contribute to the project source code: Create a core rule. Improve MTA performance or efficiency. Any level of involvement is greatly appreciated! A.2. Reporting issues MTA uses Jira as its issue tracking system. If you encounter an issue executing MTA, submit a Jira issue . A.3. Migration Toolkit for Applications development resources Use the following resources to learn about and contribute to the Migration Toolkit for Applications development: MTA forums: https://developer.jboss.org/en/windup Jira issue tracker: https://issues.redhat.com/projects/MTA/issues MTA mailing list: [email protected] Revised on 2025-02-26 19:47:51 UTC | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.2/html/introduction_to_the_migration_toolkit_for_applications/getting_involved_in_the_migration_toolkit_for_applications_development
7.233. xkeyboard-config | 7.233. xkeyboard-config 7.233.1. RHBA-2015:1276 - xkeyboard-config bug fix and enhancement update Updated xkeyboard-config packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The xkeyboard-config packages contain configuration data used by the X keyboard Extension (XKB), which allows selection of keyboard layouts when using a graphical interface. Bug Fixes BZ# 923160 With the upgrade to upstream version 2.11, the /usr/share/X11/xkb/keymap.dir file was removed from the xkeyboard-config packages. Consequently, X11 keyboard configuration stopped working for NX connections. This update includes the missing file again, and as a result, the broken functionality is restored. BZ# 1164507 The upgrade to upstream version 2.11 also remapped three keys in the Russian phonetic keyboard layout: the "x" key was mapped to "ha", "h" to "che", and "=" to the soft sign. This change caused problems to users who expected the usual layout of the phonetic keyboard. Now, the layout has been fixed, and these keys are correctly mapped to the soft sign, "ha", and "che" respectively. Users of xkeyboard-config are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-xkeyboard-config |