Columns: title (string, 4-168 chars) | content (string, 7-1.74M chars) | commands (list of strings, 1-5.62k items) | url (string, 79-342 chars)
25.10. Removing a Path to a Storage Device
25.10. Removing a Path to a Storage Device If you are removing a path to a device that uses multipathing (without affecting other paths to the device), then the general procedure is as follows: Procedure 25.12. Removing a Path to a Storage Device Remove any reference to the device's path-based name, like /dev/sd or /dev/disk/by-path or the major:minor number, in applications, scripts, or utilities on the system. This is important in ensuring that different devices added in the future will not be mistaken for the current device. Take the path offline using echo offline > /sys/block/sda/device/state. This will cause any subsequent I/O sent to the device on this path to be failed immediately. Device-mapper-multipath will continue to use the remaining paths to the device. Remove the path from the SCSI subsystem. To do so, use the command echo 1 > /sys/block/device-name/device/delete, where device-name may be sde, for example (as described in Procedure 25.11, "Ensuring a Clean Device Removal"). After performing Procedure 25.12, "Removing a Path to a Storage Device", the path can be safely removed from the running system. It is not necessary to stop I/O while this is done, as device-mapper-multipath will re-route I/O to remaining paths according to the configured path grouping and failover policies. Other procedures, such as the physical removal of the cable, followed by a rescan of the SCSI bus to cause the operating system state to be updated to reflect the change, are not recommended. This will cause delays due to I/O timeouts, and devices may be removed unexpectedly. If it is necessary to perform a rescan of an interconnect, it must be done while I/O is paused, as described in Section 25.12, "Scanning Storage Interconnects".
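The two commands from the procedure, collected into a short shell sketch; sde is just the example path name used above, so substitute the path you are actually retiring, and the multipath -ll check is an optional extra that is not part of the documented procedure:

```
# Optionally confirm that other paths to the LUN remain active first.
multipath -ll

# Fail all further I/O sent down this path immediately; device-mapper-multipath
# keeps using the remaining paths to the device.
echo offline > /sys/block/sde/device/state

# Remove the path from the SCSI subsystem.
echo 1 > /sys/block/sde/device/delete
```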
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/removing-path-to-storage-device
Chapter 2. Getting started
Chapter 2. Getting started 2.1. Maintenance and support for monitoring Not all configuration options for the monitoring stack are exposed. The only supported way of configuring OpenShift Container Platform monitoring is by configuring the Cluster Monitoring Operator (CMO) using the options described in the Config map reference for the Cluster Monitoring Operator . Do not use other configurations, as they are unsupported. Configuration paradigms might change across Prometheus releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in the Config map reference for the Cluster Monitoring Operator , your changes will disappear because the CMO automatically reconciles any differences and resets any unsupported changes back to the originally defined state by default and by design. 2.1.1. Support considerations for monitoring Note Backward compatibility for metrics, recording rules, or alerting rules is not guaranteed. The following modifications are explicitly not supported: Creating additional ServiceMonitor , PodMonitor , and PrometheusRule objects in the openshift-* and kube-* projects. Modifying any resources or objects deployed in the openshift-monitoring or openshift-user-workload-monitoring projects. The resources created by the OpenShift Container Platform monitoring stack are not meant to be used by any other resources, as there are no guarantees about their backward compatibility. Note The Alertmanager configuration is deployed as the alertmanager-main secret resource in the openshift-monitoring namespace. If you have enabled a separate Alertmanager instance for user-defined alert routing, an Alertmanager configuration is also deployed as the alertmanager-user-workload secret resource in the openshift-user-workload-monitoring namespace. To configure additional routes for any instance of Alertmanager, you need to decode, modify, and then encode that secret. This procedure is a supported exception to the preceding statement. Modifying resources of the stack. The OpenShift Container Platform monitoring stack ensures its resources are always in the state it expects them to be. If they are modified, the stack will reset them. Deploying user-defined workloads to openshift-* , and kube-* projects. These projects are reserved for Red Hat provided components and they should not be used for user-defined workloads. Enabling symptom based monitoring by using the Probe custom resource definition (CRD) in Prometheus Operator. Manually deploying monitoring resources into namespaces that have the openshift.io/cluster-monitoring: "true" label. Adding the openshift.io/cluster-monitoring: "true" label to namespaces. This label is reserved only for the namespaces with core OpenShift Container Platform components and Red Hat certified components. Installing custom Prometheus instances on OpenShift Container Platform. A custom instance is a Prometheus custom resource (CR) managed by the Prometheus Operator. 2.1.2. Support policy for monitoring Operators Monitoring Operators ensure that OpenShift Container Platform monitoring resources function as designed and tested. If Cluster Version Operator (CVO) control of an Operator is overridden, the Operator does not respond to configuration changes, reconcile the intended state of cluster objects, or receive updates. 
While overriding CVO control for an Operator can be helpful during debugging, this is unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades. Overriding the Cluster Version Operator The spec.overrides parameter can be added to the configuration for the CVO to allow administrators to provide a list of overrides to the behavior of the CVO for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set: Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. Warning Setting a CVO override puts the entire cluster in an unsupported state and prevents the monitoring stack from being reconciled to its intended state. This impacts the reliability features built into Operators and prevents updates from being received. Reported issues must be reproduced after removing any overrides for support to proceed. 2.1.3. Support version matrix for monitoring components The following matrix contains information about versions of monitoring components for OpenShift Container Platform 4.12 and later releases: Table 2.1. OpenShift Container Platform and component versions OpenShift Container Platform Prometheus Operator Prometheus Metrics Server Alertmanager kube-state-metrics agent monitoring-plugin node-exporter agent Thanos 4.17 0.75.2 2.53.1 0.7.1 0.27.0 2.13.0 1.0.0 1.8.2 0.35.1 4.16 0.73.2 2.52.0 0.7.1 0.26.0 2.12.0 1.0.0 1.8.0 0.35.0 4.15 0.70.0 2.48.0 0.6.4 0.26.0 2.10.1 1.0.0 1.7.0 0.32.5 4.14 0.67.1 2.46.0 N/A 0.25.0 2.9.2 1.0.0 1.6.1 0.30.2 4.13 0.63.0 2.42.0 N/A 0.25.0 2.8.1 N/A 1.5.0 0.30.2 4.12 0.60.1 2.39.1 N/A 0.24.0 2.6.0 N/A 1.4.0 0.28.1 Note The openshift-state-metrics agent and Telemeter Client are OpenShift-specific components. Therefore, their versions correspond with the versions of OpenShift Container Platform. 2.2. Core platform monitoring first steps After OpenShift Container Platform is installed, core platform monitoring components immediately begin collecting metrics, which you can query and view. The default in-cluster monitoring stack includes the core platform Prometheus instance that collects metrics from your cluster and the core Alertmanager instance that routes alerts, among other components. Depending on who will use the monitoring stack and for what purposes, as a cluster administrator, you can further configure these monitoring components to suit the needs of different users in various scenarios. 2.2.1. Configuring core platform monitoring: Postinstallation steps After OpenShift Container Platform is installed, cluster administrators typically configure core platform monitoring to suit their needs. These activities include setting up storage and configuring options for Prometheus, Alertmanager, and other monitoring components. Note By default, in a newly installed OpenShift Container Platform system, users can query and view collected metrics. You need only configure an alert receiver if you want users to receive alert notifications. Any other configuration options listed here are optional. Create the cluster-monitoring-config ConfigMap object if it does not exist. Configure notifications for default platform alerts so that Alertmanager can send alerts to an external notification system such as email, Slack, or PagerDuty. 
For shorter term data retention, configure persistent storage for Prometheus and Alertmanager to store metrics and alert data. Specify the metrics data retention parameters for Prometheus and Thanos Ruler. Important In multi-node clusters, you must configure persistent storage for Prometheus, Alertmanager, and Thanos Ruler to ensure high availability. By default, in a newly installed OpenShift Container Platform system, the monitoring ClusterOperator resource reports a PrometheusDataPersistenceNotConfigured status message to remind you that storage is not configured. For longer term data retention, configure the remote write feature to enable Prometheus to send ingested metrics to remote systems for storage. Important Be sure to add cluster ID labels to metrics for use with your remote write storage configuration. Grant monitoring cluster roles to any non-administrator users that need to access certain monitoring features. Assign tolerations to monitoring stack components so that administrators can move them to tainted nodes. Set the body size limit for metrics collection to help avoid situations in which Prometheus consumes excessive amounts of memory when scraped targets return a response that contains a large amount of data. Modify or create alerting rules for your cluster. These rules specify the conditions that trigger alerts, such as high CPU or memory usage, network latency, and so forth. Specify resource limits and requests for monitoring components to ensure that the containers that run monitoring components have enough CPU and memory resources. With the monitoring stack configured to suit your needs, Prometheus collects metrics from the specified services and stores these metrics according to your settings. You can go to the Observe pages in the OpenShift Container Platform web console to view and query collected metrics, manage alerts, identify performance bottlenecks, and scale resources as needed: View dashboards to visualize collected metrics, troubleshoot alerts, and monitor other information about your cluster. Query collected metrics by creating PromQL queries or using predefined queries. 2.3. User workload monitoring first steps As a cluster administrator, you can optionally enable monitoring for user-defined projects in addition to core platform monitoring. Non-administrator users such as developers can then monitor their own projects outside of core platform monitoring. Cluster administrators typically complete the following activities to configure user-defined projects so that users can view collected metrics, query these metrics, and receive alerts for their own projects: Enable user workload monitoring . Grant non-administrator users permissions to monitor user-defined projects by assigning the monitoring-rules-view , monitoring-rules-edit , or monitoring-edit cluster roles. Assign the user-workload-monitoring-config-edit role to grant non-administrator users permission to configure user-defined projects. Enable alert routing for user-defined projects so that developers and other users can configure custom alerts and alert routing for their projects. If needed, configure alert routing for user-defined projects to use an optional Alertmanager instance dedicated for use only by user-defined projects . Configure notifications for user-defined alerts . If you use the platform Alertmanager instance for user-defined alert routing, configure different alert receivers for default platform alerts and user-defined alerts. 2.4. 
Developer and non-administrator steps After monitoring for user-defined projects is enabled and configured, developers and other non-administrator users can then perform the following activities to set up and use monitoring for their own projects: Deploy and monitor services . Create and manage alerting rules . Receive and manage alerts for your projects. If granted the alert-routing-edit cluster role, configure alert routing . View dashboards by using the OpenShift Container Platform web console. Query the collected metrics by creating PromQL queries or using predefined queries.
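The first postinstallation step listed above, creating the cluster-monitoring-config ConfigMap object if it does not exist, can be sketched as follows. The object name, namespace, and config.yaml data key follow the documented convention; the empty config.yaml body is only a placeholder for the storage, retention, tolerations, and other settings you would actually add:

```
# Minimal sketch: create an (empty) cluster-monitoring-config ConfigMap.
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    # per-component settings (storage, retention, tolerations, ...) go here
EOF
```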
[ "Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/monitoring/getting-started
Appendix B. Revision History
Appendix B. Revision History The revision numbers below relate to the edition of this manual, not to version numbers of Red Hat Enterprise Linux. Version Date and change Author 7.0-11 Oct 15 2019: Added troubleshooting appendix. Marc Muehlfeld 7.0-10 Sep 26 2019: Added Granting and Restricting Access to SSSD Containers Using HBAC Rules . Marc Muehlfeld 7.0-9 Aug 23 2019: Updated introduction of Configuring the SSSD Container to Provide Identity and Authentication Services on Atomic Host . Marc Muehlfeld 7.0-8 Apr 05 2018: Preparing document for 7.5 GA publication. Lucie Manaskova 7.0-7 Mar 19 2018: Updated Deploying sssd containers with different configurations. Lucie Manaskova 7.0-6 Jan 29 2018: Minor fixes. Aneta Steflova Petrova 7.0-5 Nov 20 2017: Updated Enrolling to an Identity Management Domain Using an SSSD Container . Aneta Steflova Petrova 7.0-4 Sep 12 2017: Added a procedure for uninstalling an SSSD container joined to an AD domain. Aneta Steflova Petrova 7.0-3 Aug 28 2017: Updated part Using the sssd container with more user stories and fixes. Aneta Steflova Petrova 7.0-2 Aug 14 2017: Updated sections Available Container Images and Joining an Active Directory Domain Using an SSSD Container . Aneta Steflova Petrova 7.0-1 Jul 18 2017: Document version for 7.4 GA publication. Aneta Steflova Petrova
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/using_containerized_identity_management_services/revision_history
3.2.3. Improving a CPU Shortage
3.2.3. Improving a CPU Shortage When there is insufficient processing power available for the work needing to be done, you have two options: Reducing the load Increasing the capacity 3.2.3.1. Reducing the Load Reducing the CPU load is something that can be done with no expenditure of money. The trick is to identify those aspects of the system load under your control that can be cut back. There are three areas to focus on: Reducing operating system overhead Reducing application overhead Eliminating applications entirely 3.2.3.1.1. Reducing Operating System Overhead To reduce operating system overhead, you must examine your current system load and determine what aspects of it result in inordinate amounts of overhead. These areas could include: Reducing the need for frequent process scheduling Reducing the amount of I/O performed Do not expect miracles; in a reasonably-well configured system, it is unlikely to notice much of a performance increase by trying to reduce operating system overhead. This is due to the fact that a reasonably-well configured system, by definition, results in a minimal amount of overhead. However, if your system is running with too little RAM for instance, you may be able to reduce overhead by alleviating the RAM shortage. 3.2.3.1.2. Reducing Application Overhead Reducing application overhead means making sure the application has everything it needs to run well. Some applications exhibit wildly different behaviors under different environments -- an application may become highly compute-bound while processing certain types of data, but not others, for example. The point to keep in mind here is that you must understand the applications running on your system if you are to enable them to run as efficiently as possible. Often this entails working with your users, and/or your organization's developers, to help uncover ways in which the applications can be made to run more efficiently. 3.2.3.1.3. Eliminating Applications Entirely Depending on your organization, this approach might not be available to you, as it often is not a system administrator's responsibility to dictate which applications will and will not be run. However, if you can identify any applications that are known "CPU hogs", you might be able to influence the powers-that-be to retire them. Doing this will likely involve more than just yourself. The affected users should certainly be a part of this process; in many cases they may have the knowledge and the political power to make the necessary changes to the application lineup. Note Keep in mind that an application may not need to be eliminated from every system in your organization. You might be able to move a particularly CPU-hungry application from an overloaded system to another system that is nearly idle.
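Before cutting anything, it helps to see where the CPU time is actually going; a short sketch using common tools (sar is part of the sysstat package, and none of these commands are specific to this guide):

```
# Overall CPU utilization, sampled five times at one-second intervals.
sar -u 1 5

# The most CPU-hungry processes right now: candidates for tuning,
# rescheduling, or moving to a less loaded system.
ps -eo pid,user,comm,%cpu --sort=-%cpu | head -n 10
```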
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-bandwidth-processing-improving
Logging
Logging OpenShift Container Platform 4.11 OpenShift Logging installation, usage, and release notes Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/logging/index
probe::nfs.proc.commit_setup
probe::nfs.proc.commit_setup Name probe::nfs.proc.commit_setup - NFS client setting up a commit RPC task Synopsis nfs.proc.commit_setup Values version NFS version count bytes in this commit prot transfer protocol server_ip IP address of server bitmask1 V4 bitmask representing the set of attributes supported on this filesystem bitmask0 V4 bitmask representing the set of attributes supported on this filesystem offset the file offset size bytes in this commit Description The commit_setup function is used to set up a commit RPC task. It does not perform the actual commit operation. It does not exist in NFSv2.
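A small, illustrative SystemTap one-liner that uses this probe point; the message format is arbitrary, and only the version, count, and offset values documented above are relied on:

```
# Print the size and offset of each commit RPC being set up by the NFS client.
stap -e 'probe nfs.proc.commit_setup {
  printf("NFSv%d commit setup: %d bytes at offset %d\n", version, count, offset)
}'
```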
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfs-proc-commit-setup
function::task_dentry_path
function::task_dentry_path Name function::task_dentry_path - get the full dentry path Synopsis Arguments task task_struct pointer. dentry direntry pointer. vfsmnt vfsmnt pointer. Description Returns the full dirent name (full path to the root), like the kernel d_path function.
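A hedged sketch of how this function is typically called. The vfs_read probe point, the $file target variable, and the f_path field accesses are assumptions made only for this illustration (and require kernel debuginfo); only the task_dentry_path call itself comes from this page:

```
# Resolve the full path of every file read through vfs_read for ten seconds.
stap -e 'probe kernel.function("vfs_read") {
  printf("%s reads %s\n", execname(),
         task_dentry_path(task_current(), $file->f_path->dentry, $file->f_path->mnt))
}
probe timer.s(10) { exit() }'
```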
[ "function task_dentry_path:string(task:long,dentry:long,vfsmnt:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-task-dentry-path
Installing with an RPM package
Installing with an RPM package Red Hat build of MicroShift 4.18 Installing MicroShift with RPMs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/installing_with_an_rpm_package/index
8.2. Packages and Package Groups
8.2. Packages and Package Groups 8.2.1. Searching Packages You can search all RPM package names, descriptions and summaries by using the following command: yum search term ... Replace term with a package name you want to search. Example 8.2. Searching for packages matching a specific string To list all packages that match " vim " , " gvim " , or " emacs " , type: The yum search command is useful for searching for packages you do not know the name of, but for which you know a related term. Note that by default, yum search returns matches in package name and summary, which makes the search faster. Use the yum search all command for a more exhaustive but slower search.
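For the more exhaustive search mentioned at the end of this section, the same terms can be passed to yum search all; a brief example:

```
# Match against descriptions and URLs as well as names and summaries
# (slower, but catches packages whose name does not contain the term).
yum search all vim gvim emacs
```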
[ "~]USD yum search vim gvim emacs Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager ============================= N/S matched: vim ============================== vim -X11.x86_64 : The VIM version of the vi editor for the X Window System vim -common.x86_64 : The common files needed by any version of the VIM editor [output truncated] ============================ N/S matched: emacs ============================= emacs .x86_64 : GNU Emacs text editor emacs -auctex.noarch : Enhanced TeX modes for Emacs [output truncated] Name and summary matches mostly, use \"search all\" for everything. Warning: No matches found for: gvim" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-packages_and_package_groups
33.4. Managing Master DNS Zones
33.4. Managing Master DNS Zones 33.4.1. Adding and Removing Master DNS Zones Adding Master DNS Zones in the Web UI Open the Network Services tab, and select the DNS subtab, followed by the DNS Zones section. Figure 33.1. Managing DNS Master Zones To add a new master zone, click Add at the top of the list of all zones. Figure 33.2. Adding a Master DNS Zone Provide the zone name, and click Add . Figure 33.3. Entering a New Master Zone Adding Master DNS Zones from the Command Line The ipa dnszone-add command adds a new zone to the DNS domain. Adding a new zone requires you to specify the name of the new subdomain. You can pass the subdomain name directly with the command: If you do not pass the name to ipa dnszone-add , the script prompts for it automatically. The ipa dnszone-add command also accepts various command-line options. For a complete list of these options, run the ipa dnszone-add --help command. Removing Master DNS Zones To remove a master DNS zone in the web UI, in the list of all zones, select the check box by the zone name and click Delete . Figure 33.4. Removing a Master DNS Zone To remove a master DNS zone from the command line, use the ipa dnszone-del command. For example: 33.4.2. Adding Additional Configuration for Master DNS Zones IdM creates a new zone with certain default configuration, such as the refresh periods, transfer settings, or cache settings. DNS Zone Configuration Attributes The available zone settings are listed in Table 33.1, "Zone Attributes" . Along with setting the actual information for the zone, the settings define how the DNS server handles the start of authority (SOA) record entries and how it updates its records from the DNS name server. Table 33.1. Zone Attributes Attribute Command-Line Option Description Authoritative name server --name-server Sets the domain name of the master DNS name server, also known as SOA MNAME. By default, each IdM server advertises itself in the SOA MNAME field. Consequently, the value stored in LDAP using --name-server is ignored. Administrator e-mail address --admin-email Sets the email address to use for the zone administrator. This defaults to the root account on the host. SOA serial --serial Sets a serial number in the SOA record. Note that IdM sets the version number automatically and users are not expected to modify it. SOA refresh --refresh Sets the interval, in seconds, for a secondary DNS server to wait before requesting updates from the primary DNS server. SOA retry --retry Sets the time, in seconds, to wait before retrying a failed refresh operation. SOA expire --expire Sets the time, in seconds, that a secondary DNS server will try to perform a refresh update before ending the operation attempt. SOA minimum --minimum Sets the time to live (TTL) value in seconds for negative caching according to RFC 2308 . SOA time to live --ttl Sets TTL in seconds for records at zone apex. In zone example.com , for instance, all records (A, NS, or SOA) under name example.com are configured, but no other domain names, like test.example.com , are affected. Default time to live --default-ttl Sets the default time to live (TTL) value in seconds for negative caching for all values in a zone that never had an individual TTL value set before. Requires a restart of the named-pkcs11 service on all IdM DNS servers after changes to take effect. BIND update policy --update-policy Sets the permissions allowed to clients in the DNS zone. 
See Dynamic Update Policies in the BIND 9 Administrator Reference Manual for more information on update policy syntax. Dynamic update --dynamic-update =TRUE|FALSE Enables dynamic updates to DNS records for clients. Note that if this is set to false, IdM client machines will not be able to add or update their IP address. See Section 33.5.1, "Enabling Dynamic DNS Updates" for more information. Allow transfer --allow-transfer = string Gives a list of IP addresses or network names which are allowed to transfer the given zone, separated by semicolons (;). Zone transfers are disabled by default. The default --allow-transfer value is none . Allow query --allow-query Gives a list of IP addresses or network names which are allowed to issue DNS queries, separated by semicolons (;). Allow PTR sync --allow-sync-ptr =1|0 Sets whether A or AAAA records (forward records) for the zone will be automatically synchronized with the PTR (reverse) records. Zone forwarders --forwarder = IP_address Specifies a forwarder specifically configured for the DNS zone. This is separate from any global forwarders used in the IdM domain. To specify multiple forwarders, use the option multiple times. Forward policy --forward-policy =none|only|first Specifies the forward policy. For information about the supported policies, see the section called "Forward Policies" Editing the Zone Configuration in the Web UI To manage DNS master zones from the web UI, open the Network Services tab, and select the DNS subtab, followed by the DNS Zones section. Figure 33.5. DNS Master Zones Management To edit an existing master zone in the DNS Zones section: Click on the zone name in the list of all zones to open the DNS zone page. Figure 33.6. Editing a Master Zone Click Settings , and then change the zone configuration as required. Figure 33.7. The Settings Tab in the Master Zone Edit Page For information about the available settings, see Table 33.1, "Zone Attributes" . Click Save to confirm the new configuration. Note If you are changing the default time to live (TTL) of a zone, restart the named-pkcs11 service on all IdM DNS servers to make the changes take effect. All other settings are automatically activated immediately. Editing the Zone Configuration from the Command Line To modify an existing master DNS zone from the command line, use the ipa dnszone-mod command. For information about the available settings, see Table 33.1, "Zone Attributes" . If an attribute does not exist in the DNS zone entry, the ipa dnszone-mod command adds the attribute. If the attribute exists, the command overwrites the current value with the specified value. For detailed information about ipa dnszone-mod and its options, run the ipa dnszone-mod --help command. Note If you are changing the default time to live (TTL) of a zone, restart the named-pkcs11 service on all IdM DNS servers to make the changes take effect. All other settings are automatically activated immediately. 33.4.3. Enabling Zone Transfers Name servers maintain authoritative data for the zones; changes made to the zones must be sent to and distributed among the name servers for the DNS domain. A zone transfer copies all resource records from one name server to another. IdM supports zone transfers according to the RFC 5936 (AXFR) and RFC 1995 (IXFR) standards. Important The IdM-integrated DNS is multi-master. SOA serial numbers in IdM zones are not synchronized between IdM servers. For this reason, configure DNS slave servers to only use one IdM master server. 
This prevents zone transfer failures caused by non-synchronized SOA serial numbers. Enabling Zone Transfers in the UI Open the DNS zone page, as described in the section called "Editing the Zone Configuration in the Web UI" , and switch to the Settings tab. Under Allow transfer , specify the name servers to which the zone records will be transferred. Figure 33.8. Enabling Zone Transfers Click Save at the top of the DNS zone page to confirm the new configuration. Enabling Zone Transfers from the Command Line To enable zone transfers from the command line, add the --allow-transfer option to the ipa dnszone-mod command. Specify the list of name servers to which the zone records will be transferred using --allow-transfer . For example: Once zone transfers are enabled in the bind service, IdM DNS zones can be transferred, by name, by clients such as the dig utility: 33.4.4. Adding Records to DNS Zones IdM supports many different record types. The following four are used most frequently: A This is a basic map for a host name and an ordinary IPv4 address. The record name of an A record is a host name, such as www . The IP Address value of an A record is a standard IPv4 address, such as 192.0.2.1 . For more information about A records, see RFC 1035 . AAAA This is a basic map for a host name and an IPv6 address. The record name of an AAAA record is a host name, such as www . The IP Address value is a standard hexadecimal IPv6 address, such as 2001:DB8::1111 . For more information about AAAA records, see RFC 3596 . SRV Service (SRV) resource records map service names to the DNS name of the server that is providing that particular service. For example, this record type can map a service like an LDAP directory to the server which manages it. The record name of an SRV record has the format _ service ._ protocol , such as _ldap._tcp . The configuration options for SRV records include priority, weight, port number, and host name for the target service. For more information about SRV records, see RFC 2782 . PTR A pointer record type (PTR) record adds a reverse DNS record, which maps an IP address to a domain name. Note All reverse DNS lookups for IPv4 addresses use reverse entries that are defined in the in-addr.arpa. domain. The reverse address, in human-readable form, is the exact reverse of the regular IP address, with the in-addr.arpa. domain appended to it. For example, for the network address 192.0.2.0/24 , the reverse zone is 2.0.192.in-addr.arpa . The record name of a PTR record must be in the standard format specified in RFC 1035 , extended in RFC 2317 , and RFC 3596 . The host name value must be a canonical host name of the host for which you want to create the record. For more information, see Example 33.8, "PTR Record" . Note Reverse zones can also be configured for IPv6 addresses, with zones in the .ip6.arpa. domain. For more information about IPv6 reverse zones, see RFC 3596 . When adding DNS resource records, note that many of the records require different data. For example, a CNAME record requires a host name, while an A record requires an IP address. In the web UI, the fields in the form for adding a new record are updated automatically to reflect what data is required for the currently selected type of record. DNS Wildcard Support IdM supports the special record * in a DNS zone as wildcard. Example 33.2. Demonstrating DNS Wildcard Results Configure the following in your DNS zone example.com : A wildcard A record *.example.com . 
An MX record for mail.example.com , but no A record for this host. No record for demo.example.com . Query existing and non-existent DNS records and types. You will receive the following results: For more details, see RFC1034 . Adding DNS Resource Records from the Web UI Open the DNS zone page, as described in the section called "Editing the Zone Configuration in the Web UI" . In the DNS Resource Records section, click Add to add a new record. Figure 33.9. Adding a New DNS Resource Record Select the type of record to create and fill out the other fields as required. Figure 33.10. Defining a New DNS Resource Record Click Add to confirm the new record. Adding DNS Resource Records from the Command Line To add a DNS resource record of any type from the command line, use the ipa dnsrecord-add command. The command follows this syntax: The zone_name is the name of the DNS zone to which the record is being added. The record_name is an identifier for the new DNS resource record. Table 33.2, "Common ipa dnsrecord-add Options" lists options for the most common resource record types: A (IPv4), AAAA (IPv6), SRV, and PTR. Lists of entries can be set by using the option multiple times with the same command invocation or, in Bash, by listing the options in a comma-separated list inside curly braces, such as --option={val1,val2,val3} . For more detailed information on how to use ipa dnsrecord-add and which DNS record types are supported by IdM, run the ipa dnsrecord-add --help command. Table 33.2. Common ipa dnsrecord-add Options General Record Options Option Description --ttl = number Sets the time to live for the record. --structured Parses the raw DNS records and returns them in a structured format. Table 33.2. Common ipa dnsrecord-add Options "A" Record Options Option Description --a-rec = ARECORD Passes a list of A records. --a-ip-address = string Gives the IP address for the record. Table 33.2. Common ipa dnsrecord-add Options "AAAA" Record Options Option Description --aaaa-rec = AAAARECORD Passes a list of AAAA (IPv6) records. --aaaa-ip-address = string Gives the IPv6 address for the record. Table 33.2. Common ipa dnsrecord-add Options "PTR" Record Options Option Description --ptr-rec = PTRRECORD Passes a list of PTR records. --ptr-hostname = string Gives the host name for the record. Table 33.2. Common ipa dnsrecord-add Options "SRV" Record Options Option Description --srv-rec = SRVRECORD Passes a list of SRV records. --srv-priority = number Sets the priority of the record. There can be multiple SRV records for a service type. The priority (0 - 65535) sets the rank of the record; the lower the number, the higher the priority. A service has to use the record with the highest priority first. --srv-weight = number Sets the weight of the record. This helps determine the order of SRV records with the same priority. The set weights should add up to 100, representing the probability (in percentages) that a particular record is used. --srv-port = number Gives the port for the service on the target host. --srv-target = string Gives the domain name of the target host. This can be a single period (.) if the service is not available in the domain. 33.4.5. Examples of Adding or Modifying DNS Resource Records from the Command Line Example 33.3. Adding a IPv4 Record The following example creates the record www.example.com with the IP address 192.0.2.123 . Example 33.4. Adding a IPv4 Wildcard Record The following example creates a wildcard A record with the IP address 192.0.2.123 : Example 33.5. 
Modifying a IPv4 Record When creating a record, the option to specify the A record value is --a-record . However, when modifying an A record, the --a-record option is used to specify the current value for the A record. The new value is set with the --a-ip-address option. Example 33.6. Adding an IPv6 Record The following example creates the record www.example.com with the IP address 2001:db8::1231:5675 . Example 33.7. Adding an SRV Record In the following example, _ldap._tcp defines the service type and the connection protocol for the SRV record. The --srv-rec option defines the priority, weight, port, and target values. For example: The weight values ( 51 and 49 in this example) add up to 100 and represent the probability (in percentages) that a particular record is used. Example 33.8. PTR Record When adding the reverse DNS record, the zone name used with the ipa dnsrecord-add command is reverse, compared to the usage for adding other DNS records: Typically, hostIpAddress is the last octet of the IP address in a given network. For example, this adds a PTR record for server4.example.com with IPv4 address 192.0.2.4: The example adds a reverse DNS entry to the 0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa. IPv6 reverse zone for the host server2.example.com with the IP address 2001:DB8::1111 : 33.4.6. Deleting Records from DNS Zones Deleting Records in the Web UI To delete only a specific record type from the resource record: Open the DNS zone page, as described in the section called "Editing the Zone Configuration in the Web UI" . In the DNS Resource Records section, click the name of the resource record. Figure 33.11. Selecting a DNS Resource Record Select the check box by the name of the record type to delete. Figure 33.12. Deleting a DNS Resource Record After this, only the selected record type is deleted; the other configuration is left intact. To delete all records for the resource in the zone: Open the DNS zone page, as described in the section called "Editing the Zone Configuration in the Web UI" . In the DNS Resource Records section, select the check box by the name of the resource record to delete, and then click Delete at the top of the list of zone records. Figure 33.13. Deleting an Entire Resource Record After this, the entire resource record is deleted. Deleting Records from the Command Line To remove records from a zone, use the ipa dnsrecord-del command and add the -- recordType -rec option together with the record value. For example, to remove the A type record: If you run ipa dnsrecord-del without any options, the command prompts for information about the record to delete. Note that passing the --del-all option with the command removes all associated records for the zone. For detailed information on how to use ipa dnsrecord-del and a complete list of options accepted by the command, run the ipa dnsrecord-del --help command. 33.4.7. Disabling and Enabling Zones IdM allows the administrator to disable and enable DNS zones. While deleting a DNS zone, described in the section called "Removing Master DNS Zones" , completely removes the zone entry and all the associated configuration, disabling the zone removes it from activity without permanently removing the zone from IdM. A disabled zone can also be enabled again. Disabling and Enabling Zones in the Web UI To manage DNS zones from the Web UI, open the Network Services tab, and select the DNS subtab, followed by the DNS Zones section. Figure 33.14. 
Managing DNS Zones To disable a zone, select the check box to the zone name and click Disable . Figure 33.15. Disabling a DNS Zone Similarly, to enable a disabled zone, select the check box to the zone name and click Enable . Disabling and Enabling DNS Zones from the Command Line To disable a DNS zone from the command line, use the ipa dnszone-disable command. For example: To re-enable a disabled zone, use the ipa dnszone-enable command.
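As a hedged illustration of the TTL note above, the following changes the default TTL of a zone and then restarts named-pkcs11 so that the change takes effect; example.com is a placeholder zone name:

```
# Set the default TTL (in seconds) for the example.com zone ...
ipa dnszone-mod example.com --default-ttl=3600

# ... then restart the DNS service on every IdM DNS server, as required
# for TTL changes to take effect.
systemctl restart named-pkcs11
```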
[ "ipa dnszone-add newserver.example.com", "ipa dnszone-del server.example.com", "[user@server ~]USD ipa dnszone-mod --allow-transfer=\"192.0.2.1;198.51.100.1;203.0.113.1\" example.com", "dig @ipa-server zone_name AXFR", "host -t MX mail.example.com. mail.example.com mail is handled by 10 server.example.com. host -t MX demo.example.com. demo.example.com. has no MX record. host -t A mail.example.com. mail.example.com has no A record host -t A demo.example.com. random.example.com has address 192.168.1.1", "ipa dnsrecord-add zone_name record_name -- record_type_option=data", "ipa dnsrecord-add example.com www --a-rec 192.0.2.123", "ipa dnsrecord-add example.com \"*\" --a-rec 192.0.2.123", "ipa dnsrecord-mod example.com www --a-rec 192.0.2.123 --a-ip-address 192.0.2.1", "ipa dnsrecord-add example.com www --aaaa-rec 2001:db8::1231:5675", "ipa dnsrecord-add server.example.com _ldap._tcp --srv-rec=\"0 51 389 server1.example.com.\" ipa dnsrecord-add server.example.com _ldap._tcp --srv-rec=\"1 49 389 server2.example.com.\"", "ipa dnsrecord-add reverseNetworkIpAddress hostIpAddress --ptr-rec FQDN", "ipa dnsrecord-add 2.0.192.in-addr.arpa 4 --ptr-rec server4.example.com.", "ipa dnsrecord-add 0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa. 1.1.1.0.0.0.0.0.0.0.0.0.0.0.0 --ptr-rec server2.example.com.", "ipa dnsrecord-del example.com www --a-rec 192.0.2.1", "[user@server ~]USD ipa dnszone-disable zone.example.com ----------------------------------------- Disabled DNS zone \"example.com\" -----------------------------------------" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/managing-master-dns-zones
5.9. Configuring Fencing Levels
5.9. Configuring Fencing Levels Pacemaker supports fencing nodes with multiple devices through a feature called fencing topologies. To implement topologies, create the individual devices as you normally would and then define one or more fencing levels in the fencing topology section in the configuration. Each level is attempted in ascending numeric order, starting at 1. If a device fails, processing terminates for the current level. No further devices in that level are exercised and the next level is attempted instead. If all devices are successfully fenced, then that level has succeeded and no other levels are tried. The operation is finished when a level has passed (success), or all levels have been attempted (failed). Use the following command to add a fencing level to a node. The devices are given as a comma-separated list of stonith ids, which are attempted for the node at that level. The following command lists all of the fencing levels that are currently configured. In the following example, there are two fence devices configured for node rh7-2: an ilo fence device called my_ilo and an apc fence device called my_apc. These commands set up fence levels so that if the device my_ilo fails and is unable to fence the node, then Pacemaker will attempt to use the device my_apc. This example also shows the output of the pcs stonith level command after the levels are configured. The following command removes the fence level for the specified node and devices. If no nodes or devices are specified then the fence level you specify is removed from all nodes. The following command clears the fence levels on the specified node or stonith id. If you do not specify a node or stonith id, all fence levels are cleared. If you specify more than one stonith id, they must be separated by a comma and no spaces, as in the following example. The following command verifies that all fence devices and nodes specified in fence levels exist. As of Red Hat Enterprise Linux 7.4, you can specify nodes in fencing topology by a regular expression applied on a node name and by a node attribute and its value. For example, the following commands configure nodes node1, node2, and node3 to use fence devices apc1 and apc2, and nodes node4, node5, and node6 to use fence devices apc3 and apc4. The following commands yield the same results by using node attribute matching.
[ "pcs stonith level add level node devices", "pcs stonith level", "pcs stonith level add 1 rh7-2 my_ilo pcs stonith level add 2 rh7-2 my_apc pcs stonith level Node: rh7-2 Level 1 - my_ilo Level 2 - my_apc", "pcs stonith level remove level [ node_id ] [ stonith_id ] ... [ stonith_id ]", "pcs stonith level clear [ node | stonith_id (s)]", "pcs stonith level clear dev_a,dev_b", "pcs stonith level verify", "pcs stonith level add 1 \"regexp%node[1-3]\" apc1,apc2 pcs stonith level add 1 \"regexp%node[4-6]\" apc3,apc4", "pcs node attribute node1 rack=1 pcs node attribute node2 rack=1 pcs node attribute node3 rack=1 pcs node attribute node4 rack=2 pcs node attribute node5 rack=2 pcs node attribute node6 rack=2 pcs stonith level add 1 attrib%rack=1 apc1,apc2 pcs stonith level add 1 attrib%rack=2 apc3,apc4" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-fencelevels-haar
A.8. Live Migration Errors
A.8. Live Migration Errors There may be cases where a guest changes memory too fast, and the live migration process has to transfer it over and over again, and fails to finish (converge). The current live-migration implementation has a default migration time configured to 30 ms. This value determines the guest pause time at the end of the migration in order to transfer the leftovers. Higher values increase the odds that live migration will converge.
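One way to raise that pause-time limit for a particular guest is the virsh migrate-setmaxdowntime command; a hedged example in which the domain name and the 100 ms value are placeholders:

```
# Allow up to 100 ms of guest downtime at the end of live migration,
# increasing the chance that the migration converges.
virsh migrate-setmaxdowntime guest1 100
```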
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-troubleshooting-live_migration_errors
Chapter 1. OpenShift Container Platform CLI tools overview
Chapter 1. OpenShift Container Platform CLI tools overview A user performs a range of operations while working on OpenShift Container Platform such as the following: Managing clusters Building, deploying, and managing applications Managing deployment processes Developing Operators Creating and maintaining Operator catalogs OpenShift Container Platform offers a set of command-line interface (CLI) tools that simplify these tasks by enabling users to perform various administration and development operations from the terminal. These tools expose simple commands to manage the applications, as well as interact with each component of the system. 1.1. List of CLI tools The following set of CLI tools are available in OpenShift Container Platform: OpenShift CLI (oc) : This is the most commonly used CLI tool by OpenShift Container Platform users. It helps both cluster administrators and developers to perform end-to-end operations across OpenShift Container Platform using the terminal. Unlike the web console, it allows the user to work directly with the project source code using command scripts. Knative CLI (kn) : The Knative ( kn ) CLI tool provides simple and intuitive terminal commands that can be used to interact with OpenShift Serverless components, such as Knative Serving and Eventing. Pipelines CLI (tkn) : OpenShift Pipelines is a continuous integration and continuous delivery (CI/CD) solution in OpenShift Container Platform, which internally uses Tekton. The tkn CLI tool provides simple and intuitive commands to interact with OpenShift Pipelines using the terminal. opm CLI : The opm CLI tool helps the Operator developers and cluster administrators to create and maintain the catalogs of Operators from the terminal. Operator SDK : The Operator SDK, a component of the Operator Framework, provides a CLI tool that Operator developers can use to build, test, and deploy an Operator from the terminal. It simplifies the process of building Kubernetes-native applications, which can require deep, application-specific operational knowledge.
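If you want to confirm which of these CLIs are available on your workstation, each one reports its version; a small sketch assuming the binaries are already installed on your PATH:

```
oc version --client     # OpenShift CLI
kn version              # Knative CLI
tkn version             # Pipelines CLI
opm version             # opm CLI for Operator catalogs
operator-sdk version    # Operator SDK
```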
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/cli_tools/cli-tools-overview
Chapter 1. Security Architecture
Chapter 1. Security Architecture Abstract In the OSGi container, it is possible to deploy applications supporting a variety of security features. Currently, only the Java Authentication and Authorization Service (JAAS) is based on a common, container-wide infrastructure. Other security features are provided separately by the individual products and components deployed in the container. 1.1. OSGi Container Security Overview Figure 1.1, "OSGi Container Security Architecture" shows an overview of the security infrastructure that is used across the container and is accessible to all bundles deployed in the container. This common security infrastructure currently consists of a mechanism for making JAAS realms (or login modules) available to all application bundles. Figure 1.1. OSGi Container Security Architecture JAAS realms A JAAS realm or login module is a plug-in module that provides authentication and authorization data to Java applications, as defined by the Java Authentication and Authorization Service (JAAS) specification. Red Hat Fuse supports a special mechanism for defining JAAS login modules (in either a Spring or a blueprint file), which makes the login module accessible to all bundles in the container. This makes it easy for multiple applications running in the OSGi container to consolidate their security data into a single JAAS realm. karaf realm The OSGi container has a predefined JAAS realm, the karaf realm. Red Hat Fuse uses the karaf realm to provide authentication for remote administration of the OSGi runtime, for the Fuse Management Console, and for JMX management. The karaf realm uses a simple file-based repository, where authentication data is stored in the InstallDir /etc/users.properties file. You can use the karaf realm in your own applications. Simply configure karaf as the name of the JAAS realm that you want to use. Your application then performs authentication using the data from the users.properties file. Console port You can administer the OSGi container remotely either by connecting to the console port with a Karaf client or using the Karaf ssh:ssh command. The console port is secured by a JAAS login feature that connects to the karaf realm. Users that try to connect to the console port will be prompted to enter a username and password that must match one of the accounts from the karaf realm. JMX port You can manage the OSGi container by connecting to the JMX port (for example, using Java's JConsole). The JMX port is also secured by a JAAS login feature that connects to the karaf realm. Application bundles and JAAS security Any application bundles that you deploy into the OSGi container can access the container's JAAS realms. The application bundle simply references one of the existing JAAS realms by name (which corresponds to an instance of a JAAS login module). It is essential, however, that the JAAS realms are defined using the OSGi container's own login configuration mechanism-by default, Java provides a simple file-based login configuration implementation, but you cannot use this implementation in the context of the OSGi container. 1.2. Apache Camel Security Overview Figure 1.2, "Apache Camel Security Architecture" shows an overview of the basic options for securely routing messages between Apache Camel endpoints. Figure 1.2. 
Apache Camel Security Architecture Alternatives for Apache Camel security As shown in Figure 1.2, "Apache Camel Security Architecture" , you have the following options for securing messages: Endpoint security -part (a) shows a message sent between two routes with secure endpoints. The producer endpoint on the left opens a secure connection (typically using SSL/TLS) to the consumer endpoint on the right. Both of the endpoints support security in this scenario. With endpoint security, it is typically possible to perform some form of peer authentication (and sometimes authorization). Payload security -part (b) shows a message sent between two routes where the endpoints are both insecure . To protect the message from unauthorized snooping in this case, use a payload processor that encrypts the message before sending and decrypts the message after it is received. A limitation of payload security is that it does not provide any kind of authentication or authorization mechanisms. Endpoint security There are several Camel components that support security features. It is important to note, however, that these security features are implemented by the individual components, not by the Camel core. Hence, the kinds of security feature that are supported, and the details of their implementation, vary from component to component. Some of the Camel components that currently support security are, as follows: JMS and ActiveMQ-SSL/TLS security and JAAS security for client-to-broker and broker-to-broker communication. Jetty-HTTP Basic Authentication and SSL/TLS security. CXF-SSL/TLS security and WS-Security. Crypto-creates and verifies digital signatures in order to guarantee message integrity. Netty-SSL/TLS security. MINA-SSL/TLS security. Cometd-SSL/TLS security. glogin and gauth-authorization in the context of Google applications. Payload security Apache Camel provides the following payload security implementations, where the encryption and decryption steps are exposed as data formats on the marshal() and unmarshal() operations the section called "XMLSecurity data format" . the section called "Crypto data format" . XMLSecurity data format The XMLSecurity data format is specifically designed to encrypt XML payloads. When using this data format, you can specify which XML element to encrypt. The default behavior is to encrypt all XML elements. This feature uses a symmetric encryption algorithm. For more details, see http://camel.apache.org/xmlsecurity-dataformat.html . Crypto data format The crypto data format is a general purpose encryption feature that can encrypt any kind of payload. It is based on the Java Cryptographic Extension and implements only symmetric (shared-key) encryption and decryption. For more details, see http://camel.apache.org/crypto.html .
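A small illustrative sketch of the karaf realm pieces described in section 1.1; the SSH port (8101) and the account shown are common Karaf defaults used here only as placeholders, not values taken from this guide:

```
# Connect to the remote console port; the login is authenticated against
# the karaf JAAS realm, which reads InstallDir/etc/users.properties.
ssh -p 8101 admin@localhost

# users.properties entries take the form:  user = password, role1, role2
# for example (illustrative only):
#   admin = secret, admin
```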
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_security_guide/Arch-Architecture
Chapter 2. Architectures
Chapter 2. Architectures Red Hat Enterprise Linux 7.5 is distributed with the kernel version 3.10.0-862, which provides support for the following architectures: [1] 64-bit AMD 64-bit Intel IBM POWER7+ and POWER8 (big endian) [2] IBM POWER8 (little endian) [3] IBM Z [4] Support for Architectures in the kernel-alt Packages Red Hat Enterprise Linux 7.5 is distributed with the kernel-alt packages, which include kernel version 4.14. This kernel version provides support for the following architectures: 64-bit ARM IBM POWER9 (little endian) [5] IBM Z The following table provides an overview of architectures supported by the two kernel versions available in Red Hat Enterprise Linux 7.5:

Table 2.1. Architectures Supported in Red Hat Enterprise Linux 7.5
  Architecture                 Kernel version 3.10   Kernel version 4.14
  64-bit AMD and Intel         yes                   no
  64-bit ARM                   no                    yes
  IBM POWER7 (big endian)      yes                   no
  IBM POWER8 (big endian)      yes                   no
  IBM POWER8 (little endian)   yes                   no
  IBM POWER9 (little endian)   no                    yes
  IBM z System                 yes [a]               yes (Structure A) [a]

[a] The 3.10 kernel version does not support KVM virtualization and containers on IBM Z. Both of these features are supported on the 4.14 kernel on IBM Z - this offering is also referred to as Structure A.

For more information, see Chapter 19, Red Hat Enterprise Linux 7.5 for ARM and Chapter 20, Red Hat Enterprise Linux 7.5 for IBM Power LE (POWER9). [1] Note that the Red Hat Enterprise Linux 7.5 installation is supported only on 64-bit hardware. Red Hat Enterprise Linux 7.5 is able to run 32-bit operating systems, including versions of Red Hat Enterprise Linux, as virtual machines. [2] Red Hat Enterprise Linux 7.5 POWER8 (big endian) are currently supported as KVM guests on Red Hat Enterprise Linux 7.5 POWER8 systems that run the KVM hypervisor, and on PowerVM. [3] Red Hat Enterprise Linux 7.5 POWER8 (little endian) is currently supported as a KVM guest on Red Hat Enterprise Linux 7.5 POWER8 systems that run the KVM hypervisor, and on PowerVM. In addition, Red Hat Enterprise Linux 7.5 POWER8 (little endian) guests are supported on Red Hat Enterprise Linux 7.5 POWER9 systems that run the KVM hypervisor in POWER8-compatibility mode on version 4.14 kernel using the kernel-alt package. [4] Red Hat Enterprise Linux 7.5 for IBM Z (both the 3.10 kernel version and the 4.14 kernel version) is currently supported as a KVM guest on Red Hat Enterprise Linux 7.5 for IBM Z hosts that run the KVM hypervisor on version 4.14 kernel using the kernel-alt package. [5] Red Hat Enterprise Linux 7.5 POWER9 (little endian) is currently supported as a KVM guest on Red Hat Enterprise Linux 7.5 POWER9 systems that run the KVM hypervisor on version 4.14 kernel using the kernel-alt package, and on PowerVM.
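To check which of the two kernel lines a given system is running, and on which architecture, a trivial check:

```
uname -m   # hardware architecture, e.g. x86_64, aarch64, ppc64, ppc64le, s390x
uname -r   # kernel version: 3.10.0-862.* for the default kernel, 4.14.* for kernel-alt
```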
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/chap-red_hat_enterprise_linux-7.5_release_notes-architectures
Post-installation configuration
Post-installation configuration OpenShift Container Platform 4.11 Day 2 operations for OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/post-installation_configuration/index
Chapter 13. Deploying using a JMS broker
Chapter 13. Deploying using a JMS broker Abstract Fuse 7.13 does not ship with a default internal broker, but it is designed to interface with four external JMS brokers. Fuse 7.13 containers contain broker client libraries for the supported external brokers. See Supported Configurations for more information about the external brokers, client and Camel component combinations that are available for messaging on Fuse 7.13. 13.1. AMQ 7 quickstart A quickstart is provided to demonstrate the set up and deployment of apps using the AMQ 7 broker. Download the quickstart You can install all of the quickstarts from the Fuse Software Downloads page. Extract the contents of the downloaded zip file to a local folder, for example, a folder named quickstarts . Setup the quickstart Navigate to the quickstarts/camel/camel-jms folder. Enter mvn clean install to build the quickstart. Copy the file org.ops4j.connectionfactory-amq7.cfg from the /camel/camel-jms/src/main directory to the FUSE_HOME/etc directory in your Fuse installation. Verify its contents for the correct broker URL and credentials. By default, the broker URL is set to tcp://localhost:61616 following AMQ 7's CORE protocol. Credentials are set to admin/admin. Change these details to suit your external broker. Start Fuse by running ./bin/fuse on Linux or bin\fuse.bat on Windows. In the Fuse console, enter the following commands: Fuse will give you a bundle ID when the bundle is deployed. Enter log:display to see the start up log information. Check to make sure the bundle was deployed successfully. Run the quickstart When the Camel routes run, the /camel/camel-jms/work/jms/input directory will be created. Copy the files from the /camel/camel-jms/src/main/data directory to the /camel/camel-jms/work/jms/input directory. The files copied into the ... /src/main/data file are order files. Wait for a minute and then check the /camel/camel-jms/work/jms/output directory. The files will be sorted into separate directories according to their country of destination: order1.xml , order2.xml and order4.xml in /camel/camel-jms/work/jms/output/others/ order3.xml and order5.xml in /camel/camel-jms/work/jms/output/us order6.xml in /camel/camel-jms/work/jms/output/fr Use log:display to see the log messages: Camel commands will show details about the context: Use camel:context-list to show the context details: Use camel:route-list to display the Camel routes in the context: Use camel:route-info to display the exchange statistics: 13.2. Using the Artemis core client The Artemis core client can be used to connect to an external broker instead of qpid-jms-client . Connect using the Artemis core client To enable the Artemis core client, start Fuse. Navigate to the FUSE_HOME directory and enter ./bin/fuse on Linux or bin\fuse.bat on Windows. Add the Artemis client as a feature using the following command: feature:install artemis-core-client When you are writing your code you need to connect the Camel component with the connection factory. Import the connection factory: import org.apache.qpid.jms.JmsConnectionFactory; Set up the connection: ConnectionFactory connectionFactory = new JmsConnectionFactory("amqp://localhost:5672"); try (Connection connection = connectionFactory.createConnection()) {
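The setup steps above, collected into one hedged shell sketch; the quickstarts directory matches the extraction folder suggested earlier, and FUSE_HOME stands in for your Fuse installation directory:

```
# Build the quickstart, copy its connection-factory configuration into the
# Fuse installation, and then start Fuse.
cd quickstarts/camel/camel-jms
mvn clean install
cp src/main/org.ops4j.connectionfactory-amq7.cfg "$FUSE_HOME/etc/"
cd "$FUSE_HOME"
./bin/fuse        # bin\fuse.bat on Windows
```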
[ "feature:install pax-jms-pool artemis-jms-client camel-blueprint camel-jms install -s mvn:org.jboss.fuse.quickstarts/camel-jms/USD{project.version}", "12:13:50.445 INFO [Blueprint Event Dispatcher: 1] Attempting to start Camel Context jms-example-context 12:13:50.446 INFO [Blueprint Event Dispatcher: 1] Apache Camel 2.21.0.fuse-000030 (CamelContext: jms-example-context) is starting 12:13:50.446 INFO [Blueprint Event Dispatcher: 1] JMX is enabled 12:13:50.528 INFO [Blueprint Event Dispatcher: 1] StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html 12:13:50.553 INFO [Blueprint Event Dispatcher: 1] Route: file-to-jms-route started and consuming from: file://work/jms/input 12:13:50.555 INFO [Blueprint Event Dispatcher: 1] Route: jms-cbr-route started and consuming from: jms://queue:incomingOrders?transacted=true 12:13:50.556 INFO [Blueprint Event Dispatcher: 1] Total 2 routes, of which 2 are started", "Receiving order order1.xml Sending order order1.xml to another country Done processing order1.xml", "Context Status Total # Failed # Inflight # Uptime ------- ------ ------- -------- ---------- ------ jms-example-context Started 12 0 0 3 minutes", "Context Route Status Total # Failed # Inflight # Uptime ------- ----- ------ ------- -------- ---------- ------ jms-example-context file-to-jms-route Started 6 0 0 3 minutes jms-example-context jms-cbr-route Started 6 0 0 3 minutes", "karaf@root()> camel:route-info jms-cbr-route jms-example-context Camel Route jms-cbr-route Camel Context: jms-example-context State: Started State: Started Statistics Exchanges Total: 6 Exchanges Completed: 6 Exchanges Failed: 0 Exchanges Inflight: 0 Min Processing Time: 2 ms Max Processing Time: 12 ms Mean Processing Time: 4 ms Total Processing Time: 29 ms Last Processing Time: 4 ms Delta Processing Time: 1 ms Start Statistics Date: 2018-01-30 12:13:50 Reset Statistics Date: 2018-01-30 12:13:50 First Exchange Date: 2018-01-30 12:19:47 Last Exchange Date: 2018-01-30 12:19:47", "import org.apache.qpid.jms.JmsConnectionFactory;", "ConnectionFactory connectionFactory = new JmsConnectionFactory(\"amqp://localhost:5672\"); try (Connection connection = connectionFactory.createConnection()) {" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_apache_karaf/deployjmsbroker
Chapter 4. Installing RHACS on Red Hat OpenShift
Chapter 4. Installing RHACS on Red Hat OpenShift 4.1. Installing Central services for RHACS on Red Hat OpenShift Central is the resource that contains the RHACS application management interface and services. It handles data persistence, API interactions, and RHACS portal access. You can use the same Central instance to secure multiple OpenShift Container Platform or Kubernetes clusters. You can install Central on your OpenShift Container Platform or Kubernetes cluster by using one of the following methods: Install using the Operator Install using Helm charts Install using the roxctl CLI (do not use this method unless you have a specific installation need that requires using it) 4.1.1. Install Central using the Operator 4.1.1.1. Installing the Red Hat Advanced Cluster Security for Kubernetes Operator Using the OperatorHub provided with OpenShift Container Platform is the easiest way to install Red Hat Advanced Cluster Security for Kubernetes. Prerequisites You have access to an OpenShift Container Platform cluster using an account with Operator installation permissions. You must be using OpenShift Container Platform 4.12 or later. For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix . For life cycle support information for RHACS, see the Red Hat Advanced Cluster Security for Kubernetes Support Policy . Procedure In the web console, go to the Operators OperatorHub page. If Red Hat Advanced Cluster Security for Kubernetes is not displayed, enter Advanced Cluster Security into the Filter by keyword box to find the Red Hat Advanced Cluster Security for Kubernetes Operator. Select the Red Hat Advanced Cluster Security for Kubernetes Operator to view the details page. Read the information about the Operator, and then click Install . On the Install Operator page: Keep the default value for Installation mode as All namespaces on the cluster . Choose a specific namespace in which to install the Operator for the Installed namespace field. Install the Red Hat Advanced Cluster Security for Kubernetes Operator in the rhacs-operator namespace. Select automatic or manual updates for Update approval . If you choose automatic updates, when a new version of the Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator. If you choose manual updates, when a newer version of the Operator is available, OLM creates an update request. As a cluster administrator, you must manually approve the update request to update the Operator to the latest version. Important If you choose manual updates, you must update the RHACS Operator in all secured clusters when you update the RHACS Operator in the cluster where Central is installed. The secured clusters and the cluster where Central is installed must have the same version to ensure optimal functionality. Click Install . Verification After the installation completes, go to Operators Installed Operators to verify that the Red Hat Advanced Cluster Security for Kubernetes Operator is listed with the status of Succeeded . Step You installed the Operator into the rhacs-operator project. Using that Operator, install, configure, and deploy the Central custom resource into the stackrox project. 4.1.1.2. Installing Central using the Operator method The main component of Red Hat Advanced Cluster Security for Kubernetes is called Central. You can install Central on OpenShift Container Platform by using the Central custom resource. 
You deploy Central only once, and you can monitor multiple separate clusters by using the same Central installation. Important When you install Red Hat Advanced Cluster Security for Kubernetes for the first time, you must first install the Central custom resource because the SecuredCluster custom resource installation is dependent on certificates that Central generates. Red Hat recommends installing the Red Hat Advanced Cluster Security for Kubernetes Central custom resource in a dedicated project. Do not install it in the project where you have installed the Red Hat Advanced Cluster Security for Kubernetes Operator. Additionally, do not install it in any projects with names that begin with kube , openshift , or redhat , and in the istio-system project. Prerequisites You must be using OpenShift Container Platform 4.12 or later. For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix . For life cycle support information for RHACS, see the Red Hat Advanced Cluster Security for Kubernetes Support Policy . Procedure On the OpenShift Container Platform web console, go to the Operators Installed Operators page. Select the Red Hat Advanced Cluster Security for Kubernetes Operator from the list of installed Operators. If you have installed the Operator in the recommended namespace, OpenShift Container Platform lists the project as rhacs-operator . Select Project: rhacs-operator Create project . Note If you installed the Operator in a different namespace, OpenShift Container Platform lists the name of that namespace instead of rhacs-operator . Enter the new project name (for example, stackrox ), and click Create . Red Hat recommends that you use stackrox as the project name. Under the Provided APIs section, select Central . Click Create Central . Optional: If you are using declarative configuration, to Configure via: , click YAML view and add the information for the declarative configuration, such as shown in the following example: ... spec: central: declarativeConfiguration: configMaps: - name: "<declarative-configs>" 1 secrets: - name: "<sensitive-declarative-configs>" 2 ... 1 Replace <declarative-configs> with the name of the config maps that you are using. 2 Replace <sensitive-declarative-configs> with the name of the secrets that you are using. Enter a name for your Central custom resource and add any labels you want to apply. Otherwise, accept the default values for the available options. You can configure available options for Central: Central component settings: Setting Description Administrator password Secret that contains the administrator password. Use this field if you do not want RHACS to generate a password for you. Exposure Settings for exposing Central by using a route, load balancer, or node port. See the central.exposure.<parameter> information in the "Public configuration file" section in "Installing Central services for RHACS on Red Hat OpenShift". User-facing TLS certificate secret Use this field if you want to terminate TLS in Central and serve a custom server certificate. Monitoring Configures the monitoring endpoint for Central. See the central.exposeMonitoring parameter in the "Public configuration file" section in "Installing Central services for RHACS on Red Hat OpenShift". Central DB Settings Settings for Central DB, including data persistence. See the central.db.<parameter> information in the "Public configuration file" section in "Installing Central services for RHACS on Red Hat OpenShift". 
Resources Use these fields after consulting the documentation if you need to override the default settings for memory and CPU resources. For more information, see the "Default resource requirements for RHACS" and "Recommended resource requirements for RHACS" sections in the "Installation" chapter. Tolerations Use this parameter to configure Central to run only on specific nodes. See the central.tolerations parameter in the "Public configuration file" section in "Installing Central services for RHACS on Red Hat OpenShift". Host Aliases Use this parameter to configure additional hostnames to resolve in the pod's hosts file. Scanner Component Settings : Settings for the default scanner, also called the StackRox Scanner. See the "Scanner" table in the "Public configuration file" section in "Installing Central services for RHACS on Red Hat OpenShift". Scanner V4 Component Settings : Settings for the optional Scanner V4 scanner, available in version 4.4 and later. It is not currently enabled by default. You can enable both the StackRox Scanner and Scanner V4 for concurrent use. See the "Scanner V4" table in the "Public configuration file" section in "Installing Central services for RHACS on Red Hat OpenShift". When Scanner V4 is enabled, you can configure the following options: Setting Description Indexer The process that indexes images and creates a report of findings. You can configure replicas and autoscaling, resources, and tolerations. Before changing the default resource values, see the "Scanner V4" sections in the "Default resource requirements for RHACS" and "Recommended resource requirements for RHACS" sections in the "Installation" chapter. Matcher The process that performs vulnerability matching of the report from the indexer against vulnerability data stored in Scanner V4 DB. You can configure replicas and autoscaling, resources, and tolerations. Before changing the default resource values, see the "Scanner V4" sections in the "Default resource requirements for RHACS" and "Recommended resource requirements for RHACS" sections in the "Installation" chapter. DB The database that stores information for Scanner V4, including vulnerability data and index reports. You can configure persistence, resources, and tolerations. If you are using Scanner V4, a persistent volume claim (PVC) is required on Central clusters. A PVC is strongly recommended on secured clusters for best results. Before changing the default resource values, see the "Scanner V4" sections in the "Default resource requirements for RHACS" and "Recommended resource requirements for RHACS" sections in the "Installation" chapter. Egress : Settings for outgoing network traffic, including whether RHACS should run in online (connected) or offline (disconnected) mode. TLS : Use this field to add additional trusted root certificate authorities (CAs). network : To provide security at the network level, RHACS creates default NetworkPolicy resources in the namespace where Central is installed. To create and manage your own network policies, in the policies section, select Disabled . By default, this option is Enabled . Warning Disabling creation of default network policies can break communication between RHACS components. If you disable creation of default policies, you must create your own network policies to allow this communication. 
Advanced configuration : You can use these fields to perform the following actions: Specify additional image pull secrets Add custom environment variables to set for managed pods' containers Enable Red Hat OpenShift monitoring Click Create . Note If you are using the cluster-wide proxy, Red Hat Advanced Cluster Security for Kubernetes uses that proxy configuration to connect to the external services. Steps Verify Central installation. Optional: Configure Central options. Generate an init bundle containing the cluster secrets that allows communication between the Central and SecuredCluster resources. You need to download this bundle, use it to generate resources on the clusters you want to secure, and securely store it. Install secured cluster services on each cluster you want to monitor. Additional resources Default resource requirements for Red Hat Advanced Cluster Security for Kubernetes Recommended resource requirements for Red Hat Advanced Cluster Security for Kubernetes Public configuration file 4.1.1.3. Provisioning a database in your PostgreSQL instance This step is optional. You can use your existing PostgreSQL infrastructure to provision a database for RHACS. Use the instructions in this section for configuring a PostgreSQL database environment, creating a user, database, schema, role, and granting required permissions. Procedure Create a new user: CREATE USER stackrox WITH PASSWORD <password>; Create a database: CREATE DATABASE stackrox; Connect to the database: \connect stackrox Create user schema: CREATE SCHEMA stackrox; (Optional) Revoke rights on public: REVOKE CREATE ON SCHEMA public FROM PUBLIC; REVOKE USAGE ON SCHEMA public FROM PUBLIC; REVOKE ALL ON DATABASE stackrox FROM PUBLIC; Create a role: CREATE ROLE readwrite; Grant connection permission to the role: GRANT CONNECT ON DATABASE stackrox TO readwrite; Add required permissions to the readwrite role: GRANT USAGE ON SCHEMA stackrox TO readwrite; GRANT USAGE, CREATE ON SCHEMA stackrox TO readwrite; GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA stackrox TO readwrite; ALTER DEFAULT PRIVILEGES IN SCHEMA stackrox GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO readwrite; GRANT USAGE ON ALL SEQUENCES IN SCHEMA stackrox TO readwrite; ALTER DEFAULT PRIVILEGES IN SCHEMA stackrox GRANT USAGE ON SEQUENCES TO readwrite; Assign the readwrite role to the stackrox user: GRANT readwrite TO stackrox; 4.1.1.4. Installing Central with an external database using the Operator method The main component of Red Hat Advanced Cluster Security for Kubernetes is called Central. You can install Central on OpenShift Container Platform by using the Central custom resource. You deploy Central only once, and you can monitor multiple separate clusters by using the same Central installation. Important When you install Red Hat Advanced Cluster Security for Kubernetes for the first time, you must first install the Central custom resource because the SecuredCluster custom resource installation is dependent on certificates that Central generates. For more information about RHACS databases, see the Database Scope of Coverage . Prerequisites You must be using OpenShift Container Platform 4.12 or later. For more information about supported OpenShift Container Platform versions, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix . You must have a database in your database instance that supports PostgreSQL 13 and a user with the following permissions: Connection rights to the database. Usage and Create on the schema. 
Select , Insert , Update , and Delete on all tables in the schema. Usage on all sequences in the schema. Procedure On the OpenShift Container Platform web console, go to the Operators Installed Operators page. Select the Red Hat Advanced Cluster Security for Kubernetes Operator from the list of installed Operators. If you have installed the Operator in the recommended namespace, OpenShift Container Platform lists the project as rhacs-operator . Select Project: rhacs-operator Create project . Warning If you have installed the Operator in a different namespace, OpenShift Container Platform shows the name of that namespace rather than rhacs-operator . Red Hat recommends installing the Red Hat Advanced Cluster Security for Kubernetes Central custom resource in a dedicated project. Do not install it in the project where you have installed the Red Hat Advanced Cluster Security for Kubernetes Operator. Additionally, do not install it in any projects with names that begin with kube , openshift , or redhat , and in the istio-system project. Enter the new project name (for example, stackrox ), and click Create . Red Hat recommends that you use stackrox as the project name. Create a password secret in the deployed namespace by using the OpenShift Container Platform web console or the terminal. On the OpenShift Container Platform web console, go to the Workloads Secrets page. Create a Key/Value secret with the key password and the value as the path of a plain text file containing the password for the superuser of the provisioned database. Or, run the following command in your terminal: USD oc create secret generic external-db-password \ 1 --from-file=password=<password.txt> 2 1 If you use Kubernetes, enter kubectl instead of oc . 2 Replace password.txt with the path of the file which has the plain text password. Return to the Red Hat Advanced Cluster Security for Kubernetes operator page in the OpenShift Container Platform web console. Under the Provided APIs section, select Central . Click Create Central . Optional: If you are using declarative configuration, to Configure via: , click YAML view . Add the information for the declarative configuration, such as shown in the following example: ... spec: central: declarativeConfiguration: configMaps: - name: <declarative-configs> 1 secrets: - name: <sensitive-declarative-configs> 2 ... 1 Replace <declarative-configs> with the name of the config maps that you are using. 2 Replace <sensitive-declarative-configs> with the name of the secrets that you are using. Enter a name for your Central custom resource and add any labels you want to apply. Go to Central Component Settings Central DB Settings . For Administrator Password specify the referenced secret as external-db-password (or the secret name of the password created previously). For Connection String specify the connection string in keyword=value format, for example, host=<host> port=5432 database=stackrox user=stackrox sslmode=verify-ca For Persistence PersistentVolumeClaim Claim Name , remove central-db . If necessary, you can specify a Certificate Authority so that there is trust between the database certificate and Central. To add this, go to the YAML view and add a TLS block under the top-level spec, as shown in the following example: spec: tls: additionalCAs: - name: db-ca content: | <certificate> Click Create . Note If you are using the cluster-wide proxy, Red Hat Advanced Cluster Security for Kubernetes uses that proxy configuration to connect to the external services. 
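For orientation, the settings described in the preceding Operator-based procedures can be collected in a single Central custom resource manifest. The following YAML is a minimal sketch, not a definitive manifest: the apiVersion and the db.connectionString and db.passwordSecret field names are assumptions inferred from the form fields and examples above, so verify them against the Central CRD installed in your cluster before applying anything. The placeholder values in angle brackets are the same placeholders used in the examples above.

apiVersion: platform.stackrox.io/v1alpha1   # assumed API group and version for the Central CRD
kind: Central
metadata:
  name: stackrox-central-services
  namespace: stackrox
spec:
  central:
    # Expose Central through an OpenShift Container Platform route
    exposure:
      route:
        enabled: true
    # Declarative configuration mounts, as in the earlier example
    declarativeConfiguration:
      configMaps:
        - name: "<declarative-configs>"
      secrets:
        - name: "<sensitive-declarative-configs>"
    # External database settings (assumed field names; the web console exposes these as the
    # Connection String and the referenced password secret under Central DB Settings)
    db:
      passwordSecret:
        name: external-db-password
      connectionString: "host=<host> port=5432 database=stackrox user=stackrox sslmode=verify-ca"
  # Additional trusted CA for the external database, as shown above
  tls:
    additionalCAs:
      - name: db-ca
        content: |
          <certificate>

If you let the Operator deploy Central DB instead of using an external database, omit the db block and the additional CA; the remaining fields follow the defaults described earlier.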
Steps Verify Central installation. Optional: Configure Central options. Generate an init bundle containing the cluster secrets that allows communication between the Central and SecuredCluster resources. You need to download this bundle, use it to generate resources on the clusters you want to secure, and securely store it. Install secured cluster services on each cluster you want to monitor. Additional resources Central configuration options PostgreSQL Connection String Docs 4.1.1.5. Verifying Central installation using the Operator method After Central finishes installing, log in to the RHACS portal to verify the successful installation of Central. Procedure On the OpenShift Container Platform web console, go to the Operators Installed Operators page. Select the Red Hat Advanced Cluster Security for Kubernetes Operator from the list of installed Operators. Select the Central tab. From the Centrals list, select stackrox-central-services to view its details. To get the password for the admin user, you can either: Click the link under Admin Password Secret Reference . Use the Red Hat OpenShift CLI to enter the command listed under Admin Credentials Info : USD oc -n stackrox get secret central-htpasswd -o go-template='{{index .data "password" | base64decode}}' Find the link to the RHACS portal by using the Red Hat OpenShift CLI command: USD oc -n stackrox get route central -o jsonpath="{.status.ingress[0].host}" Alternatively, you can use the Red Hat Advanced Cluster Security for Kubernetes web console to find the link to the RHACS portal by performing the following commands: Go to Networking Routes . Find the central Route and click on the RHACS portal link under the Location column. Log in to the RHACS portal using the username admin and the password that you retrieved in a step. Until RHACS is completely configured (for example, you have the Central resource and at least one SecuredCluster resource installed and configured), no data is available in the dashboard. The SecuredCluster resource can be installed and configured on the same cluster as the Central resource. Clusters with the SecuredCluster resource are similar to managed clusters in Red Hat Advanced Cluster Management (RHACM). Steps Optional: Configure central settings. Generate an init bundle containing the cluster secrets that allows communication between the Central and SecuredCluster resources. You need to download this bundle, use it to generate resources on the clusters you want to secure, and securely store it. Install secured cluster services on each cluster you want to monitor. 4.1.2. Install Central using Helm charts You can install Central using Helm charts without any customization, using the default values, or by using Helm charts with additional customizations of configuration parameters. 4.1.2.1. Install Central using Helm charts without customization You can install RHACS on your cluster without any customizations. You must add the Helm chart repository and install the central-services Helm chart to install the centralized components of Central and Scanner. 4.1.2.1.1. Adding the Helm chart repository Procedure Add the RHACS charts repository. USD helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/ The Helm repository for Red Hat Advanced Cluster Security for Kubernetes includes Helm charts for installing different components, including: Central services Helm chart ( central-services ) for installing the centralized components (Central and Scanner). 
Note You deploy centralized components only once and you can monitor multiple separate clusters by using the same installation. Secured Cluster Services Helm chart ( secured-cluster-services ) for installing the per-cluster and per-node components (Sensor, Admission Controller, Collector, and Scanner-slim). Note Deploy the per-cluster components into each cluster that you want to monitor and deploy the per-node components in all nodes that you want to monitor. Verification Run the following command to verify the added chart repository: USD helm search repo -l rhacs/ 4.1.2.1.2. Installing the central-services Helm chart without customizations Use the following instructions to install the central-services Helm chart to deploy the centralized components (Central and Scanner). Prerequisites You must have access to the Red Hat Container Registry. For information about downloading images from registry.redhat.io , see Red Hat Container Registry Authentication . Procedure Run the following command to install Central services and expose Central using a route: USD helm install -n stackrox \ --create-namespace stackrox-central-services rhacs/central-services \ --set imagePullSecrets.username=<username> \ 1 --set imagePullSecrets.password=<password> \ 2 --set central.exposure.route.enabled=true 1 Include the user name for your pull secret for Red Hat Container Registry authentication. 2 Include the password for your pull secret for Red Hat Container Registry authentication. Or, run the following command to install Central services and expose Central using a load balancer: USD helm install -n stackrox \ --create-namespace stackrox-central-services rhacs/central-services \ --set imagePullSecrets.username=<username> \ 1 --set imagePullSecrets.password=<password> \ 2 --set central.exposure.loadBalancer.enabled=true 1 Include the user name for your pull secret for Red Hat Container Registry authentication. 2 Include the password for your pull secret for Red Hat Container Registry authentication. Or, run the following command to install Central services and expose Central using port forward: USD helm install -n stackrox \ --create-namespace stackrox-central-services rhacs/central-services \ --set imagePullSecrets.username=<username> \ 1 --set imagePullSecrets.password=<password> 2 1 Include the user name for your pull secret for Red Hat Container Registry authentication. 2 Include the password for your pull secret for Red Hat Container Registry authentication. Important If you are installing Red Hat Advanced Cluster Security for Kubernetes in a cluster that requires a proxy to connect to external services, you must specify your proxy configuration by using the proxyConfig parameter. For example: env: proxyConfig: | url: http://proxy.name:port username: username password: password excludes: - some.domain If you already created one or more image pull secrets in the namespace in which you are installing, instead of using a username and password, you can use --set imagePullSecrets.useExisting="<pull-secret-1;pull-secret-2>" . Do not use image pull secrets: If you are pulling your images from quay.io/stackrox-io or a registry in a private network that does not require authentication. Use --set imagePullSecrets.allowNone=true instead of specifying a username and password. If you already configured image pull secrets in the default service account in the namespace in which you are installing. Use --set imagePullSecrets.useFromDefaultServiceAccount=true instead of specifying a username and password.
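If you would rather not repeat --set flags on every install or upgrade, the same options can be collected in a values file and passed with the -f option. The following sketch simply restates the route exposure, proxy, and pre-existing pull secret settings shown above in file form; the proxy URL, credentials, and pull secret names are the same placeholders used in the examples above.

central:
  exposure:
    route:
      enabled: true
env:
  # Proxy configuration from the Important note above
  proxyConfig: |
    url: http://proxy.name:port
    username: username
    password: password
    excludes:
    - some.domain
imagePullSecrets:
  # Reuse pull secrets that already exist in the target namespace
  useExisting: "<pull-secret-1;pull-secret-2>"

You could then run helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services -f <path_to_values_file> instead of passing each option individually.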
The output of the installation command includes: An automatically generated administrator password. Instructions on storing all the configuration values. Any warnings that Helm generates. 4.1.2.2. Install Central using Helm charts with customizations You can install RHACS on your Red Hat OpenShift cluster with customizations by using Helm chart configuration parameters with the helm install and helm upgrade commands. You can specify these parameters by using the --set option or by creating YAML configuration files. Create the following files for configuring the Helm chart for installing Red Hat Advanced Cluster Security for Kubernetes: Public configuration file values-public.yaml : Use this file to save all non-sensitive configuration options. Private configuration file values-private.yaml : Use this file to save all sensitive configuration options. Ensure that you store this file securely. Configuration file declarative-config-values.yaml : Create this file if you are using declarative configuration to add the declarative configuration mounts to Central. 4.1.2.2.1. Private configuration file This section lists the configurable parameters of the values-private.yaml file. There are no default values for these parameters. 4.1.2.2.1.1. Image pull secrets The credentials that are required for pulling images from the registry depend on the following factors: If you are using a custom registry, you must specify these parameters: imagePullSecrets.username imagePullSecrets.password image.registry If you do not use a username and password to log in to the custom registry, you must specify one of the following parameters: imagePullSecrets.allowNone imagePullSecrets.useExisting imagePullSecrets.useFromDefaultServiceAccount Parameter Description imagePullSecrets.username The username of the account that is used to log in to the registry. imagePullSecrets.password The password of the account that is used to log in to the registry. imagePullSecrets.allowNone Use true if you are using a custom registry and it allows pulling images without credentials. imagePullSecrets.useExisting A comma-separated list of secrets as values. For example, secret1, secret2, secretN . Use this option if you have already created pre-existing image pull secrets with the given name in the target namespace. imagePullSecrets.useFromDefaultServiceAccount Use true if you have already configured the default service account in the target namespace with sufficiently scoped image pull secrets. 4.1.2.2.1.2. Proxy configuration If you are installing Red Hat Advanced Cluster Security for Kubernetes in a cluster that requires a proxy to connect to external services, you must specify your proxy configuration by using the proxyConfig parameter. For example: env: proxyConfig: | url: http://proxy.name:port username: username password: password excludes: - some.domain Parameter Description env.proxyConfig Your proxy configuration. 4.1.2.2.1.3. Central Configurable parameters for Central. For a new installation, you can skip the following parameters: central.jwtSigner.key central.serviceTLS.cert central.serviceTLS.key central.adminPassword.value central.adminPassword.htpasswd central.db.serviceTLS.cert central.db.serviceTLS.key central.db.password.value When you do not specify values for these parameters the Helm chart autogenerates values for them. If you want to modify these values you can use the helm upgrade command and specify the values using the --set option. 
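As a concrete illustration of the private configuration file, the following sketch sets only the registry pull credentials, an administrator password, and a user-facing TLS certificate; every value is a placeholder, and the file should be stored securely. The remaining private parameters listed below can normally be omitted for a new installation because the chart autogenerates them.

# values-private.yaml (sketch)
imagePullSecrets:
  username: "<registry-username>"
  password: "<registry-password>"
central:
  adminPassword:
    # Use either value or htpasswd, never both (see the note that follows)
    value: "<administrator-password>"
  defaultTLS:
    cert: |
      <PEM-encoded user-facing certificate>
    key: |
      <PEM-encoded private key>

The defaultTLS certificate is optional; without it, Central is installed with a self-signed certificate, as described in the table below.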
Important For setting the administrator password, you can only use either central.adminPassword.value or central.adminPassword.htpasswd , but not both. Parameter Description central.jwtSigner.key A private key which RHACS should use for signing JSON web tokens (JWTs) for authentication. central.serviceTLS.cert An internal certificate that the Central service should use for deploying Central. central.serviceTLS.key The private key of the internal certificate that the Central service should use. central.defaultTLS.cert The user-facing certificate that Central should use. RHACS uses this certificate for RHACS portal. For a new installation, you must provide a certificate, otherwise, RHACS installs Central by using a self-signed certificate. If you are upgrading, RHACS uses the existing certificate and its key. central.defaultTLS.key The private key of the user-facing certificate that Central should use. For a new installation, you must provide the private key, otherwise, RHACS installs Central by using a self-signed certificate. If you are upgrading, RHACS uses the existing certificate and its key. central.db.password.value Connection password for Central database. central.adminPassword.value Administrator password for logging into RHACS. central.adminPassword.htpasswd Administrator password for logging into RHACS. This password is stored in hashed format using bcrypt. central.db.serviceTLS.cert An internal certificate that the Central DB service should use for deploying Central DB. central.db.serviceTLS.key The private key of the internal certificate that the Central DB service should use. central.db.password.value The password used to connect to the Central DB. Note If you are using central.adminPassword.htpasswd parameter, you must use a bcrypt encoded password hash. You can run the command htpasswd -nB admin to generate a password hash. For example, htpasswd: | admin:<bcrypt-hash> 4.1.2.2.1.4. Scanner Configurable parameters for the StackRox Scanner and Scanner V4. For a new installation, you can skip the following parameters and the Helm chart autogenerates values for them. Otherwise, if you are upgrading to a new version, specify the values for the following parameters: scanner.dbPassword.value scanner.serviceTLS.cert scanner.serviceTLS.key scanner.dbServiceTLS.cert scanner.dbServiceTLS.key scannerV4.db.password.value scannerV4.indexer.serviceTLS.cert scannerV4.indexer.serviceTLS.key scannerV4.matcher.serviceTLS.cert scannerV4.matcher.serviceTLS.key scannerV4.db.serviceTLS.cert scannerV4.db.serviceTLS.key Parameter Description scanner.dbPassword.value The password to use for authentication with Scanner database. Do not modify this parameter because RHACS automatically creates and uses its value internally. scanner.serviceTLS.cert An internal certificate that the StackRox Scanner service should use for deploying the StackRox Scanner. scanner.serviceTLS.key The private key of the internal certificate that the Scanner service should use. scanner.dbServiceTLS.cert An internal certificate that the Scanner-db service should use for deploying Scanner database. scanner.dbServiceTLS.key The private key of the internal certificate that the Scanner-db service should use. scannerV4.db.password.value The password to use for authentication with the Scanner V4 database. Do not modify this parameter because RHACS automatically creates and uses its value internally. scannerV4.db.serviceTLS.cert An internal certificate that the Scanner V4 DB service should use for deploying the Scanner V4 database. 
scannerV4.db.serviceTLS.key The private key of the internal certificate that the Scanner V4 DB service should use. scannerV4.indexer.serviceTLS.cert An internal certificate that the Scanner V4 service should use for deploying the Scanner V4 Indexer. scannerV4.indexer.serviceTLS.key The private key of the internal certificate that the Scanner V4 Indexer should use. scannerV4.matcher.serviceTLS.cert An internal certificate that the Scanner V4 service should use for deploying the Scanner V4 Matcher. scannerV4.matcher.serviceTLS.key The private key of the internal certificate that the Scanner V4 Matcher should use. 4.1.2.2.2. Public configuration file This section lists the configurable parameters of the values-public.yaml file. 4.1.2.2.2.1. Image pull secrets Image pull secrets are the credentials required for pulling images from your registry. Parameter Description imagePullSecrets.allowNone Use true if you are using a custom registry and it allows pulling images without credentials. imagePullSecrets.useExisting A comma-separated list of secrets as values. For example, secret1, secret2 . Use this option if you have already created pre-existing image pull secrets with the given name in the target namespace. imagePullSecrets.useFromDefaultServiceAccount Use true if you have already configured the default service account in the target namespace with sufficiently scoped image pull secrets. 4.1.2.2.2.2. Image Image declares the configuration to set up the main registry, which the Helm chart uses to resolve images for the central.image , scanner.image , scanner.dbImage , scannerV4.image , and scannerV4.db.image parameters. Parameter Description image.registry Address of your image registry. Either use a hostname, such as registry.redhat.io , or a remote registry hostname, such as us.gcr.io/stackrox-mirror . 4.1.2.2.2.3. Policy as code Policy as code provides a way to configure RHACS to work with a continuous delivery tool such as Argo CD to track, manage, and apply policies that you have authored locally or exported from the RHACS portal and modified. You configure Argo CD or your other tool to apply policy as code resources to the same namespace in which RHACS is installed. Parameter Description configAsCode.enabled By default, the value is true so that policy as code is enabled. Set to false to disable the policy as code feature. 4.1.2.2.2.4. Environment variables Red Hat Advanced Cluster Security for Kubernetes automatically detects your cluster environment and sets values for env.openshift , env.istio , and env.platform . Only set these values to override the automatic cluster environment detection. Parameter Description env.openshift Use true for installing on an OpenShift Container Platform cluster and overriding automatic cluster environment detection. env.istio Use true for installing on an Istio enabled cluster and overriding automatic cluster environment detection. env.platform The platform on which you are installing RHACS. Set its value to default or gke to specify cluster platform and override automatic cluster environment detection. env.offlineMode Use true to use RHACS in offline mode. 4.1.2.2.2.5. Additional trusted certificate authorities RHACS automatically references the system root certificates to trust. 
When Central, the StackRox Scanner, or Scanner V4 must reach out to services that use certificates issued by an authority in your organization or a globally trusted partner organization, you can add trust for these services by specifying the root certificate authority to trust by using the following parameter: Parameter Description additionalCAs.<certificate_name> Specify the PEM encoded certificate of the root certificate authority to trust. 4.1.2.2.2.6. Default network policies To provide security at the network level, RHACS creates default NetworkPolicy resources in the namespace where Central is installed. These network policies allow ingress to specific components on specific ports. If you do not want RHACS to create these policies, set this parameter to Disabled . The default value is Enabled . Warning Disabling creation of default network policies can break communication between RHACS components. If you disable creation of default policies, you must create your own network policies to allow this communication. Parameter Description network.enableNetworkPolicies Specify if RHACS creates default network policies to allow communication between components. To create your own network policies, set this parameter to False . The default value is True . 4.1.2.2.2.7. Central Configurable parameters for Central. For exposing the Central deployment for external access, you must specify one parameter: either central.exposure.loadBalancer , central.exposure.nodePort , or central.exposure.route . When you do not specify any value for these parameters, you must manually expose Central or access it by using port-forwarding. The following table includes settings for an external PostgreSQL database. Parameter Description central.declarativeConfiguration.mounts.configMaps Mounts config maps used for declarative configurations. central.declarativeConfiguration.mounts.secrets Mounts secrets used for declarative configurations. central.endpointsConfig The endpoint configuration options for Central. central.nodeSelector Specify a node selector label as label-key: label-value to force Central to only schedule on nodes with the specified label. central.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Central. This parameter is mainly used for infrastructure nodes. central.exposeMonitoring Specify true to expose Prometheus metrics endpoint for Central on port number 9090 . central.image.registry A custom registry that overrides the global image.registry parameter for the Central image. central.image.name The custom image name that overrides the default Central image name ( main ). central.image.tag The custom image tag that overrides the default tag for Central image. If you specify your own image tag during a new installation, you must manually increment this tag when you upgrade to a new version by running the helm upgrade command. If you mirror Central images in your own registry, do not modify the original image tags. central.image.fullRef Full reference including registry address, image name, and image tag for the Central image. Setting a value for this parameter overrides the central.image.registry , central.image.name , and central.image.tag parameters. central.resources.requests.memory The memory request for Central. central.resources.requests.cpu The CPU request for Central. central.resources.limits.memory The memory limit for Central. 
central.resources.limits.cpu The CPU limit for Central. central.exposure.loadBalancer.enabled Use true to expose Central by using a load balancer. central.exposure.loadBalancer.port The port number on which to expose Central. The default port number is 443. central.exposure.nodePort.enabled Use true to expose Central by using the node port service. central.exposure.nodePort.port The port number on which to expose Central. When you skip this parameter, OpenShift Container Platform automatically assigns a port number. Red Hat recommends that you do not specify a port number if you are exposing RHACS by using a node port. central.exposure.route.enabled Use true to expose Central by using a route. This parameter is only available for OpenShift Container Platform clusters. central.db.external Use true to specify that Central DB should not be deployed and that an external database will be used. central.db.source.connectionString The connection string for Central to use to connect to the database. This is only used when central.db.external is set to true. The connection string must be in keyword/value format as described in the PostgreSQL documentation in "Additional resources". Only PostgreSQL 13 is supported. Connections through PgBouncer are not supported. The user must be a superuser with the ability to create and delete databases. central.db.source.minConns The minimum number of connections to the database to be established. central.db.source.maxConns The maximum number of connections to the database to be established. central.db.source.statementTimeoutMs The number of milliseconds a single query or transaction can be active against the database. central.db.postgresConfig The postgresql.conf to be used for Central DB as described in the PostgreSQL documentation in "Additional resources". central.db.hbaConfig The pg_hba.conf to be used for Central DB as described in the PostgreSQL documentation in "Additional resources". central.db.nodeSelector Specify a node selector label as label-key: label-value to force Central DB to only schedule on nodes with the specified label. central.db.image.registry A custom registry that overrides the global image.registry parameter for the Central DB image. central.db.image.name The custom image name that overrides the default Central DB image name ( central-db ). central.db.image.tag The custom image tag that overrides the default tag for Central DB image. If you specify your own image tag during a new installation, you must manually increment this tag when you upgrade to a new version by running the helm upgrade command. If you mirror Central DB images in your own registry, do not modify the original image tags. central.db.image.fullRef Full reference including registry address, image name, and image tag for the Central DB image. Setting a value for this parameter overrides the central.db.image.registry , central.db.image.name , and central.db.image.tag parameters. central.db.resources.requests.memory The memory request for Central DB. central.db.resources.requests.cpu The CPU request for Central DB. central.db.resources.limits.memory The memory limit for Central DB. central.db.resources.limits.cpu The CPU limit for Central DB. central.db.persistence.hostPath The path on the node where RHACS should create a database volume. Red Hat does not recommend using this option. central.db.persistence.persistentVolumeClaim.claimName The name of the persistent volume claim (PVC) you are using. 
central.db.persistence.persistentVolumeClaim.createClaim Use true to create a new persistent volume claim, or false to use an existing claim. central.db.persistence.persistentVolumeClaim.size The size (in GiB) of the persistent volume managed by the specified claim. 4.1.2.2.2.8. StackRox Scanner The following table lists the configurable parameters for the StackRox Scanner. This is the scanner used for node and platform scanning. If Scanner V4 is not enabled, the StackRox scanner also performs image scanning. Beginning with version 4.4, Scanner V4 can be enabled to provide image scanning. See the table for Scanner V4 parameters. Parameter Description scanner.disable Use true to install RHACS without the StackRox Scanner. When you use it with the helm upgrade command, Helm removes the existing StackRox Scanner deployment. scanner.exposeMonitoring Specify true to expose Prometheus metrics endpoint for the StackRox Scanner on port number 9090 . scanner.replicas The number of replicas to create for the StackRox Scanner deployment. When you use it with the scanner.autoscaling parameter, this value sets the initial number of replicas. scanner.logLevel Configure the log level for the StackRox Scanner. Red Hat recommends that you not change the default log level value ( INFO ). scanner.nodeSelector Specify a node selector label as label-key: label-value to force the StackRox Scanner to only schedule on nodes with the specified label. scanner.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the StackRox Scanner. This parameter is mainly used for infrastructure nodes. scanner.autoscaling.disable Use true to disable autoscaling for the StackRox Scanner deployment. When you disable autoscaling, the minReplicas and maxReplicas parameters do not have any effect. scanner.autoscaling.minReplicas The minimum number of replicas for autoscaling. scanner.autoscaling.maxReplicas The maximum number of replicas for autoscaling. scanner.resources.requests.memory The memory request for the StackRox Scanner. scanner.resources.requests.cpu The CPU request for the StackRox Scanner. scanner.resources.limits.memory The memory limit for the StackRox Scanner. scanner.resources.limits.cpu The CPU limit for the StackRox Scanner. scanner.dbResources.requests.memory The memory request for the StackRox Scanner database deployment. scanner.dbResources.requests.cpu The CPU request for the StackRox Scanner database deployment. scanner.dbResources.limits.memory The memory limit for the StackRox Scanner database deployment. scanner.dbResources.limits.cpu The CPU limit for the StackRox Scanner database deployment. scanner.image.registry A custom registry for the StackRox Scanner image. scanner.image.name The custom image name that overrides the default StackRox Scanner image name ( scanner ). scanner.dbImage.registry A custom registry for the StackRox Scanner DB image. scanner.dbImage.name The custom image name that overrides the default StackRox Scanner DB image name ( scanner-db ). scanner.dbNodeSelector Specify a node selector label as label-key: label-value to force the StackRox Scanner DB to only schedule on nodes with the specified label. scanner.dbTolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the StackRox Scanner DB. This parameter is mainly used for infrastructure nodes. 4.1.2.2.2.9. 
Scanner V4 The following table lists the configurable parameters for Scanner V4. Parameter Description scannerV4.db.persistence.persistentVolumeClaim.claimName The name of the PVC to manage persistent data for Scanner V4. If no PVC with the given name exists, it is created. The default value is scanner-v4-db if not set. To prevent data loss, the PVC is not removed automatically when Central is deleted. scannerV4.db.persistence.persistentVolumeClaim.size The size of the PVC to manage persistent data for Scanner V4. scannerV4.db.persistence.persistentVolumeClaim.storageClassName The name of the storage class to use for the PVC. If your cluster is not configured with a default storage class, you must provide a value for this parameter. scannerV4.disable Use false to enable Scanner V4. When setting this parameter, the StackRox Scanner must also be enabled by setting scanner.disable=false . Until feature parity between the StackRox Scanner and Scanner V4 is reached, Scanner V4 can only be used in combination with the StackRox Scanner. Enabling Scanner V4 without also enabling the StackRox Scanner is not supported. When you set this parameter to true with the helm upgrade command, Helm removes the existing Scanner V4 deployment. scannerV4.exposeMonitoring Specify true to expose Prometheus metrics endpoint for Scanner V4 on port number 9090 . scannerV4.indexer.replicas The number of replicas to create for the Scanner V4 Indexer deployment. When you use it with the scannerV4.indexer.autoscaling parameter, this value sets the initial number of replicas. scannerV4.indexer.logLevel Configure the log level for the Scanner V4 Indexer. Red Hat recommends that you not change the default log level value ( INFO ). scannerV4.indexer.nodeSelector Specify a node selector label as label-key: label-value to force the Scanner V4 Indexer to only schedule on nodes with the specified label. scannerV4.indexer.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the Scanner V4 Indexer. This parameter is mainly used for infrastructure nodes. scannerV4.indexer.autoscaling.disable Use true to disable autoscaling for the Scanner V4 Indexer deployment. When you disable autoscaling, the minReplicas and maxReplicas parameters do not have any effect. scannerV4.indexer.autoscaling.minReplicas The minimum number of replicas for autoscaling. scannerV4.indexer.autoscaling.maxReplicas The maximum number of replicas for autoscaling. scannerV4.indexer.resources.requests.memory The memory request for the Scanner V4 Indexer. scannerV4.indexer.resources.requests.cpu The CPU request for the Scanner V4 Indexer. scannerV4.indexer.resources.limits.memory The memory limit for the Scanner V4 Indexer. scannerV4.indexer.resources.limits.cpu The CPU limit for the Scanner V4 Indexer. scannerV4.matcher.replicas The number of replicas to create for the Scanner V4 Matcher deployment. When you use it with the scannerV4.matcher.autoscaling parameter, this value sets the initial number of replicas. scannerV4.matcher.logLevel Red Hat recommends that you not change the default log level value ( INFO ). scannerV4.matcher.nodeSelector Specify a node selector label as label-key: label-value to force the Scanner V4 Matcher to only schedule on nodes with the specified label. scannerV4.matcher.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the Scanner V4 Matcher. 
This parameter is mainly used for infrastructure nodes. scannerV4.matcher.autoscaling.disable Use true to disable autoscaling for the Scanner V4 Matcher deployment. When you disable autoscaling, the minReplicas and maxReplicas parameters do not have any effect. scannerV4.matcher.autoscaling.minReplicas The minimum number of replicas for autoscaling. scannerV4.matcher.autoscaling.maxReplicas The maximum number of replicas for autoscaling. scannerV4.matcher.resources.requests.memory The memory request for the Scanner V4 Matcher. scannerV4.matcher.resources.requests.cpu The CPU request for the Scanner V4 Matcher. scannerV4.db.resources.requests.memory The memory request for the Scanner V4 database deployment. scannerV4.db.resources.requests.cpu The CPU request for the Scanner V4 database deployment. scannerV4.db.resources.limits.memory The memory limit for the Scanner V4 database deployment. scannerV4.db.resources.limits.cpu The CPU limit for the Scanner V4 database deployment. scannerV4.db.nodeSelector Specify a node selector label as label-key: label-value to force the Scanner V4 DB to only schedule on nodes with the specified label. scannerV4.db.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the Scanner V4 DB. This parameter is mainly used for infrastructure nodes. scannerV4.db.image.registry A custom registry for the Scanner V4 DB image. scannerV4.db.image.name The custom image name that overrides the default Scanner V4 DB image name ( scanner-v4-db ). scannerV4.image.registry A custom registry for the Scanner V4 image. scannerV4.image.name The custom image name that overrides the default Scanner V4 image name ( scanner-v4 ). 4.1.2.2.2.10. Customization Use these parameters to specify additional attributes for all objects that RHACS creates. Parameter Description customize.labels A custom label to attach to all objects. customize.annotations A custom annotation to attach to all objects. customize.podLabels A custom label to attach to all deployments. customize.podAnnotations A custom annotation to attach to all deployments. customize.envVars A custom environment variable for all containers in all objects. customize.central.labels A custom label to attach to all objects that Central creates. customize.central.annotations A custom annotation to attach to all objects that Central creates. customize.central.podLabels A custom label to attach to all Central deployments. customize.central.podAnnotations A custom annotation to attach to all Central deployments. customize.central.envVars A custom environment variable for all Central containers. customize.scanner.labels A custom label to attach to all objects that Scanner creates. customize.scanner.annotations A custom annotation to attach to all objects that Scanner creates. customize.scanner.podLabels A custom label to attach to all Scanner deployments. customize.scanner.podAnnotations A custom annotation to attach to all Scanner deployments. customize.scanner.envVars A custom environment variable for all Scanner containers. customize.scanner-db.labels A custom label to attach to all objects that Scanner DB creates. customize.scanner-db.annotations A custom annotation to attach to all objects that Scanner DB creates. customize.scanner-db.podLabels A custom label to attach to all Scanner DB deployments. customize.scanner-db.podAnnotations A custom annotation to attach to all Scanner DB deployments. 
customize.scanner-db.envVars A custom environment variable for all Scanner DB containers. customize.scanner-v4-indexer.labels A custom label to attach to all objects that Scanner V4 Indexer creates and to the pods belonging to them. customize.scanner-v4-indexer.annotations A custom annotation to attach to all objects that Scanner V4 Indexer creates and to the pods belonging to them. customize.scanner-v4-indexer.podLabels A custom label to attach to all objects that Scanner V4 Indexer creates and to the pods belonging to them. customize.scanner-v4-indexer.podAnnotations A custom annotation to attach to all objects that Scanner V4 Indexer creates and to the pods belonging to them. customize.scanner-v4-indexer.envVars A custom environment variable for all Scanner V4 Indexer containers and the pods belonging to them. customize.scanner-v4-matcher.labels A custom label to attach to all objects that Scanner V4 Matcher creates and to the pods belonging to them. customize.scanner-v4-matcher.annotations A custom annotation to attach to all objects that Scanner V4 Matcher creates and to the pods belonging to them. customize.scanner-v4-matcher.podLabels A custom label to attach to all objects that Scanner V4 Matcher creates and to the pods belonging to them. customize.scanner-v4-matcher.podAnnotations A custom annotation to attach to all objects that Scanner V4 Matcher creates and to the pods belonging to them. customize.scanner-v4-matcher.envVars A custom environment variable for all Scanner V4 Matcher containers and the pods belonging to them. customize.scanner-v4-db.labels A custom label to attach to all objects that Scanner V4 DB creates and to the pods belonging to them. customize.scanner-v4-db.annotations A custom annotation to attach to all objects that Scanner V4 DB creates and to the pods belonging to them. customize.scanner-v4-db.podLabels A custom label to attach to all objects that Scanner V4 DB creates and to the pods belonging to them. customize.scanner-v4-db.podAnnotations A custom annotation to attach to all objects that Scanner V4 DB creates and to the pods belonging to them. customize.scanner-v4-db.envVars A custom environment variable for all Scanner V4 DB containers and the pods belonging to them. You can also use: The customize.other.service/*.labels and customize.other.service/*.annotations parameters to specify labels and annotations for all objects. Or, provide a specific service name, for example, customize.other.service/central-loadbalancer.labels and customize.other.service/central-loadbalancer.annotations , as parameters and set their value. 4.1.2.2.2.11. Advanced customization Important The parameters specified in this section are for information only. Red Hat does not support RHACS instances with modified namespace and release names. Parameter Description allowNonstandardNamespace Use true to deploy RHACS into a namespace other than the default namespace stackrox . allowNonstandardReleaseName Use true to deploy RHACS with a release name other than the default stackrox-central-services .
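To tie the preceding tables together, the following sketch shows one way a values-public.yaml file might combine a few of the documented public parameters. Only parameters you want to change need to appear in the file; the specific values shown here (the registry, label, and replica numbers) are illustrative assumptions rather than defaults, and the map form used for customize.labels reflects common Helm conventions and should be checked against the chart.

# values-public.yaml (sketch)
image:
  registry: registry.redhat.io
network:
  enableNetworkPolicies: true   # default; set to false only if you manage your own policies
central:
  exposure:
    route:
      enabled: true
  exposeMonitoring: true
scanner:
  # The StackRox Scanner must stay enabled when Scanner V4 is enabled
  disable: false
  autoscaling:
    minReplicas: 2
    maxReplicas: 5
scannerV4:
  disable: false   # enables Scanner V4 alongside the StackRox Scanner
customize:
  labels:
    # Hypothetical label applied to all objects the chart creates
    owner: security-team

A file like this is passed to helm install or helm upgrade together with values-private.yaml by using the -f option, as shown in the following sections.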
Procedure Create the YAML file (in this example, named declarative-config-values.yaml ) using the following example as a guideline: central: declarativeConfiguration: mounts: configMaps: - declarative-configs secrets: - sensitive-declarative-configs Install the Central services Helm chart as documented in "Installing the central-services Helm chart", referencing the declarative-config-values.yaml file. 4.1.2.2.4. Installing the central-services Helm chart After you configure the values-public.yaml and values-private.yaml files, install the central-services Helm chart to deploy the centralized components (Central and Scanner). Procedure Run the following command: USD helm install -n stackrox --create-namespace \ stackrox-central-services rhacs/central-services \ -f <path_to_values_public.yaml> -f <path_to_values_private.yaml> 1 1 Use the -f option to specify the paths for your YAML configuration files. Note Optional: If you are using declarative configuration, add -f <path_to_declarative-config-values.yaml> to this command to mount the declarative configuration file in Central. 4.1.2.3. Changing configuration options after deploying the central-services Helm chart You can make changes to any configuration options after you have deployed the central-services Helm chart. When using the helm upgrade command to make changes, the following guidelines and requirements apply: You can also specify configuration values using the --set or --set-file parameters. However, these options are not saved, and you must manually specify all the options again whenever you make changes. Some changes, such as enabling a new component like Scanner V4, require new certificates to be issued for the component. Therefore, you must provide a CA when making these changes. If the CA was generated by the Helm chart during the initial installation, you must retrieve these automatically generated values from the cluster and provide them to the helm upgrade command. The post-installation notes of the central-services Helm chart include a command for retrieving the automatically generated values. If the CA was generated outside of the Helm chart and provided during the installation of the central-services chart, then you must perform that action again when using the helm upgrade command, for example, by using the --reuse-values flag with the helm upgrade command. Procedure Update the values-public.yaml and values-private.yaml configuration files with new values. Run the helm upgrade command and specify the configuration files using the -f option: USD helm upgrade -n stackrox \ stackrox-central-services rhacs/central-services \ --reuse-values \ 1 -f <path_to_init_bundle_file> \ -f <path_to_values_public.yaml> \ -f <path_to_values_private.yaml> 1 If you have modified values that are not included in the values_public.yaml and values_private.yaml files, include the --reuse-values parameter. 4.1.3. Install Central using the roxctl CLI Warning For production environments, Red Hat recommends using the Operator or Helm charts to install RHACS. Do not use the roxctl install method unless you have a specific installation need that requires using this method. 4.1.3.1. Installing the roxctl CLI To install Red Hat Advanced Cluster Security for Kubernetes, you must install the roxctl CLI by downloading the binary. You can install roxctl on Linux, Windows, or macOS. 4.1.3.1.1. Installing the roxctl CLI on Linux You can install the roxctl CLI binary on Linux by using the following procedure.
Note roxctl CLI for Linux is available for amd64 , arm64 , ppc64le , and s390x architectures. Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Linux/roxctlUSD{arch}" Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 4.1.3.1.2. Installing the roxctl CLI on macOS You can install the roxctl CLI binary on macOS by using the following procedure. Note roxctl CLI for macOS is available for amd64 and arm64 architectures. Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Darwin/roxctlUSD{arch}" Remove all extended attributes from the binary: USD xattr -c roxctl Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 4.1.3.1.3. Installing the roxctl CLI on Windows You can install the roxctl CLI binary on Windows by using the following procedure. Note roxctl CLI for Windows is available for the amd64 architecture. Procedure Download the roxctl CLI: USD curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Windows/roxctl.exe Verification Verify the roxctl version you have installed: USD roxctl version 4.1.3.2. Using the interactive installer Use the interactive installer to generate the required secrets, deployment configurations, and deployment scripts for your environment. Procedure Run the interactive install command: USD roxctl central generate interactive Important Installing RHACS using the roxctl CLI creates PodSecurityPolicy (PSP) objects by default for backward compatibility. If you install RHACS on Kubernetes versions 1.25 and newer or OpenShift Container Platform version 4.12 and newer, you must disable the PSP object creation. To do this, specify --enable-pod-security-policies option as false for the roxctl central generate and roxctl sensor generate commands. Press Enter to accept the default value for a prompt or enter custom values as required. 
The following example shows the interactive installer prompts: Path to the backup bundle from which to restore keys and certificates (optional): PEM cert bundle file (optional): 1 Disable the administrator password (only use this if you have already configured an IdP for your instance) (default: "false"): Create PodSecurityPolicy resources (for pre-v1.25 Kubernetes) (default: "false"): 2 Administrator password (default: autogenerated): Orchestrator (k8s, openshift): Default container images settings (rhacs, opensource); it controls repositories from where to download the images, image names and tags format (default: "rhacs"): The directory to output the deployment bundle to (default: "central-bundle"): Whether to enable telemetry (default: "true"): The central-db image to use (if unset, a default will be used according to --image-defaults) (default: "registry.redhat.io/advanced-cluster-security/rhacs-central-db-rhel8:4.6.0"): List of secrets to add as declarative configuration mounts in central (default: "[]"): 3 The method of exposing Central (lb, np, none) (default: "none"): 4 The main image to use (if unset, a default will be used according to --image-defaults) (default: "registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.6.0"): Whether to run StackRox in offline mode, which avoids reaching out to the Internet (default: "false"): List of config maps to add as declarative configuration mounts in central (default: "[]"): 5 The deployment tool to use (kubectl, helm, helm-values) (default: "kubectl"): Istio version when deploying into an Istio-enabled cluster (leave empty when not running Istio) (optional): The scanner-db image to use (if unset, a default will be used according to --image-defaults) (default: "registry.redhat.io/advanced-cluster-security/rhacs-scanner-db-rhel8:4.6.0"): The scanner image to use (if unset, a default will be used according to --image-defaults) (default: "registry.redhat.io/advanced-cluster-security/rhacs-scanner-rhel8:4.6.0"): The scanner-v4-db image to use (if unset, a default will be used according to --image-defaults) (default: "registry.redhat.io/advanced-cluster-security/rhacs-scanner-v4-db-rhel8:4.6.0"): The scanner-v4 image to use (if unset, a default will be used according to --image-defaults) (default: "registry.redhat.io/advanced-cluster-security/rhacs-scanner-v4-rhel8:4.6.0"): External volume type (hostpath, pvc): hostpath Path on the host (default: "/var/lib/stackrox-central"): Node selector key (e.g. kubernetes.io/hostname): Node selector value: 1 If you want to add a custom TLS certificate, provide the file path for the PEM-encoded certificate. When you specify a custom certificate the interactive installer also prompts you to provide a PEM private key for the custom certificate you are using. 2 If you are running Kubernetes version 1.25 or later, set this value to false . 3 For more information on using declarative configurations for authentication and authorization, see "Declarative configuration for authentication and authorization resources" in "Managing RBAC in Red Hat Advanced Cluster Security for Kubernetes". 4 To use the RHACS portal, you must expose Central by using a route, a load balancer or a node port. 5 For more information on using declarative configurations for authentication and authorization, see "Declarative configuration for authentication and authorization resources" in "Managing RBAC in Red Hat Advanced Cluster Security for Kubernetes". 
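If you plan to provide a custom TLS certificate at the PEM cert bundle prompt but do not yet have one, you can generate a self-signed certificate and key for testing. The following command is an illustrative sketch only; the hostname central.example.com is a placeholder, and self-signed certificates are not suitable for production:
USD openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -subj "/CN=central.example.com" \
  -keyout central-key.pem -out central-cert.pem
The installer then prompts for the corresponding PEM private key, which is central-key.pem in this sketch.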
Warning On OpenShift Container Platform, to use a hostPath volume, you must modify the SELinux policy to allow access to the directory that the host and the container share, because SELinux blocks directory sharing by default. To modify the SELinux policy, run the following command: USD sudo chcon -Rt svirt_sandbox_file_t <full_volume_path> However, Red Hat does not recommend modifying the SELinux policy; instead, use a PVC when installing on OpenShift Container Platform. On completion, the installer creates a folder named central-bundle, which contains the necessary YAML manifests and scripts to deploy Central. In addition, it shows on-screen instructions for the scripts that you need to run to deploy additional trusted certificate authorities, Central, and Scanner, along with instructions for logging in to the RHACS portal and the autogenerated password if you did not provide one when answering the prompts. 4.1.3.3. Running the Central installation scripts After you run the interactive installer, you can run the setup.sh script to install Central. Procedure Run the setup.sh script to configure image registry access: USD ./central-bundle/central/scripts/setup.sh To enable the policy as code feature (Technology Preview), manually apply the config.stackrox.io CRD that is located in the .zip file at helm/chart/crds/config.stackrox.io_securitypolicies.yaml . Important Policy as code is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . To apply the CRD, run the following command: USD oc create -f helm/chart/crds/config.stackrox.io_securitypolicies.yaml Create the necessary resources: USD oc create -R -f central-bundle/central Check the deployment progress: USD oc get pod -n stackrox -w After Central is running, find the RHACS portal IP address and open it in your browser. Depending on the exposure method you selected when answering the prompts, use one of the following methods to get the IP address. Exposure method Command Address Example Route oc -n stackrox get route central The address under the HOST/PORT column in the output https://central-stackrox.example.route Node Port oc get node -owide && oc -n stackrox get svc central-loadbalancer IP or hostname of any node, on the port shown for the service https://198.51.100.0:31489 Load Balancer oc -n stackrox get svc central-loadbalancer EXTERNAL-IP or hostname shown for the service, on port 443 https://192.0.2.0 None central-bundle/central/scripts/port-forward.sh 8443 https://localhost:8443 https://localhost:8443 Note If you selected the autogenerated password during the interactive installation, you can run the following command to see it so that you can log in to Central: USD cat central-bundle/password 4.2. Configuring Central configuration options for RHACS using the Operator When installing the Central instance by using the Operator, you can configure optional settings. 4.2.1. Central configuration options using the Operator When you create a Central instance, the Operator lists the following configuration options for the Central custom resource.
The following table includes settings for an external PostgreSQL database. 4.2.1.1. Central settings Parameter Description central.adminPasswordSecret Specify a secret that contains the administrator password in the password data item. If omitted, the operator autogenerates a password and stores it in the password item in the central-htpasswd secret. central.defaultTLSSecret By default, Central only serves an internal TLS certificate, which means that you need to handle TLS termination at the ingress or load balancer level. If you want to terminate TLS in Central and serve a custom server certificate, you can specify a secret containing the certificate and private key. central.adminPasswordGenerationDisabled Set this parameter to true to disable the automatic administrator password generation. Use this only after you perform the first-time setup of alternative authentication methods. Do not use this for initial installation. Otherwise, you must reinstall the custom resource to log back in. central.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Central. This parameter is mainly used for infrastructure nodes. central.hostAliases Use this parameter to inject hosts and IP addresses into the pod's hosts file. central.exposure.loadBalancer.enabled Set this to true to expose Central through a load balancer. central.exposure.loadBalancer.port Use this parameter to specify a custom port for your load balancer. central.exposure.loadBalancer.ip Use this parameter to specify a static IP address reserved for your load balancer. central.exposure.route.enabled Set this to true to expose Central through a Red Hat OpenShift route. The default value is false . central.exposure.route.host Specify a custom hostname to use for Central's route. Leave this unset to accept the default value that OpenShift Container Platform provides. central.exposure.nodeport.enabled Set this to true to expose Central through a node port. The default value is false . central.exposure.nodeport.port Use this to specify an explicit node port. central.monitoring.exposeEndpoint Use Enabled to enable monitoring for Central. When you enable monitoring, RHACS creates a new monitoring service on port number 9090 . The default value is Disabled . central.nodeSelector If you want this component to only run on specific nodes, you can use this parameter to configure a node selector. central.resources.limits Use this parameter to override the default resource limits for the Central. central.resources.requests Use this parameter to override the default resource requests for the Central. central.imagePullSecrets Use this parameter to specify the image pull secrets for the Central image. central.db.passwordSecret.name Specify a secret that has the database password in the password data item. Only use this parameter if you want to specify a connection string manually. If omitted, the operator auto-generates a password and stores it in the password item in the central-db-password secret. central.db.connectionString Setting this parameter will not deploy Central DB, and Central will connect using the specified connection string. If you specify a value for this parameter, you must also specify a value for central.db.passwordSecret.name . This parameter has the following constraints: Connection string must be in keyword/value format as described in the PostgreSQL documentation. For more information, see the links in the Additional resources section. 
Only PostgreSQL 13 is supported. Connections through PGBouncer are not supported. User must be a superuser who can create and delete databases. central.db.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Central DB. This parameter is mainly used for infrastructure nodes. central.db.hostAliases Use this parameter to inject hosts and IP addresses into the pod's hosts file. central.db.persistence.hostPath.path Specify a host path to store persistent data in a directory on the host. Red Hat does not recommend using this. If you need to use host path, you must use it with a node selector. central.db.persistence.persistentVolumeClaim.claimName The name of the PVC to manage persistent data. If no PVC with the given name exists, it is created. The default value is central-db if not set. To prevent data loss, the PVC is not removed automatically when Central is deleted. central.db.persistence.persistentVolumeClaim.size The size of the persistent volume when created through the claim. This is automatically generated by default. central.db.persistence.persistentVolumeClaim.storageClassName The name of the storage class to use for the PVC. If your cluster is not configured with a default storage class, you must provide a value for this parameter. central.db.connectionPoolSize.minConnections Use this parameter to override the default minimum connection pool size between Central and Central DB. The default value is 10. central.db.connectionPoolSize.maxConnections Use this parameter to override the default maximum connection pool size between Central and Central DB. The default value is 90. Ensure that this value does not exceed the maximum number of connections supported by the Central DB: An Operator-managed Central DB supports a maximum of 200 connections by default. For external PostgreSQL databases, check the database settings or consult your cloud provider for managed databases. central.db.resources.limits Use this parameter to override the default resource limits for the Central DB. central.db.resources.requests Use this parameter to override the default resource requests for the Central DB. central.db.nodeSelector If you want this component to only run on specific nodes, you can use this parameter to configure a node selector. 4.2.1.2. StackRox Scanner settings Parameter Description scanner.analyzer.nodeSelector If you want this scanner to only run on specific nodes, you can use this parameter to configure a node selector. scanner.analyzer.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the StackRox Scanner. This parameter is mainly used for infrastructure nodes. scanner.analyzer.hostAliases Use this parameter to inject hosts and IP addresses into the pod's hosts file. scanner.analyzer.resources.limits Use this parameter to override the default resource limits for the StackRox Scanner. scanner.analyzer.resources.requests Use this parameter to override the default resource requests for the StackRox Scanner. scanner.analyzer.scaling.autoScaling When enabled, the number of analyzer replicas is managed dynamically based on the load, within the limits specified. 
scanner.analyzer.scaling.maxReplicas Specifies the maximum replicas to be used in the analyzer autoscaling configuration. scanner.analyzer.scaling.minReplicas Specifies the minimum replicas to be used in the analyzer autoscaling configuration. scanner.analyzer.scaling.replicas When autoscaling is disabled, the number of replicas is always configured to match this value. scanner.db.nodeSelector If you want this component to only run on specific nodes, you can use this parameter to configure a node selector. scanner.db.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the StackRox Scanner DB. This parameter is mainly used for infrastructure nodes. scanner.db.hostAliases Use this parameter to inject hosts and IP addresses into the pod's hosts file. scanner.db.resources.limits Use this parameter to override the default resource limits for the StackRox Scanner DB. scanner.db.resources.requests Use this parameter to override the default resource requests for the StackRox Scanner DB. scanner.monitoring.exposeEndpoint Use Enabled to enable monitoring for the StackRox Scanner. When you enable monitoring, RHACS creates a new monitoring service on port number 9090 . The default value is Disabled . scanner.scannerComponent If you do not want to deploy the StackRox Scanner, you can disable it by using this parameter. If you disable the StackRox Scanner, all other settings in this section have no effect. Red Hat does not recommend disabling the StackRox Scanner. Do not disable the StackRox Scanner if you have enabled Scanner V4. Scanner V4 requires that the StackRox Scanner is also enabled to provide the necessary scanning capabilities. 4.2.1.3. Scanner V4 settings Parameter Description scannerV4.db.nodeSelector If you want this component to only run on specific nodes, you can use this parameter to configure a node selector. scannerV4.db.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner V4 DB. This parameter is mainly used for infrastructure nodes. scannerV4.db.hostAliases Use this parameter to inject hosts and IP addresses into the pod's hosts file. scannerV4.db.resources.limits Use this parameter to override the default resource limits for Scanner V4 DB. scannerV4.db.resources.requests Use this parameter to override the default resource requests for Scanner V4 DB. scannerV4.db.persistence.persistentVolumeClaim.claimName The name of the PVC to manage persistent data for Scanner V4. If no PVC with the given name exists, it is created. The default value is scanner-v4-db if not set. To prevent data loss, the PVC is not removed automatically when Central is deleted. scannerV4.db.persistence.persistentVolumeClaim.size The size of the PVC to manage persistent data for Scanner V4. scannerV4.db.persistence.persistentVolumeClaim.storageClassName The name of the storage class to use for the PVC. If your cluster is not configured with a default storage class, you must provide a value for this parameter. scannerV4.indexer.nodeSelector If you want this component to only run on specific nodes, you can use this parameter to configure a node selector. scannerV4.indexer.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the Scanner V4 Indexer. This parameter is mainly used for infrastructure nodes.
scannerV4.indexer.hostAliases Use this parameter to inject hosts and IP addresses into the pod's hosts file. scannerV4.indexer.resources.limits Use this parameter to override the default resource limits for the Scanner V4 Indexer. scannerV4.indexer.resources.requests Use this parameter to override the default resource requests for the Scanner V4 Indexer. scannerV4.indexer.scaling.autoScaling When enabled, the number of Scanner V4 Indexer replicas is managed dynamically based on the load, within the limits specified. scannerV4.indexer.scaling.maxReplicas Specifies the maximum replicas to be used in the Scanner V4 Indexer autoscaling configuration. scannerV4.indexer.scaling.minReplicas Specifies the minimum replicas to be used in the Scanner V4 Indexer autoscaling configuration. scannerV4.indexer.scaling.replicas When autoscaling is disabled for the Scanner V4 Indexer, the number of replicas is always configured to match this value. scannerV4.matcher.nodeSelector If you want this component to only run on specific nodes, you can use this parameter to configure a node selector. scannerV4.matcher.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the Scanner V4 Matcher. This parameter is mainly used for infrastructure nodes. scannerV4.matcher.hostAliases Use this parameter to inject hosts and IP addresses into the pod's hosts file. scannerV4.matcher.resources.limits Use this parameter to override the default resource limits for the Scanner V4 Matcher. scannerV4.matcher.resources.requests Use this parameter to override the default resource requests for the Scanner V4 Matcher. scannerV4.matcher.scaling.autoScaling When enabled, the number of Scanner V4 Matcher replicas is managed dynamically based on the load, within the limits specified. scannerV4.matcher.scaling.maxReplicas Specifies the maximum replicas to be used in the Scanner V4 Matcher autoscaling configuration. scannerV4.matcher.scaling.minReplicas Specifies the minimum replicas to be used in the Scanner V4 Matcher autoscaling configuration. scannerV4.matcher.scaling.replicas When autoscaling is disabled for the Scanner V4 Matcher, the number of replicas is always configured to match this value. scannerV4.monitoring.exposeEndpoint Configures a monitoring endpoint for Scanner V4. The monitoring endpoint allows other services to collect metrics from Scanner V4, provided in a Prometheus-compatible format. Use Enabled to expose the monitoring endpoint. When you enable monitoring, RHACS creates a new service, monitoring , with port 9090, and a network policy allowing inbound connections to the port. By default, this is not enabled. scannerV4.scannerComponent Enables Scanner V4. The default value is default , which is disabled. To enable Scanner V4, set this parameter to Enabled . 4.2.1.4. General and miscellaneous settings Parameter Description customize.annotations Allows specifying custom annotations for the Central deployment. customize.envVars Advanced settings to configure environment variables. egress.connectivityPolicy Configures whether RHACS should run in online or offline mode. In offline mode, automatic updates of vulnerability definitions and kernel modules are disabled. misc.createSCCs Specify true to create SecurityContextConstraints (SCCs) for Central. Setting to true might cause issues in some environments. 
monitoring.openshift.enabled If you set this option to false , Red Hat Advanced Cluster Security for Kubernetes will not set up Red Hat OpenShift monitoring. Defaults to true on Red Hat OpenShift 4. network.policies To provide security at the network level, RHACS creates default NetworkPolicy resources in the namespace where Central is installed. These network policies allow ingress to specific components on specific ports. If you do not want RHACS to create these policies, set this parameter to Disabled . The default value is Enabled . Warning Disabling creation of default network policies can break communication between RHACS components. If you disable creation of default policies, you must create your own network policies to allow this communication. overlays See "Customizing the installation using the Operator with overlays". tls.additionalCAs Additional trusted CA certificates for the secured cluster to trust. These certificates are typically used when integrating with services using a private certificate authority. 4.2.2. Customizing the installation using the Operator with overlays Learn how to tailor the installation of RHACS using the Operator method with overlays. 4.2.2.1. Overlays When Central or SecuredCluster custom resources do not expose certain low-level configuration options as parameters, you can use the .spec.overlays field for adjustments. Use this field to amend the Kubernetes resources generated by these custom resources. The .spec.overlays field comprises a sequence of patches, applied in their listed order. These patches are processed by the Operator on the Kubernetes resources before deployment to the cluster. Warning The .spec.overlays field in both Central and SecuredCluster allows users to modify low-level Kubernetes resources in arbitrary ways. Use this feature only when the desired customization is not available through the SecuredCluster or Central custom resources. Support for the .spec.overlays feature is limited primarily because it grants the ability to make intricate and highly specific modifications to Kubernetes resources, which can vary significantly from one implementation to another. This level of customization introduces a complexity that goes beyond standard usage scenarios, making it challenging to provide broad support. Each modification can be unique, potentially interacting with the Kubernetes system in unpredictable ways across different versions and configurations of the product. This variability means that troubleshooting and guaranteeing the stability of these customizations require a level of expertise and understanding specific to each individual's setup. Consequently, while this feature empowers tailoring Kubernetes resources to meet precise needs, greater responsibility must also be assumed to ensure the compatibility and stability of configurations, especially during upgrades or changes to the underlying product. The following example shows the structure of an overlay: overlays: - apiVersion: v1 1 kind: ConfigMap 2 name: my-configmap 3 patches: - path: .data 4 value: | 5 key1: data2 key2: data2 1 The apiVersion of the targeted Kubernetes resource, for example apps/v1 , v1 , networking.k8s.io/v1 2 Resource type, for example Deployment, ConfigMap, or NetworkPolicy 3 Name of the resource, for example my-configmap 4 JSONPath expression to the field, for example spec.template.spec.containers[name:central].env[-1] 5 YAML string for the new field value 4.2.2.1.1.
Adding an overlay For customizations, you can add overlays to Central or SecuredCluster custom resources. Use the OpenShift CLI ( oc ) or the OpenShift Container Platform web console for modifications. If overlays do not take effect as expected, check the RHACS Operator logs for any logged syntax errors or issues. 4.2.2.2. Overlay examples 4.2.2.2.1. Specifying an EKS pod role ARN for the Central ServiceAccount Add an Amazon Elastic Kubernetes Service (EKS) pod role Amazon Resource Name (ARN) annotation to the central ServiceAccount as shown in the following example: apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # ... overlays: - apiVersion: v1 kind: ServiceAccount name: central patches: - path: metadata.annotations.eks\.amazonaws\.com/role-arn value: "\"arn:aws:iam:1234:role\"" 4.2.2.2.2. Injecting an environment variable into the Central deployment Inject an environment variable into the central deployment as shown in the following example: apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # ... overlays: - apiVersion: apps/v1 kind: Deployment name: central patches: - path: spec.template.spec.containers[name:central].env[-1] value: | name: MY_ENV_VAR value: value 4.2.2.2.3. Extending network policy with an ingress rule Add an ingress rule to the allow-ext-to-central network policy for port 999 traffic as shown in the following example: apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # ... overlays: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy name: allow-ext-to-central patches: - path: spec.ingress[-1] value: | ports: - port: 999 protocol: TCP 4.2.2.2.4. Modifying ConfigMap data Modify the central-endpoints ConfigMap data as shown in the following example: apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # ... overlays: - apiVersion: v1 kind: ConfigMap name: central-endpoints patches: - path: data value: | endpoints.yaml: | disableDefault: false 4.2.2.2.5. Adding a container to the Central deployment Add a new container to the central deployment as shown in the following example: apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # ... overlays: - apiVersion: apps/v1 kind: Deployment name: central patches: - path: spec.template.spec.containers[-1] value: | name: nginx image: nginx ports: - containerPort: 8000 name: http protocol: TCP Additional resources Connection Strings - PostgreSQL Docs Parameter Interaction via the Configuration File - PostgreSQL Docs The pg_hba.conf File - PostgreSQL Docs 4.3. Generating and applying an init bundle for RHACS on Red Hat OpenShift Before you install the SecuredCluster resource on a cluster, you must create an init bundle. The cluster that has SecuredCluster installed and configured then uses this bundle to authenticate with Central. You can create an init bundle by using either the RHACS portal or the roxctl CLI. You then apply the init bundle by using it to create resources. To configure an init bundle for RHACS Cloud Service, see the following resources: Generating an init bundle for secured clusters (Red Hat Cloud) Applying an init bundle for secured clusters (Red Hat Cloud) Generating an init bundle for Kubernetes secured clusters Applying an init bundle for Kubernetes secured clusters Note You must have the Admin user role to create an init bundle. 4.3.1. Generating an init bundle 4.3.1.1.
Generating an init bundle by using the RHACS portal You can create an init bundle containing secrets by using the RHACS portal. Note You must have the Admin user role to create an init bundle. Procedure Find the address of the RHACS portal as described in "Verifying Central installation using the Operator method". Log in to the RHACS portal. If you do not have secured clusters, the Platform Configuration Clusters page appears. Click Create init bundle . Enter a name for the cluster init bundle. Select your platform. Select the installation method you will use for your secured clusters: Operator or Helm chart . Click Download to generate and download the init bundle, which is created in the form of a YAML file. You can use one init bundle and its corresponding YAML file for all secured clusters if you are using the same installation method. Important Store this bundle securely because it contains secrets. Apply the init bundle by using it to create resources on the secured cluster. Install secured cluster services on each cluster. 4.3.1.2. Generating an init bundle by using the roxctl CLI You can create an init bundle with secrets by using the roxctl CLI. Note You must have the Admin user role to create init bundles. Prerequisites You have configured the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables: Set the ROX_API_TOKEN by running the following command: USD export ROX_API_TOKEN=<api_token> Set the ROX_CENTRAL_ADDRESS environment variable by running the following command: USD export ROX_CENTRAL_ADDRESS=<address>:<port_number> Procedure To generate a cluster init bundle containing secrets for Helm installations, run the following command: USD roxctl -e "USDROX_CENTRAL_ADDRESS" \ central init-bundles generate --output \ <cluster_init_bundle_name> cluster_init_bundle.yaml To generate a cluster init bundle containing secrets for Operator installations, run the following command: USD roxctl -e "USDROX_CENTRAL_ADDRESS" \ central init-bundles generate --output-secrets \ <cluster_init_bundle_name> cluster_init_bundle.yaml Important Ensure that you store this bundle securely because it contains secrets. You can use the same bundle to set up multiple secured clusters. 4.3.1.3. Applying the init bundle on the secured cluster Before you configure a secured cluster, you must apply the init bundle by using it to create the required resources on the cluster. Applying the init bundle allows the services on the secured cluster to communicate with Central. Note If you are installing by using Helm charts, do not perform this step. Complete the installation by using Helm; See "Installing RHACS on secured clusters by using Helm charts" in the additional resources section. Prerequisites You must have generated an init bundle containing secrets. You must have created the stackrox project, or namespace, on the cluster where secured cluster services will be installed. Using stackrox for the project is not required, but ensures that vulnerabilities for RHACS processes are not reported when scanning your clusters. Procedure To create resources, perform only one of the following steps: Create resources using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, make sure that you are in the stackrox namespace. In the top menu, click + to open the Import YAML page. You can drag the init bundle file or copy and paste its contents into the editor, and then click Create . 
When the command is complete, the display shows that the collector-tls , sensor-tls , and admission-control-tls resources were created. Create resources using the Red Hat OpenShift CLI: Using the Red Hat OpenShift CLI, run the following command to create the resources: USD oc create -f <init_bundle>.yaml \ 1 -n <stackrox> 2 1 Specify the file name of the init bundle containing the secrets. 2 Specify the name of the project where Central services are installed. 4.3.2. Next steps Install RHACS secured cluster services in all clusters that you want to monitor. 4.3.3. Additional resources Installing RHACS on secured clusters by using Helm charts 4.4. Installing Secured Cluster services for RHACS on Red Hat OpenShift You can install RHACS on your secured clusters by using one of the following methods: Install by using the Operator Install by using Helm charts Install by using the roxctl CLI (do not use this method unless you have a specific installation need that requires using it) 4.4.1. Installing RHACS on secured clusters by using the Operator 4.4.1.1. Installing secured cluster services You can install Secured Cluster services on your clusters by using the Operator, which creates the SecuredCluster custom resource. You must install the Secured Cluster services on every cluster in your environment that you want to monitor. Important When you install Red Hat Advanced Cluster Security for Kubernetes: If you are installing RHACS for the first time, you must first install the Central custom resource because the SecuredCluster custom resource installation is dependent on certificates that Central generates. Do not install SecuredCluster in projects whose names start with kube , openshift , or redhat , or in the istio-system project. If you are installing the RHACS SecuredCluster custom resource on a cluster that also hosts Central, ensure that you install it in the same namespace as Central. If you are installing the Red Hat Advanced Cluster Security for Kubernetes SecuredCluster custom resource on a cluster that does not host Central, Red Hat recommends that you install the SecuredCluster custom resource in its own project and not in the project in which you have installed the Red Hat Advanced Cluster Security for Kubernetes Operator. Prerequisites If you are using OpenShift Container Platform, you must install version 4.12 or later. You have installed the RHACS Operator on the cluster that you want to secure, called the secured cluster. You have generated an init bundle and applied it to the cluster. Procedure On the OpenShift Container Platform web console for the secured cluster, go to the Operators Installed Operators page. Click the RHACS Operator. If you have installed the Operator in the recommended namespace, OpenShift Container Platform lists the project as rhacs-operator . Select Project: rhacs-operator Create project . Note If you installed the Operator in a different namespace, OpenShift Container Platform lists the name of that namespace instead of rhacs-operator . Enter the new project name (for example, stackrox ), and click Create . Red Hat recommends that you use stackrox as the project name. Click Secured Cluster from the central navigation menu in the Operator details page. Click Create SecuredCluster . Select one of the following options in the Configure via field: Form view : Use this option if you want to use the on-screen fields to configure the secured cluster and do not need to change any other fields.
YAML view : Use this view to set up the secured cluster by using the YAML file. The YAML file is displayed in the window, and you can edit fields in it. If you select this option, when you are finished editing the file, click Create . If you are using Form view , enter the new project name by accepting or editing the default name. The default value is stackrox-secured-cluster-services . Optional: Add any labels for the cluster. Enter a unique name for your SecuredCluster custom resource. For Central Endpoint , enter the address of your Central instance. For example, if Central is available at https://central.example.com , then specify the central endpoint as central.example.com . Use the default value of central.stackrox.svc:443 only if you are installing secured cluster services in the same cluster where Central is installed. Do not use the default value when you are configuring multiple clusters. Instead, use the hostname when configuring the Central Endpoint value for each cluster. For the remaining fields, accept the default values or configure custom values if needed. For example, you might need to configure TLS if you are using custom certificates or untrusted CAs. See "Configuring Secured Cluster services options for RHACS using the Operator" for more information. Click Create . After a brief pause, the SecuredClusters page displays the status of stackrox-secured-cluster-services . You might see the following conditions: Conditions: Deployed, Initialized : The secured cluster services have been installed and the secured cluster is communicating with Central. Conditions: Initialized, Irreconcilable : The secured cluster is not communicating with Central. Make sure that you applied the init bundle you created in the RHACS web portal to the secured cluster. Next steps Configure additional secured cluster settings (optional). Verify installation. 4.4.2. Installing RHACS on secured clusters by using Helm charts You can install RHACS on secured clusters by using Helm charts with no customization, using the default values, or with customizations of configuration parameters. 4.4.2.1. Installing RHACS on secured clusters by using Helm charts without customizations 4.4.2.1.1. Adding the Helm chart repository Procedure Add the RHACS charts repository. USD helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/ The Helm repository for Red Hat Advanced Cluster Security for Kubernetes includes Helm charts for installing different components, including: Central services Helm chart ( central-services ) for installing the centralized components (Central and Scanner). Note You deploy centralized components only once and you can monitor multiple separate clusters by using the same installation. Secured Cluster Services Helm chart ( secured-cluster-services ) for installing the per-cluster and per-node components (Sensor, Admission Controller, Collector, and Scanner-slim). Note Deploy the per-cluster components into each cluster that you want to monitor and deploy the per-node components on all nodes that you want to monitor. Verification Run the following command to verify the added chart repository: USD helm search repo -l rhacs/ 4.4.2.1.2. Installing the secured-cluster-services Helm chart without customization Use the following instructions to install the secured-cluster-services Helm chart to deploy the per-cluster and per-node components (Sensor, Admission controller, Collector, and Scanner-slim). Prerequisites You must have generated an RHACS init bundle for your cluster.
You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io , see Red Hat Container Registry Authentication . You must have the address that you are exposing the Central service on. Procedure Run the following command on OpenShift Container Platform clusters: USD helm install -n stackrox --create-namespace \ stackrox-secured-cluster-services rhacs/secured-cluster-services \ -f <path_to_cluster_init_bundle.yaml> \ 1 -f <path_to_pull_secret.yaml> \ 2 --set clusterName=<name_of_the_secured_cluster> \ --set centralEndpoint=<endpoint_of_central_service> 3 --set scanner.disable=false 4 1 Use the -f option to specify the path for the init bundle. 2 Use the -f option to specify the path for the pull secret for Red Hat Container Registry authentication. 3 Specify the address and port number for Central. For example, acs.domain.com:443 . 4 Set the value of the scanner.disable parameter to false , which means that Scanner-slim will be enabled during the installation. In Kubernetes, the secured cluster services now include Scanner-slim as an optional component. Additional resources Generating and applying an init bundle for RHACS on Red Hat OpenShift 4.4.2.2. Configuring the secured-cluster-services Helm chart with customizations This section describes Helm chart configuration parameters that you can use with the helm install and helm upgrade commands. You can specify these parameters by using the --set option or by creating YAML configuration files. Create the following files for configuring the Helm chart for installing Red Hat Advanced Cluster Security for Kubernetes: Public configuration file values-public.yaml : Use this file to save all non-sensitive configuration options. Private configuration file values-private.yaml : Use this file to save all sensitive configuration options. Ensure that you store this file securely. Important While using the secured-cluster-services Helm chart, do not modify the values.yaml file that is part of the chart. 4.4.2.2.1. Configuration parameters Parameter Description clusterName Name of your cluster. centralEndpoint Address of the Central endpoint. If you are using a non-gRPC capable load balancer, use the WebSocket protocol by prefixing the endpoint address with wss:// . When configuring multiple clusters, use the hostname for the address. For example, central.example.com . sensor.endpoint Address of the Sensor endpoint including port number. sensor.imagePullPolicy Image pull policy for the Sensor container. sensor.serviceTLS.cert The internal service-to-service TLS certificate that Sensor uses. sensor.serviceTLS.key The internal service-to-service TLS certificate key that Sensor uses. sensor.resources.requests.memory The memory request for the Sensor container. Use this parameter to override the default value. sensor.resources.requests.cpu The CPU request for the Sensor container. Use this parameter to override the default value. sensor.resources.limits.memory The memory limit for the Sensor container. Use this parameter to override the default value. sensor.resources.limits.cpu The CPU limit for the Sensor container. Use this parameter to override the default value. sensor.nodeSelector Specify a node selector label as label-key: label-value to force Sensor to only schedule on nodes with the specified label. sensor.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Sensor. 
This parameter is mainly used for infrastructure nodes. image.main.name The name of the main image. image.collector.name The name of the Collector image. image.main.registry The address of the registry you are using for the main image. image.collector.registry The address of the registry you are using for the Collector image. image.scanner.registry The address of the registry you are using for the Scanner image. image.scannerDb.registry The address of the registry you are using for the Scanner DB image. image.scannerV4.registry The address of the registry you are using for the Scanner V4 image. image.scannerV4DB.registry The address of the registry you are using for the Scanner V4 DB image. image.main.pullPolicy Image pull policy for main images. image.collector.pullPolicy Image pull policy for the Collector images. image.main.tag Tag of main image to use. image.collector.tag Tag of collector image to use. collector.collectionMethod Either CORE_BPF or NO_COLLECTION . collector.imagePullPolicy Image pull policy for the Collector container. collector.complianceImagePullPolicy Image pull policy for the Compliance container. collector.disableTaintTolerations If you specify false , tolerations are applied to Collector, and the collector pods can schedule onto all nodes with taints. If you specify it as true , no tolerations are applied, and the collector pods are not scheduled onto nodes with taints. collector.resources.requests.memory The memory request for the Collector container. Use this parameter to override the default value. collector.resources.requests.cpu The CPU request for the Collector container. Use this parameter to override the default value. collector.resources.limits.memory The memory limit for the Collector container. Use this parameter to override the default value. collector.resources.limits.cpu The CPU limit for the Collector container. Use this parameter to override the default value. collector.complianceResources.requests.memory The memory request for the Compliance container. Use this parameter to override the default value. collector.complianceResources.requests.cpu The CPU request for the Compliance container. Use this parameter to override the default value. collector.complianceResources.limits.memory The memory limit for the Compliance container. Use this parameter to override the default value. collector.complianceResources.limits.cpu The CPU limit for the Compliance container. Use this parameter to override the default value. collector.serviceTLS.cert The internal service-to-service TLS certificate that Collector uses. collector.serviceTLS.key The internal service-to-service TLS certificate key that Collector uses. admissionControl.listenOnCreates This setting controls whether Kubernetes is configured to contact Red Hat Advanced Cluster Security for Kubernetes with AdmissionReview requests for workload creation events. admissionControl.listenOnUpdates When you set this parameter as false , Red Hat Advanced Cluster Security for Kubernetes creates the ValidatingWebhookConfiguration in a way that causes the Kubernetes API server not to send object update events. Since the volume of object updates is usually higher than the object creates, leaving this as false limits the load on the admission control service and decreases the chances of a malfunctioning admission control service. 
admissionControl.listenOnEvents This setting controls whether the cluster is configured to contact Red Hat Advanced Cluster Security for Kubernetes with AdmissionReview requests for Kubernetes exec and portforward events. RHACS does not support this feature on OpenShift Container Platform 3.11. admissionControl.dynamic.enforceOnCreates This setting controls whether Red Hat Advanced Cluster Security for Kubernetes evaluates policies; if it is disabled, all AdmissionReview requests are automatically accepted. admissionControl.dynamic.enforceOnUpdates This setting controls the behavior of the admission control service. You must specify listenOnUpdates as true for this to work. admissionControl.dynamic.scanInline If you set this option to true , the admission control service requests an image scan before making an admission decision. Since image scans take several seconds, enable this option only if you can ensure that all images used in your cluster are scanned before deployment (for example, by a CI integration during image build). This option corresponds to the Contact image scanners option in the RHACS portal. admissionControl.dynamic.disableBypass Set it to true to disable bypassing the Admission controller. admissionControl.dynamic.timeout Use this parameter to specify the maximum number of seconds RHACS must wait for an admission review before marking it as fail open. If the admission webhook does not receive information that it is requesting before the end of the timeout period, it fails, but in fail open status, it still allows the operation to succeed. For example, the admission controller would allow a deployment to be created even if a scan had timed out and RHACS could not determine if the deployment violated a policy. Beginning in release 4.5, Red Hat reduced the default timeout setting for the RHACS admission controller webhooks from 20 seconds to 10 seconds, resulting in an effective timeout of 12 seconds within the ValidatingWebhookConfiguration . This change does not negatively affect OpenShift Container Platform users because OpenShift Container Platform caps the timeout at 13 seconds. admissionControl.resources.requests.memory The memory request for the Admission Control container. Use this parameter to override the default value. admissionControl.resources.requests.cpu The CPU request for the Admission Control container. Use this parameter to override the default value. admissionControl.resources.limits.memory The memory limit for the Admission Control container. Use this parameter to override the default value. admissionControl.resources.limits.cpu The CPU limit for the Admission Control container. Use this parameter to override the default value. admissionControl.nodeSelector Specify a node selector label as label-key: label-value to force Admission Control to only schedule on nodes with the specified label. admissionControl.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Admission Control. This parameter is mainly used for infrastructure nodes. admissionControl.serviceTLS.cert The internal service-to-service TLS certificate that Admission Control uses. admissionControl.serviceTLS.key The internal service-to-service TLS certificate key that Admission Control uses. registryOverride Use this parameter to override the default docker.io registry. Specify the name of your registry if you are using some other registry. 
createUpgraderServiceAccount Specify true to create the sensor-upgrader account. By default, Red Hat Advanced Cluster Security for Kubernetes creates a service account called sensor-upgrader in each secured cluster. This account is highly privileged but is only used during upgrades. If you do not create this account, you must complete future upgrades manually if the Sensor does not have enough permissions. createSecrets Specify false to skip the orchestrator secret creation for the Sensor, Collector, and Admission controller. collector.slimMode Deprecated. Specify true if you want to use a slim Collector image for deploying Collector. sensor.resources Resource specification for Sensor. admissionControl.resources Resource specification for Admission controller. collector.resources Resource specification for Collector. collector.complianceResources Resource specification for Collector's Compliance container. exposeMonitoring If you set this option to true , Red Hat Advanced Cluster Security for Kubernetes exposes Prometheus metrics endpoints on port number 9090 for the Sensor, Collector, and the Admission controller. auditLogs.disableCollection If you set this option to true , Red Hat Advanced Cluster Security for Kubernetes disables the audit log detection features used to detect access and modifications to configuration maps and secrets. scanner.disable If you set this option to false , Red Hat Advanced Cluster Security for Kubernetes deploys a Scanner-slim and Scanner DB in the secured cluster to allow scanning images on the integrated OpenShift image registry. Enabling Scanner-slim is supported on OpenShift Container Platform and Kubernetes secured clusters. Defaults to true . scanner.replicas The number of replicas for the Scanner deployment. When autoscaling is disabled, the number of replicas is always configured to match this value. scanner.logLevel Setting this parameter allows you to modify the scanner log level. Use this option only for troubleshooting purposes. scanner.autoscaling.disable If you set this option to true , Red Hat Advanced Cluster Security for Kubernetes disables autoscaling on the Scanner deployment. scanner.autoscaling.minReplicas The minimum number of replicas for autoscaling. Defaults to 2. scanner.autoscaling.maxReplicas The maximum number of replicas for autoscaling. Defaults to 5. scanner.nodeSelector Specify a node selector label as label-key: label-value to force Scanner to only schedule on nodes with the specified label. scanner.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner. scanner.dbNodeSelector Specify a node selector label as label-key: label-value to force Scanner DB to only schedule on nodes with the specified label. scanner.dbTolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner DB. scanner.resources.requests.memory The memory request for the Scanner container. Use this parameter to override the default value. scanner.resources.requests.cpu The CPU request for the Scanner container.
Use this parameter to override the default value. scanner.resources.limits.memory The memory limit for the Scanner container. Use this parameter to override the default value. scanner.resources.limits.cpu The CPU limit for the Scanner container. Use this parameter to override the default value. scanner.dbResources.requests.memory The memory request for the Scanner DB container. Use this parameter to override the default value. scanner.dbResources.requests.cpu The CPU request for the Scanner DB container. Use this parameter to override the default value. scanner.dbResources.limits.memory The memory limit for the Scanner DB container. Use this parameter to override the default value. scanner.dbResources.limits.cpu The CPU limit for the Scanner DB container. Use this parameter to override the default value. monitoring.openshift.enabled If you set this option to false , Red Hat Advanced Cluster Security for Kubernetes will not set up Red Hat OpenShift monitoring. Defaults to true on Red Hat OpenShift 4. network.enableNetworkPolicies To provide security at the network level, RHACS creates default NetworkPolicy resources in the namespace where secured cluster resources are installed. These network policies allow ingress to specific components on specific ports. If you do not want RHACS to create these policies, set this parameter to False . This is a Boolean value. The default value is True , which means the default policies are automatically created. Warning Disabling creation of default network policies can break communication between RHACS components. If you disable creation of default policies, you must create your own network policies to allow this communication. 4.4.2.2.1.1. Environment variables You can specify environment variables for Sensor and Admission controller in the following format: customize: envVars: ENV_VAR1: "value1" ENV_VAR2: "value2" The customize setting allows you to specify custom Kubernetes metadata (labels and annotations) for all objects created by this Helm chart and additional pod labels, pod annotations, and container environment variables for workloads. The configuration is hierarchical, in the sense that metadata defined at a more generic scope (for example, for all objects) can be overridden by metadata defined at a narrower scope (for example, only for the Sensor deployment). 4.4.2.2.2. Installing the secured-cluster-services Helm chart with customizations After you configure the values-public.yaml and values-private.yaml files, install the secured-cluster-services Helm chart to deploy the following per-cluster and per-node components: Sensor Admission controller Collector Scanner: optional for secured clusters when the StackRox Scanner is installed Scanner DB: optional for secured clusters when the StackRox Scanner is installed Scanner V4 Indexer and Scanner V4 DB: optional for secured clusters when Scanner V4 is installed Prerequisites You must have generated an RHACS init bundle for your cluster. You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io , see Red Hat Container Registry Authentication . You must have the address and the port number that you are exposing the Central service on. 
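Before you run the installation command in the following procedure, it can help to see how the chart parameters described in this section fit together in a configuration file. The following is a minimal sketch of a values-public.yaml file, assuming a hypothetical cluster name and Central endpoint; the keys come from the parameter descriptions above, and every value shown is an example rather than a recommended setting. Sensitive material, such as the serviceTLS certificate and key described earlier, is assumed to be kept in values-private.yaml instead.

clusterName: my-secured-cluster             # hypothetical name shown in the RHACS portal
centralEndpoint: central.example.com:443    # hypothetical address and port of the exposed Central service
scanner:
  disable: false                            # deploy Scanner-slim and Scanner DB in this secured cluster
  autoscaling:
    minReplicas: 2
    maxReplicas: 5
admissionControl:
  dynamic:
    enforceOnCreates: true
    enforceOnUpdates: true
    timeout: 10
exposeMonitoring: true                      # expose Prometheus metrics endpoints on port 9090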
Procedure Run the following command: USD helm install -n stackrox \ --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services \ -f <name_of_cluster_init_bundle.yaml> \ -f <path_to_values_public.yaml> -f <path_to_values_private.yaml> \ 1 --set imagePullSecrets.username=<username> \ 2 --set imagePullSecrets.password=<password> 3 1 Use the -f option to specify the paths for your YAML configuration files. 2 Include the user name for your pull secret for Red Hat Container Registry authentication. 3 Include the password for your pull secret for Red Hat Container Registry authentication. Note To deploy secured-cluster-services Helm chart by using a continuous integration (CI) system, pass the init bundle YAML file as an environment variable to the helm install command: USD helm install ... -f <(echo "USDINIT_BUNDLE_YAML_SECRET") 1 1 If you are using base64 encoded variables, use the helm install ... -f <(echo "USDINIT_BUNDLE_YAML_SECRET" | base64 --decode) command instead. Additional resources Generating and applying an init bundle for RHACS on Red Hat OpenShift 4.4.2.3. Changing configuration options after deploying the secured-cluster-services Helm chart You can make changes to any configuration options after you have deployed the secured-cluster-services Helm chart. When using the helm upgrade command to make changes, the following guidelines and requirements apply: You can also specify configuration values using the --set or --set-file parameters. However, these options are not saved, and you must manually specify all the options again whenever you make changes. Some changes, such as enabling a new component like Scanner V4, require new certificates to be issued for the component. Therefore, you must provide a CA when making these changes. If the CA was generated by the Helm chart during the initial installation, you must retrieve these automatically generated values from the cluster and provide them to the helm upgrade command. The post-installation notes of the central-services Helm chart include a command for retrieving the automatically generated values. If the CA was generated outside of the Helm chart and provided during the installation of the central-services chart, then you must perform that action again when using the helm upgrade command, for example, by using the --reuse-values flag with the helm upgrade command. Procedure Update the values-public.yaml and values-private.yaml configuration files with new values. Run the helm upgrade command and specify the configuration files using the -f option: USD helm upgrade -n stackrox \ stackrox-secured-cluster-services rhacs/secured-cluster-services \ --reuse-values \ 1 -f <path_to_values_public.yaml> \ -f <path_to_values_private.yaml> 1 If you have modified values that are not included in the values_public.yaml and values_private.yaml files, include the --reuse-values parameter. 4.4.3. Installing RHACS on secured clusters by using the roxctl CLI This method is also referred to as the manifest installation method. Prerequisites If you plan to use the roxctl CLI command to generate the files used by the sensor installation script, you have installed the roxctl CLI. You have generated the files that will be used by the sensor installation script. Procedure On the OpenShift Container Platform secured cluster, deploy the Sensor component by running the sensor installation script. 4.4.3.1. Installing the roxctl CLI You must first download the binary. You can install roxctl on Linux, Windows, or macOS. 4.4.3.1.1. 
Installing the roxctl CLI on Linux You can install the roxctl CLI binary on Linux by using the following procedure. Note roxctl CLI for Linux is available for amd64 , arm64 , ppc64le , and s390x architectures. Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Linux/roxctlUSD{arch}" Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 4.4.3.1.2. Installing the roxctl CLI on macOS You can install the roxctl CLI binary on macOS by using the following procedure. Note roxctl CLI for macOS is available for amd64 and arm64 architectures. Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Darwin/roxctlUSD{arch}" Remove all extended attributes from the binary: USD xattr -c roxctl Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 4.4.3.1.3. Installing the roxctl CLI on Windows You can install the roxctl CLI binary on Windows by using the following procedure. Note roxctl CLI for Windows is available for the amd64 architecture. Procedure Download the roxctl CLI: USD curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Windows/roxctl.exe Verification Verify the roxctl version you have installed: USD roxctl version 4.4.3.2. Installing Sensor To monitor a cluster, you must deploy Sensor. You must deploy Sensor into each cluster that you want to monitor. This installation method is also called the manifest installation method. To perform an installation by using the manifest installation method, follow only one of the following procedures: Use the RHACS web portal to download the cluster bundle, and then extract and run the sensor script. Use the roxctl CLI to generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance. Prerequisites You must have already installed Central services, or you can access Central services by selecting your ACS instance on Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service). 4.4.3.2.1. Manifest installation method by using the web portal Procedure On your secured cluster, in the RHACS portal, go to Platform Configuration Clusters . Select Secure a cluster Legacy installation method . Specify a name for the cluster. Provide appropriate values for the fields based on where you are deploying the Sensor. If you are deploying Sensor in the same cluster, accept the default values for all the fields. If you are deploying into a different cluster, replace central.stackrox.svc:443 with a load balancer, node port, or other address, including the port number, that is accessible from the other cluster. 
If you are using a non-gRPC capable load balancer, such as HAProxy, AWS Application Load Balancer (ALB), or AWS Elastic Load Balancing (ELB), use the WebSocket Secure ( wss ) protocol. To use wss : Prefix the address with wss:// . Add the port number after the address, for example, wss://stackrox-central.example.com:443 . Click to continue with the Sensor setup. Click Download YAML File and Keys to download the cluster bundle (zip archive). Important The cluster bundle zip archive includes unique configurations and keys for each cluster. Do not reuse the same files in another cluster. From a system that has access to the monitored cluster, extract and run the sensor script from the cluster bundle: USD unzip -d sensor sensor-<cluster_name>.zip USD ./sensor/sensor.sh If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for help. After Sensor is deployed, it contacts Central and provides cluster information. 4.4.3.2.2. Manifest installation by using the roxctl CLI Procedure Generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance by running the following command: USD roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central "USDROX_ENDPOINT" 1 1 For the --openshift-version option, specify the major OpenShift Container Platform version number for your cluster. For example, specify 3 for OpenShift Container Platform version 3.x and specify 4 for OpenShift Container Platform version 4.x . From a system that has access to the monitored cluster, extract and run the sensor script from the cluster bundle: USD unzip -d sensor sensor-<cluster_name>.zip USD ./sensor/sensor.sh If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for help. After Sensor is deployed, it contacts Central and provides cluster information. Verification Return to the RHACS portal and check if the deployment is successful. If successful, when viewing your list of clusters in Platform Configuration Clusters , the cluster status displays a green checkmark and a Healthy status. If you do not see a green checkmark, use the following command to check for problems: On OpenShift Container Platform, enter the following command: USD oc get pod -n stackrox -w On Kubernetes, enter the following command: USD kubectl get pod -n stackrox -w Click Finish to close the window. After installation, Sensor starts reporting security information to RHACS and the RHACS portal dashboard begins showing deployments, images, and policy violations from the cluster on which you have installed the Sensor. 4.5. Configuring Secured Cluster services options for RHACS using the Operator When installing Secured Cluster services by using the Operator, you can configure optional settings. 4.5.1. Secured Cluster services configuration options When you create a SecuredCluster instance, the Operator lists the following configuration options for the SecuredCluster custom resource. 4.5.1.1. Required Configuration Settings Parameter Description centralEndpoint The endpoint of the Central instance to connect to, including the port number. If using a non-gRPC capable load balancer, use the WebSocket protocol by prefixing the endpoint address with wss:// .
If you do not specify a value for this parameter, Sensor attempts to connect to a Central instance running in the same namespace. clusterName The unique name of this cluster, which shows up in the RHACS portal. After you set the name by using this parameter, you cannot change it again. To change the name, you must delete and re-create the object. 4.5.1.2. Admission controller settings Parameter Description admissionControl.listenOnCreates Specify true to enable preventive policy enforcement for object creations. The default value is true . admissionControl.listenOnEvents Specify true to enable monitoring and enforcement for Kubernetes events, such as port-forward and exec events. It is used to control access to resources through the Kubernetes API. The default value is true . admissionControl.listenOnUpdates Specify true to enable preventive policy enforcement for object updates. It will not have any effect unless Listen On Creates is set to true as well. The default value is true . admissionControl.nodeSelector If you want this component to only run on specific nodes, you can configure a node selector using this parameter. admissionControl.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Admission Control. This parameter is mainly used for infrastructure nodes. admissionControl.hostAliases Use this parameter to inject hosts and IP addresses into the pod's hosts file. admissionControl.resources.limits Use this parameter to override the default resource limits for the admission controller. admissionControl.resources.requests Use this parameter to override the default resource requests for the admission controller. admissionControl.bypass Use one of the following values to configure the bypassing of admission controller enforcement: BreakGlassAnnotation to enable bypassing the admission controller via the admission.stackrox.io/break-glass annotation. Disabled to disable the ability to bypass admission controller enforcement for the secured cluster. The default value is BreakGlassAnnotation . admissionControl.contactImageScanners Use one of the following values to specify if the admission controller must connect to the image scanner: ScanIfMissing if the scan results for the image are missing. DoNotScanInline to skip scanning the image when processing the admission request. The default value is DoNotScanInline . admissionControl.timeoutSeconds Use this parameter to specify the maximum number of seconds RHACS must wait for an admission review before marking it as fail open. If the admission webhook does not receive information that it is requesting before the end of the timeout period, it fails, but in fail open status, it still allows the operation to succeed. For example, the admission controller would allow a deployment to be created even if a scan had timed out and RHACS could not determine if the deployment violated a policy. Beginning in release 4.5, Red Hat reduced the default timeout setting for the RHACS admission controller webhooks from 20 seconds to 10 seconds, resulting in an effective timeout of 12 seconds within the ValidatingWebhookConfiguration . This change does not negatively affect OpenShift Container Platform users because OpenShift Container Platform caps the timeout at 13 seconds. 4.5.1.3. Scanner configuration Use Scanner configuration settings to modify the local cluster scanner for the integrated OpenShift image registry. 
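To illustrate how the admission controller settings above and the scanner settings listed in the table that follows map onto the custom resource, here is a minimal sketch of a SecuredCluster spec fragment. The field paths follow the parameter tables in this section; the resource name, endpoint, and values are hypothetical, and the apiVersion shown is an assumption that you should check against the CRD installed in your cluster:

apiVersion: platform.stackrox.io/v1alpha1   # assumed API version
kind: SecuredCluster
metadata:
  name: stackrox-secured-cluster-services   # hypothetical name
spec:
  centralEndpoint: central.example.com:443  # hypothetical Central address
  clusterName: my-secured-cluster           # hypothetical cluster name
  admissionControl:
    listenOnCreates: true
    listenOnUpdates: true
    contactImageScanners: DoNotScanInline
    timeoutSeconds: 10
  scanner:
    scannerComponent: AutoSense
    analyzer:
      scaling:
        minReplicas: 2
        maxReplicas: 5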
Parameter Description scanner.analyzer.nodeSelector Specify a node selector label as label-key: label-value to force Scanner to only schedule on nodes with the specified label. scanner.analyzer.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner. scanner.analyzer.hostAliases Use this parameter to inject hosts and IP addresses into the pod's hosts file. scanner.analyzer.resources.requests.memory The memory request for the Scanner container. Use this parameter to override the default value. scanner.analyzer.resources.requests.cpu The CPU request for the Scanner container. Use this parameter to override the default value. scanner.analyzer.resources.limits.memory The memory limit for the Scanner container. Use this parameter to override the default value. scanner.analyzer.resources.limits.cpu The CPU limit for the Scanner container. Use this parameter to override the default value. scanner.analyzer.scaling.autoscaling If you set this option to Disabled , Red Hat Advanced Cluster Security for Kubernetes disables autoscaling on the Scanner deployment. The default value is Enabled . scanner.analyzer.scaling.minReplicas The minimum number of replicas for autoscaling. The default value is 2 . scanner.analyzer.scaling.maxReplicas The maximum number of replicas for autoscaling. The default value is 5 . scanner.analyzer.scaling.replicas The default number of replicas. The default value is 3 . scanner.db.nodeSelector Specify a node selector label as label-key: label-value to force Scanner DB to only schedule on nodes with the specified label. scanner.db.hostAliases Use this parameter to inject hosts and IP addresses into the pod's hosts file. scanner.db.resources.requests.memory The memory request for the Scanner DB container. Use this parameter to override the default value. scanner.db.resources.requests.cpu The CPU request for the Scanner DB container. Use this parameter to override the default value. scanner.db.resources.limits.memory The memory limit for the Scanner DB container. Use this parameter to override the default value. scanner.db.resources.limits.cpu The CPU limit for the Scanner DB container. Use this parameter to override the default value. scanner.db.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner DB. scanner.scannerComponent If you set this option to Disabled , Red Hat Advanced Cluster Security for Kubernetes does not deploy the Scanner deployment. Do not disable the Scanner on OpenShift Container Platform clusters. The default value is AutoSense . scannerV4.db.nodeSelector If you want this component to only run on specific nodes, you can use this parameter to configure a node selector. scannerV4.db.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner V4 DB. This parameter is mainly used for infrastructure nodes. scannerV4.db.resources.limits Use this parameter to override the default resource limits for Scanner V4 DB. scannerV4.db.resources.requests Use this parameter to override the default resource requests for Scanner V4 DB. scannerV4.db.persistence.persistentVolumeClaim.claimName The name of the PVC to manage persistent data for Scanner V4.
If no PVC with the given name exists, it is created. The default value is scanner-v4-db if not set. To prevent data loss, the PVC is not removed automatically when Central is deleted. scannerV4.db.persistence.persistentVolumeClaim.size The size of the PVC to manage persistent data for Scanner V4. scannerV4.db.persistence.persistentVolumeClaim.storageClassName The name of the storage class to use for the PVC. If your cluster is not configured with a default storage class, you must provide a value for this parameter. scannerV4.indexer.nodeSelector If you want this component to only run on specific nodes, you can use this parameter to configure a node selector. scannerV4.indexer.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the Scanner V4 Indexer. This parameter is mainly used for infrastructure nodes. scannerV4.indexer.resources.limits Use this parameter to override the default resource limits for the Scanner V4 Indexer. scannerV4.indexer.resources.requests Use this parameter to override the default resource requests for the Scanner V4 Indexer. scannerV4.indexer.scaling.autoScaling When enabled, the number of Scanner V4 Indexer replicas is managed dynamically based on the load, within the limits specified. scannerV4.indexer.scaling.maxReplicas Specifies the maximum replicas to be used in the Scanner V4 Indexer autoscaling configuration. scannerV4.indexer.scaling.minReplicas Specifies the minimum replicas to be used in the Scanner V4 Indexer autoscaling configuration. scannerV4.indexer.scaling.replicas When autoscaling is disabled for the Scanner V4 Indexer, the number of replicas is always configured to match this value. scannerV4.monitoring.exposeEndpoint Configures a monitoring endpoint for Scanner V4. The monitoring endpoint allows other services to collect metrics from Scanner V4, provided in a Prometheus-compatible format. Use Enabled to expose the monitoring endpoint. When you enable monitoring, RHACS creates a new service, monitoring , with port 9090, and a network policy allowing inbound connections to the port. By default, this is not enabled. scannerV4.scannerComponent Enables Scanner V4. Valid values are: * Default : Scanner V4 is not enabled and not deployed. * AutoSense : If Central exists in the same namespace, Scanner V4 is not deployed and the existing Scanner V4 that was installed with Central is used. If there is no Central in this namespace, Scanner V4 is deployed. * Disabled : Do not deploy Scanner V4. 4.5.1.4. Image configuration Use image configuration settings when you are using a custom registry. Parameter Description imagePullSecrets.name Additional image pull secrets to be taken into account for pulling images. 4.5.1.5. Per node settings Per node settings define the configuration settings for components that run on each node in a cluster to secure the cluster. These components are Collector and Compliance. Parameter Description perNode.collector.collection The method for system-level data collection. The default value is CORE_BPF . Red Hat recommends using CORE_BPF for data collection. If you select NoCollection , Collector does not report any information about the network activity and the process executions. Available options are NoCollection and CORE_BPF . The EBPF option is available only for version 4.4 and earlier. perNode.collector.imageFlavor The image type to use for Collector. You can specify it as Regular or Slim . This value is deprecated. 
Regular and Slim images are identical. perNode.collector.resources.limits Use this parameter to override the default resource limits for Collector. perNode.collector.resources.requests Use this parameter to override the default resource requests for Collector. perNode.compliance.resources.requests Use this parameter to override the default resource requests for Compliance. perNode.compliance.resources.limits Use this parameter to override the default resource limits for Compliance. perNode.taintToleration To ensure comprehensive monitoring of your cluster activity, Red Hat Advanced Cluster Security for Kubernetes runs services on every node in the cluster, including tainted nodes by default. If you do not want this behavior, specify AvoidTaints for this parameter. The default value is TolerateTaints . 4.5.1.6. Sensor configuration This configuration defines the settings of the Sensor component, which runs on one node in a cluster. Parameter Description sensor.nodeSelector If you want Sensor to only run on specific nodes, you can configure a node selector. sensor.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Sensor. This parameter is mainly used for infrastructure nodes. sensor.hostAliases Use this parameter to inject hosts and IP addresses into the pod's hosts file. sensor.resources.limits Use this parameter to override the default resource limits for Sensor. sensor.resources.requests Use this parameter to override the default resource requests for Sensor. 4.5.1.7. General and miscellaneous settings Parameter Description customize.annotations Allows specifying custom annotations for the Central deployment. customize.envVars Advanced settings to configure environment variables. egress.connectivityPolicy Configures whether Red Hat Advanced Cluster Security for Kubernetes should run in online or offline mode. In offline mode, automatic updates of vulnerability definitions and kernel modules are disabled. misc.createSCCs Set this to true to create SCCs for Central. It may cause issues in some environments. network.policies To provide security at the network level, RHACS creates default NetworkPolicy resources in the namespace where secured cluster resources are installed. These network policies allow ingress to specific components on specific ports. If you do not want RHACS to create these policies, set this parameter to Disabled . The default value is Enabled . Warning Disabling creation of default network policies can break communication between RHACS components. If you disable creation of default policies, you must create your own network policies to allow this communication. overlays See "Customizing the installation using the Operator with overlays". tls.additionalCAs Additional trusted CA certificates for the secured cluster. These certificates are used when integrating with services using a private certificate authority. 4.5.2. Customizing the installation using the Operator with overlays Learn how to tailor the installation of RHACS using the Operator method with overlays. 4.5.2.1. Overlays When Central or SecuredCluster custom resources do not expose certain low-level configuration options as parameters, you can use the .spec.overlays field for adjustments. Use this field to amend the Kubernetes resources generated by these custom resources. The .spec.overlays field comprises a sequence of patches, applied in their listed order.
These patches are processed by the Operator on the Kubernetes resources before deployment to the cluster. Warning The .spec.overlays field in both Central and SecuredCluster allows users to modify low-level Kubernetes resources in arbitrary ways. Use this feature only when the desired customization is not available through the SecuredCluster or Central custom resources. Support for the .spec.overlays feature is limited primarily because it grants the ability to make intricate and highly specific modifications to Kubernetes resources, which can vary significantly from one implementation to another. This level of customization introduces a complexity that goes beyond standard usage scenarios, making it challenging to provide broad support. Each modification can be unique, potentially interacting with the Kubernetes system in unpredictable ways across different versions and configurations of the product. This variability means that troubleshooting and guaranteeing the stability of these customizations require a level of expertise and understanding specific to each individual's setup. Consequently, while this feature empowers tailoring Kubernetes resources to meet precise needs, greater responsibility must also be assumed to ensure the compatibility and stability of configurations, especially during upgrades or changes to the underlying product. The following example shows the structure of an overlay: overlays: - apiVersion: v1 1 kind: ConfigMap 2 name: my-configmap 3 patches: - path: .data 4 value: | 5 key1: data2 key2: data2 1 Targeted Kubernetes resource apiVersion, for example apps/v1 , v1 , networking.k8s.io/v1 2 Resource type (e.g., Deployment, ConfigMap, NetworkPolicy) 3 Name of the resource, for example my-configmap 4 JSONPath expression to the field, for example spec.template.spec.containers[name:central].env[-1] 5 YAML string for the new field value 4.5.2.1.1. Adding an overlay For customizations, you can add overlays to Central or SecuredCluster custom resources. Use the OpenShift CLI ( oc ) or the OpenShift Container Platform web console for modifications. If overlays do not take effect as expected, check the RHACS Operator logs for any syntax errors or issues logged. 4.5.2.2. Overlay examples 4.5.2.2.1. Specifying an EKS pod role ARN for the Central ServiceAccount Add an Amazon Elastic Kubernetes Service (EKS) pod role Amazon Resource Name (ARN) annotation to the central ServiceAccount as shown in the following example: apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # ... overlays: - apiVersion: v1 kind: ServiceAccount name: central patches: - path: metadata.annotations.eks\.amazonaws\.com/role-arn value: "\"arn:aws:iam:1234:role\"" 4.5.2.2.2. Injecting an environment variable into the Central deployment Inject an environment variable into the central deployment as shown in the following example: apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # ... overlays: - apiVersion: apps/v1 kind: Deployment name: central patches: - path: spec.template.spec.containers[name:central].env[-1] value: | name: MY_ENV_VAR value: value 4.5.2.2.3. Extending network policy with an ingress rule Add an ingress rule to the allow-ext-to-central network policy for port 999 traffic as shown in the following example: apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # ...
overlays: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy name: allow-ext-to-central patches: - path: spec.ingress[-1] value: | ports: - port: 999 protocol: TCP 4.5.2.2.4. Modifying ConfigMap data Modify the central-endpoints ConfigMap data as shown in the following example: apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # ... overlays: - apiVersion: v1 kind: ConfigMap name: central-endpoints patches: - path: data value: | endpoints.yaml: | disableDefault: false 4.5.2.2.5. Adding a container to the Central deployment Add a new container to the central deployment as shown in the following example:. apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # ... overlays: - apiVersion: apps/v1 kind: Deployment name: central patches: - path: spec.template.spec.containers[-1] value: | name: nginx image: nginx ports: - containerPort: 8000 name: http protocol: TCP 4.6. Verifying installation of RHACS on Red Hat OpenShift Provides steps to verify that RHACS is properly installed. 4.6.1. Verifying installation After you complete the installation, run a few vulnerable applications and go to the RHACS portal to evaluate the results of security assessments and policy violations. Note The sample applications listed in the following section contain critical vulnerabilities and they are specifically designed to verify the build and deploy-time assessment features of Red Hat Advanced Cluster Security for Kubernetes. To verify installation: Find the address of the RHACS portal based on your exposure method: For a route: USD oc get route central -n stackrox For a load balancer: USD oc get service central-loadbalancer -n stackrox For port forward: Run the following command: USD oc port-forward svc/central 18443:443 -n stackrox Go to https://localhost:18443/ . Using the Red Hat OpenShift CLI, create a new project: USD oc new-project test Start some applications with critical vulnerabilities: USD oc run shell --labels=app=shellshock,team=test-team \ --image=quay.io/stackrox-io/docs:example-vulnerables-cve-2014-6271 -n test USD oc run samba --labels=app=rce \ --image=quay.io/stackrox-io/docs:example-vulnerables-cve-2017-7494 -n test Red Hat Advanced Cluster Security for Kubernetes automatically scans these deployments for security risks and policy violations as soon as they are submitted to the cluster. Go to the RHACS portal to view the violations. You can log in to the RHACS portal by using the default username admin and the generated password.
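If you did not set an administrator password during installation, the generated password is stored in the central-htpasswd secret in the namespace where Central runs. Assuming the default stackrox namespace, you can read it back with the following command, which also appears in the command listing later in this document:

oc -n stackrox get secret central-htpasswd -o go-template='{{index .data "password" | base64decode}}'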
[ "spec: central: declarativeConfiguration: configMaps: - name: \"<declarative-configs>\" 1 secrets: - name: \"<sensitive-declarative-configs>\" 2", "CREATE USER stackrox WITH PASSWORD <password>;", "CREATE DATABASE stackrox;", "\\connect stackrox", "CREATE SCHEMA stackrox;", "REVOKE CREATE ON SCHEMA public FROM PUBLIC; REVOKE USAGE ON SCHEMA public FROM PUBLIC; REVOKE ALL ON DATABASE stackrox FROM PUBLIC;", "CREATE ROLE readwrite;", "GRANT CONNECT ON DATABASE stackrox TO readwrite;", "GRANT USAGE ON SCHEMA stackrox TO readwrite; GRANT USAGE, CREATE ON SCHEMA stackrox TO readwrite; GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA stackrox TO readwrite; ALTER DEFAULT PRIVILEGES IN SCHEMA stackrox GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO readwrite; GRANT USAGE ON ALL SEQUENCES IN SCHEMA stackrox TO readwrite; ALTER DEFAULT PRIVILEGES IN SCHEMA stackrox GRANT USAGE ON SEQUENCES TO readwrite;", "GRANT readwrite TO stackrox;", "oc create secret generic external-db-password \\ 1 --from-file=password=<password.txt> 2", "spec: central: declarativeConfiguration: configMaps: - name: <declarative-configs> 1 secrets: - name: <sensitive-declarative-configs> 2", "spec: tls: additionalCAs: - name: db-ca content: | <certificate>", "oc -n stackrox get secret central-htpasswd -o go-template='{{index .data \"password\" | base64decode}}'", "oc -n stackrox get route central -o jsonpath=\"{.status.ingress[0].host}\"", "helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/", "helm search repo -l rhacs/", "helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services --set imagePullSecrets.username=<username> \\ 1 --set imagePullSecrets.password=<password> \\ 2 --set central.exposure.route.enabled=true", "helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services --set imagePullSecrets.username=<username> \\ 1 --set imagePullSecrets.password=<password> \\ 2 --set central.exposure.loadBalancer.enabled=true", "helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services --set imagePullSecrets.username=<username> \\ 1 --set imagePullSecrets.password=<password> 2", "env: proxyConfig: | url: http://proxy.name:port username: username password: password excludes: - some.domain", "env: proxyConfig: | url: http://proxy.name:port username: username password: password excludes: - some.domain", "htpasswd: | admin:<bcrypt-hash>", "central: declarativeConfiguration: mounts: configMaps: - declarative-configs secrets: - sensitive-declarative-configs", "helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services -f <path_to_values_public.yaml> -f <path_to_values_private.yaml> 1", "helm upgrade -n stackrox stackrox-central-services rhacs/central-services --reuse-values \\ 1 -f <path_to_init_bundle_file -f <path_to_values_public.yaml> -f <path_to_values_private.yaml>", "arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"", "curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Linux/roxctlUSD{arch}\"", "chmod +x roxctl", "echo USDPATH", "roxctl version", "arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"", "curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Darwin/roxctlUSD{arch}\"", "xattr -c roxctl", "chmod +x roxctl", "echo USDPATH", "roxctl version", "curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Windows/roxctl.exe", "roxctl version", "roxctl 
central generate interactive", "Path to the backup bundle from which to restore keys and certificates (optional): PEM cert bundle file (optional): 1 Disable the administrator password (only use this if you have already configured an IdP for your instance) (default: \"false\"): Create PodSecurityPolicy resources (for pre-v1.25 Kubernetes) (default: \"false\"): 2 Administrator password (default: autogenerated): Orchestrator (k8s, openshift): Default container images settings (rhacs, opensource); it controls repositories from where to download the images, image names and tags format (default: \"rhacs\"): The directory to output the deployment bundle to (default: \"central-bundle\"): Whether to enable telemetry (default: \"true\"): The central-db image to use (if unset, a default will be used according to --image-defaults) (default: \"registry.redhat.io/advanced-cluster-security/rhacs-central-db-rhel8:4.6.0\"): List of secrets to add as declarative configuration mounts in central (default: \"[]\"): 3 The method of exposing Central (lb, np, none) (default: \"none\"): 4 The main image to use (if unset, a default will be used according to --image-defaults) (default: \"registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.6.0\"): Whether to run StackRox in offline mode, which avoids reaching out to the Internet (default: \"false\"): List of config maps to add as declarative configuration mounts in central (default: \"[]\"): 5 The deployment tool to use (kubectl, helm, helm-values) (default: \"kubectl\"): Istio version when deploying into an Istio-enabled cluster (leave empty when not running Istio) (optional): The scanner-db image to use (if unset, a default will be used according to --image-defaults) (default: \"registry.redhat.io/advanced-cluster-security/rhacs-scanner-db-rhel8:4.6.0\"): The scanner image to use (if unset, a default will be used according to --image-defaults) (default: \"registry.redhat.io/advanced-cluster-security/rhacs-scanner-rhel8:4.6.0\"): The scanner-v4-db image to use (if unset, a default will be used according to --image-defaults) (default: \"registry.redhat.io/advanced-cluster-security/rhacs-scanner-v4-db-rhel8:4.6.0\"): The scanner-v4 image to use (if unset, a default will be used according to --image-defaults) (default: \"registry.redhat.io/advanced-cluster-security/rhacs-scanner-v4-rhel8:4.6.0\"): External volume type (hostpath, pvc): hostpath Path on the host (default: \"/var/lib/stackrox-central\"): Node selector key (e.g. 
kubernetes.io/hostname): Node selector value:", "sudo chcon -Rt svirt_sandbox_file_t <full_volume_path>", "./central-bundle/central/scripts/setup.sh", "oc create -f helm/chart/crds/config.stackrox.io_securitypolicies.yaml", "oc create -R -f central-bundle/central", "oc get pod -n stackrox -w", "cat central-bundle/password", "overlays: - apiVersion: v1 1 kind: ConfigMap 2 name: my-configmap 3 patches: - path: .data 4 value: | 5 key1: data2 key2: data2", "apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # overlays: - apiVersion: v1 kind: ServiceAccount name: central patches: - path: metadata.annotations.eks\\.amazonaws\\.com/role-arn value: \"\\\"arn:aws:iam:1234:role\\\"\"", "apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # overlays: - apiVersion: apps/v1 kind: Deployment name: central patches: - path: spec.template.spec.containers[name:central].env[-1] value: | name: MY_ENV_VAR value: value", "apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # overlays: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy name: allow-ext-to-central patches: - path: spec.ingress[-1] value: | ports: - port: 999 protocol: TCP", "apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # overlays: - apiVersion: v1 kind: ConfigMap name: central-endpoints patches: - path: data value: | endpoints.yaml: | disableDefault: false", "apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # overlays: - apiVersion: apps/v1 kind: Deployment name: central patches: - path: spec.template.spec.containers[-1] value: | name: nginx image: nginx ports: - containerPort: 8000 name: http protocol: TCP", "export ROX_API_TOKEN=<api_token>", "export ROX_CENTRAL_ADDRESS=<address>:<port_number>", "roxctl -e \"USDROX_CENTRAL_ADDRESS\" central init-bundles generate --output <cluster_init_bundle_name> cluster_init_bundle.yaml", "roxctl -e \"USDROX_CENTRAL_ADDRESS\" central init-bundles generate --output-secrets <cluster_init_bundle_name> cluster_init_bundle.yaml", "oc create -f <init_bundle>.yaml \\ 1 -n <stackrox> 2", "helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/", "helm search repo -l rhacs/", "helm install -n stackrox --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services -f <path_to_cluster_init_bundle.yaml> \\ 1 -f <path_to_pull_secret.yaml> \\ 2 --set clusterName=<name_of_the_secured_cluster> --set centralEndpoint=<endpoint_of_central_service> 3 --set scanner.disable=false 4", "customize: envVars: ENV_VAR1: \"value1\" ENV_VAR2: \"value2\"", "helm install -n stackrox --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services -f <name_of_cluster_init_bundle.yaml> -f <path_to_values_public.yaml> -f <path_to_values_private.yaml> \\ 1 --set imagePullSecrets.username=<username> \\ 2 --set imagePullSecrets.password=<password> 3", "helm install ... 
-f <(echo \"USDINIT_BUNDLE_YAML_SECRET\") 1", "helm upgrade -n stackrox stackrox-secured-cluster-services rhacs/secured-cluster-services --reuse-values \\ 1 -f <path_to_values_public.yaml> -f <path_to_values_private.yaml>", "arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"", "curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Linux/roxctlUSD{arch}\"", "chmod +x roxctl", "echo USDPATH", "roxctl version", "arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"", "curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Darwin/roxctlUSD{arch}\"", "xattr -c roxctl", "chmod +x roxctl", "echo USDPATH", "roxctl version", "curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Windows/roxctl.exe", "roxctl version", "unzip -d sensor sensor-<cluster_name>.zip", "./sensor/sensor.sh", "roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central \"USDROX_ENDPOINT\" 1", "unzip -d sensor sensor-<cluster_name>.zip", "./sensor/sensor.sh", "oc get pod -n stackrox -w", "kubectl get pod -n stackrox -w", "overlays: - apiVersion: v1 1 kind: ConfigMap 2 name: my-configmap 3 patches: - path: .data 4 value: | 5 key1: data2 key2: data2", "apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # overlays: - apiVersion: v1 kind: ServiceAccount name: central patches: - path: metadata.annotations.eks\\.amazonaws\\.com/role-arn value: \"\\\"arn:aws:iam:1234:role\\\"\"", "apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # overlays: - apiVersion: apps/v1 kind: Deployment name: central patches: - path: spec.template.spec.containers[name:central].env[-1] value: | name: MY_ENV_VAR value: value", "apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # overlays: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy name: allow-ext-to-central patches: - path: spec.ingress[-1] value: | ports: - port: 999 protocol: TCP", "apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # overlays: - apiVersion: v1 kind: ConfigMap name: central-endpoints patches: - path: data value: | endpoints.yaml: | disableDefault: false", "apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # overlays: - apiVersion: apps/v1 kind: Deployment name: central patches: - path: spec.template.spec.containers[-1] value: | name: nginx image: nginx ports: - containerPort: 8000 name: http protocol: TCP", "oc get route central -n stackrox", "oc get service central-loadbalancer -n stackrox", "oc port-forward svc/central 18443:443 -n stackrox", "oc new-project test", "oc run shell --labels=app=shellshock,team=test-team --image=quay.io/stackrox-io/docs:example-vulnerables-cve-2014-6271 -n test oc run samba --labels=app=rce --image=quay.io/stackrox-io/docs:example-vulnerables-cve-2017-7494 -n test" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/installing/installing-rhacs-on-red-hat-openshift
Chapter 11. AWS 2 Simple Queue Service FIFO sink
Chapter 11. AWS 2 Simple Queue Service FIFO sink Send message to an AWS SQS FIFO Queue 11.1. Configuration Options The following table summarizes the configuration options available for the aws-sqs-fifo-sink Kamelet: Property Name Description Type Default Example accessKey * Access Key The access key obtained from AWS string queueNameOrArn * Queue Name The SQS Queue name or ARN string region * AWS Region The AWS region to connect to string "eu-west-1" secretKey * Secret Key The secret key obtained from AWS string autoCreateQueue Autocreate Queue Setting the autocreation of the SQS queue. boolean false contentBasedDeduplication Content-Based Deduplication Use content-based deduplication (should be enabled in the SQS FIFO queue first) boolean false Note Fields marked with an asterisk (*) are mandatory. 11.2. Dependencies At runtime, the aws-sqs-fifo-sink Kamelet relies upon the presence of the following dependencies: camel:aws2-sqs camel:core camel:kamelet 11.3. Usage This section describes how you can use the aws-sqs-fifo-sink . 11.3.1. Knative Sink You can use the aws-sqs-fifo-sink Kamelet as a Knative sink by binding it to a Knative object. aws-sqs-fifo-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-sqs-fifo-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-sqs-fifo-sink properties: accessKey: "The Access Key" queueNameOrArn: "The Queue Name" region: "eu-west-1" secretKey: "The Secret Key" 11.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 11.3.1.2. Procedure for using the cluster CLI Save the aws-sqs-fifo-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f aws-sqs-fifo-sink-binding.yaml 11.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel aws-sqs-fifo-sink -p "sink.accessKey=The Access Key" -p "sink.queueNameOrArn=The Queue Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" This command creates the KameletBinding in the current namespace on the cluster. 11.3.2. Kafka Sink You can use the aws-sqs-fifo-sink Kamelet as a Kafka sink by binding it to a Kafka topic. aws-sqs-fifo-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-sqs-fifo-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-sqs-fifo-sink properties: accessKey: "The Access Key" queueNameOrArn: "The Queue Name" region: "eu-west-1" secretKey: "The Secret Key" 11.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 11.3.2.2. Procedure for using the cluster CLI Save the aws-sqs-fifo-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f aws-sqs-fifo-sink-binding.yaml 11.3.2.3. 
Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-sqs-fifo-sink -p "sink.accessKey=The Access Key" -p "sink.queueNameOrArn=The Queue Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" This command creates the KameletBinding in the current namespace on the cluster. 11.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/aws-sqs-fifo-sink.kamelet.yaml
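The optional properties from the configuration table can be supplied alongside the required ones in either binding variant. The following is a sketch of a Kafka binding that also enables queue autocreation and content-based deduplication; the credential and queue values are placeholders, and content-based deduplication must already be enabled on the SQS FIFO queue itself:

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-sqs-fifo-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-sqs-fifo-sink
    properties:
      accessKey: "The Access Key"
      secretKey: "The Secret Key"
      region: "eu-west-1"
      queueNameOrArn: "my-queue.fifo"       # placeholder FIFO queue name
      autoCreateQueue: true
      contentBasedDeduplication: true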
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-sqs-fifo-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-sqs-fifo-sink properties: accessKey: \"The Access Key\" queueNameOrArn: \"The Queue Name\" region: \"eu-west-1\" secretKey: \"The Secret Key\"", "apply -f aws-sqs-fifo-sink-binding.yaml", "kamel bind channel:mychannel aws-sqs-fifo-sink -p \"sink.accessKey=The Access Key\" -p \"sink.queueNameOrArn=The Queue Name\" -p \"sink.region=eu-west-1\" -p \"sink.secretKey=The Secret Key\"", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-sqs-fifo-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-sqs-fifo-sink properties: accessKey: \"The Access Key\" queueNameOrArn: \"The Queue Name\" region: \"eu-west-1\" secretKey: \"The Secret Key\"", "apply -f aws-sqs-fifo-sink-binding.yaml", "kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-sqs-fifo-sink -p \"sink.accessKey=The Access Key\" -p \"sink.queueNameOrArn=The Queue Name\" -p \"sink.region=eu-west-1\" -p \"sink.secretKey=The Secret Key\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/aws-sqs-fifo-sink
Deploying and Managing Streams for Apache Kafka on OpenShift
Deploying and Managing Streams for Apache Kafka on OpenShift Red Hat Streams for Apache Kafka 2.9 Deploy and manage Streams for Apache Kafka 2.9 on OpenShift Container Platform
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: CustomResourceDefinition metadata: 1 name: kafkatopics.kafka.strimzi.io labels: app: strimzi spec: 2 group: kafka.strimzi.io versions: v1beta2 scope: Namespaced names: # singular: kafkatopic plural: kafkatopics shortNames: - kt 3 additionalPrinterColumns: 4 # subresources: status: {} 5 validation: 6 openAPIV3Schema: properties: spec: type: object properties: partitions: type: integer minimum: 1 replicas: type: integer minimum: 1 maximum: 32767 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic 1 metadata: name: my-topic labels: strimzi.io/cluster: my-cluster 2 spec: 3 partitions: 1 replicas: 1 config: retention.ms: 7200000 segment.bytes: 1073741824 status: conditions: 4 lastTransitionTime: \"2019-08-20T11:37:00.706Z\" status: \"True\" type: Ready observedGeneration: 1 /", "get k NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS my-cluster 3 3", "get strimzi NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS kafka.kafka.strimzi.io/my-cluster 3 3 NAME PARTITIONS REPLICATION FACTOR kafkatopic.kafka.strimzi.io/kafka-apps 3 3 NAME AUTHENTICATION AUTHORIZATION kafkauser.kafka.strimzi.io/my-user tls simple", "get strimzi -o name kafka.kafka.strimzi.io/my-cluster kafkatopic.kafka.strimzi.io/kafka-apps kafkauser.kafka.strimzi.io/my-user", "delete USD(oc get strimzi -o name) kafka.kafka.strimzi.io \"my-cluster\" deleted kafkatopic.kafka.strimzi.io \"kafka-apps\" deleted kafkauser.kafka.strimzi.io \"my-user\" deleted", "get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"tls\")].bootstrapServers}{\"\\n\"}' my-cluster-kafka-bootstrap.myproject.svc:9093", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: spec: # status: clusterId: XP9FP2P-RByvEy0W4cOEUA 1 conditions: 2 - lastTransitionTime: '2023-01-20T17:56:29.396588Z' status: 'True' type: Ready 3 kafkaMetadataState: KRaft 4 kafkaVersion: 3.9.0 5 kafkaNodePools: 6 - name: broker - name: controller listeners: 7 - addresses: - host: my-cluster-kafka-bootstrap.prm-project.svc port: 9092 bootstrapServers: 'my-cluster-kafka-bootstrap.prm-project.svc:9092' name: plain - addresses: - host: my-cluster-kafka-bootstrap.prm-project.svc port: 9093 bootstrapServers: 'my-cluster-kafka-bootstrap.prm-project.svc:9093' certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: tls - addresses: - host: >- 2054284155.us-east-2.elb.amazonaws.com port: 9095 bootstrapServers: >- 2054284155.us-east-2.elb.amazonaws.com:9095 certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external3 - addresses: - host: ip-10-0-172-202.us-east-2.compute.internal port: 31644 bootstrapServers: 'ip-10-0-172-202.us-east-2.compute.internal:31644' certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external4 observedGeneration: 3 8 operatorLastSuccessfulVersion: 2.9 9", "get kafka <kafka_resource_name> -o jsonpath='{.status}' | jq", "sed -i 's/namespace: .*/namespace: <my_namespace>/' install/cluster-operator/*RoleBinding*.yaml", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: controller labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - controller storage: type: jbod volumes: - id: 0 type: persistent-claim size: 20Gi deleteClaim: false resources: requests: memory: 64Gi cpu: \"8\" limits: memory: 64Gi cpu: \"12\"", "annotate kafka my-cluster strimzi.io/kraft=\"migration\" --overwrite", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: my-project annotations: strimzi.io/kraft: 
migration", "get pods -n my-project", "NAME READY STATUS RESTARTS my-cluster-kafka-0 1/1 Running 0 my-cluster-kafka-1 1/1 Running 0 my-cluster-kafka-2 1/1 Running 0 my-cluster-controller-3 1/1 Running 0 my-cluster-controller-4 1/1 Running 0 my-cluster-controller-5 1/1 Running 0", "get kafka my-cluster -n my-project -w", "NAME ... METADATA STATE my-cluster ... Zookeeper my-cluster ... KRaftMigration my-cluster ... KRaftDualWriting my-cluster ... KRaftPostMigration", "annotate kafka my-cluster strimzi.io/kraft=\"enabled\" --overwrite", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: my-project annotations: strimzi.io/kraft: enabled", "get kafka my-cluster -n my-project -w", "NAME ... METADATA STATE my-cluster ... Zookeeper my-cluster ... KRaftMigration my-cluster ... KRaftDualWriting my-cluster ... KRaftPostMigration my-cluster ... PreKRaft my-cluster ... KRaft", "annotate kafka my-cluster strimzi.io/kraft=\"rollback\" --overwrite", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: my-project annotations: strimzi.io/kraft: rollback", "delete KafkaNodePool controller -n my-project", "annotate kafka my-cluster strimzi.io/kraft=\"disabled\" --overwrite", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: my-project annotations: strimzi.io/kraft: disabled", "create secret docker-registry <pull_secret_name> --docker-server=registry.redhat.io --docker-username=<user_name> --docker-password=<password> --docker-email=<email>", "apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-cluster-operator spec: # template: spec: serviceAccountName: strimzi-cluster-operator containers: # env: - name: STRIMZI_IMAGE_PULL_SECRETS value: \"<pull_secret_name>\"", "create -f install/strimzi-admin", "create clusterrolebinding strimzi-admin --clusterrole=strimzi-admin --user= user1 --user= user2", "sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "create -f install/cluster-operator -n my-cluster-operator-namespace", "get deployments -n my-cluster-operator-namespace", "NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 1/1 1 1", "sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "apiVersion: apps/v1 kind: Deployment spec: # template: spec: serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.9.0 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: watched-namespace-1,watched-namespace-2,watched-namespace-3", "create -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace> create -f install/cluster-operator/023-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace> create -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n <watched_namespace>", "create -f install/cluster-operator -n my-cluster-operator-namespace", "get deployments -n my-cluster-operator-namespace", "NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 1/1 1 1", "sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' 
install/cluster-operator/*RoleBinding*.yaml", "sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "apiVersion: apps/v1 kind: Deployment spec: # template: spec: # serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.9.0 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: \"*\" #", "create clusterrolebinding strimzi-cluster-operator-namespaced --clusterrole=strimzi-cluster-operator-namespaced --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator create clusterrolebinding strimzi-cluster-operator-watched --clusterrole=strimzi-cluster-operator-watched --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator create clusterrolebinding strimzi-cluster-operator-entity-operator-delegation --clusterrole=strimzi-entity-operator --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator", "create -f install/cluster-operator -n my-cluster-operator-namespace", "get deployments -n my-cluster-operator-namespace", "NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 1/1 1 1", "apply -f examples/kafka/kraft/kafka-with-dual-role-nodes.yaml", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0 my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-4 1/1 Running 0", "apply -f examples/kafka/kafka-ephemeral.yaml", "apply -f examples/kafka/kafka-persistent.yaml", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0 my-cluster-kafka-0 1/1 Running 0 my-cluster-kafka-1 1/1 Running 0 my-cluster-kafka-2 1/1 Running 0 my-cluster-zookeeper-0 1/1 Running 0 my-cluster-zookeeper-1 1/1 Running 0 my-cluster-zookeeper-2 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} userOperator: {}", "apply -f <kafka_configuration_file>", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} userOperator: {}", "apply -f <kafka_configuration_file>", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0", "exec -ti my-cluster -zookeeper-0 -- bin/zookeeper-shell.sh localhost:12181 ls /", "apply -f examples/connect/kafka-connect.yaml", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-connect-cluster-connect-<pod_id> 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: 1 # build: output: 2 type: docker image: my-registry.io/my-org/my-connect-cluster:latest pushSecret: my-registry-credentials plugins: 3 - name: connector-1 artifacts: - type: tgz url: <url_to_download_connector_1_artifact> sha512sum: <SHA-512_checksum_of_connector_1_artifact> - name: connector-2 artifacts: - type: jar url: <url_to_download_connector_2_artifact> sha512sum: <SHA-512_checksum_of_connector_2_artifact> #", "oc apply -f <kafka_connect_configuration_file>", "FROM registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0 USER root:root COPY ./ my-plugins / /opt/kafka/plugins/ USER 1001", "tree ./ my-plugins / ./ my-plugins / ├── debezium-connector-mongodb │ ├── bson-<version>.jar 
│ ├── CHANGELOG.md │ ├── CONTRIBUTE.md │ ├── COPYRIGHT.txt │ ├── debezium-connector-mongodb-<version>.jar │ ├── debezium-core-<version>.jar │ ├── LICENSE.txt │ ├── mongodb-driver-core-<version>.jar │ ├── README.md │ └── # ├── debezium-connector-mysql │ ├── CHANGELOG.md │ ├── CONTRIBUTE.md │ ├── COPYRIGHT.txt │ ├── debezium-connector-mysql-<version>.jar │ ├── debezium-core-<version>.jar │ ├── LICENSE.txt │ ├── mysql-binlog-connector-java-<version>.jar │ ├── mysql-connector-java-<version>.jar │ ├── README.md │ └── # └── debezium-connector-postgres ├── CHANGELOG.md ├── CONTRIBUTE.md ├── COPYRIGHT.txt ├── debezium-connector-postgres-<version>.jar ├── debezium-core-<version>.jar ├── LICENSE.txt ├── postgresql-<version>.jar ├── protobuf-java-<version>.jar ├── README.md └── #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: 1 # image: my-new-container-image 2 config: 3 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" spec: #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector 1 labels: strimzi.io/cluster: my-connect-cluster 2 spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector 3 tasksMax: 2 4 autoRestart: 5 enabled: true config: 6 file: \"/opt/kafka/LICENSE\" 7 topic: my-topic 8 #", "apply -f examples/connect/source-connector.yaml", "touch examples/connect/sink-connector.yaml", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-sink-connector labels: strimzi.io/cluster: my-connect spec: class: org.apache.kafka.connect.file.FileStreamSinkConnector 1 tasksMax: 2 config: 2 file: \"/tmp/my-file\" 3 topics: my-topic 4", "apply -f examples/connect/sink-connector.yaml", "get kctr --selector strimzi.io/cluster=<my_connect_cluster> -o name my-source-connector my-sink-connector", "exec <my_kafka_cluster>-kafka-0 -i -t -- bin/kafka-console-consumer.sh --bootstrap-server <my_kafka_cluster>-kafka-bootstrap. 
NAMESPACE .svc:9092 --topic my-topic --from-beginning", "curl -X POST http://my-connect-cluster-connect-api:8083/connectors -H 'Content-Type: application/json' -d '{ \"name\": \"my-source-connector\", \"config\": { \"connector.class\":\"org.apache.kafka.connect.file.FileStreamSourceConnector\", \"file\": \"/opt/kafka/LICENSE\", \"topic\":\"my-topic\", \"tasksMax\": \"4\", \"type\": \"source\" } }'", "selector: strimzi.io/cluster: my-connect-cluster 1 strimzi.io/kind: KafkaConnect strimzi.io/name: my-connect-cluster-connect 2 #", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: my-custom-connect-network-policy spec: ingress: - from: - podSelector: 1 matchLabels: app: my-connector-manager ports: - port: 8083 protocol: TCP podSelector: matchLabels: strimzi.io/cluster: my-connect-cluster strimzi.io/kind: KafkaConnect strimzi.io/name: my-connect-cluster-connect policyTypes: - Ingress", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" spec: # jvmOptions: javaSystemProperties: - name: org.apache.kafka.disallowed.login.modules value: com.sun.security.auth.module.JndiLoginModule, org.apache.kafka.common.security.kerberos.KerberosLoginModule", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: connector.client.config.override.policy: None", "apply -f examples/mirror-maker/kafka-mirror-maker.yaml", "apply -f examples/mirror-maker/kafka-mirror-maker-2.yaml", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-mirror-maker-mirror-maker-<pod_id> 1/1 Running 1 my-mm2-cluster-mirrormaker2-<pod_id> 1/1 Running 1", "apply -f examples/bridge/kafka-bridge.yaml", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-bridge-bridge-<pod_id> 1/1 Running 0", "get pods -o name pod/kafka-consumer pod/my-bridge-bridge-<pod_id>", "port-forward pod/my-bridge-bridge-<pod_id> 8080:8080 &", "selector: strimzi.io/cluster: kafka-bridge-name 1 strimzi.io/kind: KafkaBridge #", "apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-topic-operator labels: app: strimzi spec: # template: # spec: # containers: - name: strimzi-topic-operator # env: - name: STRIMZI_NAMESPACE 1 valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2 value: my-kafka-bootstrap-address:9092 - name: STRIMZI_RESOURCE_LABELS 3 value: \"strimzi.io/cluster=my-cluster\" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 4 value: \"120000\" - name: STRIMZI_LOG_LEVEL 5 value: INFO - name: STRIMZI_TLS_ENABLED 6 value: \"false\" - name: STRIMZI_JAVA_OPTS 7 value: \"-Xmx=512M -Xms=256M\" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 8 value: \"-Djavax.net.debug=verbose -DpropertyName=value\" - name: STRIMZI_PUBLIC_CA 9 value: \"false\" - name: STRIMZI_TLS_AUTH_ENABLED 10 value: \"false\" - name: STRIMZI_SASL_ENABLED 11 value: \"false\" - name: STRIMZI_SASL_USERNAME 12 value: \"admin\" - name: STRIMZI_SASL_PASSWORD 13 value: \"password\" - name: STRIMZI_SASL_MECHANISM 14 value: \"scram-sha-512\" - name: STRIMZI_SECURITY_PROTOCOL 15 value: \"SSL\" - name: STRIMZI_USE_FINALIZERS value: \"false\" 16", ". 
env: - name: STRIMZI_TRUSTSTORE_LOCATION 1 value: \"/path/to/truststore.p12\" - name: STRIMZI_TRUSTSTORE_PASSWORD 2 value: \" TRUSTSTORE-PASSWORD \" - name: STRIMZI_KEYSTORE_LOCATION 3 value: \"/path/to/keystore.p12\" - name: STRIMZI_KEYSTORE_PASSWORD 4 value: \" KEYSTORE-PASSWORD \"", ". env: - name: STRIMZI_SASL_ENABLED value: \"true\" - name: STRIMZI_SECURITY_PROTOCOL value: SASL_SSL - name: STRIMZI_SKIP_CLUSTER_CONFIG_REVIEW 1 value: \"true\" - name: STRIMZI_ALTERABLE_TOPIC_CONFIG 2 value: compression.type, max.message.bytes, message.timestamp.difference.max.ms, message.timestamp.type, retention.bytes, retention.ms - name: STRIMZI_SASL_CUSTOM_CONFIG_JSON 3 value: | { \"sasl.mechanism\": \"AWS_MSK_IAM\", \"sasl.jaas.config\": \"software.amazon.msk.auth.iam.IAMLoginModule required;\", \"sasl.client.callback.handler.class\": \"software.amazon.msk.auth.iam.IAMClientCallbackHandler\" } - name: STRIMZI_PUBLIC_CA value: \"true\" - name: STRIMZI_TRUSTSTORE_LOCATION value: /etc/pki/java/cacerts - name: STRIMZI_TRUSTSTORE_PASSWORD value: changeit - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS value: my-kafka-cluster-.kafka-serverless.us-east-1.amazonaws.com:9098", "FROM registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.9.0 USER root RUN mkdir -p USD{STRIMZI_HOME}/external-libs RUN chmod +rx USD{STRIMZI_HOME}/external-libs COPY ./aws-msk-iam-auth-and-dependencies/* USD{STRIMZI_HOME}/external-libs/ ENV JAVA_CLASSPATH=USD{STRIMZI_HOME}/external-libs/* USER 1001", "get deployments", "NAME READY UP-TO-DATE AVAILABLE strimzi-topic-operator 1/1 1 1", "apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-user-operator labels: app: strimzi spec: # template: # spec: # containers: - name: strimzi-user-operator # env: - name: STRIMZI_NAMESPACE 1 valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2 value: my-kafka-bootstrap-address:9092 - name: STRIMZI_CA_CERT_NAME 3 value: my-cluster-clients-ca-cert - name: STRIMZI_CA_KEY_NAME 4 value: my-cluster-clients-ca - name: STRIMZI_LABELS 5 value: \"strimzi.io/cluster=my-cluster\" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 6 value: \"120000\" - name: STRIMZI_WORK_QUEUE_SIZE 7 value: 10000 - name: STRIMZI_CONTROLLER_THREAD_POOL_SIZE 8 value: 10 - name: STRIMZI_USER_OPERATIONS_THREAD_POOL_SIZE 9 value: 4 - name: STRIMZI_LOG_LEVEL 10 value: INFO - name: STRIMZI_GC_LOG_ENABLED 11 value: \"true\" - name: STRIMZI_CA_VALIDITY 12 value: \"365\" - name: STRIMZI_CA_RENEWAL 13 value: \"30\" - name: STRIMZI_JAVA_OPTS 14 value: \"-Xmx=512M -Xms=256M\" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 15 value: \"-Djavax.net.debug=verbose -DpropertyName=value\" - name: STRIMZI_SECRET_PREFIX 16 value: \"kafka-\" - name: STRIMZI_ACLS_ADMIN_API_SUPPORTED 17 value: \"true\" - name: STRIMZI_MAINTENANCE_TIME_WINDOWS 18 value: '* * 8-10 * * ?;* * 14-15 * * ?' - name: STRIMZI_KAFKA_ADMIN_CLIENT_CONFIGURATION 19 value: | default.api.timeout.ms=120000 request.timeout.ms=60000", ". 
env: - name: STRIMZI_CLUSTER_CA_CERT_SECRET_NAME 1 value: my-cluster-cluster-ca-cert - name: STRIMZI_EO_KEY_SECRET_NAME 2 value: my-cluster-entity-operator-certs ...\"", "create -f install/user-operator", "get deployments", "NAME READY UP-TO-DATE AVAILABLE strimzi-user-operator 1/1 1 1", "env: - name: STRIMZI_FEATURE_GATES value: +FeatureGate1,-FeatureGate2", "apply -f <kafka_configuration_file>", "examples ├── user 1 ├── topic 2 ├── security 3 │ ├── tls-auth │ ├── scram-sha-512-auth │ └── keycloak-authorization ├── mirror-maker 4 ├── metrics 5 ├── kafka 6 │ └── kraft 7 ├── cruise-control 8 ├── connect 9 └── bridge 10", "Basic configuration (required) apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster Deployment specifications spec: kafka: # Listener configuration (required) listeners: 1 - name: plain 2 port: 9092 3 type: internal 4 tls: false 5 configuration: useServiceDnsDomain: true 6 - name: tls port: 9093 type: internal tls: true authentication: 7 type: tls - name: external1 8 port: 9094 type: route tls: true configuration: brokerCertChainAndKey: 9 secretName: my-secret certificate: my-certificate.crt key: my-key.key # Kafka version (recommended) version: 3.9.0 10 # KRaft metadata version (recommended) metadataVersion: 3.9 11 # Kafka configuration (recommended) config: 12 auto.create.topics.enable: \"false\" offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 2 default.replication.factor: 3 min.insync.replicas: 2 # Resources requests and limits (recommended) resources: 13 requests: memory: 64Gi cpu: \"8\" limits: memory: 64Gi cpu: \"12\" # Logging configuration (optional) logging: 14 type: inline loggers: kafka.root.logger.level: INFO # Readiness probe (optional) readinessProbe: 15 initialDelaySeconds: 15 timeoutSeconds: 5 # Liveness probe (optional) livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # JVM options (optional) jvmOptions: 16 -Xms: 8192m -Xmx: 8192m # Custom image (optional) image: my-org/my-image:latest 17 # Authorization (optional) authorization: 18 type: simple # Rack awareness (optional) rack: 19 topologyKey: topology.kubernetes.io/zone # Metrics configuration (optional) metricsConfig: 20 type: jmxPrometheusExporter valueFrom: configMapKeyRef: 21 name: my-config-map key: my-key # Entity Operator (recommended) entityOperator: 22 topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalMs: 60000 # Resources requests and limits (recommended) resources: requests: memory: 512Mi cpu: \"1\" limits: memory: 512Mi cpu: \"1\" # Logging configuration (optional) logging: 23 type: inline loggers: rootLogger.level: INFO userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalMs: 60000 # Resources requests and limits (recommended) resources: requests: memory: 512Mi cpu: \"1\" limits: memory: 512Mi cpu: \"1\" # Logging configuration (optional) logging: 24 type: inline loggers: rootLogger.level: INFO # Kafka Exporter (optional) kafkaExporter: 25 # # Cruise Control (optional) cruiseControl: 26 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # quotas: type: strimzi producerByteRate: 1000000 1 consumerByteRate: 1000000 2 minAvailableBytesPerVolume: 500000000000 3 excludedPrincipals: 4 - my-user", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # quotas: type: kafka producerByteRate: 1000000 consumerByteRate: 1000000 requestPercentage: 55 1 controllerMutationRate: 50 2", 
"annotate pod <cluster_name>-kafka-<index_number> strimzi.io/delete-pod-and-pvc=\"true\"", "Basic configuration (required) apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster Deployment specifications spec: # Kafka configuration (required) kafka: # Replicas (required) replicas: 3 # Listener configuration (required) listeners: - name: plain port: 9092 type: internal tls: false configuration: useServiceDnsDomain: true - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-certificate.crt key: my-key.key # Storage configuration (required) storage: type: persistent-claim size: 10000Gi # Kafka version (recommended) version: 3.9.0 # Kafka configuration (recommended) config: auto.create.topics.enable: \"false\" offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 2 default.replication.factor: 3 min.insync.replicas: 2 inter.broker.protocol.version: \"3.9\" # Resources requests and limits (recommended) resources: requests: memory: 64Gi cpu: \"8\" limits: memory: 64Gi cpu: \"12\" # Logging configuration (optional) logging: type: inline loggers: kafka.root.logger.level: INFO # Readiness probe (optional) readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # Liveness probe (optional) livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # JVM options (optional) jvmOptions: -Xms: 8192m -Xmx: 8192m # Custom image (optional) image: my-org/my-image:latest # Authorization (optional) authorization: type: simple # Rack awareness (optional) rack: topologyKey: topology.kubernetes.io/zone # Metrics configuration (optional) metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key # # ZooKeeper configuration (required) zookeeper: 1 # Replicas (required) replicas: 3 2 # Storage configuration (required) storage: 3 type: persistent-claim size: 1000Gi # Resources requests and limits (recommended) resources: 4 requests: memory: 8Gi cpu: \"2\" limits: memory: 8Gi cpu: \"2\" # Logging configuration (optional) logging: 5 type: inline loggers: zookeeper.root.logger: INFO # JVM options (optional) jvmOptions: 6 -Xms: 4096m -Xmx: 4096m # Metrics configuration (optional) metricsConfig: 7 type: jmxPrometheusExporter valueFrom: configMapKeyRef: 8 name: my-config-map key: my-key # # Entity operator (recommended) entityOperator: topicOperator: # Resources requests and limits (recommended) resources: requests: memory: 512Mi cpu: \"1\" limits: memory: 512Mi cpu: \"1\" # Logging configuration (optional) logging: type: inline loggers: rootLogger.level: INFO watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 userOperator: # Resources requests and limits (recommended) resources: requests: memory: 512Mi cpu: \"1\" limits: memory: 512Mi cpu: \"1\" # Logging configuration (optional) logging: type: inline loggers: rootLogger.level: INFO watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 # Kafka Exporter (optional) kafkaExporter: # # Cruise Control (optional) cruiseControl: #", "annotate pod <cluster_name>-zookeeper-<index_number> strimzi.io/delete-pod-and-pvc=\"true\"", "Basic configuration (required) apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: kraft-dual-role 1 labels: strimzi.io/cluster: my-cluster 2 Node pool specifications spec: # Replicas (required) replicas: 3 3 # Roles (required) roles: 4 
- controller - broker # Storage configuration (required) storage: 5 type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false # Resources requests and limits (recommended) resources: 6 requests: memory: 64Gi cpu: \"8\" limits: memory: 64Gi cpu: \"12\"", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker 1 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false resources: requests: memory: 64Gi cpu: \"8\" limits: memory: 64Gi cpu: \"12\"", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: config: reserved.broker.max.id: 10000 #", "annotate kafkanodepool pool-a strimzi.io/next-node-ids=\"[0,1,2,10-20,30]\"", "annotate kafkanodepool pool-b strimzi.io/remove-node-ids=\"[60-50,9,8,7]\"", "annotate kafkanodepool pool-a strimzi.io/next-node-ids-", "annotate kafkanodepool pool-b strimzi.io/remove-node-ids-", "NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0", "scale kafkanodepool pool-a --replicas=4", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 my-cluster-pool-a-3 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # spec: mode: add-brokers brokers: [3]", "NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 my-cluster-pool-a-3 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # spec: mode: remove-brokers brokers: [3]", "scale kafkanodepool pool-a --replicas=3", "NAME READY STATUS RESTARTS my-cluster-pool-b-kafka-0 1/1 Running 0 my-cluster-pool-b-kafka-1 1/1 Running 0 my-cluster-pool-b-kafka-2 1/1 Running 0", "scale kafkanodepool pool-a --replicas=4", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-4 1/1 Running 0 my-cluster-pool-a-7 1/1 Running 0 my-cluster-pool-b-2 1/1 Running 0 my-cluster-pool-b-3 1/1 Running 0 my-cluster-pool-b-5 1/1 Running 0 my-cluster-pool-b-6 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # spec: mode: remove-brokers brokers: [6]", "scale kafkanodepool pool-b --replicas=3", "NAME READY STATUS RESTARTS my-cluster-pool-b-kafka-2 1/1 Running 0 my-cluster-pool-b-kafka-3 1/1 Running 0 my-cluster-pool-b-kafka-5 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - controller - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 20Gi deleteClaim: false #", "NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-b labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false #", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 my-cluster-pool-b-3 1/1 Running 0 my-cluster-pool-b-4 
1/1 Running 0 my-cluster-pool-b-5 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # spec: mode: remove-brokers brokers: [0, 1, 2]", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - controller storage: type: jbod volumes: - id: 0 type: persistent-claim size: 20Gi deleteClaim: false #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - controller storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false # --- apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-b labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false #", "NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 my-cluster-pool-b-3 1/1 Running 0 my-cluster-pool-b-4 1/1 Running 0 my-cluster-pool-b-5 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - controller - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false #", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 my-cluster-pool-b-3 1/1 Running 0 my-cluster-pool-b-4 1/1 Running 0 my-cluster-pool-b-5 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # spec: mode: remove-brokers brokers: [3, 4, 5]", "delete kafkanodepool pool-b -n <my_cluster_operator_namespace>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: kafka labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false", "apply -f <node_pool_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster annotations: strimzi.io/node-pools: enabled spec: kafka: # zookeeper: #", "apply -f <kafka_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: my-node-pool labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker storage: type: ephemeral #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: my-node-pool labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker storage: type: persistent-claim size: 500Gi deleteClaim: true #", "storage: type: persistent-claim size: 500Gi class: my-storage-class selector: hdd-type: ssd deleteClaim: true", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: my-node-pool labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 2000Gi deleteClaim: false - id: 1 type: persistent-claim size: 2000Gi deleteClaim: false - id: 2 type: persistent-claim size: 2000Gi deleteClaim: false #", "get pv", "NAME CAPACITY CLAIM pvc-0ca459ce-... 2000Gi my-project/data-my-cluster-my-node-pool-2 pvc-6e1810be-... 2000Gi my-project/data-my-cluster-my-node-pool-0 pvc-82dc78c9-... 
2000Gi my-project/data-my-cluster-my-node-pool-1", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: my-node-pool labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false - id: 2 type: persistent-claim size: 100Gi deleteClaim: false #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a # spec: storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi kraftMetadata: shared deleteClaim: false #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: roles: - broker replicas: 3 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 500Gi class: gp2-ebs #", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-b labels: strimzi.io/cluster: my-cluster spec: roles: - broker replicas: 3 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 1Ti class: gp3-ebs #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # spec: mode: remove-brokers brokers: [0, 1, 2]", "delete kafkanodepool pool-a", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: all-zones provisioner: kubernetes.io/my-storage parameters: type: ssd volumeBindingMode: WaitForFirstConsumer", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-zone-1 labels: strimzi.io/cluster: my-cluster spec: replicas: 3 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 500Gi class: all-zones template: pod: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: topology.kubernetes.io/zone operator: In values: - zone-1 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-zone-2 labels: strimzi.io/cluster: my-cluster spec: replicas: 4 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 500Gi class: all-zones template: pod: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: topology.kubernetes.io/zone operator: In values: - zone-2 #", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-pool-zone-1-kafka-0 1/1 Running 0 my-cluster-pool-zone-1-kafka-1 1/1 Running 0 my-cluster-pool-zone-1-kafka-2 1/1 Running 0 my-cluster-pool-zone-2-kafka-3 1/1 Running 0 my-cluster-pool-zone-2-kafka-4 1/1 Running 0 my-cluster-pool-zone-2-kafka-5 1/1 Running 0 my-cluster-pool-zone-2-kafka-6 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: storage: type: ephemeral zookeeper: storage: type: ephemeral #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: storage: type: persistent-claim size: 500Gi deleteClaim: true # zookeeper: storage: type: persistent-claim size: 1000Gi", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false - id: 
2 type: persistent-claim size: 100Gi deleteClaim: false # zookeeper: storage: type: persistent-claim size: 1000Gi", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: # kafka: replicas: 3 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false class: my-storage-class overrides: - broker: 0 class: my-storage-class-zone-1a - broker: 1 class: my-storage-class-zone-1b - broker: 2 class: my-storage-class-zone-1c # # zookeeper: replicas: 3 storage: deleteClaim: true size: 100Gi type: persistent-claim class: my-storage-class overrides: - broker: 0 class: my-storage-class-zone-1a - broker: 1 class: my-storage-class-zone-1b - broker: 2 class: my-storage-class-zone-1c #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: tieredStorage: type: custom 1 remoteStorageManager: 2 className: com.example.kafka.tiered.storage.s3.S3RemoteStorageManager classPath: /opt/kafka/plugins/tiered-storage-s3/* config: storage.bucket.name: my-bucket 3 # config: rlmm.config.remote.log.metadata.topic.replication.factor: 1 4 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: topicOperator: {} userOperator: {}", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalMs: 60000 resources: requests: cpu: \"1\" memory: 500Mi limits: cpu: \"1\" memory: 500Mi #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # userOperator: watchedNamespace: my-user-namespace reconciliationIntervalMs: 60000 resources: requests: cpu: \"1\" memory: 500Mi limits: cpu: \"1\" memory: 500Mi #", "env: - name: STRIMZI_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace", "env: - name: STRIMZI_OPERATOR_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace", "env: - name: STRIMZI_OPERATOR_NAMESPACE_LABELS value: label1=value1,label2=value2", "env: - name: STRIMZI_LABELS_EXCLUSION_PATTERN value: \"^key1.*\"", "env: - name: STRIMZI_CUSTOM_RESOURCE_SELECTOR value: label1=value1,label2=value2", "env: - name: STRIMZI_KUBERNETES_VERSION value: | major=1 minor=16 gitVersion=v1.16.2 gitCommit=c97fe5036ef3df2967d086711e6c0c405941e14b gitTreeState=clean buildDate=2019-10-15T19:09:08Z goVersion=go1.12.10 compiler=gc platform=linux/amd64", "<cluster-name> -kafka-0. <cluster-name> -kafka-brokers. <namespace> .svc. 
cluster.local", "# env: # - name: STRIMZI_OPERATOR_NAMESPACE_LABELS value: label1=value1,label2=value2 #", "# env: # - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS value: \"120000\" #", "annotate <kind_of_custom_resource> <name_of_custom_resource> strimzi.io/pause-reconciliation=\"true\"", "annotate KafkaConnect my-connect strimzi.io/pause-reconciliation=\"true\"", "describe <kind_of_custom_resource> <name_of_custom_resource>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: annotations: strimzi.io/pause-reconciliation: \"true\" strimzi.io/use-connector-resources: \"true\" creationTimestamp: 2021-03-12T10:47:11Z # spec: # status: conditions: - lastTransitionTime: 2021-03-12T10:47:41.689249Z status: \"True\" type: ReconciliationPaused", "env: - name: STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace", "env: - name: STRIMZI_LEADER_ELECTION_IDENTITY valueFrom: fieldRef: fieldPath: metadata.name", "apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-cluster-operator labels: app: strimzi spec: replicas: 3", "spec containers: - name: strimzi-cluster-operator # env: - name: STRIMZI_LEADER_ELECTION_ENABLED value: \"true\" - name: STRIMZI_LEADER_ELECTION_LEASE_NAME value: \"my-strimzi-cluster-operator\" - name: STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_LEADER_ELECTION_IDENTITY valueFrom: fieldRef: fieldPath: metadata.name", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-cluster-operator-leader-election labels: app: strimzi rules: - apiGroups: - coordination.k8s.io resourceNames: - my-strimzi-cluster-operator", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: strimzi-cluster-operator-leader-election labels: app: strimzi subjects: - kind: ServiceAccount name: my-strimzi-cluster-operator namespace: myproject", "create -f install/cluster-operator -n myproject", "get deployments -n myproject", "NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 3/3 3 3", "apiVersion: apps/v1 kind: Deployment spec: # template: spec: serviceAccountName: strimzi-cluster-operator containers: # env: # - name: \"HTTP_PROXY\" value: \"http://proxy.com\" 1 - name: \"HTTPS_PROXY\" value: \"https://proxy.com\" 2 - name: \"NO_PROXY\" value: \"internal.com, other.domain.com\" 3 #", "edit deployment strimzi-cluster-operator", "create -f install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml", "apiVersion: apps/v1 kind: Deployment spec: # template: spec: serviceAccountName: strimzi-cluster-operator containers: # env: # - name: \"FIPS_MODE\" value: \"disabled\" 1 #", "edit deployment strimzi-cluster-operator", "apply -f install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml", "Basic configuration (required) apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect 1 metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" 2 Deployment specifications spec: # Replicas (required) replicas: 3 3 # Bootstrap servers (required) bootstrapServers: my-cluster-kafka-bootstrap:9092 4 # Kafka Connect configuration (recommended) config: 5 group.id: my-connect-cluster offset.storage.topic: my-connect-cluster-offsets config.storage.topic: my-connect-cluster-configs status.storage.topic: my-connect-cluster-status key.converter: org.apache.kafka.connect.json.JsonConverter value.converter: org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable: true 
value.converter.schemas.enable: true config.storage.replication.factor: 3 offset.storage.replication.factor: 3 status.storage.replication.factor: 3 # Resources requests and limits (recommended) resources: 6 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi # Authentication (optional) authentication: 7 type: tls certificateAndKey: certificate: source.crt key: source.key secretName: my-user-source # TLS configuration (optional) tls: 8 trustedCertificates: - secretName: my-cluster-cluster-cert pattern: \"*.crt\" - secretName: my-cluster-cluster-cert pattern: \"*.crt\" # Build configuration (optional) build: 9 output: 10 type: docker image: my-registry.io/my-org/my-connect-cluster:latest pushSecret: my-registry-credentials plugins: 11 - name: connector-1 artifacts: - type: tgz url: <url_to_download_connector_1_artifact> sha512sum: <SHA-512_checksum_of_connector_1_artifact> - name: connector-2 artifacts: - type: jar url: <url_to_download_connector_2_artifact> sha512sum: <SHA-512_checksum_of_connector_2_artifact> # Logging configuration (optional) logging: 12 type: inline loggers: log4j.rootLogger: INFO # Readiness probe (optional) readinessProbe: 13 initialDelaySeconds: 15 timeoutSeconds: 5 # Liveness probe (optional) livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # Metrics configuration (optional) metricsConfig: 14 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key # JVM options (optional) jvmOptions: 15 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" # Custom image (optional) image: my-org/my-image:latest 16 # Rack awareness (optional) rack: topologyKey: topology.kubernetes.io/zone 17 # Pod and container template (optional) template: 18 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" connectContainer: 19 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey # Tracing configuration (optional) tracing: type: opentelemetry 20", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: config: group.id: my-connect-cluster 1 offset.storage.topic: my-connect-cluster-offsets 2 config.storage.topic: my-connect-cluster-configs 3 status.storage.topic: my-connect-cluster-status 4 # #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # authorization: type: simple acls: # access to offset.storage.topic - resource: type: topic name: connect-cluster-offsets patternType: literal operations: - Create - Describe - Read - Write host: \"*\" # access to status.storage.topic - resource: type: topic name: connect-cluster-status patternType: literal operations: - Create - Describe - Read - Write host: \"*\" # access to config.storage.topic - resource: type: topic name: connect-cluster-configs patternType: literal operations: - Create - Describe - Read - Write host: \"*\" # cluster group - resource: type: group name: connect-cluster patternType: literal operations: - Read host: \"*\"", "apply -f KAFKA-USER-CONFIG-FILE", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: 
strimzi.io/use-connector-resources: \"true\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector 1 labels: strimzi.io/cluster: my-connect-cluster 2 spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector 3 tasksMax: 2 4 autoRestart: 5 enabled: true config: 6 file: \"/opt/kafka/LICENSE\" 7 topic: my-topic 8 #", "get KafkaConnector", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector tasksMax: 2 config: file: \"/opt/kafka/LICENSE\" topic: my-topic state: stopped #", "get KafkaConnector", "annotate KafkaConnector <kafka_connector_name> strimzi.io/restart=\"true\"", "get KafkaConnector", "describe KafkaConnector <kafka_connector_name>", "annotate KafkaConnector <kafka_connector_name> strimzi.io/restart-task=\"0\"", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: listOffsets: toConfigMap: 1 name: my-connector-offsets 2 #", "annotate kafkaconnector my-source-connector strimzi.io/connector-offsets=list -n <namespace>", "get configmap my-connector-offsets -n <namespace>", "describe configmap my-connector-offsets -n <namespace>", "apiVersion: v1 kind: ConfigMap metadata: # ownerReferences: 1 - apiVersion: kafka.strimzi.io/v1beta2 blockOwnerDeletion: false controller: false kind: KafkaConnector name: my-source-connector uid: 637e3be7-bd96-43ab-abde-c55b4c4550e0 resourceVersion: \"66951\" uid: 641d60a9-36eb-4f29-9895-8f2c1eb9638e data: offsets.json: |- { \"offsets\" : [ { \"partition\" : { \"filename\" : \"/data/myfile.txt\" 2 }, \"offset\" : { \"position\" : 15295 3 } } ] }", "apiVersion: v1 kind: ConfigMap metadata: # ownerReferences: 1 - apiVersion: kafka.strimzi.io/v1beta2 blockOwnerDeletion: false controller: false kind: KafkaConnector name: my-sink-connector uid: 84a29d7f-77e6-43ac-bfbb-719f9b9a4b3b resourceVersion: \"79241\" uid: 721e30bc-23df-41a2-9b48-fb2b7d9b042c data: offsets.json: |- { \"offsets\": [ { \"partition\": { \"kafka_topic\": \"my-topic\", 2 \"kafka_partition\": 2 3 }, \"offset\": { \"kafka_offset\": 4 4 } } ] }", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: state: stopped 1 alterOffsets: fromConfigMap: 2 name: my-connector-offsets 3 #", "apiVersion: v1 kind: ConfigMap metadata: # data: offsets.json: |- 1 { \"offsets\" : [ { \"partition\" : { \"filename\" : \"/data/myfile.txt\" }, \"offset\" : { \"position\" : 15000 2 } } ] }", "annotate kafkaconnector my-source-connector strimzi.io/connector-offsets=alter -n <namespace>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: state: running #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: # state: stopped 1 #", "annotate kafkaconnector my-source-connector strimzi.io/connector-offsets=reset -n <namespace>", "apiVersion: v1 kind: ConfigMap metadata: # data: offsets.json: |- { \"offsets\" : [] }", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: state: running #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 
metadata: name: my-mirror-maker2 spec: version: 3.9.0 connectCluster: \"my-cluster-target\" clusters: - alias: \"my-cluster-source\" bootstrapServers: my-cluster-source-kafka-bootstrap:9092 - alias: \"my-cluster-target\" bootstrapServers: my-cluster-target-kafka-bootstrap:9092 mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: {}", "Basic configuration (required) apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 Deployment specifications spec: # Replicas (required) replicas: 3 1 # Connect cluster name (required) connectCluster: \"my-cluster-target\" 2 # Cluster configurations (required) clusters: 3 - alias: \"my-cluster-source\" 4 # Authentication (optional) authentication: 5 certificateAndKey: certificate: source.crt key: source.key secretName: my-user-source type: tls bootstrapServers: my-cluster-source-kafka-bootstrap:9092 6 # TLS configuration (optional) tls: 7 trustedCertificates: - pattern: \"*.crt\" secretName: my-cluster-source-cluster-ca-cert - alias: \"my-cluster-target\" 8 # Authentication (optional) authentication: 9 certificateAndKey: certificate: target.crt key: target.key secretName: my-user-target type: tls bootstrapServers: my-cluster-target-kafka-bootstrap:9092 10 # Kafka Connect configuration (optional) config: 11 config.storage.replication.factor: 1 offset.storage.replication.factor: 1 status.storage.replication.factor: 1 # TLS configuration (optional) tls: 12 trustedCertificates: - pattern: \"*.crt\" secretName: my-cluster-target-cluster-ca-cert # Mirroring configurations (required) mirrors: 13 - sourceCluster: \"my-cluster-source\" 14 targetCluster: \"my-cluster-target\" 15 # Topic and group patterns (required) topicsPattern: \"topic1|topic2|topic3\" 16 groupsPattern: \"group1|group2|group3\" 17 # Source connector configuration (required) sourceConnector: 18 tasksMax: 10 19 autoRestart: 20 enabled: true config: replication.factor: 1 21 offset-syncs.topic.replication.factor: 1 22 sync.topic.acls.enabled: \"false\" 23 refresh.topics.interval.seconds: 60 24 replication.policy.class: \"org.apache.kafka.connect.mirror.IdentityReplicationPolicy\" 25 # Heartbeat connector configuration (optional) heartbeatConnector: 26 autoRestart: enabled: true config: heartbeats.topic.replication.factor: 1 27 replication.policy.class: \"org.apache.kafka.connect.mirror.IdentityReplicationPolicy\" # Checkpoint connector configuration (optional) checkpointConnector: 28 autoRestart: enabled: true config: checkpoints.topic.replication.factor: 1 29 refresh.groups.interval.seconds: 600 30 sync.group.offsets.enabled: true 31 sync.group.offsets.interval.seconds: 60 32 emit.checkpoints.interval.seconds: 60 33 replication.policy.class: \"org.apache.kafka.connect.mirror.IdentityReplicationPolicy\" # Kafka version (recommended) version: 3.9.0 34 # Resources requests and limits (recommended) resources: 35 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi # Logging configuration (optional) logging: 36 type: inline loggers: connect.root.logger.level: INFO # Readiness probe (optional) readinessProbe: 37 initialDelaySeconds: 15 timeoutSeconds: 5 # Liveness probe (optional) livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # JVM options (optional) jvmOptions: 38 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" # Custom image (optional) image: my-org/my-image:latest 39 # Rack awareness (optional) rack: topologyKey: topology.kubernetes.io/zone 40 # Pod template (optional) template: 41 pod: affinity: 
podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" connectContainer: 42 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" # Tracing configuration (optional) tracing: type: opentelemetry 43", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: connectCluster: \"my-cluster-target\" clusters: - alias: \"my-cluster-target\" config: group.id: my-connect-cluster 1 offset.storage.topic: my-connect-cluster-offsets 2 config.storage.topic: my-connect-cluster-configs 3 status.storage.topic: my-connect-cluster-status 4 # #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.9.0 # clusters: - alias: \"my-cluster-source\" bootstrapServers: my-cluster-source-kafka-bootstrap:9092 - alias: \"my-cluster-target\" bootstrapServers: my-cluster-target-kafka-bootstrap:9092 mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: listOffsets: toConfigMap: name: my-connector-offsets #", "annotate kafkamirrormaker2 my-mirror-maker-2 strimzi.io/connector-offsets=list strimzi.io/mirrormaker-connector=\"my-cluster-source->my-cluster-target.MirrorSourceConnector\" -n kafka", "apiVersion: v1 kind: ConfigMap metadata: # ownerReferences: 1 - apiVersion: kafka.strimzi.io/v1beta2 blockOwnerDeletion: false controller: false kind: KafkaMirrorMaker2 name: my-mirror-maker2 uid: 637e3be7-bd96-43ab-abde-c55b4c4550e0 data: my-cluster-source--my-cluster-target.MirrorSourceConnector.json: |- 2 { \"offsets\": [ { \"partition\": { \"cluster\": \"east-kafka\", \"partition\": 0, \"topic\": \"mirrormaker2-cluster-configs\" }, \"offset\": { \"offset\": 0 } } ] }", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.9.0 # mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: tasksMax: 5 config: producer.override.batch.size: 327680 producer.override.linger.ms: 100 producer.request.timeout.ms: 30000 consumer.fetch.max.bytes: 52428800 # checkpointConnector: config: producer.override.request.timeout.ms: 30000 consumer.max.poll.interval.ms: 300000 # heartbeatConnector: config: producer.override.request.timeout.ms: 30000 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: # mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: tasksMax: 10 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: # mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" checkpointConnector: tasksMax: 10 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-source-cluster spec: kafka: version: 3.9.0 replicas: 1 listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls authorization: type: simple config: offsets.topic.replication.factor: 1 transaction.state.log.replication.factor: 1 transaction.state.log.min.isr: 1 default.replication.factor: 1 min.insync.replicas: 1 inter.broker.protocol.version: \"3.9\" storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false zookeeper: replicas: 1 storage: type: persistent-claim size: 100Gi 
deleteClaim: false entityOperator: topicOperator: {} userOperator: {}", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-target-cluster spec: kafka: version: 3.9.0 replicas: 1 listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls authorization: type: simple config: offsets.topic.replication.factor: 1 transaction.state.log.replication.factor: 1 transaction.state.log.min.isr: 1 default.replication.factor: 1 min.insync.replicas: 1 inter.broker.protocol.version: \"3.9\" storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false zookeeper: replicas: 1 storage: type: persistent-claim size: 100Gi deleteClaim: false entityOperator: topicOperator: {} userOperator: {}", "apply -f <kafka_configuration_file> -n <namespace>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-source-user labels: strimzi.io/cluster: my-source-cluster spec: authentication: type: tls authorization: type: simple acls: # MirrorSourceConnector - resource: # Not needed if offset-syncs.topic.location=target type: topic name: mm2-offset-syncs.my-target-cluster.internal operations: - Create - DescribeConfigs - Read - Write - resource: # Needed for every topic which is mirrored type: topic name: \"*\" operations: - DescribeConfigs - Read # MirrorCheckpointConnector - resource: type: cluster operations: - Describe - resource: # Needed for every group for which offsets are synced type: group name: \"*\" operations: - Describe - resource: # Not needed if offset-syncs.topic.location=target type: topic name: mm2-offset-syncs.my-target-cluster.internal operations: - Read", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-target-user labels: strimzi.io/cluster: my-target-cluster spec: authentication: type: tls authorization: type: simple acls: # cluster group - resource: type: group name: mirrormaker2-cluster operations: - Read # access to config.storage.topic - resource: type: topic name: mirrormaker2-cluster-configs operations: - Create - Describe - DescribeConfigs - Read - Write # access to status.storage.topic - resource: type: topic name: mirrormaker2-cluster-status operations: - Create - Describe - DescribeConfigs - Read - Write # access to offset.storage.topic - resource: type: topic name: mirrormaker2-cluster-offsets operations: - Create - Describe - DescribeConfigs - Read - Write # MirrorSourceConnector - resource: # Needed for every topic which is mirrored type: topic name: \"*\" operations: - Create - Alter - AlterConfigs - Write # MirrorCheckpointConnector - resource: type: cluster operations: - Describe - resource: type: topic name: my-source-cluster.checkpoints.internal operations: - Create - Describe - Read - Write - resource: # Needed for every group for which the offset is synced type: group name: \"*\" operations: - Read - Describe # MirrorHeartbeatConnector - resource: type: topic name: heartbeats operations: - Create - Describe - Write", "apply -f <kafka_user_configuration_file> -n <namespace>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker-2 spec: version: 3.9.0 replicas: 1 connectCluster: \"my-target-cluster\" clusters: - alias: \"my-source-cluster\" bootstrapServers: my-source-cluster-kafka-bootstrap:9093 tls: 1 trustedCertificates: - secretName: my-source-cluster-cluster-ca-cert pattern: \"*.crt\" authentication: 2 type: tls certificateAndKey: secretName: my-source-user certificate: user.crt key: user.key - alias: \"my-target-cluster\" 
bootstrapServers: my-target-cluster-kafka-bootstrap:9093 tls: 3 trustedCertificates: - secretName: my-target-cluster-cluster-ca-cert pattern: \"*.crt\" authentication: 4 type: tls certificateAndKey: secretName: my-target-user certificate: user.crt key: user.key config: # -1 means it will use the default replication factor configured in the broker config.storage.replication.factor: -1 offset.storage.replication.factor: -1 status.storage.replication.factor: -1 mirrors: - sourceCluster: \"my-source-cluster\" targetCluster: \"my-target-cluster\" sourceConnector: config: replication.factor: 1 offset-syncs.topic.replication.factor: 1 sync.topic.acls.enabled: \"false\" heartbeatConnector: config: heartbeats.topic.replication.factor: 1 checkpointConnector: config: checkpoints.topic.replication.factor: 1 sync.group.offsets.enabled: \"true\" topicsPattern: \"topic1|topic2|topic3\" groupsPattern: \"group1|group2|group3\"", "apply -f <mirrormaker2_configuration_file> -n <namespace_of_target_cluster>", "get KafkaMirrorMaker2", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.9.0 replicas: 3 connectCluster: \"my-cluster-target\" clusters: # mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: tasksMax: 10 autoRestart: enabled: true state: stopped #", "get KafkaMirrorMaker2", "describe KafkaMirrorMaker2 <mirrormaker_cluster_name>", "annotate KafkaMirrorMaker2 <mirrormaker_cluster_name> \"strimzi.io/restart-connector=<mirrormaker_connector_name>\"", "annotate KafkaMirrorMaker2 my-mirror-maker-2 \"strimzi.io/restart-connector=my-connector\"", "get KafkaMirrorMaker2", "describe KafkaMirrorMaker2 <mirrormaker_cluster_name>", "annotate KafkaMirrorMaker2 <mirrormaker_cluster_name> \"strimzi.io/restart-connector-task=<mirrormaker_connector_name>:<task_id>\"", "annotate KafkaMirrorMaker2 my-mirror-maker-2 \"strimzi.io/restart-connector-task=my-connector:0\"", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: replicas: 3 1 consumer: bootstrapServers: my-source-cluster-kafka-bootstrap:9092 2 groupId: \"my-group\" 3 numStreams: 2 4 offsetCommitInterval: 120000 5 tls: 6 trustedCertificates: - secretName: my-source-cluster-ca-cert pattern: \"*.crt\" authentication: 7 type: tls certificateAndKey: secretName: my-source-secret certificate: public.crt key: private.key config: 8 max.poll.records: 100 receive.buffer.bytes: 32768 producer: bootstrapServers: my-target-cluster-kafka-bootstrap:9092 abortOnSendFailure: false 9 tls: trustedCertificates: - secretName: my-target-cluster-ca-cert pattern: \"*.crt\" authentication: type: tls certificateAndKey: secretName: my-target-secret certificate: public.crt key: private.key config: compression.type: gzip batch.size: 8192 include: \"my-topic|other-topic\" 10 resources: 11 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi logging: 12 type: inline loggers: mirrormaker.root.logger: INFO readinessProbe: 13 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metricsConfig: 14 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key jvmOptions: 15 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" image: my-org/my-image:latest 16 template: 17 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: 
\"kubernetes.io/hostname\" mirrorMakerContainer: 18 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: 19 type: opentelemetry", "Basic configuration (required) apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # Replicas (required) replicas: 3 1 # Kafka bootstrap servers (required) bootstrapServers: <cluster_name> -cluster-kafka-bootstrap:9092 2 # HTTP configuration (required) http: 3 port: 8080 # CORS configuration (optional) cors: 4 allowedOrigins: \"https://strimzi.io\" allowedMethods: \"GET,POST,PUT,DELETE,OPTIONS,PATCH\" # Resources requests and limits (recommended) resources: 5 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi # TLS configuration (optional) tls: 6 trustedCertificates: - secretName: my-cluster-cluster-cert pattern: \"*.crt\" - secretName: my-cluster-cluster-cert certificate: ca2.crt # Authentication (optional) authentication: 7 type: tls certificateAndKey: secretName: my-secret certificate: public.crt key: private.key # Consumer configuration (optional) consumer: 8 config: auto.offset.reset: earliest # Producer configuration (optional) producer: 9 config: delivery.timeout.ms: 300000 # Logging configuration (optional) logging: 10 type: inline loggers: logger.bridge.level: INFO # Enabling DEBUG just for send operation logger.send.name: \"http.openapi.operation.send\" logger.send.level: DEBUG # JVM options (optional) jvmOptions: 11 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" # Readiness probe (optional) readinessProbe: 12 initialDelaySeconds: 15 timeoutSeconds: 5 # Liveness probe (optional) livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # Custom image (optional) image: my-org/my-image:latest 13 # Pod template (optional) template: 14 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" bridgeContainer: 15 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" # Tracing configuration (optional) tracing: type: opentelemetry 16", "apiVersion: kafka.strimzi.io/v1beta2 kind: Pod metadata: name: my-cluster-kafka-0 labels: app.kubernetes.io/instance: my-cluster app.kubernetes.io/managed-by: strimzi-cluster-operator app.kubernetes.io/name: kafka app.kubernetes.io/part-of: strimzi-my-cluster spec: #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/name operator: In values: - CLUSTER-NAME -kafka topologyKey: \"kubernetes.io/hostname\" # zookeeper: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/name operator: In values: - CLUSTER-NAME -zookeeper topologyKey: \"kubernetes.io/hostname\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/cluster operator: In values: - CLUSTER-NAME topologyKey: \"kubernetes.io/hostname\" # zookeeper: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/cluster operator: In values: - 
CLUSTER-NAME topologyKey: \"kubernetes.io/hostname\" #", "apply -f <kafka_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" # zookeeper: #", "apply -f <kafka_configuration_file>", "label node NAME-OF-NODE node-type=fast-network", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node-type operator: In values: - fast-network # zookeeper: #", "apply -f <kafka_configuration_file>", "adm taint node NAME-OF-NODE dedicated=Kafka:NoSchedule", "label node NAME-OF-NODE dedicated=Kafka", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: tolerations: - key: \"dedicated\" operator: \"Equal\" value: \"Kafka\" effect: \"NoSchedule\" affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: dedicated operator: In values: - Kafka # zookeeper: #", "apply -f <kafka_configuration_file>", "logging: type: inline loggers: kafka.root.logger.level: INFO", "logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: my-config-map-key", "kind: ConfigMap apiVersion: v1 metadata: name: logging-configmap data: log4j.properties: kafka.root.logger.level=\"INFO\"", "create configmap logging-configmap --from-file=log4j.properties", "Define the logger kafka.root.logger.level=\"INFO\"", "logging: type: external valueFrom: configMapKeyRef: name: logging-configmap key: log4j.properties", "apply -f <kafka_configuration_file>", "create -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml", "edit configmap strimzi-cluster-operator", "rootLogger.level=\"INFO\" appender.console.filter.filter1.type=MarkerFilter 1 appender.console.filter.filter1.onMatch=ACCEPT 2 appender.console.filter.filter1.onMismatch=DENY 3 appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster) 4", "appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster-1) appender.console.filter.filter2.type=MarkerFilter appender.console.filter.filter2.onMatch=ACCEPT appender.console.filter.filter2.onMismatch=DENY appender.console.filter.filter2.marker=Kafka(my-namespace/my-kafka-cluster-2)", "kind: ConfigMap apiVersion: v1 metadata: name: strimzi-cluster-operator data: log4j2.properties: # appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster)", "edit configmap strimzi-cluster-operator", "create -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml", "kind: ConfigMap apiVersion: v1 metadata: name: logging-configmap data: log4j2.properties: rootLogger.level=\"INFO\" appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=KafkaTopic(my-namespace/my-topic)", "create configmap logging-configmap --from-file=log4j2.properties", "Define the logger 
rootLogger.level=\"INFO\" Set the filters appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=KafkaTopic(my-namespace/my-topic)", "spec: # entityOperator: topicOperator: logging: type: external valueFrom: configMapKeyRef: name: logging-configmap key: log4j2.properties", "create -f install/cluster-operator -n my-cluster-operator-namespace", "DEBUG AbstractOperator:406 - Reconciliation #55(timer) Kafka(myproject/my-cluster): Failed to acquire lock lock::myproject::Kafka::my-cluster within 10000ms.", "INFO AbstractOperator:399 - Reconciliation #1(watch) Kafka(myproject/my-cluster): Reconciliation is in progress", "logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: my-config-map-key", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # template: connectContainer: env: - name: MY_ENVIRONMENT_VARIABLE valueFrom: configMapKeyRef: name: my-config-map key: my-key", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: # config.providers: env config.providers.env.class: org.apache.kafka.common.config.provider.EnvVarConfigProvider #", "apiVersion: v1 kind: ConfigMap metadata: name: my-connector-configuration data: option1: value1 option2: value2", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: # config.providers: secrets,configmaps 1 config.providers.configmaps.class: io.strimzi.kafka.KubernetesConfigMapConfigProvider 2 config.providers.secrets.class: io.strimzi.kafka.KubernetesSecretConfigProvider 3 #", "apply -f <kafka_connect_configuration_file>", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: connector-configuration-role rules: - apiGroups: [\"\"] resources: [\"configmaps\"] resourceNames: [\"my-connector-configuration\"] verbs: [\"get\"]", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: connector-configuration-role-binding subjects: - kind: ServiceAccount name: my-connect-connect namespace: my-project roleRef: kind: Role name: connector-configuration-role apiGroup: rbac.authorization.k8s.io", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-connector labels: strimzi.io/cluster: my-connect spec: # config: option: USD{configmaps:my-project/my-connector-configuration:option1} #", "apiVersion: v1 kind: Secret metadata: name: aws-creds type: Opaque data: awsAccessKey: QUtJQVhYWFhYWFhYWFhYWFg= awsSecretAccessKey: Ylhsd1lYTnpkMjl5WkE=", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: # config.providers: env 1 config.providers.env.class: org.apache.kafka.common.config.provider.EnvVarConfigProvider 2 # template: connectContainer: env: - name: AWS_ACCESS_KEY_ID 3 valueFrom: secretKeyRef: name: aws-creds 4 key: awsAccessKey 5 - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey #", "apply -f <kafka_connect_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-connector labels: strimzi.io/cluster: my-connect spec: # config: option: USD{env:AWS_ACCESS_KEY_ID} option: USD{env:AWS_SECRET_ACCESS_KEY} #", "apiVersion: v1 kind: Secret 
metadata: name: mysecret type: Opaque stringData: connector.properties: |- 1 dbUsername: my-username 2 dbPassword: my-password", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: config.providers: file 1 config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider 2 # template: pod: volumes: - name: connector-config-volume 3 secret: secretName: mysecret 4 connectContainer: volumeMounts: - name: connector-config-volume 5 mountPath: /mnt/mysecret 6", "apply -f <kafka_connect_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.mysql.MySqlConnector tasksMax: 2 config: database.hostname: 192.168.99.1 database.port: \"3306\" database.user: \"USD{file:/mnt/mysecret/connector.properties:dbUsername}\" database.password: \"USD{file:/mnt/mysecret/connector.properties:dbPassword}\" database.server.id: \"184054\" #", "apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: <public_key> # Public key of the clients CA used to sign this user certificate user.crt: <user_certificate> # Public key of the user user.key: <user_private_key> # Private key of the user user.p12: <store> # PKCS #12 store for user certificates and keys user.password: <password_for_store> # Protects the PKCS #12 store", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: config.providers: directory 1 config.providers.directory.class: org.apache.kafka.common.config.provider.DirectoryConfigProvider 2 # template: pod: volumes: - name: my-user-volume 3 secret: secretName: my-user 4 - name: cluster-ca-volume secret: secretName: my-cluster-cluster-ca-cert connectContainer: volumeMounts: - name: my-user-volume 5 mountPath: /mnt/my-user 6 - name: cluster-ca-volume mountPath: /mnt/cluster-ca", "apply -f <kafka_connect_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.mysql.MySqlConnector tasksMax: 2 config: # database.history.producer.security.protocol: SSL database.history.producer.ssl.truststore.type: PEM database.history.producer.ssl.truststore.certificates: \"USD{directory:/mnt/cluster-ca:ca.crt}\" database.history.producer.ssl.keystore.type: PEM database.history.producer.ssl.keystore.certificate.chain: \"USD{directory:/mnt/my-user:user.crt}\" database.history.producer.ssl.keystore.key: \"USD{directory:/mnt/my-user:user.key}\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster labels: app: my-cluster spec: kafka: # template: pod: metadata: labels: mylabel: myvalue #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # template: pod: terminationGracePeriodSeconds: 120 # #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: topic-name-1 labels: strimzi.io/cluster: my-cluster spec: topicName: topic-name-1", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic-1 1 spec: topicName: My.Topic.1 2 #", "run kafka-admin -ti --image=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog
--delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic-1 labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 2", "apply -f <topic_config_file>", "get kafkatopics -o wide -w -n <namespace>", "NAME CLUSTER PARTITIONS REPLICATION FACTOR READY my-topic-1 my-cluster 10 3 True my-topic-2 my-cluster 10 3 my-topic-3 my-cluster 10 3 True", "get kafkatopics my-topic-2 -o yaml", "status: conditions: - lastTransitionTime: \"2022-06-13T10:14:43.351550Z\" message: Number of partitions cannot be decreased reason: PartitionDecreaseException status: \"True\" type: NotReady", "get kafkatopics my-topic-2 -o wide -w -n <namespace>", "NAME CLUSTER PARTITIONS REPLICATION FACTOR READY my-topic-2 my-cluster 10 3 True", "get kafkatopics my-topic-2 -o yaml", "status: conditions: - lastTransitionTime: '2022-06-13T10:15:03.761084Z' status: 'True' type: Ready", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 10 1 replicas: 3 2 config: min.insync.replicas: 2 3 #", "annotate kafkatopic my-topic-1 strimzi.io/managed=\"false\" --overwrite", "get kafkatopic my-topic-1 -o yaml", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: generation: 124 name: my-topic-1 finalizer: - strimzi.io/topic-operator labels: strimzi.io/cluster: my-cluster annotations: strimzi.io/managed: \"false\" spec: partitions: 10 replicas: 2 config: retention.ms: 432000000 status: observedGeneration: 124 1 conditions: - lastTransitionTime: \"2024-08-22T06:07:57.671085635Z\" status: \"True\" type: Unmanaged 2", "delete kafkatopic <kafka_topic_name>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic-1 labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 2", "apply -f <topic_configuration_file>", "get kafkatopics my-topic-1 -o yaml", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: generation: 1 name: my-topic-1 labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 2 status: observedGeneration: 1 1 topicName: my-topic-1 conditions: - type: Ready status: True lastTransitionTime: 20230301T103000Z", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: generation: 1 name: my-topic-1 finalizers: - strimzi.io/topic-operator labels: strimzi.io/cluster: my-cluster", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: generation: 1 name: my-topic-1 finalizers: - strimzi.io/topic-operator labels: strimzi.io/cluster: my-cluster deletionTimestamp: 20230301T000000.000", "get kt -o=json | jq '.items[].metadata.finalizers = null' | oc apply -f -", "get kt <topic_name> -o=json | jq '.metadata.finalizers = null' | oc apply -f -", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user-1 labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls authorization: type: simple acls: # Example consumer Acls for topic my-topic using consumer group my-group - resource: type: topic name: my-topic patternType: literal operations: - Describe - Read host: \"*\" - resource: type: group name: my-group patternType: literal operations: - Read host: \"*\" # Example Producer Acls for topic my-topic - resource: type: topic name: my-topic patternType: literal operations: - Create - Describe - Write host: \"*\"", "apply -f <user_config_file>", "get kafkausers -o wide -w -n <namespace>", "NAME CLUSTER 
AUTHENTICATION AUTHORIZATION READY my-user-1 my-cluster tls simple True my-user-2 my-cluster tls simple my-user-3 my-cluster tls simple True", "get kafkausers my-user-2 -o yaml", "status: conditions: - lastTransitionTime: \"2022-06-10T10:07:37.238065Z\" message: Simple authorization ACL rules are configured but not supported in the Kafka cluster configuration. reason: InvalidResourceException status: \"True\" type: NotReady", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # authorization: type: simple", "get kafkausers my-user-2 -o wide -w -n <namespace>", "NAME CLUSTER AUTHENTICATION AUTHORIZATION READY my-user-2 my-cluster tls simple True", "get kafkausers my-user-2 -o yaml", "status: conditions: - lastTransitionTime: \"2022-06-10T10:33:40.166846Z\" status: \"True\" type: Ready", "run kafka-producer -ti --image=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic", "run kafka-consumer -ti --image=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # listeners: - name: plain port: 9092 type: internal tls: false configuration: useServiceDnsDomain: true - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-certificate.crt key: my-key.key #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # listeners: - name: external4 port: 9094 type: nodeport tls: true authentication: type: tls # # zookeeper: #", "apply -f <kafka_configuration_file>", "NAME TYPE CLUSTER-IP PORT(S) my-cluster-kafka-external4-0 NodePort 172.30.55.13 9094:31789/TCP my-cluster-kafka-external4-1 NodePort 172.30.250.248 9094:30028/TCP my-cluster-kafka-external4-2 NodePort 172.30.115.81 9094:32650/TCP my-cluster-kafka-external4-bootstrap NodePort 172.30.30.23 9094:32650/TCP", "status: clusterId: Y_RJQDGKRXmNF7fEcWldJQ conditions: - lastTransitionTime: '2023-01-31T14:59:37.113630Z' status: 'True' type: Ready kafkaVersion: 3.9.0 listeners: # - addresses: - host: ip-10-0-224-199.us-west-2.compute.internal port: 32650 bootstrapServers: 'ip-10-0-224-199.us-west-2.compute.internal:32650' certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external4 observedGeneration: 2 operatorLastSuccessfulVersion: 2.9 #", "get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"external4\")].bootstrapServers}{\"\\n\"}' ip-10-0-224-199.us-west-2.compute.internal:32650", "get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # listeners: - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: tls # # zookeeper: #", "apply -f <kafka_configuration_file>", "NAME TYPE CLUSTER-IP PORT(S) my-cluster-kafka-external3-0 LoadBalancer 172.30.204.234 9094:30011/TCP my-cluster-kafka-external3-1 LoadBalancer 172.30.164.89 9094:32544/TCP my-cluster-kafka-external3-2 LoadBalancer 172.30.73.151 9094:32504/TCP 
my-cluster-kafka-external3-bootstrap LoadBalancer 172.30.30.228 9094:30371/TCP NAME EXTERNAL-IP (loadbalancer) my-cluster-kafka-external3-0 a8a519e464b924000b6c0f0a05e19f0d-1132975133.us-west-2.elb.amazonaws.com my-cluster-kafka-external3-1 ab6adc22b556343afb0db5ea05d07347-611832211.us-west-2.elb.amazonaws.com my-cluster-kafka-external3-2 a9173e8ccb1914778aeb17eca98713c0-777597560.us-west-2.elb.amazonaws.com my-cluster-kafka-external3-bootstrap a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com", "status: clusterId: Y_RJQDGKRXmNF7fEcWldJQ conditions: - lastTransitionTime: '2023-01-31T14:59:37.113630Z' status: 'True' type: Ready kafkaVersion: 3.9.0 listeners: # - addresses: - host: >- a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com port: 9094 bootstrapServers: >- a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com:9094 certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external3 observedGeneration: 2 operatorLastSuccessfulVersion: 2.9 #", "status: loadBalancer: ingress: - hostname: >- a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com #", "get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"external3\")].bootstrapServers}{\"\\n\"}' a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com:9094", "get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # listeners: - name: external1 port: 9094 type: route tls: true 1 authentication: type: tls # # zookeeper: #", "apply -f <kafka_configuration_file>", "NAME HOST/PORT SERVICES PORT TERMINATION my-cluster-kafka-external1-0 my-cluster-kafka-external1-0-my-project.router.com my-cluster-kafka-external1-0 9094 passthrough my-cluster-kafka-external1-1 my-cluster-kafka-external1-1-my-project.router.com my-cluster-kafka-external1-1 9094 passthrough my-cluster-kafka-external1-2 my-cluster-kafka-external1-2-my-project.router.com my-cluster-kafka-external1-2 9094 passthrough my-cluster-kafka-external1-bootstrap my-cluster-kafka-external1-bootstrap-my-project.router.com my-cluster-kafka-external1-bootstrap 9094 passthrough", "status: ingress: - host: >- my-cluster-kafka-external1-bootstrap-my-project.router.com #", "openssl s_client -connect my-cluster-kafka-external1-0-my-project.router.com:443 -servername my-cluster-kafka-external1-0-my-project.router.com -showcerts", "Certificate chain 0 s:O = io.strimzi, CN = my-cluster-kafka i:O = io.strimzi, CN = cluster-ca v0", "get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"external1\")].bootstrapServers}{\"\\n\"}' my-cluster-kafka-external1-bootstrap-my-project.router.com:443", "get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt", "apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { \"port\" : 9092, \"tls\" : false, \"protocol\" : \"kafka\", \"auth\" : \"scram-sha-512\" }, { \"port\" : 9093, \"tls\" : true, \"protocol\" : \"kafka\", \"auth\" : \"tls\" } ] labels: strimzi.io/cluster: my-cluster strimzi.io/discovery: \"true\" strimzi.io/kind: Kafka strimzi.io/name: my-cluster-kafka-bootstrap name: my-cluster-kafka-bootstrap spec: #", "apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { \"port\" : 8080, \"tls\" : false, \"auth\" : \"none\", \"protocol\" : \"http\" } ] labels: strimzi.io/cluster: my-bridge 
strimzi.io/discovery: \"true\" strimzi.io/kind: KafkaBridge strimzi.io/name: my-bridge-bridge-service", "get service -l strimzi.io/discovery=true", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # listeners: - name: plain port: 9092 type: internal tls: true authentication: type: scram-sha-512 - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: tls", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls networkPolicyPeers: - podSelector: matchLabels: app: kafka-client # zookeeper: #", "create secret generic <my_secret> --from-file=<my_listener_key.key> --from-file=<my_listener_certificate.crt>", "listeners: - name: plain port: 9092 type: internal tls: false - name: external3 port: 9094 type: loadbalancer tls: true configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key", "listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key", "//Kafka brokers *.<cluster_name>-kafka-brokers *.<cluster_name>-kafka-brokers.<namespace>.svc // Bootstrap service <cluster_name>-kafka-bootstrap <cluster_name>-kafka-bootstrap.<namespace>.svc", "// Kafka brokers <cluster_name>-kafka-0.<cluster_name>-kafka-brokers <cluster_name>-kafka-0.<cluster_name>-kafka-brokers.<namespace>.svc <cluster_name>-kafka-1.<cluster_name>-kafka-brokers <cluster_name>-kafka-1.<cluster_name>-kafka-brokers.<namespace>.svc // Bootstrap service <cluster_name>-kafka-bootstrap <cluster_name>-kafka-bootstrap.<namespace>.svc", "// Kafka brokers <cluster_name>-kafka-<listener-name>-0 <cluster_name>-kafka-<listener-name>-0.<namespace>.svc <cluster_name>-kafka-<listener-name>-1 <cluster_name>-kafka-<listener-name>-1.<namespace>.svc // Bootstrap service <cluster_name>-kafka-<listener-name>-bootstrap <cluster_name>-kafka-<listener-name>-bootstrap.<namespace>.svc", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # authorization: type: simple superUsers: - CN=user-1 - user-2 - CN=user-3 - CN=user-4,OU=my-ou,O=my-org,L=my-location,ST=my-state,C=US - CN=user-5,OU=my-ou,O=my-org,C=GB - CN=user-6,O=my-org #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls #", "apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: <public_key> # Public key of the clients CA used to sign this user certificate user.crt: <user_certificate> # Public key of the user user.key: <user_private_key> # Private key of the user user.p12: <store> # PKCS #12 store for user certificates and keys user.password: <password_for_store> # Protects the PKCS #12 store", "bootstrap.servers=<kafka_cluster_name>-kafka-bootstrap:9093 1 security.protocol=SSL 2 ssl.truststore.location=/tmp/ca.p12 3 ssl.truststore.password=<truststore_password> 4 ssl.keystore.location=/tmp/user.p12 5 ssl.keystore.password=<keystore_password> 6",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls-external #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: scram-sha-512 #", "apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: password: Z2VuZXJhdGVkcGFzc3dvcmQ= 1 sasl.jaas.config: b3JnLmFwYWNoZS5rYWZrYS5jb21tb24uc2VjdXJpdHkuc2NyYW0uU2NyYW1Mb2dpbk1vZHVsZSByZXF1aXJlZCB1c2VybmFtZT0ibXktdXNlciIgcGFzc3dvcmQ9ImdlbmVyYXRlZHBhc3N3b3JkIjsK 2", "echo \"Z2VuZXJhdGVkcGFzc3dvcmQ=\" | base64 --decode", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: scram-sha-512 password: valueFrom: secretKeyRef: name: my-secret 1 key: my-password 2 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # quotas: producerByteRate: 1048576 1 consumerByteRate: 2097152 2 requestPercentage: 55 3 controllerMutationRate: 10 4", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # listeners: 1 - name: external1 2 port: 9094 3 type: <listener_type> 4 tls: true 5 authentication: type: tls 6 configuration: 7 # authorization: 8 type: simple superUsers: - super-user-name 9 #", "get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name==\"<listener_name>\")].bootstrapServers}{\"\\n\"}'", "get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"external\")].bootstrapServers}{\"\\n\"}'", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster 1 spec: authentication: type: tls 2 authorization: type: simple acls: 3 - resource: type: topic name: my-topic patternType: literal operations: - Describe - Read - resource: type: group name: my-group patternType: literal operations: - Read", "apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: <public_key> # Public key of the clients CA used to sign this user certificate user.crt: <user_certificate> # Public key of the user user.key: <user_private_key> # Private key of the user user.p12: <store> # PKCS #12 store for user certificates and keys user.password: <password_for_store> # Protects the PKCS #12 store", "get secret <cluster_name>-cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt", "get secret <user_name> -o jsonpath='{.data.user\\.crt}' | base64 -d > user.crt", "get secret <user_name> -o jsonpath='{.data.user\\.key}' | base64 -d > user.key", "props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, \"<hostname>:<port>\");", "props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, \"SSL\"); props.put(SslConfigs.SSL_TRUSTSTORE_TYPE_CONFIG, \"PEM\"); props.put(SslConfigs.SSL_TRUSTSTORE_CERTIFICATES_CONFIG, \"<ca.crt_file_content>\");", "props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, \"SSL\"); props.put(SslConfigs.SSL_KEYSTORE_TYPE_CONFIG, \"PEM\"); props.put(SslConfigs.SSL_KEYSTORE_CERTIFICATE_CHAIN_CONFIG, \"<user.crt_file_content>\"); props.put(SslConfigs.SSL_KEYSTORE_KEY_CONFIG, \"<user.key_file_content>\");", "props.put(SslConfigs.SSL_KEYSTORE_CERTIFICATE_CHAIN_CONFIG, \"-----BEGIN CERTIFICATE----- 
\\n<user_certificate_content_line_1>\\n<user_certificate_content_line_n>\\n-----END CERTIFICATE-----\"); props.put(SslConfigs.SSL_KEYSTORE_KEY_CONFIG, \"-----BEGIN PRIVATE KEY-----\\n<user_key_content_line_1>\\n<user_key_content_line_n>\\n-----END PRIVATE KEY-----\");", "Caused by: java.security.cert.CertificateException: No subject alternative names matching IP address 168.72.15.231 found", "ssl.endpoint.identification.algorithm=", "props.put(\"ssl.endpoint.identification.algorithm\", \"\");", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth enablePlain: true enableOauthBearer: false #", "# - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth 1 validIssuerUri: https://<auth_server_address>/<issuer-context> 2 jwksEndpointUri: https://<auth_server_address>/<path_to_jwks_endpoint> 3 userNameClaim: preferred_username 4 maxSecondsWithoutReauthentication: 3600 5 tlsTrustedCertificates: 6 - secretName: oauth-server-cert pattern: \"*.crt\" disableTlsHostnameVerification: true 7 jwksExpirySeconds: 360 8 jwksRefreshSeconds: 300 9 jwksMinRefreshPauseSeconds: 1 10", "# - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth validIssuerUri: https://kubernetes.default.svc.cluster.local 1 jwksEndpointUri: https://kubernetes.default.svc.cluster.local/openid/v1/jwks 2 serverBearerTokenLocation: /var/run/secrets/kubernetes.io/serviceaccount/token 3 checkAccessTokenType: false 4 includeAcceptHeader: false 5 tlsTrustedCertificates: 6 - secretName: oauth-server-cert pattern: \"*.crt\" maxSecondsWithoutReauthentication: 3600 customClaimCheck: \"@.['kubernetes.io'] && @.['kubernetes.io'].['namespace'] in ['myproject']\" 7", "get cm kube-root-ca.crt -o jsonpath=\"{['data']['ca\\.crt']}\" > /tmp/ca.crt create secret generic oauth-server-cert --from-file=ca.crt=/tmp/ca.crt", "- name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth validIssuerUri: https://<auth_server_address>/<issuer-context> introspectionEndpointUri: https://<auth_server_address>/<path_to_introspection_endpoint> 1 clientId: kafka-broker 2 clientSecret: 3 secretName: my-cluster-oauth key: clientSecret userNameClaim: preferred_username 4 maxSecondsWithoutReauthentication: 3600 5 tlsTrustedCertificates: - secretName: oauth-server-cert pattern: \"*.crt\"", "authentication: type: oauth # checkIssuer: false 1 checkAudience: true 2 usernamePrefix: user- 3 fallbackUserNameClaim: client_id 4 fallbackUserNamePrefix: client-account- 5 serverBearerTokenLocation: path/to/access/token 6 validTokenType: bearer 7 userInfoEndpointUri: https://<auth_server_address>/<path_to_userinfo_endpoint> 8 enableOauthBearer: false 9 enablePlain: true 10 tokenEndpointUri: https://<auth_server_address>/<path_to_token_endpoint> 11 customClaimCheck: \"@.custom == 'custom-value'\" 12 clientAudience: audience 13 clientScope: scope 14 connectTimeoutSeconds: 60 15 readTimeoutSeconds: 60 16 httpRetries: 2 17 httpRetryPauseMs: 300 18 groupsClaim: \"USD.groups\" 19 groupsClaimDelimiter: \",\" 20 includeAcceptHeader: false 21", "security.protocol=SASL_SSL 1
sasl.mechanism=OAUTHBEARER 2 ssl.truststore.location=/tmp/truststore.p12 3 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" \\ 4 oauth.client.id=\"<client_id>\" \\ 5 oauth.client.secret=\"<client_secret>\" \\ 6 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" \\ 7 oauth.ssl.truststore.password=\"USDSTOREPASS\" \\ 8 oauth.ssl.truststore.type=\"PKCS12\" \\ 9 oauth.scope=\"<scope>\" \\ 10 oauth.audience=\"<audience>\" ; 11 sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler", "security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" oauth.client.id=\"<client_id>\" oauth.client.assertion.location=\"<path_to_client_assertion_token_file>\" \\ 1 oauth.client.assertion.type=\"urn:ietf:params:oauth:client-assertion-type:jwt-bearer\" \\ 2 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.scope=\"<scope>\" oauth.audience=\"<audience>\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler", "security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" oauth.client.id=\"<client_id>\" \\ 1 oauth.client.secret=\"<client_secret>\" \\ 2 oauth.password.grant.username=\"<username>\" \\ 3 oauth.password.grant.password=\"<password>\" \\ 4 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.scope=\"<scope>\" oauth.audience=\"<audience>\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler", "security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.access.token=\"<access_token>\" ; 1 sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler", "security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.access.token.location=\"/var/run/secrets/kubernetes.io/serviceaccount/token\"; 1 sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler", "security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" oauth.client.id=\"<client_id>\" \\ 1 oauth.client.secret=\"<client_secret>\" \\ 2 
oauth.refresh.token=\"<refresh_token>\" \\ 3 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler", "oauth.sasl.extension.key1=\"value1\" oauth.sasl.extension.key2=\"value2\"", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth #", "logs -f USD{POD_NAME} -c USD{CONTAINER_NAME} get pod -w", "<dependency> <groupId>io.strimzi</groupId> <artifactId>kafka-oauth-client</artifactId> <version>0.15.0.redhat-00012</version> </dependency>", "Properties props = new Properties(); try (FileReader reader = new FileReader(\"client.properties\", StandardCharsets.UTF_8)) { props.load(reader); }", "apiVersion: kafka.strimzi.io/v1beta2 kind: Secret metadata: name: my-bridge-oauth type: Opaque data: clientSecret: MGQ1OTRmMzYtZTllZS00MDY2LWI5OGEtMTM5MzM2NjdlZjQw 1", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # authentication: type: oauth 1 tokenEndpointUri: https://<auth_server_address>/<path_to_token_endpoint> 2 clientId: kafka-bridge clientSecret: secretName: my-bridge-oauth key: clientSecret tlsTrustedCertificates: 3 - secretName: oauth-server-cert pattern: \"*.crt\"", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # authentication: type: oauth tokenEndpointUri: https://<auth_server_address>/<path_to_token_endpoint> clientId: kafka-bridge clientAssertionLocation: /var/run/secrets/sso/assertion 1 tlsTrustedCertificates: - secretName: oauth-server-cert pattern: \"*.crt\"", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # authentication: type: oauth accessTokenLocation: /var/run/secrets/kubernetes.io/serviceaccount/token 1", "spec: # authentication: # disableTlsHostnameVerification: true 1 accessTokenIsJwt: false 2 scope: any 3 audience: kafka 4 connectTimeoutSeconds: 60 5 readTimeoutSeconds: 60 6 httpRetries: 2 7 httpRetryPauseMs: 300 8 includeAcceptHeader: false 9", "logs -f USD{POD_NAME} -c USD{CONTAINER_NAME} get pod -w", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # authorization: type: keycloak 1 tokenEndpointUri: < https://<auth-server-address>/realms/external/protocol/openid-connect/token > 2 clientId: kafka 3 delegateToKafkaAcls: false 4 disableTlsHostnameVerification: false 5 superUsers: 6 - CN=user-1 - user-2 - CN=user-3 tlsTrustedCertificates: 7 - secretName: oauth-server-cert pattern: \"*.crt\" grantsRefreshPeriodSeconds: 60 8 grantsRefreshPoolSize: 5 9 grantsMaxIdleSeconds: 300 10 grantsGcPeriodSeconds: 300 11 grantsAlwaysLatest: false 12 connectTimeoutSeconds: 60 13 readTimeoutSeconds: 60 14 httpRetries: 2 15 enableMetrics: false 16 includeAcceptHeader: false 17 #", "logs -f USD{POD_NAME} -c kafka get pod -w", "Topic:my-topic Topic:orders-* Group:orders-* Cluster:*", "kafka-cluster:my-cluster,Topic:* kafka-cluster:*,Group:b_*", "bin/kafka-topics.sh --create --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties", "bin/kafka-topics.sh --list --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties", "bin/kafka-topics.sh --describe --topic my-topic --bootstrap-server 
my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties", "bin/kafka-console-producer.sh --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --producer.config=/tmp/config.properties", "Topic:my-topic Group:my-group-*", "bin/kafka-console-consumer.sh --topic my-topic --group my-group-1 --from-beginning --bootstrap-server my-cluster-kafka-bootstrap:9092 --consumer.config /tmp/config.properties", "Topic:my-topic Cluster:kafka-cluster", "bin/kafka-console-producer.sh --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --producer.config=/tmp/config.properties --producer-property enable.idempotence=true --request-required-acks -1", "bin/kafka-consumer-groups.sh --list --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties", "bin/kafka-consumer-groups.sh --describe --group my-group-1 --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties", "bin/kafka-topics.sh --alter --topic my-topic --partitions 2 --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties", "bin/kafka-configs.sh --entity-type brokers --entity-name 0 --describe --all --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties", "bin/kafka-configs --entity-type brokers --entity-name 0 --alter --add-config log.cleaner.threads=2 --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties", "bin/kafka-topics.sh --delete --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties", "bin/kafka-leader-election.sh --topic my-topic --partition 0 --election-type PREFERRED / --bootstrap-server my-cluster-kafka-bootstrap:9092 --admin.config /tmp/config.properties", "bin/kafka-reassign-partitions.sh --topics-to-move-json-file /tmp/topics-to-move.json --broker-list \"0,1\" --generate --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties > /tmp/partition-reassignment.json", "bin/kafka-reassign-partitions.sh --reassignment-json-file /tmp/partition-reassignment.json --execute --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties", "bin/kafka-reassign-partitions.sh --reassignment-json-file /tmp/partition-reassignment.json --verify --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties", "NS=sso get ingress keycloak -n USDNS", "get -n USDNS pod keycloak-0 -o yaml | less", "SECRET_NAME=credential-keycloak get -n USDNS secret USDSECRET_NAME -o yaml | grep PASSWORD | awk '{print USD2}' | base64 -D", "Dev Team A can write to topics that start with x_ on any cluster Dev Team B can read from topics that start with x_ on any cluster Dev Team B can update consumer group offsets that start with x_ on any cluster ClusterManager of my-cluster Group has full access to cluster config on my-cluster ClusterManager of my-cluster Group has full access to consumer groups on my-cluster ClusterManager of my-cluster Group has full access to topics on my-cluster", "SSO_HOST= SSO-HOSTNAME SSO_HOST_PORT=USDSSO_HOST:443 STOREPASS=storepass echo \"Q\" | openssl s_client -showcerts -connect USDSSO_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/sso.pem", "split -p \"-----BEGIN CERTIFICATE-----\" sso.pem sso- for f in USD(ls sso-*); do mv USDf USDf.pem; done cp USD(ls sso-* | sort -r | head -n 1) sso-ca.crt", "create secret generic oauth-server-cert 
--from-file=/tmp/sso-ca.crt -n USDNS", "SSO_HOST= SSO-HOSTNAME", "cat examples/security/keycloak-authorization/kafka-ephemeral-oauth-single-keycloak-authz.yaml | sed -E 's#\\USD{SSO_HOST}'\"#USDSSO_HOST#\" | oc create -n USDNS -f -", "NS=sso run -ti --restart=Never --image=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0 kafka-cli -n USDNS -- /bin/sh", "attach -ti kafka-cli -n USDNS", "SSO_HOST= SSO-HOSTNAME SSO_HOST_PORT=USDSSO_HOST:443 STOREPASS=storepass echo \"Q\" | openssl s_client -showcerts -connect USDSSO_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/sso.pem", "split -p \"-----BEGIN CERTIFICATE-----\" sso.pem sso- for f in USD(ls sso-*); do mv USDf USDf.pem; done cp USD(ls sso-* | sort -r | head -n 1) sso-ca.crt", "keytool -keystore /tmp/truststore.p12 -storetype pkcs12 -alias sso -storepass USDSTOREPASS -import -file /tmp/sso-ca.crt -noprompt", "KAFKA_HOST_PORT=my-cluster-kafka-bootstrap:9093 STOREPASS=storepass echo \"Q\" | openssl s_client -showcerts -connect USDKAFKA_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/my-cluster-kafka.pem", "split -p \"-----BEGIN CERTIFICATE-----\" /tmp/my-cluster-kafka.pem kafka- for f in USD(ls kafka-*); do mv USDf USDf.pem; done cp USD(ls kafka-* | sort -r | head -n 1) my-cluster-kafka-ca.crt", "keytool -keystore /tmp/truststore.p12 -storetype pkcs12 -alias my-cluster-kafka -storepass USDSTOREPASS -import -file /tmp/my-cluster-kafka-ca.crt -noprompt", "SSO_HOST= SSO-HOSTNAME cat > /tmp/team-a-client.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.client.id=\"team-a-client\" oauth.client.secret=\"team-a-client-secret\" oauth.ssl.truststore.location=\"/tmp/truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.token.endpoint.uri=\"https://USDSSO_HOST/realms/kafka-authz/protocol/openid-connect/token\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF", "cat > /tmp/team-b-client.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.client.id=\"team-b-client\" oauth.client.secret=\"team-b-client-secret\" oauth.ssl.truststore.location=\"/tmp/truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.token.endpoint.uri=\"https://USDSSO_HOST/realms/kafka-authz/protocol/openid-connect/token\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF", "USERNAME=alice PASSWORD=alice-password GRANT_RESPONSE=USD(curl -X POST \"https://USDSSO_HOST/realms/kafka-authz/protocol/openid-connect/token\" -H 'Content-Type: application/x-www-form-urlencoded' -d \"grant_type=password&username=USDUSERNAME&password=USDPASSWORD&client_id=kafka-cli&scope=offline_access\" -s -k) REFRESH_TOKEN=USD(echo USDGRANT_RESPONSE | awk -F \"refresh_token\\\":\\\"\" '{printf USD2}' | awk -F \"\\\"\" '{printf USD1}')", "cat > /tmp/alice.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 
ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.refresh.token=\"USDREFRESH_TOKEN\" oauth.client.id=\"kafka-cli\" oauth.ssl.truststore.location=\"/tmp/truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.token.endpoint.uri=\"https://USDSSO_HOST/realms/kafka-authz/protocol/openid-connect/token\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF", "bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic my-topic --producer.config=/tmp/team-a-client.properties First message", "bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages --producer.config /tmp/team-a-client.properties First message Second message", "logs my-cluster-kafka-0 -f -n USDNS", "bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages --from-beginning --consumer.config /tmp/team-a-client.properties", "bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages --from-beginning --consumer.config /tmp/team-a-client.properties --group a_consumer_group_1", "bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list", "bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list", "bin/kafka-configs.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --entity-type brokers --describe --entity-default", "bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages --producer.config /tmp/team-b-client.properties Message 1", "bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic b_messages --producer.config /tmp/team-b-client.properties Message 1 Message 2 Message 3", "bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-b-client.properties Message 1", "bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-a-client.properties Message 1", "bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties --topic x_messages --create --replication-factor 1 --partitions 1", "bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties --list bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-b-client.properties --list", "bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-a-client.properties Message 1 Message 2 Message 3", "bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-b-client.properties Message 4 Message 5", "bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/team-b-client.properties --group x_consumer_group_b", 
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/team-a-client.properties --group x_consumer_group_a", "bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/team-a-client.properties --group a_consumer_group_a", "bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/alice.properties", "bin/kafka-configs.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties --entity-type brokers --describe --entity-default", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # template: clusterCaCert: metadata: labels: label1: value1 label2: value2 annotations: annotation1: value1 annotation2: value2 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: clusterCa: generateSecretOwnerReference: false clientsCa: generateSecretOwnerReference: false", "Not Before Not After | | |<--------------- validityDays --------------->| <--- renewalDays --->|", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: clusterCa: renewalDays: 30 validityDays: 365 generateCertificateAuthority: true clientsCa: renewalDays: 30 validityDays: 365 generateCertificateAuthority: true", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # maintenanceTimeWindows: - \"* * 0-1 ? * SUN,MON,TUE,WED,THU *\" #", "annotate secret my-cluster-cluster-ca-cert -n my-project strimzi.io/force-renew=\"true\"", "annotate secret my-cluster-clients-ca-cert -n my-project strimzi.io/force-renew=\"true\"", "get secret my-cluster-cluster-ca-cert -n my-project -o=jsonpath='{.data.ca\\.crt}' | base64 -d | openssl x509 -noout -dates", "get secret my-cluster-clients-ca-cert -n my-project -o=jsonpath='{.data.ca\\.crt}' | base64 -d | openssl x509 -noout -dates", "delete secret my-cluster-cluster-ca-cert -n my-project", "delete secret my-cluster-clients-ca-cert -n my-project", "get secret my-cluster-cluster-ca-cert -n my-project -o=jsonpath='{.data.ca\\.crt}' | base64 -d | openssl x509 -noout -dates", "get secret my-cluster-clients-ca-cert -n my-project -o=jsonpath='{.data.ca\\.crt}' | base64 -d | openssl x509 -noout -dates", "get <resource_type> --all-namespaces | grep <kafka_cluster_name>", "kind: Pod apiVersion: v1 metadata: name: client-pod spec: containers: - name: client-name image: client-name volumeMounts: - name: secret-volume mountPath: /data/p12 env: - name: SECRET_PASSWORD valueFrom: secretKeyRef: name: my-secret key: my-password volumes: - name: secret-volume secret: secretName: my-cluster-cluster-ca-cert", "kind: Pod apiVersion: v1 metadata: name: client-pod spec: containers: - name: client-name image: client-name volumeMounts: - name: secret-volume mountPath: /data/crt volumes: - name: secret-volume secret: secretName: my-cluster-cluster-ca-cert", "get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.p12}' | base64 -d > ca.p12", "get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.password}' | base64 -d > ca.password", "get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt", "openssl pkcs12 -export -in ca.crt -nokeys -out ca.p12 -password pass:<P12_password> -caname ca.crt", "create secret generic <cluster_name>-clients-ca-cert --from-file=ca.crt=ca.crt", "create secret generic 
<cluster_name>-cluster-ca-cert --from-file=ca.crt=ca.crt --from-file=ca.p12=ca.p12 --from-literal=ca.password= P12-PASSWORD", "create secret generic <ca_key_secret> --from-file=ca.key=ca.key", "label secret <ca_certificate_secret> strimzi.io/kind=Kafka strimzi.io/cluster=\"<cluster_name>\"", "label secret <ca_key_secret> strimzi.io/kind=Kafka strimzi.io/cluster=\"<cluster_name>\"", "annotate secret <ca_certificate_secret> strimzi.io/ca-cert-generation=\"<ca_certificate_generation>\"", "annotate secret <ca_key_secret> strimzi.io/ca-key-generation=\"<ca_key_generation>\"", "kind: Kafka version: kafka.strimzi.io/v1beta2 spec: # clusterCa: generateCertificateAuthority: false", "edit secret <ca_certificate_secret_name>", "apiVersion: v1 kind: Secret data: ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F... 1 metadata: annotations: strimzi.io/ca-cert-generation: \"0\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque", "cat <path_to_new_certificate> | base64", "apiVersion: v1 kind: Secret data: ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F... 1 metadata: annotations: strimzi.io/ca-cert-generation: \"1\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque", "annotate Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation=\"true\"", "annotate Kafka my-cluster strimzi.io/pause-reconciliation=\"true\"", "describe Kafka <name_of_custom_resource>", "edit Kafka <name_of_custom_resource>", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: clusterCa: generateCertificateAuthority: false 1 clientsCa: generateCertificateAuthority: false 2", "edit secret <ca_certificate_secret_name>", "apiVersion: v1 kind: Secret data: ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F... 1 metadata: annotations: strimzi.io/ca-cert-generation: \"0\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque", "cat <path_to_new_certificate> | base64", "apiVersion: v1 kind: Secret data: ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F... 1 ca-2023-01-26T17-32-00Z.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F... 2 metadata: annotations: strimzi.io/ca-cert-generation: \"1\" 3 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque", "edit secret <ca_key_name>", "apiVersion: v1 kind: Secret data: ca.key: SA1cKF1GFDzOIiPOIUQBHDNFGDFS... 1 metadata: annotations: strimzi.io/ca-key-generation: \"0\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca # type: Opaque", "cat <path_to_new_key> | base64", "apiVersion: v1 kind: Secret data: ca.key: AB0cKF1GFDzOIiPOIUQWERZJQ0F... 
1 metadata: annotations: strimzi.io/ca-key-generation: \"1\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca # type: Opaque", "annotate --overwrite Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation=\"false\"", "annotate Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation-", "edit secret <ca_certificate_secret_name>", "apiVersion: v1 kind: Secret data: ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F metadata: annotations: strimzi.io/ca-cert-generation: \"1\" labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: my-node-pool labels: strimzi.io/cluster: my-cluster spec: replicas: 3 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: config: default.replication.factor: 3 min.insync.replicas: 2 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-add-remove-brokers-rebalancing-template annotations: strimzi.io/rebalance-template: \"true\" 1 spec: goals: - CpuCapacityGoal - NetworkInboundCapacityGoal - DiskCapacityGoal - RackAwareGoal - MinTopicLeadersPerBrokerGoal - NetworkOutboundCapacityGoal - ReplicaCapacityGoal skipHardGoalCheck: true # ... other rebalancing configuration", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # cruiseControl: autoRebalance: - mode: add-brokers template: name: my-add-remove-brokers-rebalancing-template - mode: remove-brokers template: name: my-add-remove-brokers-rebalancing-template", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # cruiseControl: autoRebalance: - mode: add-brokers - mode: remove-brokers template: name: my-add-remove-brokers-rebalancing-template", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-b labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # cruiseControl: autoRebalance: - mode: add-brokers template: name: my-add-remove-brokers-rebalancing-template - mode: remove-brokers template: name: my-add-remove-brokers-rebalancing-template status: autoRebalance: lastTransitionTime: <timestamp_for_last_rebalance_state> state: RebalanceOnScaleDown 1 modes: 2 - mode: add-brokers brokers: <broker_ids> - mode: remove-brokers brokers: <broker_ids>", "annotate Kafka my-kafka-cluster strimzi.io/skip-broker-scaledown-check=\"true\"", "annotate Kafka my-kafka-cluster strimzi.io/skip-broker-scaledown-check-", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: topicOperator: {} userOperator: {} cruiseControl: brokerCapacity: inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s config: #`default.goals` (superset) must also include all `hard.goals` (subset) default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal 
com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: goals: - RackAwareGoal - TopicReplicaDistributionGoal skipHardGoalCheck: true", "describe kafkarebalance <kafka_rebalance_resource_name> -n <namespace>", "get kafkarebalance <kafka_rebalance_resource_name> -n <namespace> -o json | jq '.status.optimizationResult'", "Name: my-rebalance Namespace: myproject Labels: strimzi.io/cluster=my-cluster Annotations: API Version: kafka.strimzi.io/v1alpha1 Kind: KafkaRebalance Metadata: Status: Conditions: Last Transition Time: 2022-04-05T14:36:11.900Z Status: ProposalReady Type: State Observed Generation: 1 Optimization Result: Data To Move MB: 0 Excluded Brokers For Leadership: Excluded Brokers For Replica Move: Excluded Topics: Intra Broker Data To Move MB: 12 Monitored Partitions Percentage: 100 Num Intra Broker Replica Movements: 0 Num Leader Movements: 24 Num Replica Movements: 55 On Demand Balancedness Score After: 82.91290759174306 On Demand Balancedness Score Before: 78.01176356230222 Recent Windows: 5 Session Id: a4f833bd-2055-4213-bfdd-ad21f95bf184", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster annotations: strimzi.io/rebalance-auto-approval: \"true\" spec: mode: # any mode #", "describe configmaps <my_rebalance_configmap_name> -n <namespace>", "get configmaps <my_rebalance_configmap_name> -o json | jq '.[\"data\"][\"brokerLoad.json\"]|fromjson|.'", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: brokerCapacity: 1 inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s overrides: 2 - brokers: [0] inboundNetwork: 20000KiB/s outboundNetwork: 20000KiB/s - brokers: [1, 2] inboundNetwork: 30000KiB/s outboundNetwork: 30000KiB/s # config: 3 # Note that `default.goals` (superset) must also include all `hard.goals` (subset) default.goals: > 4 com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal # hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal # cpu.balance.threshold: 1.1 metadata.max.age.ms: 300000 send.buffer.bytes: 131072 webserver.http.cors.enabled: true 5 webserver.http.cors.origin: \"*\" webserver.http.cors.exposeheaders: \"User-Task-ID,Content-Type\" # resources: 6 requests: cpu: 1 memory: 512Mi limits: cpu: 2 memory: 2Gi logging: 7 type: inline loggers: rootLogger.level: INFO template: 8 pod: metadata: labels: label1: value1 securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120 readinessProbe: 9 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metricsConfig: 10 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: cruise-control-metrics key: metrics-config.yml", "apply -f <kafka_configuration_file>", "get deployments -n <my_cluster_operator_namespace>", "NAME READY UP-TO-DATE AVAILABLE my-cluster-cruise-control 1/1 1 1", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: {}", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: 
my-cluster spec: mode: full", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: mode: add-brokers brokers: [3, 4] 1", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: mode: remove-brokers brokers: [3, 4] 1", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: goals: - RackAwareGoal - ReplicaCapacityGoal", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: goals: - RackAwareGoal - ReplicaCapacityGoal skipHardGoalCheck: true", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster annotations: strimzi.io/rebalance-auto-approval: \"true\" spec: goals: - RackAwareGoal - ReplicaCapacityGoal skipHardGoalCheck: true", "apply -f <kafka_rebalance_configuration_file>", "get kafkarebalance -o wide -w -n <namespace>", "describe kafkarebalance <kafka_rebalance_resource_name>", "Status: Conditions: Last Transition Time: 2020-05-19T13:50:12.533Z Status: ProposalReady Type: State Observed Generation: 1 Optimization Result: Data To Move MB: 0 Excluded Brokers For Leadership: Excluded Brokers For Replica Move: Excluded Topics: Intra Broker Data To Move MB: 0 Monitored Partitions Percentage: 100 Num Intra Broker Replica Movements: 0 Num Leader Movements: 0 Num Replica Movements: 26 On Demand Balancedness Score After: 81.8666802863978 On Demand Balancedness Score Before: 78.01176356230222 Recent Windows: 1 Session Id: 05539377-ca7b-45ef-b359-e13564f1458c", "com.linkedin.kafka.cruisecontrol.exception.OptimizationFailureException: [CpuCapacityGoal] Insufficient capacity for cpu (Utilization 615.21, Allowed Capacity 420.00, Threshold: 0.70). Add at least 3 brokers with the same cpu capacity (100.00) as broker-0. 
Add at least 3 brokers with the same cpu capacity (100.00) as broker-0.", "annotate kafkarebalance <kafka_rebalance_resource_name> strimzi.io/rebalance=\"refresh\"", "get kafkarebalance -o wide -w -n <namespace>", "annotate kafkarebalance <kafka_rebalance_resource_name> strimzi.io/rebalance=\"approve\"", "get kafkarebalance -o wide -w -n <namespace>", "annotate kafkarebalance <kafka_rebalance_resource_name> strimzi.io/rebalance=\"stop\"", "describe kafkarebalance <kafka_rebalance_resource_name>", "describe kafkarebalance <kafka_rebalance_resource_name>", "annotate kafkarebalance <kafka_rebalance_resource_name> strimzi.io/rebalance=\"refresh\"", "describe kafkarebalance <kafka_rebalance_resource_name>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 #", "get kafkatopics my-topic -o yaml", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 # status: conditions: - lastTransitionTime: \"2024-01-18T16:13:50.490918232Z\" status: \"True\" type: Ready observedGeneration: 2 replicasChange: sessionId: 1aa418ca-53ed-4b93-b0a4-58413c4fc0cb 1 state: ongoing 2 targetReplicas: 3 3 topicName: my-topic", "apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-topic-operator labels: app: strimzi spec: # template: # spec: # containers: - name: strimzi-topic-operator # env: # - name: STRIMZI_CRUISE_CONTROL_ENABLED 1 value: true - name: STRIMZI_CRUISE_CONTROL_RACK_ENABLED 2 value: false - name: STRIMZI_CRUISE_CONTROL_HOSTNAME 3 value: cruise-control-api.namespace.svc - name: STRIMZI_CRUISE_CONTROL_PORT 4 value: 9090 - name: STRIMZI_CRUISE_CONTROL_SSL_ENABLED 5 value: true - name: STRIMZI_CRUISE_CONTROL_AUTH_ENABLED 6 value: true", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: my-project annotations: strimzi.io/node-pools: enabled spec: kafka: # cruiseControl: {} #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 2000Gi deleteClaim: false - id: 1 type: persistent-claim size: 2000Gi deleteClaim: false - id: 2 type: persistent-claim size: 2000Gi deleteClaim: false #", "run --restart=Never --image=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0 helper-pod -- /bin/sh -c \"sleep 3600\"", "exec -n myproject -ti my-cluster-pool-a-0 bin/kafka-log-dirs.sh --describe --bootstrap-server my-cluster-kafka-bootstrap:9092 --broker-list 0,1,2 --topic-list my-topic", "{ \"brokers\": [ { \"broker\": 0, 1 \"logDirs\": [ { \"partitions\": [ 2 { \"partition\": \"my-topic-5\", \"size\": 0, \"offsetLag\": 0, \"isFuture\": false }, { \"partition\": \"my-topic-2\", \"size\": 0, \"offsetLag\": 0, \"isFuture\": false } ], \"error\": null, 3 \"logDir\": \"/var/lib/kafka/data-2/kafka-log0\" 4 }, { \"partitions\": [ { \"partition\": \"my-topic-0\", \"size\": 0, \"offsetLag\": 0, \"isFuture\": false }, { \"partition\": \"my-topic-3\", \"size\": 0, \"offsetLag\": 0, \"isFuture\": false } ], \"error\": null, \"logDir\": \"/var/lib/kafka/data-0/kafka-log0\" }, { \"partitions\": [ { \"partition\": \"my-topic-4\", \"size\": 0, \"offsetLag\": 0, \"isFuture\": false }, { \"partition\": \"my-topic-1\", \"size\": 0, \"offsetLag\": 0, \"isFuture\": false } ], \"error\": null, \"logDir\": \"/var/lib/kafka/data-1/kafka-log0\" } ] 
}", "apiVersion: {KafkaRebalanceApiVersion} kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: mode: remove-disks moveReplicasOffVolumes: - brokerId: 0 1 volumeIds: [1, 2] 2", "apiVersion: {KafkaRebalanceApiVersion} kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster annotations: strimzi.io/rebalance-auto-approval: \"true\" spec: mode: remove-disks moveReplicasOffVolumes: - brokerId: 0 volumeIds: [1, 2]", "get kafkarebalance my-rebalance -n my-project -o yaml", "apiVersion: {KafkaRebalanceApiVersion} kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: mode: remove-disks moveReplicasOffVolumes: - brokerId: 0 volumeIds: [1, 2] status: - lastTransitionTime: \"2024-11-13T06:55:42.217794891Z\" status: \"True\" type: ProposalReady observedGeneration: 1 optimizationResult: afterBeforeLoadConfigMap: my-rebalance dataToMoveMB: 0 excludedBrokersForLeadership: [] excludedBrokersForReplicaMove: [] excludedTopics: [] intraBrokerDataToMoveMB: 0 monitoredPartitionsPercentage: 100 numIntraBrokerReplicaMovements: 26 numLeaderMovements: 0 numReplicaMovements: 0 onDemandBalancednessScoreAfter: 100 onDemandBalancednessScoreBefore: 0 provisionRecommendation: \"\" provisionStatus: UNDECIDED recentWindows: 1 sessionId: 24537b9c-a315-4715-8e86-01481e914771", "annotate kafkarebalance my-rebalance strimzi.io/rebalance=\"approve\"", "{ \"brokers\": [ { \"broker\": 0, \"logDirs\": [ { \"partitions\": [], \"error\": null, \"logDir\": \"/var/lib/kafka/data-2/kafka-log0\" }, { \"partitions\": [ { \"partition\": \"my-topic-4\", \"size\": 0, \"offsetLag\": 0, \"isFuture\": false }, { \"partition\": \"my-topic-5\", \"size\": 0, \"offsetLag\": 0, \"isFuture\": false }, { \"partition\": \"my-topic-0\", \"size\": 0, \"offsetLag\": 0, \"isFuture\": false }, { \"partition\": \"my-topic-1\", \"size\": 0, \"offsetLag\": 0, \"isFuture\": false }, { \"partition\": \"my-topic-2\", \"size\": 0, \"offsetLag\": 0, \"isFuture\": false }, { \"partition\": \"my-topic-3\", \"size\": 0, \"offsetLag\": 0, \"isFuture\": false } ], \"error\": null, \"logDir\": \"/var/lib/kafka/data-0/kafka-log0\" }, { \"partitions\": [], \"error\": null, \"logDir\": \"/var/lib/kafka/data-1/kafka-log0\" } ] }", "run helper-pod -ti --image=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0 --rm=true --restart=Never -- bash", "{ \"version\": 1, 1 \"partitions\": [ 2 { \"topic\": \"example-topic-1\", 3 \"partition\": 0, 4 \"replicas\": [1, 2, 3] 5 }, { \"topic\": \"example-topic-1\", \"partition\": 1, \"replicas\": [2, 3, 4] }, { \"topic\": \"example-topic-2\", \"partition\": 0, \"replicas\": [3, 4, 5] } ] }", "{ \"version\": 1, \"topics\": [ { \"topic\": \"my-topic\"} ] }", "{ \"version\": 1, \"partitions\": [ { \"topic\": \"example-topic-1\", \"partition\": 0, \"replicas\": [1, 2, 3] \"log_dirs\": [\"/var/lib/kafka/data-0/kafka-log1\", \"any\", \"/var/lib/kafka/data-1/kafka-log2\"] }, { \"topic\": \"example-topic-1\", \"partition\": 1, \"replicas\": [2, 3, 4] \"log_dirs\": [\"any\", \"/var/lib/kafka/data-2/kafka-log3\", \"/var/lib/kafka/data-3/kafka-log4\"] }, { \"topic\": \"example-topic-2\", \"partition\": 0, \"replicas\": [3, 4, 5] \"log_dirs\": [\"/var/lib/kafka/data-4/kafka-log5\", \"any\", \"/var/lib/kafka/data-5/kafka-log6\"] } ] }", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # listeners: # - name: tls port: 9093 type: internal tls: true 1 authentication: type: tls 2 #", "apiVersion: 
kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 3 config: retention.ms: 7200000 segment.bytes: 1073741824 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: 1 type: tls authorization: type: simple 2 acls: # access to the topic - resource: type: topic name: my-topic operations: - Create - Describe - Read - AlterConfigs host: \"*\" # access to the cluster - resource: type: cluster operations: - Alter - AlterConfigs host: \"*\" # #", "get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.p12}' | base64 -d > ca.p12", "get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.password}' | base64 -d > ca.password", "run --restart=Never --image=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0 <interactive_pod_name> -- /bin/sh -c \"sleep 3600\"", "cp ca.p12 <interactive_pod_name> :/tmp", "get secret <kafka_user> -o jsonpath='{.data.user\\.p12}' | base64 -d > user.p12", "get secret <kafka_user> -o jsonpath='{.data.user\\.password}' | base64 -d > user.password", "cp user.p12 <interactive_pod_name> :/tmp", "bootstrap.servers= <kafka_cluster_name> -kafka-bootstrap:9093 1 security.protocol=SSL 2 ssl.truststore.location=/tmp/ca.p12 3 ssl.truststore.password= <truststore_password> 4 ssl.keystore.location=/tmp/user.p12 5 ssl.keystore.password= <keystore_password> 6", "cp config.properties <interactive_pod_name> :/tmp/config.properties", "{ \"version\": 1, \"topics\": [ { \"topic\": \"my-topic\"} ] }", "cp topics.json <interactive_pod_name> :/tmp/topics.json", "exec -n <namespace> -ti <interactive_pod_name> /bin/bash", "bin/kafka-reassign-partitions.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/config.properties --topics-to-move-json-file /tmp/topics.json --broker-list 0,1,2,3,4 --generate", "cp reassignment.json <interactive_pod_name> :/tmp/reassignment.json", "exec -n <namespace> -ti <interactive_pod_name> /bin/bash", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --execute", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --throttle 5000000 --execute", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --throttle 10000000 --execute", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --verify", "cp reassignment.json <interactive_pod_name> :/tmp/reassignment.json", "exec -n <namespace> -ti <interactive_pod_name> /bin/bash", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --execute", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --throttle 5000000 --execute", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties 
--reassignment-json-file /tmp/reassignment.json --throttle 10000000 --execute", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --verify", "exec my-cluster-kafka-0 -c kafka -it -- /bin/bash -c \"ls -l /var/lib/kafka/kafka-log_<n>_ | grep -E '^d' | grep -vE '[a-zA-Z0-9.-]+\\.[a-z0-9]+-deleteUSD'\"", "{ \"version\": 1, \"topics\": [ { \"topic\": \"my-topic\"} ] }", "Current partition replica assignment {\"version\":1,\"partitions\":[{\"topic\":\"my-topic\",\"partition\":0,\"replicas\":[3,4,2,0],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":1,\"replicas\":[0,2,3,1],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":2,\"replicas\":[1,3,0,4],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]}]} Proposed partition reassignment configuration {\"version\":1,\"partitions\":[{\"topic\":\"my-topic\",\"partition\":0,\"replicas\":[0,1,2,3],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":1,\"replicas\":[1,2,3,4],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":2,\"replicas\":[2,3,4,0],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]}]}", "jq '.partitions[].replicas |= del(.[-1])' reassignment.json > reassignment.json", "{\"version\":1,\"partitions\":[{\"topic\":\"my-topic\",\"partition\":0,\"replicas\":[0,1,2],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":1,\"replicas\":[1,2,3],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":2,\"replicas\":[2,3,4],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]}]}", "cp reassignment.json <interactive_pod_name> :/tmp/reassignment.json", "exec -n <namespace> -ti <interactive_pod_name> /bin/bash", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --execute", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --verify", "bin/kafka-topics.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --describe", "my-topic Partition: 0 Leader: 0 Replicas: 0,1,2 Isr: 0,1,2 my-topic Partition: 1 Leader: 2 Replicas: 1,2,3 Isr: 1,2,3 my-topic Partition: 2 Leader: 3 Replicas: 2,3,4 Isr: 2,3,4", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 3 replicas: 3", "metrics ├── grafana-dashboards 1 │ ├── strimzi-cruise-control.json │ ├── strimzi-kafka-bridge.json │ ├── strimzi-kafka-connect.json │ ├── strimzi-kafka-exporter.json │ ├── strimzi-kafka-mirror-maker-2.json | ├── strimzi-kafka-oauth.json │ ├── strimzi-kafka.json | ├── strimzi-kraft.json │ ├── strimzi-operators.json │ └── strimzi-zookeeper.json ├── grafana-install │ └── grafana.yaml 2 ├── prometheus-additional-properties │ └── prometheus-additional.yaml 3 ├── prometheus-alertmanager-config │ └── alert-manager-config.yaml 4 ├── prometheus-install │ ├── alert-manager.yaml 5 │ ├── prometheus-rules.yaml 6 │ ├── prometheus.yaml 7 │ └── strimzi-pod-monitor.yaml 8 ├── kafka-bridge-metrics.yaml 9 ├── kafka-connect-metrics.yaml 10 ├── kafka-cruise-control-metrics.yaml 11 ├── kafka-metrics.yaml 12 ├── 
kafka-mirror-maker-2-metrics.yaml 13 └── oauth-metrics.yaml 14", "apply -f kafka-metrics.yaml", "edit kafka <kafka_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # metricsConfig: 1 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: kafka-metrics key: kafka-metrics-config.yml --- kind: ConfigMap 2 apiVersion: v1 metadata: name: kafka-metrics labels: app: strimzi data: kafka-metrics-config.yml: | # metrics configuration", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # kafkaExporter: image: my-registry.io/my-org/my-exporter-cluster:latest 1 groupRegex: \".*\" 2 topicRegex: \".*\" 3 groupExcludeRegex: \"^excluded-.*\" 4 topicExcludeRegex: \"^excluded-.*\" 5 showAllOffsets: false 6 resources: 7 requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi logging: debug 8 enableSaramaLogging: true 9 template: 10 pod: metadata: labels: label1: value1 imagePullSecrets: - name: my-docker-credentials securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120 readinessProbe: 11 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: 12 initialDelaySeconds: 15 timeoutSeconds: 5", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # bootstrapServers: my-cluster-kafka:9092 http: # enableMetrics: true #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # listeners: - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth enableMetrics: true configuration: # authorization: type: keycloak enableMetrics: true #", "get pods -n openshift-user-workload-monitoring", "NAME READY STATUS RESTARTS AGE prometheus-operator-5cc59f9bc6-kgcq8 1/1 Running 0 25s prometheus-user-workload-0 5/5 Running 1 14s prometheus-user-workload-1 5/5 Running 1 14s thanos-ruler-user-workload-0 3/3 Running 0 14s thanos-ruler-user-workload-1 3/3 Running 0 14s", "apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: cluster-operator-metrics labels: app: strimzi spec: selector: matchLabels: strimzi.io/kind: cluster-operator namespaceSelector: matchNames: - <project-name> 1 podMetricsEndpoints: - path: /metrics port: http", "apply -f strimzi-pod-monitor.yaml -n MY-PROJECT", "apply -f prometheus-rules.yaml -n MY-PROJECT", "create sa grafana-service-account -n my-project", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: grafana-cluster-monitoring-binding labels: app: strimzi subjects: - kind: ServiceAccount name: grafana-service-account namespace: my-project roleRef: kind: ClusterRole name: cluster-monitoring-view apiGroup: rbac.authorization.k8s.io", "apply -f grafana-cluster-monitoring-binding.yaml -n my-project", "apiVersion: v1 kind: Secret metadata: name: secret-sa annotations: kubernetes.io/service-account.name: \"grafana-service-account\" 1 type: kubernetes.io/service-account-token 2", "create -f <secret_configuration>.yaml", "describe sa/grafana-service-account | grep Tokens: describe secret grafana-service-account-token-mmlp9 | grep token:", "apiVersion: 1 datasources: - name: Prometheus type: prometheus url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 access: proxy basicAuth: false withCredentials: false isDefault: true jsonData: timeInterval: 5s tlsSkipVerify: true httpHeaderName1: \"Authorization\" secureJsonData: httpHeaderValue1: \"Bearer USD{ GRAFANA-ACCESS-TOKEN }\" 1 editable: true", 
"create configmap grafana-config --from-file=datasource.yaml -n MY-PROJECT", "apiVersion: apps/v1 kind: Deployment metadata: name: grafana labels: app: strimzi spec: replicas: 1 selector: matchLabels: name: grafana template: metadata: labels: name: grafana spec: serviceAccountName: grafana-service-account containers: - name: grafana image: grafana/grafana:11.5.1 ports: - name: grafana containerPort: 3000 protocol: TCP volumeMounts: - name: grafana-data mountPath: /var/lib/grafana - name: grafana-logs mountPath: /var/log/grafana - name: grafana-config mountPath: /etc/grafana/provisioning/datasources/datasource.yaml readOnly: true subPath: datasource.yaml readinessProbe: httpGet: path: /api/health port: 3000 initialDelaySeconds: 5 periodSeconds: 10 livenessProbe: httpGet: path: /api/health port: 3000 initialDelaySeconds: 15 periodSeconds: 20 volumes: - name: grafana-data emptyDir: {} - name: grafana-logs emptyDir: {} - name: grafana-config configMap: name: grafana-config --- apiVersion: v1 kind: Service metadata: name: grafana labels: app: strimzi spec: ports: - name: grafana port: 3000 targetPort: 3000 protocol: TCP selector: name: grafana type: ClusterIP", "apply -f <grafana-application> -n <my-project>", "create route edge <my-grafana-route> --service=grafana --namespace= KAFKA-NAMESPACE", "get routes NAME HOST/PORT PATH SERVICES MY-GRAFANA-ROUTE MY-GRAFANA-ROUTE-amq-streams.net grafana", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: # template: mirrorMakerContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: # template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # template: bridgeContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"https://otlp-host:4317\" - name: OTEL_EXPORTER_OTLP_CERTIFICATE value: \"/mnt/mysecret/my-certificate.crt\" volumeMounts: - name: tracing-secret-volume mountPath: /mnt/mysecret pod: volumes: - name: tracing-secret-volume secret: secretName: mysecret tracing: type: opentelemetry #", "<dependency> <groupId>io.opentelemetry.semconv</groupId> <artifactId>opentelemetry-semconv</artifactId> <version>1.21.0-alpha-redhat-00001</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-otlp</artifactId> <version>1.34.1</version> <exclusions> <exclusion> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-okhttp</artifactId> </exclusion> </exclusions> </dependency> <dependency> 
<groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-grpc-managed-channel</artifactId> <version>1.34.1</version> <scope>runtime</scope> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk-extension-autoconfigure</artifactId> <version>1.34.1</version> </dependency> <dependency> <groupId>io.opentelemetry.instrumentation</groupId> <artifactId>opentelemetry-kafka-clients-2.6</artifactId> <version>1.32.0-alpha</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk</artifactId> <version>1.34.1</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-jdk</artifactId> <version>1.34.1-alpha</version> <scope>runtime</scope> </dependency> <dependency> <groupId>io.grpc</groupId> <artifactId>grpc-netty-shaded</artifactId> <version>1.61.0</version> </dependency>", "OpenTelemetry ot = GlobalOpenTelemetry.get();", "GlobalTracer.register(tracer);", "// Producer instance Producer < String, String > op = new KafkaProducer < > ( configs, new StringSerializer(), new StringSerializer() ); Producer < String, String > producer = tracing.wrap(op); KafkaTracing tracing = KafkaTracing.create(GlobalOpenTelemetry.get()); producer.send(...); //consumer instance Consumer<String, String> oc = new KafkaConsumer<>( configs, new StringDeserializer(), new StringDeserializer() ); Consumer<String, String> consumer = tracing.wrap(oc); consumer.subscribe(Collections.singleton(\"mytopic\")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);", "senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); producer.send(...);", "consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); consumer.subscribe(Collections.singletonList(\"messages\")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);", "private class TracingKafkaClientSupplier extends DefaultKafkaClientSupplier { @Override public Producer<byte[], byte[]> getProducer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getProducer(config)); } @Override public Consumer<byte[], byte[]> getConsumer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getConsumer(config)); } @Override public Consumer<byte[], byte[]> getRestoreConsumer(Map<String, Object> config) { return this.getConsumer(config); } @Override public Consumer<byte[], byte[]> getGlobalConsumer(Map<String, Object> config) { return this.getConsumer(config); } }", "KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer); KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start();", "props.put(StreamsConfig.PRODUCER_PREFIX + ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); props.put(StreamsConfig.CONSUMER_PREFIX + ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, 
TracingConsumerInterceptor.class.getName());", "io.opentelemetry:opentelemetry-exporter-zipkin", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: # template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-zipkin-service - name: OTEL_EXPORTER_ZIPKIN_ENDPOINT value: http://zipkin-exporter-host-name:9411/api/v2/spans 1 - name: OTEL_TRACES_EXPORTER value: zipkin 2 tracing: type: opentelemetry #", "//Defines attribute extraction for a producer private static class ProducerAttribExtractor implements AttributesExtractor < ProducerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey(\"prod_start\"), \"prod1\"); } @Override public void onEnd(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey(\"prod_end\"), \"prod2\"); } } //Defines attribute extraction for a consumer private static class ConsumerAttribExtractor implements AttributesExtractor < ConsumerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey(\"con_start\"), \"con1\"); } @Override public void onEnd(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey(\"con_end\"), \"con2\"); } } //Extracts the attributes public static void main(String[] args) throws Exception { Map < String, Object > configs = new HashMap < > (Collections.singletonMap(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, \"localhost:9092\")); System.setProperty(\"otel.traces.exporter\", \"jaeger\"); System.setProperty(\"otel.service.name\", \"myapp1\"); KafkaTracing tracing = KafkaTracing.newBuilder(GlobalOpenTelemetry.get()) .addProducerAttributesExtractors(new ProducerAttribExtractor()) .addConsumerAttributesExtractors(new ConsumerAttribExtractor()) .build();", "apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration webhooks: - name: strimzi-drain-cleaner.strimzi.io rules: - apiGroups: [\"\"] apiVersions: [\"v1\"] operations: [\"CREATE\"] resources: [\"pods/eviction\"] scope: \"Namespaced\" clientConfig: service: namespace: \"strimzi-drain-cleaner\" name: \"strimzi-drain-cleaner\" path: /drainer port: 443 caBundle: Cg== #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 config: # min.insync.replicas: 2 #", "apiVersion: apps/v1 kind: Deployment spec: # template: spec: serviceAccountName: strimzi-drain-cleaner containers: - name: strimzi-drain-cleaner # env: - name: STRIMZI_DENY_EVICTION value: \"true\" - name: STRIMZI_DRAIN_KAFKA value: \"true\" - name: STRIMZI_DRAIN_ZOOKEEPER value: \"false\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: template: podDisruptionBudget: maxUnavailable: 0 # zookeeper: template: podDisruptionBudget: maxUnavailable: 0 #", "apply -f <kafka_configuration_file>", "apply -f ./install/drain-cleaner/openshift", "get nodes drain <name-of-node> --delete-emptydir-data --ignore-daemonsets --timeout=6000s --force", "INFO ... Received eviction webhook for Pod my-cluster-zookeeper-2 in namespace my-project INFO ... 
Pod my-cluster-zookeeper-2 in namespace my-project will be annotated for restart INFO ... Pod my-cluster-zookeeper-2 in namespace my-project found and annotated for restart INFO ... Received eviction webhook for Pod my-cluster-kafka-0 in namespace my-project INFO ... Pod my-cluster-kafka-0 in namespace my-project will be annotated for restart INFO ... Pod my-cluster-kafka-0 in namespace my-project found and annotated for restart", "INFO PodOperator:68 - Reconciliation #13(timer) Kafka(my-project/my-cluster): Rolling Pod my-cluster-zookeeper-2 INFO PodOperator:68 - Reconciliation #13(timer) Kafka(my-project/my-cluster): Rolling Pod my-cluster-kafka-0 INFO AbstractOperator:500 - Reconciliation #13(timer) Kafka(my-project/my-cluster): reconciled", "apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-drain-cleaner labels: app: strimzi-drain-cleaner namespace: strimzi-drain-cleaner spec: # spec: serviceAccountName: strimzi-drain-cleaner containers: - name: strimzi-drain-cleaner # env: - name: STRIMZI_DRAIN_KAFKA value: \"true\" - name: STRIMZI_DRAIN_ZOOKEEPER value: \"true\" - name: STRIMZI_CERTIFICATE_WATCH_ENABLED value: \"true\" - name: STRIMZI_CERTIFICATE_WATCH_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_CERTIFICATE_WATCH_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name #", "./report.sh --namespace=<cluster_namespace> --cluster=<cluster_name> --out-dir=<local_output_directory>", "./report.sh --namespace=my-amq-streams-namespace --cluster=my-kafka-cluster --bridge=my-bridge-component --secrets=all --out-dir=~/reports", "annotate strimzipodset <cluster_name>-kafka strimzi.io/manual-rolling-update=\"true\" annotate strimzipodset <cluster_name>-zookeeper strimzi.io/manual-rolling-update=\"true\" annotate strimzipodset <cluster_name>-connect strimzi.io/manual-rolling-update=\"true\" annotate strimzipodset <cluster_name>-mirrormaker2 strimzi.io/manual-rolling-update=\"true\"", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 config: # min.insync.replicas: 2 #", "annotate pod <cluster_name>-kafka-<index_number> strimzi.io/manual-rolling-update=\"true\" annotate pod <cluster_name>-zookeeper-<index_number> strimzi.io/manual-rolling-update=\"true\" annotate pod <cluster_name>-connect-<index_number> strimzi.io/manual-rolling-update=\"true\" annotate pod <cluster_name>-mirrormaker2-<index_number> strimzi.io/manual-rolling-update=\"true\"", "-n kafka get events --field-selector reportingController=strimzi.io/cluster-operator", "LAST SEEN TYPE REASON OBJECT MESSAGE 2m Normal CaCertRenewed pod/strimzi-cluster-kafka-0 CA certificate renewed 58m Normal PodForceRestartOnError pod/strimzi-cluster-kafka-1 Pod needs to be forcibly restarted due to an error 5m47s Normal ManualRollingUpdate pod/strimzi-cluster-kafka-2 Pod was manually annotated to be rolled", "-n kafka get events --field-selector reportingController=strimzi.io/cluster-operator,reason=PodForceRestartOnError", "-n kafka get events --field-selector reportingController=strimzi.io/cluster-operator,reason=PodForceRestartOnError -o yaml", "apiVersion: v1 items: - action: StrimziInitiatedPodRestart apiVersion: v1 eventTime: \"2022-05-13T00:22:34.168086Z\" firstTimestamp: null involvedObject: kind: Pod name: strimzi-cluster-kafka-1 namespace: kafka kind: Event lastTimestamp: null message: Pod needs to be forcibly restarted due to an error metadata: creationTimestamp: \"2022-05-13T00:22:34Z\" generateName: 
strimzi-event name: strimzi-eventwppk6 namespace: kafka resourceVersion: \"432961\" uid: 29fcdb9e-f2cf-4c95-a165-a5efcd48edfc reason: PodForceRestartOnError reportingController: strimzi.io/cluster-operator reportingInstance: strimzi-cluster-operator-6458cfb4c6-6bpdp source: {} type: Normal kind: List metadata: resourceVersion: \"\" selfLink: \"\"", "env: - name: STRIMZI_FEATURE_GATES value: -ControlPlaneListener", "env: - name: STRIMZI_FEATURE_GATES value: +ControlPlaneListener", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 config: # min.insync.replicas: 2 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # template: podDisruptionBudget: maxUnavailable: 0", "annotate pod my-cluster-pool-a-1 strimzi.io/manual-rolling-update=\"true\"", "sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "create -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace> create -f install/cluster-operator/023-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace> create -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n <watched_namespace>", "create clusterrolebinding strimzi-cluster-operator-namespaced --clusterrole=strimzi-cluster-operator-namespaced --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator create clusterrolebinding strimzi-cluster-operator-watched --clusterrole=strimzi-cluster-operator-watched --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator create clusterrolebinding strimzi-cluster-operator-entity-operator-delegation --clusterrole=strimzi-entity-operator --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator", "replace -f install/cluster-operator", "get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'", "registry.redhat.io/amq-streams/strimzi-kafka-39-rhel9:2.9.0", "delete USD(oc get kt -n <namespace> -o name | grep strimzi-store-topic) && oc delete USD(oc get kt -n <namespace> -o name | grep strimzi-topic-operator)", "annotate USD(oc get kt -n <namespace> -o name | grep consumer-offsets) strimzi.io/managed=\"false\" && oc annotate USD(oc get kt -n <namespace> -o name | grep transaction-state) strimzi.io/managed=\"false\"", "delete USD(oc get kt -n <namespace> -o name | grep consumer-offsets) && oc delete USD(oc get kt -n <namespace> -o name | grep transaction-state)", "get kafka <kafka_cluster_name> -n <namespace> -o jsonpath='{.status.conditions}'", "edit kafka <kafka_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 metadataVersion: 3.8-IV2 version: 3.8.0 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 metadataVersion: 3.8-IV2 1 version: 3.9.0 2 #", "get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 metadataVersion: 3.9-IV2 version: 3.9.0 #", "edit kafka <kafka_configuration_file>", "kind: Kafka spec: # kafka: version: 3.8.0 config: log.message.format.version: \"3.8\" inter.broker.protocol.version: \"3.8\" #", "apiVersion: 
kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 3.9.0 1 config: log.message.format.version: \"3.8\" 2 inter.broker.protocol.version: \"3.8\" 3 #", "get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 3.9.0 config: log.message.format.version: \"3.8\" inter.broker.protocol.version: \"3.9\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 3.9.0 config: log.message.format.version: \"3.9\" inter.broker.protocol.version: \"3.9\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: spec: # status: # kafkaVersion: 3.9.0 operatorLastSuccessfulVersion: 2.9 kafkaMetadataVersion: 3.9", "edit kafka <kafka_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 metadataVersion: 3.8-IV2 1 version: 3.9.0 2 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 metadataVersion: 3.8-IV2 1 version: 3.8.0 2 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 3.9.0 config: inter.broker.protocol.version: \"3.8\" log.message.format.version: \"3.8\" #", "edit kafka <kafka_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # kafka: version: 3.9.0 1 config: inter.broker.protocol.version: \"3.8\" 2 log.message.format.version: \"3.8\" #", "get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # kafka: version: 3.8.0 1 config: inter.broker.protocol.version: \"3.8\" 2 log.message.format.version: \"3.8\" #", "run kafka-admin -ti --image=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete", "sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "replace -f install/cluster-operator", "get pod my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'", "get <resource_type> --all-namespaces | grep <kafka_cluster_name>", "delete subscription amq-streams -n openshift-operators", "delete csv amqstreams. <version> -n openshift-operators", "get crd -l app=strimzi -o name | xargs oc delete", "get <resource_type> --all-namespaces | grep <kafka_cluster_name>", "delete -f install/cluster-operator", "delete <resource_type> <resource_name> -n <namespace>", "delete secret my-cluster-clients-ca-cert -n my-project", "apiVersion: v1 kind: PersistentVolume spec: # persistentVolumeReclaimPolicy: Retain", "apiVersion: v1 kind: StorageClass metadata: name: gp2-retain parameters: # reclaimPolicy: Retain", "apiVersion: v1 kind: PersistentVolume spec: # storageClassName: gp2-retain", "get pv", "NAME RECLAIMPOLICY CLAIM pvc-5e9c5c7f-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-0-my-cluster-broker-0 pvc-5e9cc72d-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-0-my-cluster-broker-1 pvc-5ead43d1-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-0-my-cluster-broker-2 pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c ... Retain ... 
myproject/data-0-my-cluster-controller-3 pvc-7e21042e-3317-11ea-9786-02deaf9aa87e ... Retain ... myproject/data-0-my-cluster-controller-4 pvc-7e226978-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-0-my-cluster-controller-5", "create namespace myproject", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-0-my-cluster-broker-0 spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: gp2-retain volumeMode: Filesystem volumeName: pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c", "apiVersion: v1 kind: PersistentVolume metadata: annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner pv.kubernetes.io/bound-by-controller: \"yes\" pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs creationTimestamp: \"<date>\" finalizers: - kubernetes.io/pv-protection labels: failure-domain.beta.kubernetes.io/region: eu-west-1 failure-domain.beta.kubernetes.io/zone: eu-west-1c name: pvc-5ead43d1-3317-11ea-97b0-0aef8816c7ea resourceVersion: \"39431\" selfLink: /api/v1/persistentvolumes/pvc-7e226978-3317-11ea-97b0-0aef8816c7ea uid: 7efe6b0d-3317-11ea-a650-06e1eadd9a4c spec: accessModes: - ReadWriteOnce awsElasticBlockStore: fsType: xfs volumeID: aws://eu-west-1c/vol-09db3141656d1c258 capacity: storage: 100Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: \"39113\" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: failure-domain.beta.kubernetes.io/zone operator: In values: - eu-west-1c - key: failure-domain.beta.kubernetes.io/region operator: In values: - eu-west-1 persistentVolumeReclaimPolicy: Retain storageClassName: gp2-retain volumeMode: Filesystem", "claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-broker-2 namespace: myproject resourceVersion: \"39113\" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea", "create -f install/cluster-operator -n myproject", "apply -f <topic_configuration_file> -n myproject", "apply -f <user_configuration_file> -n myproject", "apply -f <kafka_resource_configuration>.yaml -n myproject", "PVC_NAME=\"data-0-my-cluster-kafka-0\" COMMAND=\"grep cluster.id /disk/kafka-log*/meta.properties | awk -F'=' '{print \\USD2}'\" run tmp -itq --rm --restart \"Never\" --image \"foo\" --overrides \"{\\\"spec\\\": {\\\"containers\\\":[{\\\"name\\\":\\\"busybox\\\",\\\"image\\\":\\\"busybox\\\",\\\"command\\\":[\\\"/bin/sh\\\", \\\"-c\\\",\\\"USDCOMMAND\\\"],\\\"volumeMounts\\\":[{\\\"name\\\":\\\"disk\\\",\\\"mountPath\\\":\\\"/disk\\\"}]}], \\\"volumes\\\":[{\\\"name\\\":\\\"disk\\\",\\\"persistentVolumeClaim\\\":{\\\"claimName\\\": \\\"USDPVC_NAME\\\"}}]}}\" -n myproject", "edit kafka <cluster-name> --subresource status -n myproject", "annotate kafka my-cluster strimzi.io/pause-reconciliation=false --overwrite -n myproject", "get kafkatopics -o wide -w -n myproject", "NAME CLUSTER PARTITIONS REPLICATION FACTOR READY my-topic-1 my-cluster 10 3 True my-topic-2 my-cluster 10 3 True my-topic-3 my-cluster 10 3 True", "get kafkausers -o wide -w -n myproject", "NAME CLUSTER AUTHENTICATION AUTHORIZATION READY my-user-1 my-cluster tls simple True my-user-2 my-cluster tls simple True my-user-3 my-cluster tls simple True", "get pv", "NAME RECLAIMPOLICY CLAIM pvc-5e9c5c7f-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-my-cluster-zookeeper-1 pvc-5e9cc72d-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-0 pvc-5ead43d1-3317-11ea-97b0-0aef8816c7ea ... 
Retain ... myproject/data-my-cluster-zookeeper-2 pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-0-my-cluster-kafka-0 pvc-7e21042e-3317-11ea-9786-02deaf9aa87e ... Retain ... myproject/data-0-my-cluster-kafka-1 pvc-7e226978-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-0-my-cluster-kafka-2", "create namespace myproject", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-0-my-cluster-kafka-0 spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: gp2-retain volumeMode: Filesystem volumeName: pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c", "apiVersion: v1 kind: PersistentVolume metadata: annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner pv.kubernetes.io/bound-by-controller: \"yes\" pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs creationTimestamp: \"<date>\" finalizers: - kubernetes.io/pv-protection labels: failure-domain.beta.kubernetes.io/region: eu-west-1 failure-domain.beta.kubernetes.io/zone: eu-west-1c name: pvc-7e226978-3317-11ea-97b0-0aef8816c7ea resourceVersion: \"39431\" selfLink: /api/v1/persistentvolumes/pvc-7e226978-3317-11ea-97b0-0aef8816c7ea uid: 7efe6b0d-3317-11ea-a650-06e1eadd9a4c spec: accessModes: - ReadWriteOnce awsElasticBlockStore: fsType: xfs volumeID: aws://eu-west-1c/vol-09db3141656d1c258 capacity: storage: 100Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: \"39113\" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: failure-domain.beta.kubernetes.io/zone operator: In values: - eu-west-1c - key: failure-domain.beta.kubernetes.io/region operator: In values: - eu-west-1 persistentVolumeReclaimPolicy: Retain storageClassName: gp2-retain volumeMode: Filesystem", "claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: \"39113\" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea", "create -f install/cluster-operator -n myproject", "apply -f <topic_configuration_file> -n myproject", "apply -f <user_configuration_file> -n myproject", "apply -f <kafka_resource_configuration>.yaml -n myproject", "get kafkatopics -o wide -w -n myproject", "NAME CLUSTER PARTITIONS REPLICATION FACTOR READY my-topic-1 my-cluster 10 3 True my-topic-2 my-cluster 10 3 True my-topic-3 my-cluster 10 3 True", "get kafkausers -o wide -w -n myproject", "NAME CLUSTER AUTHENTICATION AUTHORIZATION READY my-user-1 my-cluster tls simple True my-user-2 my-cluster tls simple True my-user-3 my-cluster tls simple True", "com.company=Red_Hat rht.prod_name=Red_Hat_Application_Foundations rht.prod_ver=2025.Q1 rht.comp=AMQ_Streams rht.comp_ver=2.9 rht.subcomp=entity-operator rht.subcomp_t=infrastructure", "com.company=Red_Hat rht.prod_name=Red_Hat_Application_Foundations rht.prod_ver=2025.Q1 rht.comp=AMQ_Streams rht.comp_ver=2.9 rht.subcomp=kafka-bridge rht.subcomp_t=application", "dnf install <package_name>", "dnf install <path_to_download_package>" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html-single/deploying_and_managing_streams_for_apache_kafka_on_openshift/index
Automating SAP HANA Scale-Out System Replication using the RHEL HA Add-On
Automating SAP HANA Scale-Out System Replication using the RHEL HA Add-On Red Hat Enterprise Linux for SAP Solutions 9 Red Hat Customer Content Services
[ "search fence-agents", "subscription-manager release Release: 8.2 [root:~]# cat /etc/redhat-release Red Hat Enterprise Linux release 8.2 (Ootpa) [root:~]#", "subscription-manager register", "subscription-manager list --available --matches=\"rhel-8-for-x86_64-sap-solutions-rpms\"", "subscription-manager attach --pool=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "yum repolist | grep sap-solution rhel-9-for-x86_64-sap-solutions-rpms RHEL for x86_64 - SAP Solutions (RPMs)", "subscription-manager repos --enable=rhel-9-for-x86_64-sap-solutions-rpms --enable=rhel-9-for-x86_64-highavailability-rpms", "yum update -y", "nmcli con add con-name eth1 ifname eth1 autoconnect yes type ethernet ip4 192.168.101.101/24 nmcli con add con-name eth2 ifname eth2 autoconnect yes type ethernet ip4 192.168.102.101/24", "cat << EOF >> /etc/hosts 10.0.1.21 dc1hana01.example.com dc1hana01 10.0.1.22 dc1hana02.example.com dc1hana02 10.0.1.23 dc1hana03.example.com dc1hana03 10.0.1.24 dc1hana04.example.com dc1hana04 10.0.1.31 dc2hana01.example.com dc2hana01 10.0.1.32 dc2hana02.example.com dc2hana02 10.0.1.33 dc2hana03.example.com dc2hana03 10.0.1.34 dc2hana04.example.com dc2hana04 10.0.1.41 majoritymaker.example.com majoritymaker EOF", "mkdir -p /usr/sap", "mkfs -t xfs -b size=4096 /dev/sdb", "echo \"/dev/sdb /usr/sap xfs defaults 1 6\" >> /etc/fstab", "mount /usr/sap", "yum install -y nfs-utils", "mkdir -p /hana/{shared,data,log} cat <<EOF >> /etc/fstab 10.0.1.61:/data/dc1/shared /hana/shared nfs4 defaults 0 0 10.0.1.61:/data/dc1/data /hana/data nfs4 defaults 0 0 10.0.1.61:/data/dc1/log /hana/log nfs4 defaults 0 0 EOF", "mount -a", "mkdir -p /hana/{shared,data,log} cat <<EOF >> /etc/fstab 10.0.1.62:/data/dc2/shared /hana/shared nfs4 defaults 0 0 10.0.1.62:/data/dc2/data /hana/data nfs4 defaults 0 0 10.0.1.62:/data/dc2/log /hana/log nfs4 defaults 0 0 EOF", "mount -a", "hostnamectl set-hostname dc1hana01", "hostname <hostname> [root:~]# hostname -s <hostname> [root:~]# hostname -f <hostname>.example.com [root:~]# hostname -d example.com", "localectl set-locale LANG=en_US.UTF-8", "yum -y install chrony [root:~]# systemctl stop chronyd.service", "grep ^server /etc/chrony.conf server 0.de.pool.ntp.org server 1.de.pool.ntp.org", "systemctl enable chronyd.service [root:~]# systemctl start chronyd.service [root:~]# systemctl restart systemd-timedated.service", "systemctl status chronyd.service chronyd.service enabled [root:~]# chronyc sources 210 Number of sources = 3 MS Name/IP address Stratum Poll Reach LastRx Last sample ===================================================================== ^* 0.de.pool.ntp.org 2 8 377 200 -2659ns[-3000ns] +/- 28ms ^-de.pool.ntp.org 2 8 377 135 -533us[ -533us] +/- 116ms ^-ntp2.example.com 2 9 377 445 +14ms[ +14ms] +/- 217ms", "adduser sapadm --uid 996 [root:~]# groupadd sapsys --gid 79 [root:~]# passwd sapadm", "export TEMPDIR=USD(mktemp -d) [root:~]# export INSTALLDIRHOSTAGENT=/install/HANA/DATA_UNITS/HDB_SERVER_LINUX_X86_64/ [root:~]# systemctl disable abrtd [root:~]# systemctl disable abrt-ccpp [root:~]# cp -rp USD{INSTALLDIRHOSTAGENT}/server/HOSTAGENT.TGZ USDTEMPDIR/ cd USDTEMPDIR [root:~]# tar -xzvf HOSTAGENT.TGZ [root:~]# cd global/hdb/saphostagent_setup/ [root:~]# ./saphostexec -install", "export MYHOSTNAME=USD(hostname) [root:~]# export SSLPASSWORD=Us3Your0wnS3cur3Password [root:~]# export LD_LIBRARY_PATH=/usr/sap/hostctrl/exe/ [root:~]# export SECUDIR=/usr/sap/hostctrl/exe/sec [root:~]# cd /usr/sap/hostctrl/exe [root:~]# mkdir /usr/sap/hostctrl/exe/sec [root:~]# /usr/sap/hostctrl/exe/sapgenpse 
gen_pse -p SAPSSLS.pse -x USDSSLPASSWORD -r /tmp/USD{MYHOSTNAME}-csr.p10 \"CN=USDMYHOSTNAME\" [root:~]# /usr/sap/hostctrl/exe/sapgenpse seclogin -p SAPSSLS.pse -x USDSSLPASSWORD -O sapadm chown sapadm /usr/sap/hostctrl/exe/sec/SAPSSLS.pse [root:~]# /usr/sap/hostctrl/exe/saphostexec -restart *", "netstat -tulpen | grep sapstartsrv tcp 0 0 0.0.0.0:50014 0.0.0.0:* LISTEN 1002 84028 4319/sapstartsrv tcp 0 0 0.0.0.0:50013 0.0.0.0:* LISTEN 1002 47542 4319/sapstartsrv", "netstat -tulpen | grep 1129 tcp 0 0 0.0.0.0:1129 0.0.0.0:* LISTEN 996 25632 1345/sapstartsrv", "./hdblcm --action=configure_internal_network", "/hana/shared/RH1/hdblcm/hdblcm", "INSTALLDIR=/install/51053381/DATA_UNITS HDB_SERVER_LINUX_X86_64/ [root:~]# cd USDINSTALLDIR [root:~]# ./hdblcm --dump_configfile_template=/tmp/templateFile", "cat /tmp/templateFile.xml | ./hdblcm \\ --batch \\ --sid=RH1 \\ --number=10 \\ --action=install \\ --hostname=dc1hana01 \\ --addhosts=dc1hana02:role=worker,dc1hana03:role=worker,dc1hana04:role =standby \\ --install_hostagent \\ --system_usage=test \\ --sapmnt=/hana/shared \\ --datapath=/hana/data \\ --logpath=/hana/log \\ --root_user=root \\ --workergroup=default \\ --home=/usr/sap/RH1/home \\ --userid=79 \\ --shell=/bin/bash \\ --groupid=79 \\ --read_password_from_stdin=xml \\ --internal_network=192.168.101.0/24 \\ --remote_execution=saphostagent", "cat /tmp/templateFile.xml | ./hdblcm \\ --batch \\ --sid=RH1 \\ --number=10 \\ --action=install \\ --hostname=dc2hana01 \\ --addhosts=dc2hana02:role=worker,dc2hana03:role=worker,dc2hana04:role =standby \\ --install_hostagent \\ --system_usage=test \\ --sapmnt=/hana/shared \\ --datapath=/hana/data \\ --logpath=/hana/log \\ --root_user=root \\ --workergroup=default \\ --home=/usr/sap/RH1/home \\ --userid=79 \\ --shell=/bin/bash \\ --groupid=79 \\ --read_password_from_stdin=xml \\ --internal_network=192.168.101.0/24 \\ --remote_execution=saphostagent", "su - rh1adm /usr/sap/hostctrl/exe/sapcontrol -nr 10 -function GetSystemInstanceList 10.04.2019 08:38:21 GetSystemInstanceList OK hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus dc1hana01,10,51013,51014,0.3,HDB|HDB_WORKER, GREEN dc1hana03,10,51013,51014,0.3,HDB|HDB_STANDBY, GREEN dc1hana02,10,51013,51014,0.3,HDB|HDB_WORKER, GREEN dc1hana04,10,51013,51014,0.3,HDB|HDB_WORKER, GREEN rh1adm@dc1hana01:/usr/sap/RH1/HDB10> HDBSettings.sh landscapeHostConfiguration.py | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | | --------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- | | dc1hana01 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default | | dc1hana02 | yes | ok | | | 2 | 2 | default | default | master 3 | slave | worker | slave | worker | worker | default | default | | dc1hana03 | yes | ok | | | 2 | 2 | default | default | master 3 | slave | worker | slave | worker | worker | default | default | | dc1hana04 | yes | ignore | | | 0 | 0 | default | default | master 2 | slave | standby | standby | standby | standby | 
default | - | rh1adm@dc1hana01: HDB info USER PID PPID %CPU VSZ RSS COMMAND rh1adm 31321 31320 0.0 116200 2824 -bash rh1adm 32254 31321 0.0 113304 1680 \\_ /bin/sh /usr/sap/RH1/HDB10/HDB info rh1adm 32286 32254 0.0 155356 1868 \\_ ps fx -U rh1adm -o user:8,pid:8,ppid:8,pcpu:5,vsz:10,rss:10,args rh1adm 27853 1 0.0 23916 1780 sapstart pf=/hana/shared/RH1/profile/RH1_HDB10_dc1hana01 rh1adm 27863 27853 0.0 262272 32368 \\_ /usr/sap/RH1/HDB10/dc1hana01/trace/hdb.sapRH1_HDB10 -d -nw -f /usr/sap/RH1/HDB10/dc1hana01/daemon.ini pf=/usr/sap/RH1/SYS/profile/RH1_HDB10_dc1hana01 rh1adm 27879 27863 53.0 9919108 6193868 \\_ hdbnameserver rh1adm 28186 27863 0.7 1860416 268304 \\_ hdbcompileserver rh1adm 28188 27863 65.8 3481068 1834440 \\_ hdbpreprocessor rh1adm 28228 27863 48.2 9431440 6481212 \\_ hdbindexserver -port 31003 rh1adm 28231 27863 2.1 3064008 930796 \\_ hdbxsengine -port 31007 rh1adm 28764 27863 1.1 2162344 302344 \\_ hdbwebdispatcher rh1adm 27763 1 0.2 502424 23376 /usr/sap/RH1/HDB10/exe/sapstartsrvpf=/hana/shared/RH1/profile/RH1_HDB10_dc1hana01 -D -u rh1adm", "Do this as root [root@dc1hana01]# mkdir -p /hana/shared/backup/ [root@dc1hana01]# chown rh1adm /hana/shared/backup/ [root@dc1hana01]# su - rh1adm [rh1adm@dc1hana01]% hdbsql -i 10 -u SYSTEM -d SYSTEMDB \"BACKUP DATA USING FILE ('/hana/shared/backup/')\" [rh1adm@dc1hana01]% hdbsql -i 10 -u SYSTEM -d RH1 \"BACKUP DATA USING FILE ('/hana/shared/backup/')\"", "su - rh1adm [rh1adm@dc1hana01]% hdbnsutil -sr_enable --name=DC1 nameserver is active, proceeding ... successfully enabled system as system replication source site done.", "scp -rp /usr/sap/RH1/SYS/global/security/rsecssfs/data/SSFS_RH1.DAT root@dc2hana01:/usr/sap/RH1/SYS/global/security/rsecssfs/data/SSFS_RH 1.DAT [root@dc1hana01]# scp -rp /usr/sap/RH1/SYS/global/security/rsecssfs/key/SSFS_RH1.KEY root@dc2hana01:/usr/sap/RH1/SYS/global/security/rsecssfs/key/SSFS_RH1 .KEY", "su - rh1adm [rh1adm@dc1hana01]% hdbnsutil -sr_register --name=DC2 \\ --remoteHost=dc1hana03 --remoteInstance=10 \\ --replicationMode=sync --operationMode=logreplay \\ --online # Start System [rh1adm@dc1hana01]% /usr/sap/hostctrl/exe/sapcontrol -nr 10 -function StartSystem", "GetInstanceList: rh1adm@dc2hana01:/usr/sap/RH1/HDB10> /usr/sap/hostctrl/exe/sapcontrol -nr 10 -function GetSystemInstanceList 01.04.2019 14:17:28 GetSystemInstanceList OK hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus dc2hana02, 10, 51013, 51014, 0.3, HDB|HDB_WORKER, GREEN dc2hana01, 10, 51013, 51014, 0.3, HDB|HDB_WORKER, GREEN dc2hana04, 10, 51013, 51014, 0.3, HDB|HDB_STANDBY, GREEN dc2hana03, 10, 51013, 51014, 0.3, HDB|HDB_WORKER, GREEN Check landscapeHostConfiguration: rh1adm@dc2hana01:/usr/sap/RH1/HDB10> HDBSettings.sh landscapeHostConfiguration.py Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | | | | | | | | | | | | | | | | | | | | dc2hana01 | yes | ok | | | 1 | | default | default | master 1 | master | worker | master | worker | worker | default | default | | dc2hana02 | yes | ok | | | 2 | | default | default | slave | slave | worker | slave | worker | worker | default | default | | dc2hana03 | yes | ok | | | 3 | | default | default | master 3 | slave | worker | slave | worker | 
worker | default | default | | dc2hana04 | yes | ignore | | | 0 | 0 | default | default | master 2 | slave | standby | standby | standby | standby | default | - | overall host status: ok", "rh1adm@dc1hana01: /usr/sap/hostctrl/exe/sapcontrol -nr 10 -function GetSystemInstanceList rh1adm@dc1hana01:/hana/shared/backup> /usr/sap/hostctrl/exe/sapcontrol -nr 10 -function GetSystemInstanceList Red Hat Enterprise Linux HA Solution for SAP HANA Scale Out and System Replication Page 55 26.03.2019 12:41:13 GetSystemInstanceList OK hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus dc1hana01, 10, 51013, 51014, 0.3, HDB|HDB_WORKER, GREEN dc1hana02, 10, 51013, 51014, 0.3, HDB|HDB_WORKER, GREEN dc1hana03, 10, 51013, 51014, 0.3, HDB|HDB_WORKER, GREEN dc1hana04, 10, 51013, 51014, 0.3, HDB|HDB_STANDBY, GREEN rh1adm@dc1hana01:/usr/sap/RH1/HDB10> HDBSettings.sh landscapeHostConfiguration.py | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | | --------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- | | dc1hana01 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default | | dc1hana02 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default | | dc1hana03 | yes | ok | | | 3 | 3 | default | default | slave | slave | worker | slave | worker | worker | default | default | | dc1hana04 | yes | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | Red Hat Enterprise Linux HA Solution for SAP HANA Scale Out and System Replication Page 56 standby | standby | standby | default | - | overall host status: ok rh1adm@dc1hana01:/usr/sap/RH1/HDB10> # Show Systemreplication state rh1adm@dc1hana01:/usr/sap/RH1/HDB10> HDBSettings.sh systemReplicationStatus.py | Database | Host | Port | Service Name | Volume ID | Site ID | Site Name | Secondary | Secondary | Secondary | Secondary | Secondary | Replication | Replication | Replication | | | | | | | | | Host | Port | Site ID | Site Name | Active Status | Mode | Status | Status Details | | -------- | --------- | ----- | ------------ | --------- | ------- | --------- | --------- | --------- | --------- | --------- | ------------- | ----------- | ----------- | -------------- | | SYSTEMDB | dc1hana01 | 31001 | nameserver | 1 | 1 | DC1 | dc2hana01 | 31001 | 2 | DC2 | YES | SYNC | ACTIVE | | | RH1 | dc1hana01 | 31007 | xsengine | 2 | 1 | DC1 | dc2hana01 | 31007 | 2 | DC2 | YES | SYNC | ACTIVE | | | RH1 | dc1hana01 | 31003 | indexserver | 3 | 1 | DC1 | dc2hana01 | 31003 | 2 | DC2 | YES | SYNC | ACTIVE | | | RH1 | dc1hana03 | 31003 | indexserver | 5 | 1 | DC1 | dc2hana03 | 31003 | 2 | DC2 | YES | SYNC | ACTIVE | | | RH1 | dc1hana02 | 31003 | indexserver | 4 | 1 | DC1 | dc2hana02 | 31003 | 2 | DC2 | YES | SYNC | ACTIVE | | status system replication site \"2\": ACTIVE overall system replication status: ACTIVE Local System Replication State Red Hat Enterprise Linux HA Solution for SAP HANA Scale Out and System Replication Page 57 
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ mode: PRIMARY site id: 1 site name: DC1 rh1adm@dc1hana01:/usr/sap/RH1/HDB10>", "rh1adm@dc1hana01:/usr/sap/RH1/HDB10> HDBSettings.sh systemReplicationStatus.py | Database | Host | Port | Service Name | Volume ID | Site ID | Site Name | Secondary | Secondary | Secondary | Secondary | Secondary | Replication | Replication | Replication | | | | | | | | | Host | Port | Site ID | Site Name | Active Status | Mode | Status | Status Details | | -------- | --------- | ----- | ------------ | --------- | ------- | --------- | --------- | --------- | --------- | --------- | ------------- | ----------- | ----------- | -------------- | | SYSTEMDB | dc1hana01 | 31001 | nameserver | 1 | 1 | DC1 | dc2hana01 | 31001 | 2 | DC2 | YES | SYNC | ACTIVE | | | RH1 | dc1hana01 | 31007 | xsengine | 2 | 1 | DC1 | dc2hana01 | 31007 | 2 | DC2 | YES | SYNC | ACTIVE | | | RH1 | dc1hana01 | 31003 | indexserver | 3 | 1 | DC1 | dc2hana01 | 31003 | 2 | DC2 | YES | SYNC | ACTIVE | | | RH1 | dc1hana03 | 31003 | indexserver | 5 | 1 | DC1 | dc2hana03 | 31003 | 2 | DC2 | YES | SYNC | ACTIVE | | | RH1 | dc1hana02 | 31003 | indexserver | 4 | 1 | DC1 | dc2hana02 | 31003 | 2 | DC2 | YES | SYNC | ACTIVE | | status system replication site \"2\": ACTIVE overall system replication status: ACTIVE Local System Replication State ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ mode: PRIMARY site id: 1 site name: DC1 rh1adm@dc1hana01:/usr/sap/RH1/HDB10>", "subscription-manager repos --list-enabled +----------------------------------------------------------+ Available Repositories in /etc/yum.repos.d/redhat.repo +----------------------------------------------------------+ Repo ID: rhel-9-for-x86_64-appstream-e4s-rpms Repo Name: Red Hat Enterprise Linux 9 for x86_64 - AppStream - Update Services for SAP Solutions (RPMs) Repo URL: <Your repo URL> Enabled: 1 Repo ID: rhel-9-for-x86_64-highavailability-e4s-rpms Repo Name: Red Hat Enterprise Linux 9 for x86_64 - High Availability - Update Services for SAP Solutions (RPMs) Repo URL: <Your repo URL> Enabled: 1 Repo ID: rhel-9-for-x86_64-sap-solutions-e4s-rpms Repo Name: Red Hat Enterprise Linux 9 for x86_64 - SAP Solutions - Update Services for SAP Solutions (RPMs) Repo URL: <Your repo URL> Enabled: 1 Repo ID: rhel-9-for-x86_64-baseos-e4s-rpms Repo Name: Red Hat Enterprise Linux 9 for x86_64 - BaseOS - Update Services for SAP Solutions (RPMs) Repo URL: <Your repo URL> Enabled: 1 [root@dc1hana01 ~]# yum repolist Updating Subscription Management repositories. repo id repo name rhel-9-for-x86_64-appstream-e4s-rpms Red Hat Enterprise Linux 9 for x86_64 - AppStream - Update Services for SAP Solutions (RPMs) rhel-9-for-x86_64-baseos-e4s-rpms Red Hat Enterprise Linux 9 for x86_64 - BaseOS - Update Services for SAP Solutions (RPMs) rhel-9-for-x86_64-highavailability-e4s-rpms Red Hat Enterprise Linux 9 for x86_64 - High Availability - Update Services for SAP Solutions (RPMs) rhel-9-for-x86_64-sap-netweaver-e4s-rpms Red Hat Enterprise Linux 9 for x86_64 - SAP NetWeaver - Update Services for SAP Solutions (RPMs) rhel-9-for-x86_64-sap-solutions-e4s-rpms Red Hat Enterprise Linux 9 for x86_64 - SAP Solutions - Update Services for SAP Solutions (RPMs)", "yum -y install pcs pacemaker fence-agents", "yum install fence-agents-sbd fence-agents-ipmilan", "firewall-cmd --permanent --add-service=high-availability [root]# firewall-cmd --add-service=high-availability", "passwd hacluster Changing password for user hacluster. 
New password: Retype new password: passwd: all authentication tokens updated successfully.", "systemctl start [root]# pcsd.service systemctl enable pcsd.service", "pcshost auth -u hacluster -p <clusterpassword> dc1hana01 dc1hana02 dc1hana03 dc1hana04 dc2hana01 dc2hana02 dc2hana03 dc2hana04 majoritymaker Username: hacluster Password: majoritymaker: Authorized dc1hana03: Authorized dc1hana02: Authorized dc1hana01: Authorized dc2hana01: Authorized dc2hana02: Authorized dc1hana04: Authorized dc2hana04: Authorized dc2hana03: Authorized", "pcs cluster setup scale_out_hsr majoritymaker addr=10.10.10.41 addr=192.168.102.100 dc1hana01 addr=10.10.10.21 addr=192.168.102.101 dc1hana02 addr=10.10.10.22 addr=192.168.102.102 dc1hana03 addr=10.10.10.23 addr=192.168.102.103 dc1hana04 addr=10.10.10.24 addr=192.168.102.104 dc2hana01 addr=10.10.10.31 addr=192.168.102.201 dc2hana02 addr=10.10.10.33 addr=192.168.102.202 dc2hana03 addr=10.10.10.34 addr=192.168.212.203 dc2hana04 addr=10.10.10.10 addr=192.168.102.204 Destroying cluster on nodes: dc1hana01, dc1hana02, dc1hana03, dc1hana04, dc2hana01, dc2hana02, dc2hana03, dc2hana04, majoritymaker dc1hana01: Stopping Cluster (pacemaker) dc1hana04: Stopping Cluster (pacemaker) dc1hana03: Stopping Cluster (pacemaker) dc2hana04: Stopping Cluster (pacemaker) dc2hana01: Stopping Cluster (pacemaker) dc2hana03: Stopping Cluster (pacemaker) majoritymaker: Stopping Cluster (pacemaker) dc2hana02: Stopping Cluster (pacemaker) dc1hana02: Stopping Cluster (pacemaker) dc2hana01: Successfully destroyed cluster dc2hana03: Successfully destroyed cluster dc1hana04: Successfully destroyed cluster dc1hana03: Successfully destroyed cluster dc2hana02: Successfully destroyed cluster dc1hana01: Successfully destroyed cluster dc1hana02: Successfully destroyed cluster dc2hana04: Successfully destroyed cluster majoritymaker: Successfully destroyed cluster Sending 'pacemaker_remote authkey' to 'dc1hana01', 'dc1hana02', 'dc1hana03', 'dc1hana04', 'dc2hana01', 'dc2hana02', 'dc2hana03', 'dc2hana04', 'majoritymaker' dc1hana01: successful distribution of the file 'pacemaker_remote authkey' dc1hana04: successful distribution of the file 'pacemaker_remote authkey' dc1hana03: successful distribution of the file 'pacemaker_remote authkey' dc2hana01: successful distribution of the file 'pacemaker_remote authkey' dc2hana02: successful distribution of the file 'pacemaker_remote authkey' dc2hana03: successful distribution of the file 'pacemaker_remote authkey' dc2hana04: successful distribution of the file 'pacemaker_remote authkey' majoritymaker: successful distribution of the file 'pacemaker_remote authkey' dc1hana02: successful distribution of the file 'pacemaker_remote authkey' Sending cluster config files to the nodes dc1hana01: Succeeded dc1hana02: Succeeded dc1hana03: Succeeded dc1hana04: Succeeded dc2hana01: Succeeded dc2hana02: Succeeded dc2hana03: Succeeded dc2hana04: Succeeded majoritymaker: Succeeded Starting cluster on nodes: dc1hana01, dc1hana02, dc1hana03, dc1hana04, dc2hana01, dc2hana02, dc2hana03, dc2hana04, majoritymaker dc2hana01: Starting Cluster dc1hana03: Starting Cluster dc1hana01: Starting Cluster dc1hana02: Starting Cluster dc1hana04: Starting Cluster majoritymaker: Starting Cluster dc2hana02: Starting Cluster dc2hana03: Starting Cluster dc2hana04: Starting Cluster Synchronizing pcsd certificates on nodes dc1hana01, dc1hana02, dc1hana03, dc1hana04, dc2hana01, dc2hana02, dc2hana03, dc2hana04, majoritymaker majoritymaker: Success dc1hana03: Success dc1hana02: Success dc1hana01: 
Success dc2hana01: Success dc2hana02: Success dc2hana03: Success dc2hana04: Success dc1hana04: Success Restarting pcsd on the nodes in order to reload the certificates dc1hana04: Success dc1hana03: Success dc2hana03: Success majoritymaker: Success dc2hana04: Success dc1hana02: Success dc1hana01: Success dc2hana01: Success dc2hana02: Success", "pcs cluster enable --all dc1hana01: Cluster Enabled dc1hana02: Cluster Enabled dc1hana03: Cluster Enabled dc1hana04: Cluster Enabled dc2hana01: Cluster Enabled dc2hana02: Cluster Enabled dc2hana03: Cluster Enabled dc2hana04: Cluster Enabled majoritymaker: Cluster Enabled", "pcs stonith create <stonith id> <fence_agent> ipaddr=<fence device> login=<login> passwd=<passwd>", "pcs status Cluster name: hanascaleoutsr Stack: corosync Current DC: dc2hana01 (version 1.1.18-11.el7_5.4-2b07d5c5a9) - partition with quorum Last updated: Tue Mar 26 13:03:01 2019 Last change: Tue Mar 26 13:02:54 2019 by root via cibadmin on dc1hana01 9 nodes configured 1 resource configured Online: [ dc1hana01 dc1hana02 dc1hana03 dc1hana04 dc2hana01 dc2hana02 dc2hana03 dc2hana04 majoritymaker ] Full list of resources: fencing (stonith:fence_rhevm): Started dc1hana01 Daemon Status: corosync: active/enabled pacemaker: active/enabled pcsd: active/enabled", "yum install resource-agents-sap-hana-scaleout", "root# yum repolist \"rhel-x86_64-server-sap-hana-<version>\" RHEL Server SAP HANA (v. <version> for 64-bit <architecture>).", "su - rh1adm [rh1adm@dc1hana01]% sapcontrol -nr 10 -function StopSystem *[rh1adm@dc1hana01]% cat <<EOF >> /hana/shared/RH1/global/hdb/custom/config/global.ini [ha_dr_provider_SAPHanaSR] provider = SAPHanaSR path = /usr/share/SAPHanaSR-ScaleOut execution_order = 1 [trace] ha_dr_saphanasr = info EOF", "rh1adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_rh1_glob_srHook -v * -t crm_config -s SAPHanaSR rh1adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_rh1_gsh -v * -l reboot -t crm_config -s SAPHanaSR Defaults:rh1adm !requiretty", "Execute the following commands on one HANA node in every datacenter [root]# su - rh1adm [rh1adm]% sapcontrol -nr 10 -function StartSystem", "[rh1adm@dc1hana01]% cdtrace [rh1adm@dc1hana01]% awk '/ha_dr_SAPHanaSR.*crm_attribute/ { printf \"%s %s %s %s\\n\",USD2,USD3,USD5,USD16 }' nameserver_ * 2018-05-04 12:34:04.476445 ha_dr_SAPHanaSR SFAIL 2018-05-04 12:53:06.316973 ha_dr_SAPHanaSR SOK", "pcs property set maintenance-mode=true", "pcs resource create rsc_SAPHanaTopology_RH1_HDB10 SAPHanaTopology SID=RH1 InstanceNumber=10 op methods interval=0s timeout=5 op monitor interval=10 timeout=600 clone clone-max=6 clone-node-max=1 interleave=true --disabled", "root# pcs status --full", "pcs resource create rsc_SAPHana_RH1_HDB10 SAPHanaController SID=RH1 InstanceNumber=10 PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=true op demote interval=0s timeout=320 op methods interval=0s timeout=5 op monitor interval=59 role=\"Promoted\" timeout=700 op monitor interval=61 role=\"Unpromoted\" timeout=700 op promote interval=0 timeout=3600 op start interval=0 timeout=3600 op stop interval=0 timeout=3600 promotable clone-max=6 promoted-node-max=1 interleave=true --disabled", "/usr/sap/hostctrl/exe/sapcontrol -nr 10 -function GetSystemInstanceList GetSystemInstanceList OK hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus dc1hana01,10,51013,51014,0.3,HDB|HDB_WORKER,GREEN dc1hana02,10,51013,51014,0.3,HDB|HDB_WORKER,GREEN dc1hana03,10,51013,51014,0.3,HDB|HDB_WORKER,GREEN 
dc1hana04,10,51013,51014,0.3,HDB|HDB_STANDBY, GREEN", "pcs resource create rsc_ip_SAPHana_RH1_HDB10 ocf:heartbeat:IPaddr2 ip=10.0.0.250 op monitor interval=\"10s\" timeout=\"20s\"", "pcs constraint order start rsc_SAPHanaTopology_RH1_HDB10-clone then start rsc_SAPHana_RH1_HDB10-clone", "pcs constraint colocation add rsc_ip_SAPHana_RH1_HDB10 with promoted rsc_SAPHana_RH1_HDB10-clone", "pcs constraint location add topology-avoids-majoritymaker rsc_SAPHanaTopology_RH1_HDB10-clone majoritymaker -INFINITY resource-discovery=never [root@dc1hana01]# pcs constraint location add hana-avoids-majoritymaker rsc_SAPHana_RH1_HDB10-clone majoritymaker -INFINITY resource-discovery=never", "pcs resource enable <resource-name>", "pcs property set maintenance-mode=false", "pcs status Cluster name: hanascaleoutsr Stack: corosync Current DC: dc2hana01 (version 1.1.18-11.el7_5.4-2b07d5c5a9) - partition with quorum Last updated: Tue Mar 26 14:26:38 2019 Last change: Tue Mar 26 14:25:47 2019 by root via crm_attribute on dc1hana01 9 nodes configured 20 resources configured Online: [ dc1hana01 dc1hana02 dc1hana03 dc1hana04 dc2hana01 dc2hana02 dc2hana03 dc2hana04 majoritymaker ] Full list of resources: fencing (stonith:fence_rhevm): Started dc1hana01 Clone Set: rsc_SAPHanaTopology_RH1_HDB10-clone [rsc_SAPHanaTopology_RH1_HDB10] Started: [ dc1hana01 dc1hana02 dc1hana03 dc1hana04 dc2hana01 dc2hana02 dc2hana03 dc2hana04 ] Stopped: [ majoritymaker ] Clone Set: msl_rsc_SAPHana_RH1_HDB10 [rsc_SAPHana_RH1_HDB10] (promotable): Promoted: [ dc1hana01 ] Unpromoted: [ dc1hana02 dc1hana03 dc1hana04 dc2hana01 dc2hana02 dc2hana03 dc2hana04 ] Stopped: [ majoritymaker ] rsc_ip_SAPHana_RH1_HDB10 (ocf::heartbeat:IPaddr2): Started dc1hana01 Daemon Status: corosync: active/enabled pacemaker: active/enabled pcsd: active/enabled [root@dc1hana01]# SAPHanaSR-showAttr --sid=RH1 Global prim srHook sync_state ------------------------------ global DC1 SOK SOK Sit lpt lss mns srr --------------------------------- DC1 1553607125 4 dc1hana01 P DC2 30 4 dc2hana01 S H clone_state roles score site -------------------------------------------------------- 1 PROMOTED promoted1 promoted:worker promoted 150 DC1 2 DEMOTED promoted2:slave:worker:slave 110 DC1 3 DEMOTED slave:slave:worker:slave -10000 DC1 4 DEMOTED promoted3:slave:standby:standby 115 DC1 5 DEMOTED promoted2 promoted:worker promoted 100 DC2 6 DEMOTED promoted3:slave:worker:slave 80 DC2 7 DEMOTED slave:slave:worker:slave -12200 DC2 8 DEMOTED promoted1:slave:standby:standby 80 DC2 9 :shtdown:shtdown:shtdown", "root# pcs resource create rsc_ip2_SAPHana_RH1_HDB10 ocf:heartbeat:IPaddr2 ip=10.0.0.251 op monitor interval=\"10s\" timeout=\"20s", "root# pcs constraint location rsc_ip_SAPHana_RH1_HDB10 rule score=500 role=master hana_rh1_roles eq \"master1:master:worker:master\" and hana_rh1_clone_state eq PROMOTED", "root# pcs constraint location rsc_ip2_SAPHana_RH1_HDB10 rule score=50 id=vip_slave_master_constraint hana_rh1_roles eq 'master1:master:worker:master'", "root# pcs constraint order promote rsc_SAPHana_RH1_HDB10-clone then start rsc_ip_SAPHana_RH1_HDB10", "root# pcs constraint order start rsc_ip_SAPHana_RH1_HDB10 then start rsc_ip2_SAPHana_RH1_HDB10", "root# pcs constraint colocation add rsc_ip_SAPHana_RH1_HDB10 with Master rsc_SAPHana_RH1_HDB10-clone 2000", "root# pcs constraint colocation add rsc_ip2_SAPHana_RH1_HDB10 with Slave rsc_SAPHana_RH1_HDB10-clone 5", "root# watch pcs status", "sidadm% sapcontrol -nr USD{TINSTANCE} -function StopSystem HDB", "sidadm% sapcontrol -nr USD{TINSTANCE} 
-function StartSystem HDB", "pcs node attribute Node Attributes: saphdb1: hana_hdb_gra=2.0 hana_hdb_site=DC1 hana_hdb_vhost=sapvirthdb1 saphdb2: hana_hdb_gra=2.0 hana_hdb_site=DC1 hana_hdb_vhost=sapvirthdb2 saphdb3: hana_hdb_gra=2.0 hana_hdb_site=DC2 hana_hdb_vhost=sapvirthdb3 saphdb4: hana_hdb_gra=2.0 hana_hdb_site=DC2 hana_hdb_vhost=sapvirthdb4", "pcs resource create nfs_hana_shared_dc1 ocf:heartbeat:Filesystem device=svm-012ab34cd45ef67.fs-0879de29a7fbb752d.fsx.ap-southeast-2.amazonaws.com:/sap_hana_dc1_log_shared/shared directory=/hana/shared fstype=nfs options=defaults,suid op monitor interval=60s on-fail=fence timeout=20s OCF_CHECK_LEVEL=20 clone [root@dc1hana01]# pcs resource create nfs_hana_log_dc1 ocf:heartbeat:Filesystem device=svm-012ab34cd45ef67.fs-0879de29a7fbb752d.fsx.ap-southeast-2.amazonaws.com:/sap_hana_dc1_log_shared/lognode1 directory=/hana/log/HDB fstype=nfs options=defaults,suid op monitor interval=60s on-fail=fence timeout=20s OCF_CHECK_LEVEL=20 clone [root@dc1hana01]# pcs resource create nfs_hana_log2_dc1 ocf:heartbeat:Filesystem device=svm-012ab34cd45ef67.fs-0879de29a7fbb752d.fsx.ap-southeast-2.amazonaws.com:/sap_hana_dc1_log_shared/lognode2 directory=/hana/log/HDB fstype=nfs options=defaults,suid op monitor interval=60s on-fail=fence timeout=20s OCF_CHECK_LEVEL=20 clone [root@dc1hana01]# pcs resource create nfs_hana_shared_dc2 ocf:heartbeat:Filesystem device=svm-012ab34cd45ef78.fs-088e3f66bf4f22c33.fsx.ap-southeast-2.amazonaws.com:/sap_hana_dc2_log_shared/shared directory=/hana/shared fstype=nfs options=defaults,suid op monitor interval=60s on-fail=fence timeout=20s OCF_CHECK_LEVEL=20 clone [root@dc1hana01]# pcs resource create nfs_hana_log_dc2 ocf:heartbeat:Filesystem device=svm-012ab34cd45ef678.fs-088e3f66bf4f22c33.fsx.ap-southeast-2.amazonaws.com:/sap_hana_dc2_log_shared/lognode1 directory=/hana/log/HDB fstype=nfs options=defaults,suid op monitor interval=60s on-fail=fence timeout=20s OCF_CHECK_LEVEL=20 clone [root@dc1hana01]# pcs resource create nfs_hana_log2_dc2 ocf:heartbeat:Filesystem device=svm-012ab34cd45ef678.fs-088e3f66bf4f22c33.fsx.ap-southeast-2.amazonaws.com:/sap_hana_dc2_log_shared/lognode2 directory=/hana/log/HDB fstype=nfs options=defaults,suid op monitor interval=60s on-fail=fence timeout=20s OCF_CHECK_LEVEL=20 clone [root@dc1hana01]# pcs node attribute sap-dc1-dbn2 NFS_HDB_SITE=DC1N2 [root@dc1hana01]# pcs node attribute sap-dc2-dbn1 NFS_HDB_SITE=DC2N1 [root@dc1hana01]# pcs node attribute sap-dc2-dbn2 NFS_HDB_SITE=DC2N2 [root@dc1hana01]# pcs node attribute sap-dc1-dbn1 NFS_SHARED_HDB_SITE=DC1 [root@dc1hana01]# pcs node attribute sap-dc1-dbn2 NFS_SHARED_HDB_SITE=DC1 [root@dc1hana01]# pcs node attribute sap-dc2-dbn1 NFS_SHARED_HDB_SITE=DC2 [root@dc1hana01]# pcs node attribute sap-dc2-dbn2 NFS_SHARED_HDB_SITE=DC2 [root@dc1hana01]# pcs constraint location nfs_hana_shared_dc1-clone rule resource-discovery=never score=-INFINITY NFS_SHARED_HDB_SITE ne DC1 [root@dc1hana01]# pcs constraint location nfs_hana_log_dc1-clone rule resource-discovery=never score=-INFINITY NFS_HDB_SITE ne DC1N1 [root@dc1hana01]# pcs constraint location nfs_hana_log2_dc1-clone rule resource-discovery=never score=-INFINITY NFS_HDB_SITE ne DC1N2 [root@dc1hana01]# pcs constraint location nfs_hana_shared_dc2-clone rule resource-discovery=never score=-INFINITY NFS_SHARED_HDB_SITE ne DC2 [root@dc1hana01]# pcs constraint location nfs_hana_log_dc2-clone rule resource-discovery=never score=-INFINITY NFS_HDB_SITE ne DC2N1 [root@dc1hana01]# pcs constraint location nfs_hana_log2_dc2-clone 
rule resource-discovery=never score=-INFINITY NFS_HDB_SITE ne DC2N2 [root@dc1hana01]# pcs resource enable nfs_hana_shared_dc1 *[root@dc1hana01]# pcs resource enable nfs_hana_log_dc1 [root@dc1hana01]# pcs resource enable nfs_hana_log2_dc1 [root@dc1hana01]# pcs resource enable nfs_hana_shared_dc2 [root@dc1hana01]# pcs resource enable nfs_hana_log_dc2 [root@dc1hana01]# pcs resource enable nfs_hana_log2_dc2 [root@dc1hana01]# pcs resource update nfs_hana_shared_dc1-clone meta clone-max=2 interleave=true [root@dc1hana01]# pcs resource update nfs_hana_shared_dc2-clone meta clone-max=2 interleave=true [root@dc1hana01]# pcs resource update nfs_hana_log_dc1-clone meta clone-max=1 interleave=true [root@dc1hana01]# pcs resource update nfs_hana_log_dc2-clone meta clone-max=1 interleave=true [root@dc1hana01]# pcs resource update nfs_hana_log2_dc1-clone meta clone-max=1 interleave=true [root@dc1hana01]# pcs resource update nfs_hana_log2_dc2-clone meta clone-max=1 interleave=true", "root@saphdb1:/etc/systemd/system/resource-agents-deps.target.d# more sap_systemd_hdb_00.conf [Unit] Description=Pacemaker SAP resource HDB_00 needs the SAP Host Agent service Wants=saphostagent.service After=saphostagent.service Wants=SAPHDB_00.service After=SAPHDB_00.service", "systemctl daemon-reload", "[ha_dr_provider_chksrv] path = /usr/share/SAPHanaSR-ScaleOut execution_order = 2 action_on_lost = stop [trace] ha_dr_saphanasr = info ha_dr_chksrv = info", "[ rh1adm]USD hdbnsutil -reloadHADRProviders", "[rh1adm]USD cdtrace [rh1adm]USD cat nameserver_chksrv.trc", "pcs constraint location rsc_SAPHana_HDB_HDB00-clone rule role=master score=100 \\#uname eq saphdb3", "pcs constraint remove rsc_SAPHana_HDB_HDB00", "pcs stonith fence <nodename>", "sidadm% HDB kill", "export ListInstances=USD(/usr/sap/hostctrl/exe/saphostctrl -function ListInstances| head -1 ) export sid=USD(echo \"USDListInstances\" |cut -d \" \" -f 5| tr [A-Z] [a-z]) export SID=USD(echo USDsid | tr [a-z] [A-Z]) export Instance=USD(echo \"USDListInstances\" |cut -d \" \" -f 7 ) alias crmm='watch -n 1 crm_mon -1Arf' alias crmv='watch -n 1 /usr/local/bin/crmmv' alias clean=/usr/local/bin/cleanup alias cglo='su - USD{sid}adm -c cglo' alias cdh='cd /usr/lib/ocf/resource.d/heartbeat' alias vhdbinfo=\"vim /usr/sap/USD{SID}/home/hdbinfo;dcp /usr/sap/USD{SID}/home/hdbinfo\" alias gtr='su - USD{sid}adm -c gtr' alias hdb='su - USD{sid}adm -c hdb' alias hdbi='su - USD{sid}adm -c hdbi' alias hgrep='history | grep USD1' alias hri='su - USD{sid}adm -c hri' alias hris='su - USD{sid}adm -c hris' alias killnode=\"echo 'b' > /proc/sysrq-trigger\" alias lhc='su - USD{sid}adm -c lhc' alias python='/usr/sap/USD{SID}/HDBUSD{Instance}/exe/Python/bin/python' alias pss=\"watch 'pcs status --full | egrep -e Node\\|master\\|clone_state\\|roles'\" alias srstate='su - USD{sid}adm -c srstate' alias shr='watch -n 5 \"SAPHanaSR-monitor --sid=USD{SID}\"' alias sgsi='su - USD{sid}adm -c sgsi' alias spl='su - USD{sid}adm -c spl' alias srs='su - USD{sid}adm -c srs' alias sapstart='su - USD{sid}adm -c sapstart' alias sapstop='su - USD{sid}adm -c sapstop' alias sapmode='df -h /;su - USD{sid}adm -c sapmode' alias smm='pcs property set maintenance-mode=true' alias usmm='pcs property set maintenance-mode=false' alias tma='tmux attach -t 0:' alias tmkill='tmux killw -a' alias tm='tail -100f /var/log/messages |grep -v systemd' alias tms='tail -1000f /var/log/messages | egrep -s \"Setting master-rsc_SAPHana_USD{SID}_HDBUSD{Instance}|sr_register\\ *|WAITING4LPA\\|EXCLUDE as posible takeover 
node|SAPHanaSR|failed|USD{HOSTNAME} |PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED\"' alias tmss='tail -1000f /var/log/messages | grep -v systemd | egrep -s \"secondary with sync status|Settingmaster-rsc_SAPHana_USD{SID}_HDBUSD{Instance} |sr_register|WAITING4LPA|EXCLUDE as posible takeover node|SAPHanaSR |failed|USD{HOSTNAME}|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED\"' alias tmm='tail -1000f /var/log/messages | egrep -s \"Settingmaster-rsc_SAPHana_USD{SID}_HDBUSD{Instance}|sr_register |WAITING4LPA|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|W aitforStopped |FAILED|LPT|SOK|SFAIL|SAPHanaSR-mon\"| grep -v systemd' alias tmsl='tail -1000f /var/log/messages | egrep -s \"Settingmaster-rsc_SAPHana_USD{SID}_HDBUSD{Instance}|sr_register|WAITING4LPA |PROMOTED|DEMOTED|UNDEFINED|ERROR|Warning|mast er_walk|SWAIT |WaitforStopped|FAILED|LPT|SOK|SFAIL|SAPHanaSR-mon\"' alias vih='vim /usr/lib/ocf/resource.d/heartbeat/SAPHanaStart' alias switch1='pcs constraint location rsc_SAPHana_HDB_HDB00-clone rule role=master score=100 \\#uname eq saphdb1' alias switch3='pcs constraint location rsc_SAPHana_HDB_HDB00-clone rule role=master score=100 \\#uname eq saphdb3' alias switch0='pcs constraint remove location-rsc_SAPHana_HDB_HDB00-clone alias switchl='pcs constraint location | grep pcs resource | grep promotable | awk \"{ print USD4 }\"` | grep Constraint| awk \"{ print USDNF }\"' alias scl='pcs constraint location |grep \" Constraint\"'", "alias tm='tail -100f /var/log/messages |grep -v systemd' alias tms='tail -1000f /var/log/messages | egrep -s \"Settingmaster-rsc_SAPHana_USDSAPSYSTEMNAME_HDBUSD{TINSTANCE}|sr_register |WAITING4LPA|EXCLUDE as posible takeover node|SAPHanaSR|failed |USD{HOSTNAME}|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED\"' alias tmsl='tail -1000f /var/log/messages | egrep -s \"Settingmaster-rsc_SAPHana_USDSAPSYSTEMNAME_HDBUSD{TINSTANCE}|sr_register |WAITING4LPA|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED|LPT\"' alias sapstart='sapcontrol -nr USD{TINSTANCE} -function StartSystem HDB;hdbi' alias sapstop='sapcontrol -nr USD{TINSTANCE} -function StopSystem HDB;hdbi' alias sapmode='watch -n 5 \"hdbnsutil -sr_state --sapcontrol=1 |grep site.\\*Mode\"' alias sapprim='hdbnsutil -sr_stateConfiguration| grep -i primary' alias sgsi='watch sapcontrol -nr USD{TINSTANCE} -function GetSystemInstanceList' alias spl='watch sapcontrol -nr USD{TINSTANCE} -function GetProcessList' alias splh='watch \"sapcontrol -nr USD{TINSTANCE} -function GetProcessList | grep hdbdaemon\"' alias srs=\"watch -n 5 'python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py * *; echo Status \\USD?'\" alias cdb=\"cd /usr/sap/USD{SAPSYSTEMNAME}/HDBUSD{TINSTANCE}/backup\" alias srstate='watch -n 10 hdbnsutil -sr_state' alias hdb='watch -n 5 \"sapcontrol -nr USD{TINSTANCE} -function GetProcessList | egrep -s hdbdaemon\\|hdbnameserver\\|hdbindexserver \"' alias hdbi='watch -n 5 \"sapcontrol -nr USD{TINSTANCE} -function GetProcessList | egrep -s hdbdaemon\\|hdbnameserver\\|hdbindexserver ;sapcontrol -nr USD{TINSTANCE} -function GetSystemInstanceList \"' alias hgrep='history | grep USD1' alias vglo=\"vim /usr/sap/USDSAPSYSTEMNAME/SYS/global/hdb/custom/config/global.ini\" alias vgloh=\"vim /hana/shared/USD{SAPSYSTEMNAME}/HDBUSD{TINSTANCE}/USD{HOSTNAME}/global.ini\" alias hri='hdbcons -e hdbindexserver \"replication info\"' alias hris='hdbcons -e hdbindexserver \"replication info\" | egrep -e 
\"SiteID|ReplicationStatus_\"' alias gtr='watch -n 10 /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/Python/bin/python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/getTakeoverRecommendation.py --sapcontrol=1' alias lhc='/usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/Python/bin/python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/landscapeHostConfiguration.py ;echo USD?' alias reg1='hdbnsutil -sr_register --remoteHost=hana07 -remoteInstance=USD{TINSTANCE} --replicationMode=syncmem --name=DC3 --remoteName=DC1 --operationMode=logreplay --online' alias reg2='hdbnsutil -sr_register --remoteHost=hana08 -remoteInstance=USD{TINSTANCE} --replicationMode=syncmem --name=DC3 --remoteName=DC2 --operationMode=logreplay --online' alias reg3='hdbnsutil -sr_register --remoteHost=hana09 -remoteInstance=USD{TINSTANCE} --replicationMode=syncmem --name=DC3 --remoteName=DC3 --operationMode=logreplay --online' PS1=\"\\[\\033[m\\][\\[\\e[1;33m\\]\\u\\[\\e[1;33m\\]\\[\\033[m\\]@\\[\\e[1;36m\\]\\h\\[\\033[m\\]: \\[\\e[0m\\]\\[\\e[1;32m\\]\\W\\[\\e[0m\\]]# \"", "alias pss='pcs status --full | egrep -e \"Node|master|clone_state|roles\"' [root@saphdb2:~]# pss Node List: Node Attributes: * Node: saphdb1 (1): * hana_hdb_clone_state : PROMOTED * hana_hdb_roles : master1:master:worker:master * master-rsc_SAPHana_HDB_HDB00 : 150 * Node: saphdb2 (2): * hana_hdb_clone_state : DEMOTED * hana_hdb_roles : slave:slave:worker:slave * master-rsc_SAPHana_HDB_HDB00 : -10000 * Node: saphdb3 (3): * hana_hdb_clone_state : DEMOTED * hana_hdb_roles : master1:master:worker:master * master-rsc_SAPHana_HDB_HDB00 : 100 * Node: saphdb4 (4): * hana_hdb_clone_state : DEMOTED * hana_hdb_roles : slave:slave:worker:slave * master-rsc_SAPHana_HDB_HDB00 : -12200", "pcs resource unmanage SAPHana_RH1_HDB10-clone", "pcs resource refresh SAPHana_RH1_HDB10-clone", "pcs resource manage SAPHana_RH1_HDB10-clone", "pcs resource move SAPHana_RH1_HDB10-clone", "pcs resource clear SAPHana_RH1_HDB10-clone" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html-single/automating_sap_hana_scale-out_system_replication_using_the_rhel_ha_add-on/index
Appendix C. Using AMQ Broker with the examples
Appendix C. Using AMQ Broker with the examples The AMQ Spring Boot Starter examples require a running message broker with a queue named example . Use the procedures below to install and start the broker and define the queue. C.1. Installing the broker Follow the instructions in Getting Started with AMQ Broker to install the broker and create a broker instance . Enable anonymous access. The following procedures refer to the location of the broker instance as <broker-instance-dir> . C.2. Starting the broker Procedure Use the artemis run command to start the broker. USD <broker-instance-dir> /bin/artemis run Check the console output for any critical errors logged during startup. The broker logs Server is now live when it is ready. USD example-broker/bin/artemis run __ __ ____ ____ _ /\ | \/ |/ __ \ | _ \ | | / \ | \ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\ \ | |\/| | | | | | _ <| '__/ _ \| |/ / _ \ '__| / ____ \| | | | |__| | | |_) | | | (_) | < __/ | /_/ \_\_| |_|\___\_\ |____/|_| \___/|_|\_\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server ... 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live ... C.3. Creating a queue In a new terminal, use the artemis queue command to create a queue named example . USD <broker-instance-dir> /bin/artemis queue create --name example --address example --auto-create-address --anycast You are prompted to answer a series of yes or no questions. Answer N for no to all of them. Once the queue is created, the broker is ready for use with the example programs. C.4. Stopping the broker When you are done running the examples, use the artemis stop command to stop the broker. USD <broker-instance-dir> /bin/artemis stop Revised on 2023-12-07 10:33:06 UTC
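If you want to script this setup or confirm that the queue exists before running the examples, the following is a minimal sketch. It only reuses the commands shown above, assumes the broker instance directory is example-broker (substitute your own <broker-instance-dir>), and the optional check assumes that your broker version provides the artemis queue stat subcommand.
# Terminal 1: start the broker and wait for "Server is now live" in the log output
example-broker/bin/artemis run
# Terminal 2: create the queue, answering N to each of the yes or no questions
example-broker/bin/artemis queue create --name example --address example --auto-create-address --anycast
# Optional check (assumes the queue stat subcommand is available in your broker version):
# the example queue should be listed with a message count of 0
example-broker/bin/artemis queue stat
# When you are done running the examples, stop the broker
example-broker/bin/artemis stop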
[ "<broker-instance-dir> /bin/artemis run", "example-broker/bin/artemis run __ __ ____ ____ _ /\\ | \\/ |/ __ \\ | _ \\ | | / \\ | \\ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\\ \\ | |\\/| | | | | | _ <| '__/ _ \\| |/ / _ \\ '__| / ____ \\| | | | |__| | | |_) | | | (_) | < __/ | /_/ \\_\\_| |_|\\___\\_\\ |____/|_| \\___/|_|\\_\\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live", "<broker-instance-dir> /bin/artemis queue create --name example --address example --auto-create-address --anycast", "<broker-instance-dir> /bin/artemis stop" ]
https://docs.redhat.com/en/documentation/amq_spring_boot_starter/3.0/html/using_the_amq_spring_boot_starter/using_the_broker_with_the_examples
Managing networking infrastructure services
Managing networking infrastructure services Red Hat Enterprise Linux 8 A guide to managing networking infrastructure services in Red Hat Enterprise Linux 8 Red Hat Customer Content Services
[ "yum install bind bind-utils", "yum install bind-chroot", "listen-on port 53 { 127.0.0.1; 192.0.2.1; }; listen-on-v6 port 53 { ::1; 2001:db8:1::1; };", "allow-query { localhost; 192.0.2.0/24; 2001:db8:1::/64; };", "allow-recursion { localhost; 192.0.2.0/24; 2001:db8:1::/64; };", "forwarders { 198.51.100.1; 203.0.113.5; };", "named-checkconf", "firewall-cmd --permanent --add-service=dns firewall-cmd --reload", "systemctl enable --now named", "dig @ localhost www.example.org www.example.org. 86400 IN A 198.51.100.34 ;; Query time: 917 msec", "dig @ localhost www.example.org www.example.org. 85332 IN A 198.51.100.34 ;; Query time: 1 msec", "logging { category notify { zone_transfer_log; }; category xfer-in { zone_transfer_log; }; category xfer-out { zone_transfer_log; }; channel zone_transfer_log { file \" /var/named/log/transfer.log \" versions 10 size 50m ; print-time yes; print-category yes; print-severity yes; severity info; }; };", "mkdir /var/named/log/ chown named:named /var/named/log/ chmod 700 /var/named/log/", "named-checkconf", "systemctl restart named", "cat /var/named/log/transfer.log 06-Jul-2022 15:08:51.261 xfer-out: info: client @0x7fecbc0b0700 192.0.2.2#36121/key example-transfer-key (example.com): transfer of 'example.com/IN': AXFR started: TSIG example-transfer-key (serial 2022070603) 06-Jul-2022 15:08:51.261 xfer-out: info: client @0x7fecbc0b0700 192.0.2.2#36121/key example-transfer-key (example.com): transfer of 'example.com/IN': AXFR ended", "acl internal-networks { 127.0.0.1; 192.0.2.0/24; 2001:db8:1::/64; }; acl dmz-networks { 198.51.100.0/24; 2001:db8:2::/64; };", "allow-query { internal-networks; dmz-networks; }; allow-recursion { internal-networks; };", "named-checkconf", "systemctl reload named", "dig +short @ 192.0.2.1 www.example.com", "dig @ 192.0.2.1 www.example.com ;; WARNING: recursion requested but not available", "name class type mname rname serial refresh retry expire minimum", "@ IN SOA ns1.example.com. hostmaster.example.com. ( 2022070601 ; serial number 1d ; refresh period 3h ; retry period 3d ; expire time 3h ) ; minimum TTL", "zone \" example.com \" { type master; file \" example.com.zone \"; allow-query { any; }; allow-transfer { none; }; };", "named-checkconf", "USDTTL 8h @ IN SOA ns1.example.com. hostmaster.example.com. ( 2022070601 ; serial number 1d ; refresh period 3h ; retry period 3d ; expire time 3h ) ; minimum TTL IN NS ns1.example.com. IN MX 10 mail.example.com. www IN A 192.0.2.30 www IN AAAA 2001:db8:1::30 ns1 IN A 192.0.2.1 ns1 IN AAAA 2001:db8:1::1 mail IN A 192.0.2.20 mail IN AAAA 2001:db8:1::20", "chown root:named /var/named/ example.com.zone chmod 640 /var/named/ example.com.zone", "named-checkzone example.com /var/named/example.com.zone zone example.com/IN : loaded serial 2022070601 OK", "systemctl reload named", "dig +short @ localhost AAAA www.example.com 2001:db8:1::30 dig +short @ localhost NS example.com ns1.example.com. dig +short @ localhost A ns1.example.com 192.0.2.1", "zone \" 2.0.192.in-addr.arpa \" { type master; file \" 2.0.192.in-addr.arpa.zone \"; allow-query { any; }; allow-transfer { none; }; };", "named-checkconf", "USDTTL 8h @ IN SOA ns1.example.com. hostmaster.example.com. ( 2022070601 ; serial number 1d ; refresh period 3h ; retry period 3d ; expire time 3h ) ; minimum TTL IN NS ns1.example.com. 1 IN PTR ns1.example.com. 
30 IN PTR www.example.com.", "chown root:named /var/named/ 2.0.192.in-addr.arpa.zone chmod 640 /var/named/ 2.0.192.in-addr.arpa.zone", "named-checkzone 2.0.192.in-addr.arpa /var/named/2.0.192.in-addr.arpa.zone zone 2.0.192.in-addr.arpa/IN : loaded serial 2022070601 OK", "systemctl reload named", "dig +short @ localhost -x 192.0.2.1 ns1.example.com. dig +short @ localhost -x 192.0.2.30 www.example.com.", "options { directory \" /var/named \"; } zone \" example.com \" { file \" example.com.zone \"; };", "named-checkzone example.com /var/named/example.com.zone zone example.com/IN : loaded serial 2022062802 OK", "systemctl reload named", "dig +short @ localhost A ns2.example.com 192.0.2.2", "zone \" example.com \" { dnssec-policy default; };", "systemctl reload named", "dnssec-dsfromkey /var/named/K example.com.+013+61141 .key example.com. IN DS 61141 13 2 3E184188CF6D2521EDFDC3F07CFEE8D0195AACBD85E68BAE0620F638B4B1B027", "grep DNSKEY /var/named/K example.com.+013+61141.key example.com. 3600 IN DNSKEY 257 3 13 sjzT3jNEp120aSO4mPEHHSkReHUf7AABNnT8hNRTzD5cKMQSjDJin2I3 5CaKVcWO1pm+HltxUEt+X9dfp8OZkg==", "dig +dnssec +short @ localhost A www.example.com 192.0.2.30 A 13 3 28800 20220718081258 20220705120353 61141 example.com. e7Cfh6GuOBMAWsgsHSVTPh+JJSOI/Y6zctzIuqIU1JqEgOOAfL/Qz474 M0sgi54m1Kmnr2ANBKJN9uvOs5eXYw==", "dig @ localhost example.com +dnssec ;; flags: qr rd ra ad ; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1", "tsig-keygen example-transfer-key | tee -a /etc/named.conf key \" example-transfer-key \" { algorithm hmac-sha256; secret \" q7ANbnyliDMuvWgnKOxMLi313JGcTZB5ydMW5CyUGXQ= \"; };", "zone \" example.com \" { allow-transfer { key example-transfer-key; }; };", "zone \" example.com \" { also-notify { 192.0.2.2; 2001:db8:1::2; }; };", "named-checkconf", "systemctl reload named", "key \" example-transfer-key \" { algorithm hmac-sha256; secret \" q7ANbnyliDMuvWgnKOxMLi313JGcTZB5ydMW5CyUGXQ= \"; };", "zone \" example.com \" { type slave; file \" slaves/example.com.zone \"; allow-query { any; }; allow-transfer { none; }; masters { 192.0.2.1 key example-transfer-key; 2001:db8:1::1 key example-transfer-key; }; };", "named-checkconf", "systemctl reload named", "journalctl -u named Jul 06 15:08:51 ns2.example.com named[2024]: zone example.com/IN: Transfer started. Jul 06 15:08:51 ns2.example.com named[2024]: transfer of 'example.com/IN' from 192.0.2.1#53: connected using 192.0.2.2#45803 Jul 06 15:08:51 ns2.example.com named[2024]: zone example.com/IN: transferred serial 2022070101 Jul 06 15:08:51 ns2.example.com named[2024]: transfer of 'example.com/IN' from 192.0.2.1#53: Transfer status: success Jul 06 15:08:51 ns2.example.com named[2024]: transfer of 'example.com/IN' from 192.0.2.1#53: Transfer completed: 1 messages, 29 records, 2002 bytes, 0.003 secs (667333 bytes/sec)", "ls -l /var/named/slaves/ total 4 -rw-r--r--. 1 named named 2736 Jul 6 15:08 example.com.zone", "dig +short @ 192.0.2.2 AAAA www.example.com 2001:db8:1::30", "options { response-policy { zone \" rpz.local \"; }; }", "zone \"rpz.local\" { type master; file \"rpz.local\"; allow-query { localhost; 192.0.2.0/24; 2001:db8:1::/64; }; allow-transfer { none; }; };", "named-checkconf", "USDTTL 10m @ IN SOA ns1.example.com. hostmaster.example.com. ( 2022070601 ; serial number 1h ; refresh period 1m ; retry period 3d ; expire time 1m ) ; minimum TTL IN NS ns1.example.com. example.org IN CNAME . *.example.org IN CNAME . example.net IN CNAME rpz-drop. 
*.example.net IN CNAME rpz-drop.", "named-checkzone rpz.local /var/named/rpz.local zone rpz.local/IN : loaded serial 2022070601 OK", "systemctl reload named", "dig @localhost www.example.org ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN , id: 30286", "dig @localhost www.example.net ;; connection timed out; no servers could be reached", "options { dnstap { all; }; # Configure filter dnstap-output file \"/var/named/data/dnstap.bin\"; }; end of options", "systemctl restart named.service", "Example: sudoedit /etc/cron.daily/dnstap #!/bin/sh rndc dnstap -roll 3 mv /var/named/data/dnstap.bin.1 /var/log/named/dnstap/dnstap-USD(date -I).bin use dnstap-read to analyze saved logs sudo chmod a+x /etc/cron.daily/dnstap", "Example: dnstap-read -y [file-name]", "yum install unbound", "interface: 127.0.0.1 interface: 192.0.2.1 interface: 2001:db8:1::1", "access-control: 127.0.0.0/8 allow access-control: 192.0.2.0/24 allow access-control: 2001:db8:1::/64 allow", "systemctl restart unbound-keygen", "unbound-checkconf unbound-checkconf: no errors in /etc/unbound/unbound.conf", "firewall-cmd --permanent --add-service=dns firewall-cmd --reload", "systemctl enable --now unbound", "dig @ localhost www.example.com www.example.com. 86400 IN A 198.51.100.34 ;; Query time: 330 msec", "dig @ localhost www.example.com www.example.com. 85332 IN A 198.51.100.34 ;; Query time: 1 msec", "nmcli connection modify Example_Connection ipv4.dns 192.0.2.1 nmcli connection modify Example_Connection ipv6.dns 2001:db8:1::1", "yum install radvd", "interface enp1s0 { AdvSendAdvert on; AdvManagedFlag on; AdvOtherConfigFlag on; prefix 2001:db8:0:1::/64 { }; };", "systemctl enable radvd", "systemctl start radvd", "radvdump", "cp /usr/lib/systemd/system/dhcpd.service /etc/systemd/system/", "ExecStart=/usr/sbin/dhcpd -f -cf /etc/dhcp/dhcpd.conf -user dhcpd -group dhcpd --no-pid USDDHCPDARGS enp0s1 enp7s0", "systemctl daemon-reload", "systemctl restart dhcpd.service", "cp /usr/lib/systemd/system/dhcpd6.service /etc/systemd/system/", "ExecStart=/usr/sbin/dhcpd -f -6 -cf /etc/dhcp/dhcpd6.conf -user dhcpd -group dhcpd --no-pid USDDHCPDARGS enp0s1 enp7s0", "systemctl daemon-reload", "systemctl restart dhcpd6.service", "option domain-name \"example.com\"; default-lease-time 86400;", "authoritative;", "subnet 192.0.2.0 netmask 255.255.255.0 { range 192.0.2.20 192.0.2.100; option domain-name-servers 192.0.2.1; option routers 192.0.2.1; option broadcast-address 192.0.2.255; max-lease-time 172800; }", "systemctl enable dhcpd", "systemctl start dhcpd", "option dhcp6.domain-search \"example.com\"; default-lease-time 86400;", "authoritative;", "subnet6 2001:db8:0:1::/64 { range6 2001:db8:0:1::20 2001:db8:0:1::100; option dhcp6.name-servers 2001:db8:0:1::1; max-lease-time 172800; }", "systemctl enable dhcpd6", "systemctl start dhcpd6", "option domain-name \"example.com\"; default-lease-time 86400;", "authoritative;", "shared-network example { option domain-name-servers 192.0.2.1; subnet 192.0.2.0 netmask 255.255.255.0 { range 192.0.2.20 192.0.2.100; option routers 192.0.2.1; } subnet 198.51.100.0 netmask 255.255.255.0 { range 198.51.100.20 198.51.100.100; option routers 198.51.100.1; } }", "subnet 203.0.113.0 netmask 255.255.255.0 { }", "systemctl enable dhcpd", "systemctl start dhcpd", "option dhcp6.domain-search \"example.com\"; default-lease-time 86400;", "authoritative;", "shared-network example { option domain-name-servers 2001:db8:0:1::1:1 subnet6 2001:db8:0:1::1:0/120 { range6 2001:db8:0:1::1:20 2001:db8:0:1::1:100 } subnet6 
2001:db8:0:1::2:0/120 { range6 2001:db8:0:1::2:20 2001:db8:0:1::2:100 } }", "subnet6 2001:db8:0:1::50:0/120 { }", "systemctl enable dhcpd6", "systemctl start dhcpd6", "host server.example.com { hardware ethernet 52:54:00:72:2f:6e; fixed-address 192.0.2.130; }", "systemctl start dhcpd", "host server.example.com { hardware ethernet 52:54:00:72:2f:6e; fixed-address6 2001:db8:0:1::200; }", "systemctl start dhcpd6", "group { option domain-name-servers 192.0.2.1; host server1.example.com { hardware ethernet 52:54:00:72:2f:6e; fixed-address 192.0.2.130; } host server2.example.com { hardware ethernet 52:54:00:1b:f3:cf; fixed-address 192.0.2.140; } }", "systemctl start dhcpd", "group { option dhcp6.domain-search \"example.com\"; host server1.example.com { hardware ethernet 52:54:00:72:2f:6e; fixed-address 2001:db8:0:1::200; } host server2.example.com { hardware ethernet 52:54:00:1b:f3:cf; fixed-address 2001:db8:0:1::ba3; } }", "systemctl start dhcpd6", "systemctl stop dhcpd", "mv /var/lib/dhcpd/dhcpd.leases /var/lib/dhcpd/dhcpd.leases.corrupt", "cp -p /var/lib/dhcpd/dhcpd.leases~ /var/lib/dhcpd/dhcpd.leases", "systemctl start dhcpd", "systemctl stop dhcpd6", "mv /var/lib/dhcpd/dhcpd6.leases /var/lib/dhcpd/dhcpd6.leases.corrupt", "cp -p /var/lib/dhcpd/dhcpd6.leases~ /var/lib/dhcpd/dhcpd6.leases", "systemctl start dhcpd6", "yum install dhcp-relay", "cp /lib/systemd/system/dhcrelay.service /etc/systemd/system/", "ExecStart=/usr/sbin/dhcrelay -d --no-pid -i enp1s0 192.0.2.1", "systemctl daemon-reload", "systemctl enable dhcrelay.service", "systemctl start dhcrelay.service", "yum install dhcp-relay", "cp /lib/systemd/system/dhcrelay.service /etc/systemd/system/dhcrelay6.service", "ExecStart=/usr/sbin/dhcrelay -d --no-pid -l enp1s0 -u enp7s0", "systemctl daemon-reload", "systemctl enable dhcrelay6.service", "systemctl start dhcrelay6.service" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html-single/managing_networking_infrastructure_services/index
Chapter 7. Understanding OpenShift Container Platform development
Chapter 7. Understanding OpenShift Container Platform development To fully leverage the capability of containers when developing and running enterprise-quality applications, ensure your environment is supported by tools that allow containers to be: Created as discrete microservices that can be connected to other containerized, and non-containerized, services. For example, you might want to join your application with a database or attach a monitoring application to it. Resilient, so if a server crashes or needs to go down for maintenance or to be decommissioned, containers can start on another machine. Automated to pick up code changes automatically and then start and deploy new versions of themselves. Scaled up, or replicated, to have more instances serving clients as demand increases and then spun down to fewer instances as demand declines. Run in different ways, depending on the type of application. For example, one application might run once a month to produce a report and then exit. Another application might need to run constantly and be highly available to clients. Managed so you can watch the state of your application and react when something goes wrong. Containers' widespread acceptance, and the resulting requirements for tools and methods to make them enterprise-ready, resulted in many options for them. The rest of this section explains options for assets you can create when you build and deploy containerized Kubernetes applications in OpenShift Container Platform. It also describes which approaches you might use for different kinds of applications and development requirements. 7.1. About developing containerized applications You can approach application development with containers in many ways, and different approaches might be more appropriate for different situations. To illustrate some of this variety, the series of approaches that is presented starts with developing a single container and ultimately deploys that container as a mission-critical application for a large enterprise. These approaches show different tools, formats, and methods that you can employ with containerized application development. This topic describes: Building a simple container and storing it in a registry Creating a Kubernetes manifest and saving it to a Git repository Making an Operator to share your application with others 7.2. Building a simple container You have an idea for an application and you want to containerize it. First you require a tool for building a container, like buildah or docker, and a file that describes what goes in your container, which is typically a Dockerfile . Next, you require a location to push the resulting container image so you can pull it to run anywhere you want it to run. This location is a container registry. Some examples of each of these components are installed by default on most Linux operating systems, except for the Dockerfile, which you provide yourself. The following diagram displays the process of building and pushing an image: Figure 7.1. Create a simple containerized application and push it to a registry If you use a computer that runs Red Hat Enterprise Linux (RHEL) as the operating system, the process of creating a containerized application requires the following steps: Install container build tools: RHEL contains a set of tools that includes podman, buildah, and skopeo that you use to build and manage containers. Create a Dockerfile to combine base image and software: Information about building your container goes into a file that is named Dockerfile .
In that file, you identify the base image you build from, the software packages you install, and the software you copy into the container. You also identify parameter values like network ports that you expose outside the container and volumes that you mount inside the container. Put your Dockerfile and the software you want to containerize in a directory on your RHEL system. Run buildah or docker build: Run the buildah build-using-dockerfile or the docker build command to pull your chosen base image to the local system and create a container image that is stored locally. You can also build container images without a Dockerfile by using buildah. Tag and push to a registry: Add a tag to your new container image that identifies the location of the registry in which you want to store and share your container. Then push that image to the registry by running the podman push or docker push command. Pull and run the image: From any system that has a container client tool, such as podman or docker, run a command that identifies your new image. For example, run the podman run <image_name> or docker run <image_name> command. Here <image_name> is the name of your new container image, which resembles quay.io/myrepo/myapp:latest . The registry might require credentials to push and pull images. For more details on the process of building container images, pushing them to registries, and running them, see Custom image builds with Buildah . 7.2.1. Container build tool options Building and managing containers with buildah, podman, and skopeo results in industry standard container images that include features specifically tuned for deploying containers in OpenShift Container Platform or other Kubernetes environments. These tools are daemonless and can run without root privileges, requiring less overhead to run them. Important Support for Docker Container Engine as a container runtime is deprecated in Kubernetes 1.20 and will be removed in a future release. However, Docker-produced images will continue to work in your cluster with all runtimes, including CRI-O. For more information, see the Kubernetes blog announcement . When you ultimately run your containers in OpenShift Container Platform, you use the CRI-O container engine. CRI-O runs on every worker and control plane machine in an OpenShift Container Platform cluster, but CRI-O is not yet supported as a standalone runtime outside of OpenShift Container Platform. 7.2.2. Base image options The base image you choose to build your application on contains a set of software that resembles a Linux system to your application. When you build your own image, your software is placed into that file system and sees that file system as though it were looking at its operating system. Choosing this base image has major impact on how secure, efficient and upgradeable your container is in the future. Red Hat provides a new set of base images referred to as Red Hat Universal Base Images (UBI). These images are based on Red Hat Enterprise Linux and are similar to base images that Red Hat has offered in the past, with one major difference: they are freely redistributable without a Red Hat subscription. As a result, you can build your application on UBI images without having to worry about how they are shared or the need to create different images for different environments. These UBI images have standard, init, and minimal versions. 
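To make the build-and-push steps in this section concrete, here is a minimal sketch that builds on a UBI base image and pushes to the quay.io/myrepo/myapp repository mentioned earlier. The application file app.sh, the exposed port 8080, and the repository name are assumptions for illustration only; substitute your own content, tag, and registry.
# Dockerfile: start from a freely redistributable UBI base image
FROM registry.access.redhat.com/ubi8/ubi-minimal
# Copy the application into the image and make it executable (app.sh is a hypothetical script)
COPY app.sh /usr/local/bin/app.sh
RUN chmod +x /usr/local/bin/app.sh
# Expose the port the application listens on (assumed to be 8080 here)
EXPOSE 8080
# Command that runs when the container starts
CMD ["/usr/local/bin/app.sh"]
With the Dockerfile and app.sh in the current directory, build, push, and run the image:
# Build the image from the Dockerfile in the current directory and tag it for the registry
buildah build-using-dockerfile -t quay.io/myrepo/myapp:latest .
# Authenticate to the registry, then push the image
podman login quay.io
podman push quay.io/myrepo/myapp:latest
# Pull and run the image from any system that has a container client tool
podman run quay.io/myrepo/myapp:latest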
You can also use the Red Hat Software Collections images as a foundation for applications that rely on specific runtime environments such as Node.js, Perl, or Python. Special versions of some of these runtime base images are referred to as Source-to-Image (S2I) images. With S2I images, you can insert your code into a base image environment that is ready to run that code. S2I images are available for you to use directly from the OpenShift Container Platform web UI. In the Developer perspective, navigate to the +Add view and in the Developer Catalog tile, view all of the available services in the Developer Catalog. Figure 7.2. Choose S2I base images for apps that need specific runtimes 7.2.3. Registry options Container registries are where you store container images so you can share them with others and make them available to the platform where they ultimately run. You can select large, public container registries that offer free accounts or a premium version that offer more storage and special features. You can also install your own registry that can be exclusive to your organization or selectively shared with others. To get Red Hat images and certified partner images, you can draw from the Red Hat Registry. The Red Hat Registry is represented by two locations: registry.access.redhat.com , which is unauthenticated and deprecated, and registry.redhat.io , which requires authentication. You can learn about the Red Hat and partner images in the Red Hat Registry from the Container images section of the Red Hat Ecosystem Catalog . Besides listing Red Hat container images, it also shows extensive information about the contents and quality of those images, including health scores that are based on applied security updates. Large, public registries include Docker Hub and Quay.io . The Quay.io registry is owned and managed by Red Hat. Many of the components used in OpenShift Container Platform are stored in Quay.io, including container images and the Operators that are used to deploy OpenShift Container Platform itself. Quay.io also offers the means of storing other types of content, including Helm charts. If you want your own, private container registry, OpenShift Container Platform itself includes a private container registry that is installed with OpenShift Container Platform and runs on its cluster. Red Hat also offers a private version of the Quay.io registry called Red Hat Quay . Red Hat Quay includes geo replication, Git build triggers, Clair image scanning, and many other features. All of the registries mentioned here can require credentials to download images from those registries. Some of those credentials are presented on a cluster-wide basis from OpenShift Container Platform, while other credentials can be assigned to individuals. 7.3. Creating a Kubernetes manifest for OpenShift Container Platform While the container image is the basic building block for a containerized application, more information is required to manage and deploy that application in a Kubernetes environment such as OpenShift Container Platform. The typical steps after you create an image are to: Understand the different resources you work with in Kubernetes manifests Make some decisions about what kind of an application you are running Gather supporting components Create a manifest and store that manifest in a Git repository so you can store it in a source versioning system, audit it, track it, promote and deploy it to the environment, roll it back to earlier versions, if necessary, and share it with others 7.3.1. 
About Kubernetes pods and services While the container image is the basic unit with docker, the basic units that Kubernetes works with are called pods . Pods represent the next step in building out an application. A pod can contain one or more than one container. The key is that the pod is the single unit that you deploy, scale, and manage. Scalability and namespaces are probably the main items to consider when determining what goes in a pod. For ease of deployment, you might want to deploy a container in a pod and include its own logging and monitoring container in the pod. Later, when you run the pod and need to scale up an additional instance, those other containers are scaled up with it. For namespaces, containers in a pod share the same network interfaces, shared storage volumes, and resource limitations, such as memory and CPU, which makes it easier to manage the contents of the pod as a single unit. Containers in a pod can also communicate with each other by using standard inter-process communications, such as System V semaphores or POSIX shared memory. While individual pods represent a scalable unit in Kubernetes, a service provides a means of grouping together a set of pods to create a complete, stable application that can complete tasks such as load balancing. A service is also more permanent than a pod because the service remains available from the same IP address until you delete it. When the service is in use, it is requested by name and the OpenShift Container Platform cluster resolves that name into the IP addresses and ports where you can reach the pods that compose the service. By their nature, containerized applications are separated from the operating systems where they run and, by extension, their users. Part of your Kubernetes manifest describes how to expose the application to internal and external networks by defining network policies that allow fine-grained control over communication with your containerized applications. To connect incoming requests for HTTP, HTTPS, and other services from outside your cluster to services inside your cluster, you can use an Ingress resource. If your container requires on-disk storage instead of database storage, which might be provided through a service, you can add volumes to your manifests to make that storage available to your pods. You can configure the manifests to create persistent volumes (PVs) or dynamically create volumes that are added to your Pod definitions. After you define a group of pods that compose your application, you can define those pods in Deployment and DeploymentConfig objects. 7.3.2. Application types Next, consider how your application type influences how to run it. Kubernetes defines different types of workloads that are appropriate for different kinds of applications. To determine the appropriate workload for your application, consider if the application is: Meant to run to completion and be done. An example is an application that starts up to produce a report and exits when the report is complete. The application might not run again for a month. Suitable OpenShift Container Platform objects for these types of applications include Job and CronJob objects. Expected to run continuously. For long-running applications, you can write a deployment . Required to be highly available. If your application requires high availability, then you want to size your deployment to have more than one instance. A Deployment or DeploymentConfig object can incorporate a replica set for that type of application.
With replica sets, pods run across multiple nodes to make sure the application is always available, even if a worker goes down. Need to run on every node. Some types of Kubernetes applications are intended to run in the cluster itself on every master or worker node. DNS and monitoring applications are examples of applications that need to run continuously on every node. You can run this type of application as a daemon set . You can also run a daemon set on a subset of nodes, based on node labels. Require life-cycle management. When you want to hand off your application so that others can use it, consider creating an Operator . Operators let you build in intelligence, so they can handle things like backups and upgrades automatically. Coupled with the Operator Lifecycle Manager (OLM), cluster managers can expose Operators to selected namespaces so that users in the cluster can run them. Have identity or numbering requirements. An application might have identity requirements or numbering requirements. For example, you might be required to run exactly three instances of the application and to name the instances 0 , 1 , and 2 . A stateful set is suitable for this application. Stateful sets are most useful for applications that require independent storage, such as databases and zookeeper clusters. 7.3.3. Available supporting components The application you write might need supporting components, like a database or a logging component. To fulfill that need, you might be able to obtain the required component from the following Catalogs that are available in the OpenShift Container Platform web console: OperatorHub, which is available in each OpenShift Container Platform 4.16 cluster. The OperatorHub makes Operators available from Red Hat, certified Red Hat partners, and community members to the cluster operator. The cluster operator can make those Operators available in all or selected namespaces in the cluster, so developers can launch them and configure them with their applications. Templates, which are useful for a one-off type of application, where the lifecycle of a component is not important after it is installed. A template provides an easy way to get started developing a Kubernetes application with minimal overhead. A template can be a list of resource definitions, which could be Deployment , Service , Route , or other objects. If you want to change names or resources, you can set these values as parameters in the template. You can configure the supporting Operators and templates to the specific needs of your development team and then make them available in the namespaces in which your developers work. Many people add shared templates to the openshift namespace because it is accessible from all other namespaces. 7.3.4. Applying the manifest Kubernetes manifests let you create a more complete picture of the components that make up your Kubernetes applications. You write these manifests as YAML files and deploy them by applying them to the cluster, for example, by running the oc apply command. A minimal sketch of this workflow appears at the end of this chapter. 7.3.5. Next steps At this point, consider ways to automate your container development process. Ideally, you have some sort of CI pipeline that builds the images and pushes them to a registry. In particular, a GitOps pipeline integrates your container development with the Git repositories that you use to store the software that is required to build your applications. The workflow to this point might look like: Day 1: You write some YAML.
You then run the oc apply command to apply that YAML to the cluster and test that it works. Day 2: You put your YAML container configuration file into your own Git repository. From there, people who want to install that app, or help you improve it, can pull down the YAML and apply it to their cluster to run the app. Day 3: Consider writing an Operator for your application. 7.4. Develop for Operators Packaging and deploying your application as an Operator might be preferred if you make your application available for others to run. As noted earlier, Operators add a lifecycle component to your application that acknowledges that the job of running an application is not complete as soon as it is installed. When you create an application as an Operator, you can build in your own knowledge of how to run and maintain the application. You can build in features for upgrading the application, backing it up, scaling it, or keeping track of its state. If you configure the application correctly, maintenance tasks, like updating the Operator, can happen automatically and invisibly to the Operator's users. An example of a useful Operator is one that is set up to automatically back up data at particular times. Having an Operator manage an application's backup at set times can save a system administrator from remembering to do it. Any application maintenance that has traditionally been completed manually, like backing up data or rotating certificates, can be completed automatically with an Operator.
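Returning to the manifest workflow in sections 7.3.4 and 7.3.5, the following is a minimal sketch of creating, exporting, and applying a deployment with the oc client. The project name, image path, object names, and port are placeholder examples rather than values from this document.

# Create a Deployment from a pushed image and expose it as a Service
oc new-project myapp-dev
oc create deployment myapp --image=quay.io/myrepo/myapp:latest
oc expose deployment myapp --port=8080
oc scale deployment myapp --replicas=3

# Alternatively, capture the same object as YAML, store the file in Git,
# and apply it to any cluster
oc create deployment myapp --image=quay.io/myrepo/myapp:latest --dry-run=client -o yaml > myapp.yaml
oc apply -f myapp.yaml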
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/architecture/understanding-development
Chapter 9. Tuning the Jakarta Enterprise Beans 3 Subsystem
Chapter 9. Tuning the Jakarta Enterprise Beans 3 Subsystem For tips on optimizing performance for the ejb3 subsystem, see the Jakarta Enterprise Beans Subsystem Tuning section of the Performance Tuning Guide .
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/developing_jakarta_enterprise_beans_applications/tuning_jakarta_enterprise_beans_subsystem
2.5. Tuning the Index
2.5. Tuning the Index 2.5.1. Near-Realtime Index Manager By default, each update is immediately flushed into the index. In order to achieve better throughput, the updates can be batched. However, this can result in a lag between the update and query -- the query can see outdated data. If this is acceptable, you can use the Near-Realtime Index Manager by setting the following. 2.5.2. Tuning Infinispan Directory Lucene directory uses three caches to store the index: Data cache Metadata cache Locking cache Configuration for these caches can be set explicitly, specifying the cache names as in the example below, and configuring those caches as usual. All of these caches must be clustered unless Infinispan Directory is used in local mode. Example 2.7. Tuning the Infinispan Directory 2.5.3. Per-Index Configuration The indexing properties in examples above apply for all indices - this is because we use the default. prefix for each property. To specify different configuration for each index, replace default with the index name. By default, this is the full class name of the indexed object, however you can override the index name in the @Indexed annotation.
[ "<property name=\"default.indexmanager\" value=\"near-real-time\" />", "<namedCache name=\"indexedCache\"> <clustering mode=\"DIST\"/> <indexing enabled=\"true\"> <properties> <property name=\"default.indexmanager\" value=\"org.infinispan.query.indexmanager.InfinispanIndexManager\" /> <property name=\"default.metadata_cachename\" value=\"lucene_metadata_repl\"/> <property name=\"default.data_cachename\" value=\"lucene_data_dist\"/> <property name=\"default.locking_cachename\" value=\"lucene_locking_repl\"/> </properties> </indexing> </namedCache> <namedCache name=\"lucene_metadata_repl\"> <clustering mode=\"REPL\"/> </namedCache> <namedCache name=\"lucene_data_dist\"> <clustering mode=\"DIST\"/> </namedCache> <namedCache name=\"lucene_locking_repl\"> <clustering mode=\"REPL\"/> </namedCache>" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/infinispan_query_guide/sect-Tuning_the_Index
9.5. Network Devices
9.5. Network Devices Red Hat Virtualization is able to expose three different types of network interface controller to guests. The type of network interface controller to expose to a guest is chosen when the guest is created but is changeable from the Red Hat Virtualization Manager. The e1000 network interface controller exposes a virtualized Intel PRO/1000 (e1000) to guests. The virtio network interface controller exposes a para-virtualized network device to guests. The rtl8139 network interface controller exposes a virtualized Realtek Semiconductor Corp RTL8139 to guests. Multiple network interface controllers are permitted per guest. Each controller added takes up an available PCI slot on the guest.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/network_devices
Configuring and managing Identity Management
Configuring and managing Identity Management Red Hat Enterprise Linux 8 Logging in to IdM and managing services, users, hosts, groups, access control rules, and certificates. Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/index
Chapter 2. Accessing the Multicloud Object Gateway with your applications
Chapter 2. Accessing the Multicloud Object Gateway with your applications You can access the object service with any application targeting AWS S3 or code that uses AWS S3 Software Development Kit (SDK). Applications need to specify the Multicloud Object Gateway (MCG) endpoint, an access key, and a secret access key. You can use your terminal or the MCG CLI to retrieve this information. For information on accessing the RADOS Object Gateway (RGW) S3 endpoint, see Accessing the RADOS Object Gateway S3 endpoint . Prerequisites A running OpenShift Data Foundation Platform. Download the MCG command-line interface for easier management. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found at Download RedHat OpenShift Data Foundation page . Note Choose the correct Product Variant according to your architecture. You can access the relevant endpoint, access key, and secret access key in two ways: Section 2.1, "Accessing the Multicloud Object Gateway from the terminal" Section 2.2, "Accessing the Multicloud Object Gateway from the MCG command-line interface" For example: Accessing the MCG bucket(s) using the virtual-hosted style If the client application tries to access https:// <bucket-name> .s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com <bucket-name> is the name of the MCG bucket For example, https://mcg-test-bucket.s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com A DNS entry is needed for mcg-test-bucket.s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com to point to the S3 Service. Important Ensure that you have a DNS entry in order to point the client application to the MCG bucket(s) using the virtual-hosted style. 2.1. Accessing the Multicloud Object Gateway from the terminal Procedure Run the describe command to view information about the Multicloud Object Gateway (MCG) endpoint, including its access key ( AWS_ACCESS_KEY_ID value) and secret access key ( AWS_SECRET_ACCESS_KEY value). The output will look similar to the following: 1 access key ( AWS_ACCESS_KEY_ID value) 2 secret access key ( AWS_SECRET_ACCESS_KEY value) 3 MCG endpoint Note The output from the oc describe noobaa command lists the internal and external DNS names that are available. When using the internal DNS, the traffic is free. The external DNS uses Load Balancing to process the traffic, and therefore has a cost per hour. 2.2. Accessing the Multicloud Object Gateway from the MCG command-line interface Prerequisites Download the MCG command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Procedure Run the status command to access the endpoint, access key, and secret access key: The output will look similar to the following: 1 endpoint 2 access key 3 secret access key You now have the relevant endpoint, access key, and secret access key in order to connect to your applications. For example: If AWS S3 CLI is the application, the following command will list the buckets in OpenShift Data Foundation:
[ "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "oc describe noobaa -n openshift-storage", "Name: noobaa Namespace: openshift-storage Labels: <none> Annotations: <none> API Version: noobaa.io/v1alpha1 Kind: NooBaa Metadata: Creation Timestamp: 2019-07-29T16:22:06Z Generation: 1 Resource Version: 6718822 Self Link: /apis/noobaa.io/v1alpha1/namespaces/openshift-storage/noobaas/noobaa UID: 019cfb4a-b21d-11e9-9a02-06c8de012f9e Spec: Status: Accounts: Admin: Secret Ref: Name: noobaa-admin Namespace: openshift-storage Actual Image: noobaa/noobaa-core:4.0 Observed Generation: 1 Phase: Ready Readme: Welcome to NooBaa! ----------------- Welcome to NooBaa! ----------------- NooBaa Core Version: NooBaa Operator Version: Lets get started: 1. Connect to Management console: Read your mgmt console login information (email & password) from secret: \"noobaa-admin\". kubectl get secret noobaa-admin -n openshift-storage -o json | jq '.data|map_values(@base64d)' Open the management console service - take External IP/DNS or Node Port or use port forwarding: kubectl port-forward -n openshift-storage service/noobaa-mgmt 11443:443 & open https://localhost:11443 2. Test S3 client: kubectl port-forward -n openshift-storage service/s3 10443:443 & 1 NOOBAA_ACCESS_KEY=USD(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_ACCESS_KEY_ID|@base64d') 2 NOOBAA_SECRET_KEY=USD(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_SECRET_ACCESS_KEY|@base64d') alias s3='AWS_ACCESS_KEY_ID=USDNOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=USDNOOBAA_SECRET_KEY aws --endpoint https://localhost:10443 --no-verify-ssl s3' s3 ls Services: Service Mgmt: External DNS: https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443 Internal DNS: https://noobaa-mgmt.openshift-storage.svc:443 Internal IP: https://172.30.235.12:443 Node Ports: https://10.0.142.103:31385 Pod Ports: https://10.131.0.19:8443 serviceS3: External DNS: 3 https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443 Internal DNS: https://s3.openshift-storage.svc:443 Internal IP: https://172.30.86.41:443 Node Ports: https://10.0.142.103:31011 Pod Ports: https://10.131.0.19:6443", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa status -n openshift-storage", "INFO[0000] Namespace: openshift-storage INFO[0000] INFO[0000] CRD Status: INFO[0003] ✅ Exists: CustomResourceDefinition \"noobaas.noobaa.io\" INFO[0003] ✅ Exists: CustomResourceDefinition \"backingstores.noobaa.io\" INFO[0003] ✅ Exists: CustomResourceDefinition \"bucketclasses.noobaa.io\" INFO[0004] ✅ Exists: CustomResourceDefinition \"objectbucketclaims.objectbucket.io\" INFO[0004] ✅ Exists: CustomResourceDefinition \"objectbuckets.objectbucket.io\" INFO[0004] INFO[0004] Operator Status: INFO[0004] ✅ Exists: Namespace \"openshift-storage\" INFO[0004] ✅ Exists: ServiceAccount \"noobaa\" INFO[0005] ✅ Exists: Role \"ocs-operator.v0.0.271-6g45f\" INFO[0005] ✅ Exists: RoleBinding \"ocs-operator.v0.0.271-6g45f-noobaa-f9vpj\" 
INFO[0006] ✅ Exists: ClusterRole \"ocs-operator.v0.0.271-fjhgh\" INFO[0006] ✅ Exists: ClusterRoleBinding \"ocs-operator.v0.0.271-fjhgh-noobaa-pdxn5\" INFO[0006] ✅ Exists: Deployment \"noobaa-operator\" INFO[0006] INFO[0006] System Status: INFO[0007] ✅ Exists: NooBaa \"noobaa\" INFO[0007] ✅ Exists: StatefulSet \"noobaa-core\" INFO[0007] ✅ Exists: Service \"noobaa-mgmt\" INFO[0008] ✅ Exists: Service \"s3\" INFO[0008] ✅ Exists: Secret \"noobaa-server\" INFO[0008] ✅ Exists: Secret \"noobaa-operator\" INFO[0008] ✅ Exists: Secret \"noobaa-admin\" INFO[0009] ✅ Exists: StorageClass \"openshift-storage.noobaa.io\" INFO[0009] ✅ Exists: BucketClass \"noobaa-default-bucket-class\" INFO[0009] ✅ (Optional) Exists: BackingStore \"noobaa-default-backing-store\" INFO[0010] ✅ (Optional) Exists: CredentialsRequest \"noobaa-cloud-creds\" INFO[0010] ✅ (Optional) Exists: PrometheusRule \"noobaa-prometheus-rules\" INFO[0010] ✅ (Optional) Exists: ServiceMonitor \"noobaa-service-monitor\" INFO[0011] ✅ (Optional) Exists: Route \"noobaa-mgmt\" INFO[0011] ✅ (Optional) Exists: Route \"s3\" INFO[0011] ✅ Exists: PersistentVolumeClaim \"db-noobaa-core-0\" INFO[0011] ✅ System Phase is \"Ready\" INFO[0011] ✅ Exists: \"noobaa-admin\" #------------------# #- Mgmt Addresses -# #------------------# ExternalDNS : [https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443] ExternalIP : [] NodePorts : [https://10.0.142.103:31385] InternalDNS : [https://noobaa-mgmt.openshift-storage.svc:443] InternalIP : [https://172.30.235.12:443] PodPorts : [https://10.131.0.19:8443] #--------------------# #- Mgmt Credentials -# #--------------------# email : [email protected] password : HKLbH1rSuVU0I/souIkSiA== #----------------# #- S3 Addresses -# #----------------# 1 ExternalDNS : [https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443] ExternalIP : [] NodePorts : [https://10.0.142.103:31011] InternalDNS : [https://s3.openshift-storage.svc:443] InternalIP : [https://172.30.86.41:443] PodPorts : [https://10.131.0.19:6443] #------------------# #- S3 Credentials -# #------------------# 2 AWS_ACCESS_KEY_ID : jVmAsu9FsvRHYmfjTiHV 3 AWS_SECRET_ACCESS_KEY : E//420VNedJfATvVSmDz6FMtsSAzuBv6z180PT5c #------------------# #- Backing Stores -# #------------------# NAME TYPE TARGET-BUCKET PHASE AGE noobaa-default-backing-store aws-s3 noobaa-backing-store-15dc896d-7fe0-4bed-9349-5942211b93c9 Ready 141h35m32s #------------------# #- Bucket Classes -# #------------------# NAME PLACEMENT PHASE AGE noobaa-default-bucket-class {Tiers:[{Placement: BackingStores:[noobaa-default-backing-store]}]} Ready 141h35m33s #-----------------# #- Bucket Claims -# #-----------------# No OBC's found.", "AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID> AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY> aws --endpoint <ENDPOINT> --no-verify-ssl s3 ls" ]
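As an additional usage sketch, assuming the endpoint, access key, and secret access key retrieved above, you can create a bucket and upload an object with the AWS S3 CLI. The bucket name and file are hypothetical examples.

# Export the credentials retrieved from the oc describe or noobaa status output
export AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID>
export AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY>

# Create a bucket, upload an object, and list it through the MCG S3 endpoint
aws --endpoint <ENDPOINT> --no-verify-ssl s3 mb s3://my-test-bucket
aws --endpoint <ENDPOINT> --no-verify-ssl s3 cp ./example.txt s3://my-test-bucket/
aws --endpoint <ENDPOINT> --no-verify-ssl s3 ls s3://my-test-bucket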
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/managing_hybrid_and_multicloud_resources/accessing-the-multicloud-object-gateway-with-your-applications_rhodf
probe::vm.oom_kill
probe::vm.oom_kill Name probe::vm.oom_kill - Fires when a thread is selected for termination by the OOM killer Synopsis vm.oom_kill Values name name of the probe point task the task being killed Context The process that tried to consume excessive memory, and thus triggered the OOM.
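A minimal usage sketch, assuming SystemTap is installed and the script is run with root privileges; the output format is illustrative only.

# Print the name and PID of each task selected by the OOM killer
stap -e 'probe vm.oom_kill { printf("OOM killer selected %s (pid %d)\n", task_execname(task), task_pid(task)) }'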
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-vm-oom-kill
Chapter 5. Installation
Chapter 5. Installation Troubleshoot issues with your installation. 5.1. Issue - Cannot locate certain packages that come bundled with the Ansible Automation Platform installer You cannot locate certain packages that come bundled with the Ansible Automation Platform installer, or you are seeing a "Repositories disabled by configuration" message. To resolve this issue, enable the repository by using the subscription-manager command in the command line. For more information about resolving this issue, see the Troubleshooting section of Attaching your Red Hat Ansible Automation Platform subscription in the Red Hat Ansible Automation Platform Planning Guide.
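For example, a minimal sketch of enabling a disabled repository from the command line follows. The repository name shown is an example only; use the repository that matches your subscription, Ansible Automation Platform version, and RHEL release.

# List repositories known to the system, then enable the one that is disabled
subscription-manager repos --list | grep -i ansible
subscription-manager repos --enable=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms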
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/troubleshooting_ansible_automation_platform/troubleshoot-installation
function::pexecname
function::pexecname Name function::pexecname - Returns the execname of a target process's parent process Synopsis Arguments None Description This function returns the execname of a target process's parent process.
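A brief, illustrative sketch; the probe point chosen here is an assumption for demonstration and is not part of this reference entry.

# Report which parent process launched each newly executed program
stap -e 'probe syscall.execve { printf("%s started by parent %s\n", execname(), pexecname()) }'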
[ "pexecname:string()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-pexecname
Chapter 15. Starting a remote installation by using VNC
Chapter 15. Starting a remote installation by using VNC 15.1. Performing a remote RHEL installation in VNC Direct mode Use this procedure to perform a remote RHEL installation in VNC Direct mode. Direct mode expects the VNC viewer to initiate a connection to the target system that is being installed with RHEL. In this procedure, the system with the VNC viewer is called the remote system. You are prompted by the RHEL installation program to initiate the connection from the VNC viewer on the remote system to the target system. Note This procedure uses TigerVNC as the VNC viewer. Specific instructions for other viewers might differ, but the general principles apply. Prerequisites You have installed a VNC viewer on a remote system as a root user. You have set up a network boot server and booted the installation on the target system. Procedure From the RHEL boot menu on the target system, press the Tab key on your keyboard to edit the boot options. Append the inst.vnc option to the end of the command line. If you want to restrict VNC access to the system that is being installed, add the inst.vncpassword=PASSWORD boot option to the end of the command line. Replace PASSWORD with the password you want to use for the installation. The VNC password must be between 6 and 8 characters long. This is a temporary password for the inst.vncpassword= option and it should not be an existing or root password. Press Enter to start the installation. The target system initializes the installation program and starts the necessary services. When the system is ready, a message is displayed providing the IP address and port number of the system. Open the VNC viewer on the remote system. Enter the IP address and the port number into the VNC server field. Click Connect . Enter the VNC password and click OK . A new window opens with the VNC connection established, displaying the RHEL installation menu. From this window, you can install RHEL on the target system using the graphical user interface. 15.2. Performing a remote RHEL installation in VNC Connect mode Use this procedure to perform a remote RHEL installation in VNC Connect mode. In Connect mode, the target system that is being installed with RHEL initiates a connection to the VNC viewer that is installed on another system. In this procedure, the system with the VNC viewer is called the remote system. Note This procedure uses TigerVNC as the VNC viewer. Specific instructions for other viewers might differ, but the general principles apply. Prerequisites You have installed a VNC viewer on a remote system as a root user. You have set up a network boot server to start the installation on the target system. You have configured the target system to use the boot options for a VNC Connect installation. You have verified that the remote system with the VNC viewer is configured to accept an incoming connection on the required port. Verification is dependent on your network and system configuration. For more information, see Security hardening and Securing networks . Procedure Start the VNC viewer on the remote system in listening mode by running the following command: Replace PORT with the port number used for the connection. The terminal displays a message indicating that it is waiting for an incoming connection from the target system. Boot the target system from the network. From the RHEL boot menu on the target system, press the Tab key on your keyboard to edit the boot options. Append the inst.vnc inst.vncconnect=HOST:PORT option to the end of the command line.
Replace HOST with the IP address of the remote system that is running the listening VNC viewer, and PORT with the port number that the VNC viewer is listening on. Press Enter to start the installation. The system initializes the installation program and starts the necessary services. When the initialization process is finished, the installation program attempts to connect to the IP address and port provided. When the connection is successful, a new window opens with the VNC connection established, displaying the RHEL installation menu. From this window, you can install RHEL on the target system using the graphical user interface.
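As an end-to-end sketch of the Connect mode workflow described above, with example values only (192.0.2.1 and port 5500 are placeholders):

# On the remote system, start the viewer in listening mode on port 5500
vncviewer -listen 5500

# On the target system, append these options to the boot command line,
# where 192.0.2.1 is the address of the remote system running the viewer
inst.vnc inst.vncconnect=192.0.2.1:5500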
[ "vncviewer -listen PORT", "TigerVNC Viewer 64-bit v1.8.0 Built on: 2017-10-12 09:20 Copyright (C) 1999-2017 TigerVNC Team and many others (see README.txt) See http://www.tigervnc.org for information about TigerVNC. Thu Jun 27 11:30:57 2019 main: Listening on port 5500" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_over_the_network/starting-a-remote-installation-by-using-vnc_rhel-installer
Chapter 15. ReplicationController [v1]
Chapter 15. ReplicationController [v1] Description ReplicationController represents the configuration of a replication controller. Type object 15.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta If the Labels of a ReplicationController are empty, they are defaulted to be the same as the Pod(s) that the replication controller manages. Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ReplicationControllerSpec is the specification of a replication controller. status object ReplicationControllerStatus represents the current status of a replication controller. 15.1.1. .spec Description ReplicationControllerSpec is the specification of a replication controller. Type object Property Type Description minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its container crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) replicas integer Replicas is the number of desired replicas. This is a pointer to distinguish between explicit zero and unspecified. Defaults to 1. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#what-is-a-replicationcontroller selector object (string) Selector is a label query over pods that should match the Replicas count. If Selector is empty, it is defaulted to the labels present on the Pod template. Label keys and values that must match in order to be controlled by this replication controller, if empty defaulted to labels on Pod template. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors template object PodTemplateSpec describes the data a pod should have when created from a template 15.1.2. .spec.template Description PodTemplateSpec describes the data a pod should have when created from a template Type object Property Type Description metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PodSpec is a description of a pod. 15.1.3. .spec.template.spec Description PodSpec is a description of a pod. Type object Required containers Property Type Description activeDeadlineSeconds integer Optional duration in seconds the pod may be active on the node relative to StartTime before the system will actively try to mark it failed and kill associated containers. Value must be a positive integer. affinity object Affinity is a group of affinity scheduling rules. automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted. containers array List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. 
Cannot be updated. containers[] object A single application container that you want to run within a pod. dnsConfig object PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy. dnsPolicy string Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'. Possible enum values: - "ClusterFirst" indicates that the pod should use cluster DNS first unless hostNetwork is true, if it is available, then fall back on the default (as determined by kubelet) DNS settings. - "ClusterFirstWithHostNet" indicates that the pod should use cluster DNS first, if it is available, then fall back on the default (as determined by kubelet) DNS settings. - "Default" indicates that the pod should use the default (as determined by kubelet) DNS settings. - "None" indicates that the pod should use empty DNS settings. DNS parameters such as nameservers and search paths should be defined via DNSConfig. enableServiceLinks boolean EnableServiceLinks indicates whether information about services should be injected into pod's environment variables, matching the syntax of Docker links. Optional: Defaults to true. ephemeralContainers array List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. ephemeralContainers[] object An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. hostAliases array HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. This is only valid for non-hostNetwork pods. hostAliases[] object HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. hostIPC boolean Use the host's ipc namespace. Optional: Default to false. hostNetwork boolean Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false. hostPID boolean Use the host's pid namespace. Optional: Default to false. hostUsers boolean Use the host's user namespace. Optional: Default to true. If set to true or not present, the pod will be run in the host user namespace, useful for when the pod needs a feature only available to the host user namespace, such as loading a kernel module with CAP_SYS_MODULE. When set to false, a new userns is created for the pod. Setting false is useful for mitigating container breakout vulnerabilities even allowing users to run their containers as root without actually having root privileges on the host. 
This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature. hostname string Specifies the hostname of the Pod If not specified, the pod's hostname will be set to a system-defined value. imagePullSecrets array ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. initContainers array List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ initContainers[] object A single application container that you want to run within a pod. nodeName string NodeName is a request to schedule this pod onto a specific node. If it is non-empty, the scheduler simply schedules this pod onto that node, assuming that it fits resource requirements. nodeSelector object (string) NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ os object PodOS defines the OS parameters of a pod. overhead object (Quantity) Overhead represents the resource overhead associated with running a pod for a given RuntimeClass. This field will be autopopulated at admission time by the RuntimeClass admission controller. If the RuntimeClass admission controller is enabled, overhead must not be set in Pod create requests. The RuntimeClass admission controller will reject Pod create requests which have the overhead already set. If RuntimeClass is configured and selected in the PodSpec, Overhead will be set to the value defined in the corresponding RuntimeClass, otherwise it will remain unset and treated as zero. More info: https://git.k8s.io/enhancements/keps/sig-node/688-pod-overhead/README.md preemptionPolicy string PreemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset. priority integer The priority value. Various system components use this field to find the priority of the pod. When Priority Admission Controller is enabled, it prevents users from setting this field. The admission controller populates this field from PriorityClassName. The higher the value, the higher the priority. priorityClassName string If specified, indicates the pod's priority. 
"system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. readinessGates array If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True" More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates readinessGates[] object PodReadinessGate contains the reference to a pod condition restartPolicy string Restart policy for all containers within the pod. One of Always, OnFailure, Never. Default to Always. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy Possible enum values: - "Always" - "Never" - "OnFailure" runtimeClassName string RuntimeClassName refers to a RuntimeClass object in the node.k8s.io group, which should be used to run this pod. If no RuntimeClass resource matches the named class, the pod will not be run. If unset or empty, the "legacy" RuntimeClass will be used, which is an implicit class with an empty definition that uses the default runtime handler. More info: https://git.k8s.io/enhancements/keps/sig-node/585-runtime-class schedulerName string If specified, the pod will be dispatched by specified scheduler. If not specified, the pod will be dispatched by default scheduler. securityContext object PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext. serviceAccount string DeprecatedServiceAccount is a depreciated alias for ServiceAccountName. Deprecated: Use serviceAccountName instead. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to use to run this pod. More info: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ setHostnameAsFQDN boolean If true the pod's hostname will be configured as the pod's FQDN, rather than the leaf name (the default). In Linux containers, this means setting the FQDN in the hostname field of the kernel (the nodename field of struct utsname). In Windows containers, this means setting the registry value of hostname for the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters to FQDN. If a pod does not have FQDN, this has no effect. Default to false. shareProcessNamespace boolean Share a single process namespace between all of the containers in a pod. When this is set containers will be able to view and signal processes from other containers in the same pod, and the first process in each container will not be assigned PID 1. HostPID and ShareProcessNamespace cannot both be set. Optional: Default to false. subdomain string If specified, the fully qualified Pod hostname will be "<hostname>.<subdomain>.<pod namespace>.svc.<cluster domain>". If not specified, the pod will not have a domainname at all. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully. May be decreased in delete request. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). 
If this value is nil, the default grace period will be used instead. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds. tolerations array If specified, the pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. topologySpreadConstraints array TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. topologySpreadConstraints[] object TopologySpreadConstraint specifies how to spread matching pods among the given topology. volumes array List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod. 15.1.4. .spec.template.spec.affinity Description Affinity is a group of affinity scheduling rules. Type object Property Type Description nodeAffinity object Node affinity is a group of node affinity scheduling rules. podAffinity object Pod affinity is a group of inter pod affinity scheduling rules. podAntiAffinity object Pod anti affinity is a group of inter pod anti affinity scheduling rules. 15.1.5. .spec.template.spec.affinity.nodeAffinity Description Node affinity is a group of node affinity scheduling rules. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. 15.1.6. .spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 15.1.7. .spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required weight preference Property Type Description preference object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 15.1.8. .spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 15.1.9. .spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 15.1.10. .spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 15.1.11. .spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 15.1.12. .spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. 
operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 15.1.13. .spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 15.1.14. .spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 15.1.15. .spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 15.1.16. .spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 15.1.17. .spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 15.1.18. 
.spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 15.1.19. .spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 15.1.20. .spec.template.spec.affinity.podAffinity Description Pod affinity is a group of inter pod affinity scheduling rules. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 15.1.21. .spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 15.1.22. .spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required weight podAffinityTerm Property Type Description podAffinityTerm object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 15.1.23. .spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 15.1.24. .spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 15.1.25. 
.spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 15.1.26. .spec.template.spec.affinity.podAntiAffinity Description Pod anti affinity is a group of inter pod anti affinity scheduling rules. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. 
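As an illustrative sketch only (the label keys and values are hypothetical, and the surrounding workload fields are omitted), the following fragment combines a preferred pod affinity rule with a required pod anti-affinity rule using the fields described in this and the surrounding sections:

spec:
  template:
    spec:
      affinity:
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 50                          # must be in the range 1-100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: example-cache            # hypothetical label
              topologyKey: topology.kubernetes.io/zone
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: example-web                # hypothetical label
            topologyKey: kubernetes.io/hostname # spread matching pods across nodes

The preferred rule nudges the scheduler toward zones that already run pods labeled app=example-cache, while the required anti-affinity rule refuses to co-locate two pods labeled app=example-web on the same node.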
requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 15.1.27. .spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 15.1.28. .spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required weight podAffinityTerm Property Type Description podAffinityTerm object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 15.1.29. .spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". 
topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 15.1.30. .spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 15.1.31. .spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 15.1.32. .spec.template.spec.containers Description List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated. Type array 15.1.33. .spec.template.spec.containers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. 
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - "Always" means that kubelet always attempts to pull the latest image. Container will fail If the pull fails. - "IfNotPresent" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - "Never" means that kubelet never pulls an image, but only uses a local image. Container will fail if the image isn't present lifecycle object Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. livenessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. 
readinessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. resources object ResourceRequirements describes the compute resource requirements. securityContext object SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. startupProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - "FallbackToLogsOnError" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - "File" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated.
15.1.34. .spec.template.spec.containers[].env Description List of environment variables to set in the container. Cannot be updated. Type array
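The container-level fields above, together with the env list, can be sketched as follows. This is illustrative only: the container name, image reference, port, volume name, and variable names are hypothetical placeholders, and the valueFrom variants are detailed in the sections that follow.

spec:
  template:
    spec:
      containers:
      - name: example-app
        image: registry.example.com/example/app:1.0   # hypothetical image reference
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        env:
        - name: LOG_LEVEL                             # plain value
          value: info
        - name: POD_NAME                              # resolved from the pod's own metadata
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: data                                  # must match a volume declared in the pod spec
          mountPath: /var/lib/example
          readOnly: true
15.1.35.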
.spec.template.spec.containers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object EnvVarSource represents a source for the value of an EnvVar.
15.1.36. .spec.template.spec.containers[].env[].valueFrom Description EnvVarSource represents a source for the value of an EnvVar. Type object Property Type Description configMapKeyRef object Selects a key from a ConfigMap. fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format secretKeyRef object SecretKeySelector selects a key of a Secret.
15.1.37. .spec.template.spec.containers[].env[].valueFrom.configMapKeyRef Description Selects a key from a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined
15.1.38. .spec.template.spec.containers[].env[].valueFrom.fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version.
15.1.39. .spec.template.spec.containers[].env[].valueFrom.resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select
15.1.40. .spec.template.spec.containers[].env[].valueFrom.secretKeyRef Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined
15.1.41. .spec.template.spec.containers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array
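For the envFrom list, whose source types are detailed in the next sections, a minimal hedged sketch might look like this (the container name, image reference, ConfigMap name, and Secret name are hypothetical):

spec:
  template:
    spec:
      containers:
      - name: example-app
        image: registry.example.com/example/app:1.0   # hypothetical image reference
        envFrom:
        - prefix: CFG_                                # optional prefix added to every imported key
          configMapRef:
            name: example-config                      # hypothetical ConfigMap
        - secretRef:
            name: example-credentials                 # hypothetical Secret
            optional: true                            # do not fail if the Secret is absent

Keys that appear in both sources are resolved in favor of the last source listed, and any key defined directly under env takes precedence over either.
15.1.42.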
.spec.template.spec.containers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. 15.1.43. .spec.template.spec.containers[].envFrom[].configMapRef Description ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 15.1.44. .spec.template.spec.containers[].envFrom[].secretRef Description SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined 15.1.45. .spec.template.spec.containers[].lifecycle Description Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Type object Property Type Description postStart object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. preStop object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. 15.1.46. .spec.template.spec.containers[].lifecycle.postStart Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 15.1.47. .spec.template.spec.containers[].lifecycle.postStart.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.48. 
.spec.template.spec.containers[].lifecycle.postStart.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.49. .spec.template.spec.containers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.50. .spec.template.spec.containers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.51. .spec.template.spec.containers[].lifecycle.postStart.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.52. .spec.template.spec.containers[].lifecycle.preStop Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 15.1.53. .spec.template.spec.containers[].lifecycle.preStop.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.54. .spec.template.spec.containers[].lifecycle.preStop.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. 
Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https://
15.1.55. .spec.template.spec.containers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array
15.1.56. .spec.template.spec.containers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value
15.1.57. .spec.template.spec.containers[].lifecycle.preStop.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
15.1.58. .spec.template.spec.containers[].livenessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
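The probe fields documented here are shared by livenessProbe, readinessProbe, and startupProbe. As an illustrative sketch only (the container name, image reference, port name, and path are hypothetical), a container might declare:

spec:
  template:
    spec:
      containers:
      - name: example-app
        image: registry.example.com/example/app:1.0   # hypothetical image reference
        livenessProbe:
          httpGet:
            path: /healthz                            # hypothetical health endpoint
            port: http                                # name of a declared containerPort
            scheme: HTTP
          initialDelaySeconds: 15
          periodSeconds: 10
          failureThreshold: 3
        readinessProbe:
          tcpSocket:
            port: 8080
          periodSeconds: 5
          timeoutSeconds: 2

Each probe specifies exactly one action (exec, httpGet, tcpSocket, or grpc), and successThreshold must remain 1 for liveness and startup probes.
15.1.59.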
.spec.template.spec.containers[].livenessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.60. .spec.template.spec.containers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 15.1.61. .spec.template.spec.containers[].livenessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.62. .spec.template.spec.containers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.63. .spec.template.spec.containers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.64. .spec.template.spec.containers[].livenessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.65. .spec.template.spec.containers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 15.1.66. 
.spec.template.spec.containers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 15.1.67. .spec.template.spec.containers[].readinessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 15.1.68. .spec.template.spec.containers[].readinessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. 
The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.69. .spec.template.spec.containers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 15.1.70. .spec.template.spec.containers[].readinessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.71. .spec.template.spec.containers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.72. .spec.template.spec.containers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.73. .spec.template.spec.containers[].readinessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.74. .spec.template.spec.containers[].resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 15.1.75. .spec.template.spec.containers[].securityContext Description SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. 
When both are set, the values in SecurityContext take precedence. Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object Adds and removes POSIX capabilities from running containers. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials.
15.1.76. .spec.template.spec.containers[].securityContext.capabilities Description Adds and removes POSIX capabilities from running containers. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities
15.1.77. .spec.template.spec.containers[].securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container.
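Bringing several of these fields together, the following is an illustrative sketch of a restrictive container securityContext; the container name and image are hypothetical, and the values shown are common hardening choices rather than requirements from this reference.

spec:
  template:
    spec:
      containers:
      - name: example-app
        image: registry.example.com/example/app:1.0   # hypothetical image reference
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          capabilities:
            drop:
            - ALL                                     # remove all POSIX capabilities
          seccompProfile:
            type: RuntimeDefault                      # use the container runtime's default profile

Fields such as runAsUser and runAsGroup may also be set here; values set in the container SecurityContext take precedence over the same fields in the pod-level PodSecurityContext.
15.1.78.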
.spec.template.spec.containers[].securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 15.1.79. .spec.template.spec.containers[].securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 15.1.80. .spec.template.spec.containers[].startupProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. 
successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 15.1.81. .spec.template.spec.containers[].startupProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.82. .spec.template.spec.containers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 15.1.83. .spec.template.spec.containers[].startupProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.84. .spec.template.spec.containers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.85. 
.spec.template.spec.containers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.86. .spec.template.spec.containers[].startupProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.87. .spec.template.spec.containers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 15.1.88. .spec.template.spec.containers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required name devicePath Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 15.1.89. .spec.template.spec.containers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 15.1.90. .spec.template.spec.containers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required name mountPath Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 15.1.91. .spec.template.spec.dnsConfig Description PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy. Type object Property Type Description nameservers array (string) A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. options array A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. options[] object PodDNSConfigOption defines DNS resolver options of a pod. searches array (string) A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed. 15.1.92. .spec.template.spec.dnsConfig.options Description A list of DNS resolver options. 
This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. Type array 15.1.93. .spec.template.spec.dnsConfig.options[] Description PodDNSConfigOption defines DNS resolver options of a pod. Type object Property Type Description name string Required. value string 15.1.94. .spec.template.spec.ephemeralContainers Description List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. Type array 15.1.95. .spec.template.spec.ephemeralContainers[] Description An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. 
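For illustration, the dnsConfig fields described above (nameservers, options, and searches) might be set in a pod template as in the following sketch; the dnsPolicy value, addresses, and domains are hypothetical:

```yaml
spec:
  template:
    spec:
      dnsPolicy: "None"                  # assumed policy; with "None", dnsConfig supplies all DNS settings
      dnsConfig:
        nameservers:
        - 192.0.2.10                     # documentation-range IP address, illustrative only
        searches:
        - example.internal               # hypothetical search domain
        options:
        - name: ndots
          value: "2"
        - name: edns0                    # resolver option that takes no value
```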
More info: https://kubernetes.io/docs/concepts/containers/images imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - "Always" means that kubelet always attempts to pull the latest image. Container will fail If the pull fails. - "IfNotPresent" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - "Never" means that kubelet never pulls an image, but only uses a local image. Container will fail if the image isn't present lifecycle object Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. livenessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. name string Name of the ephemeral container specified as a DNS_LABEL. This name must be unique among all containers, init containers and ephemeral containers. ports array Ports are not allowed for ephemeral containers. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. resources object ResourceRequirements describes the compute resource requirements. securityContext object SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. startupProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false targetContainerName string If set, the name of the container from PodSpec that this ephemeral container targets. The ephemeral container will be run in the namespaces (IPC, PID, etc) of this container. If not set then the ephemeral container uses the namespaces configured in the Pod spec. The container runtime must implement support for this feature. If the runtime does not support namespace targeting then the result of setting this field is undefined. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. 
Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - "FallbackToLogsOnError" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - "File" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 15.1.96. .spec.template.spec.ephemeralContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 15.1.97. .spec.template.spec.ephemeralContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object EnvVarSource represents a source for the value of an EnvVar. 15.1.98. .spec.template.spec.ephemeralContainers[].env[].valueFrom Description EnvVarSource represents a source for the value of an EnvVar. Type object Property Type Description configMapKeyRef object Selects a key from a ConfigMap. fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format secretKeyRef object SecretKeySelector selects a key of a Secret. 15.1.99. .spec.template.spec.ephemeralContainers[].env[].valueFrom.configMapKeyRef Description Selects a key from a ConfigMap. 
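Pulling the ephemeral container fields above together, an entry added through a pod's ephemeralcontainers subresource could look like the following sketch; the container name, image, and target container are hypothetical:

```yaml
spec:
  ephemeralContainers:
  - name: debugger                              # must be unique among all containers in the pod
    image: registry.example.com/tools:latest    # hypothetical debugging image
    command: ["/bin/sh"]
    stdin: true
    tty: true
    targetContainerName: example-app            # hypothetical; the debugger shares this container's namespaces
```

Because ephemeral containers cannot be declared at pod creation time, a fragment like this is applied to an existing pod rather than included in the original manifest.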
Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 15.1.100. .spec.template.spec.ephemeralContainers[].env[].valueFrom.fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 15.1.101. .spec.template.spec.ephemeralContainers[].env[].valueFrom.resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 15.1.102. .spec.template.spec.ephemeralContainers[].env[].valueFrom.secretKeyRef Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 15.1.103. .spec.template.spec.ephemeralContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 15.1.104. .spec.template.spec.ephemeralContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. 15.1.105. .spec.template.spec.ephemeralContainers[].envFrom[].configMapRef Description ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 15.1.106. .spec.template.spec.ephemeralContainers[].envFrom[].secretRef Description SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. 
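The env and envFrom source types listed above are commonly combined as in this sketch; the ConfigMap and Secret names are hypothetical:

```yaml
env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name           # downward API field of the pod
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: example-db-credentials       # hypothetical Secret in the same namespace
      key: password
      optional: false
envFrom:
- prefix: APP_                           # optional prefix added to every imported key
  configMapRef:
    name: example-app-config             # hypothetical ConfigMap
```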
Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined 15.1.107. .spec.template.spec.ephemeralContainers[].lifecycle Description Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Type object Property Type Description postStart object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. preStop object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. 15.1.108. .spec.template.spec.ephemeralContainers[].lifecycle.postStart Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 15.1.109. .spec.template.spec.ephemeralContainers[].lifecycle.postStart.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.110. .spec.template.spec.ephemeralContainers[].lifecycle.postStart.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.111. .spec.template.spec.ephemeralContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.112. .spec.template.spec.ephemeralContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.113. 
.spec.template.spec.ephemeralContainers[].lifecycle.postStart.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.114. .spec.template.spec.ephemeralContainers[].lifecycle.preStop Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 15.1.115. .spec.template.spec.ephemeralContainers[].lifecycle.preStop.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.116. .spec.template.spec.ephemeralContainers[].lifecycle.preStop.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.117. .spec.template.spec.ephemeralContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.118. .spec.template.spec.ephemeralContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.119. .spec.template.spec.ephemeralContainers[].lifecycle.preStop.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.120. 
.spec.template.spec.ephemeralContainers[].livenessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 15.1.121. .spec.template.spec.ephemeralContainers[].livenessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.122. .spec.template.spec.ephemeralContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 15.1.123. 
.spec.template.spec.ephemeralContainers[].livenessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.124. .spec.template.spec.ephemeralContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.125. .spec.template.spec.ephemeralContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.126. .spec.template.spec.ephemeralContainers[].livenessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.127. .spec.template.spec.ephemeralContainers[].ports Description Ports are not allowed for ephemeral containers. Type array 15.1.128. .spec.template.spec.ephemeralContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 15.1.129. .spec.template.spec.ephemeralContainers[].readinessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. 
httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 15.1.130. .spec.template.spec.ephemeralContainers[].readinessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.131. .spec.template.spec.ephemeralContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 15.1.132. .spec.template.spec.ephemeralContainers[].readinessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 
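For example, the grpc action described above might be wired into a readiness probe as in this sketch; the port and service name are hypothetical, and as noted the GRPCContainerProbe feature gate must be enabled:

```yaml
readinessProbe:
  grpc:
    port: 9000                 # hypothetical gRPC port exposed by the container
    service: example.Health    # hypothetical service name placed in the HealthCheckRequest
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 3
```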
scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.133. .spec.template.spec.ephemeralContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.134. .spec.template.spec.ephemeralContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.135. .spec.template.spec.ephemeralContainers[].readinessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.136. .spec.template.spec.ephemeralContainers[].resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 15.1.137. .spec.template.spec.ephemeralContainers[].securityContext Description SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object Adds and removes POSIX capabilities from running containers. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. 
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials. 15.1.138. .spec.template.spec.ephemeralContainers[].securityContext.capabilities Description Adds and removes POSIX capabilities from running containers. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 15.1.139. .spec.template.spec.ephemeralContainers[].securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 15.1.140. .spec.template.spec.ephemeralContainers[].securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 15.1.141. .spec.template.spec.ephemeralContainers[].securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. 
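A restrictive container securityContext built from the fields above might look like the following sketch; the UID and GID values are illustrative:

```yaml
securityContext:
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000              # illustrative non-root UID
  runAsGroup: 3000             # illustrative GID
  capabilities:
    drop:
    - ALL                      # drop all POSIX capabilities
  seccompProfile:
    type: RuntimeDefault       # use the container runtime's default seccomp profile
```

When the same field is also set in the pod-level securityContext, the container-level value shown here takes precedence, as described above.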
Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 15.1.142. .spec.template.spec.ephemeralContainers[].startupProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 15.1.143. 
.spec.template.spec.ephemeralContainers[].startupProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.144. .spec.template.spec.ephemeralContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 15.1.145. .spec.template.spec.ephemeralContainers[].startupProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.146. .spec.template.spec.ephemeralContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.147. .spec.template.spec.ephemeralContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.148. .spec.template.spec.ephemeralContainers[].startupProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.149. .spec.template.spec.ephemeralContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 15.1.150. .spec.template.spec.ephemeralContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required name devicePath Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. 
name string name must match the name of a persistentVolumeClaim in the pod 15.1.151. .spec.template.spec.ephemeralContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated. Type array 15.1.152. .spec.template.spec.ephemeralContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required name mountPath Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 15.1.153. .spec.template.spec.hostAliases Description HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. This is only valid for non-hostNetwork pods. Type array 15.1.154. .spec.template.spec.hostAliases[] Description HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Type object Property Type Description hostnames array (string) Hostnames for the above IP address. ip string IP address of the host file entry. 15.1.155. .spec.template.spec.imagePullSecrets Description ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod Type array 15.1.156. .spec.template.spec.imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 15.1.157. .spec.template.spec.initContainers Description List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. 
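The hostAliases and imagePullSecrets fields described above could appear in a pod template as in this sketch; the IP address, hostname, and secret name are hypothetical:

```yaml
spec:
  template:
    spec:
      hostAliases:
      - ip: 192.0.2.20                          # documentation-range IP address, illustrative only
        hostnames:
        - legacy-db.example.internal            # hypothetical hostname injected into the pod's hosts file
      imagePullSecrets:
      - name: example-registry-pull-secret      # hypothetical pull secret in the same namespace
```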
More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ Type array 15.1.158. .spec.template.spec.initContainers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - "Always" means that kubelet always attempts to pull the latest image. Container will fail If the pull fails. - "IfNotPresent" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - "Never" means that kubelet never pulls an image, but only uses a local image. Container will fail if the image isn't present lifecycle object Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. 
livenessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. resources object ResourceRequirements describes the compute resource requirements. securityContext object SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. startupProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - "FallbackToLogsOnError" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - "File" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. 
tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 15.1.159. .spec.template.spec.initContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 15.1.160. .spec.template.spec.initContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object EnvVarSource represents a source for the value of an EnvVar. 15.1.161. .spec.template.spec.initContainers[].env[].valueFrom Description EnvVarSource represents a source for the value of an EnvVar. Type object Property Type Description configMapKeyRef object Selects a key from a ConfigMap. fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format secretKeyRef object SecretKeySelector selects a key of a Secret. 15.1.162. .spec.template.spec.initContainers[].env[].valueFrom.configMapKeyRef Description Selects a key from a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 15.1.163. .spec.template.spec.initContainers[].env[].valueFrom.fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 15.1.164. .spec.template.spec.initContainers[].env[].valueFrom.resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 15.1.165. .spec.template.spec.initContainers[].env[].valueFrom.secretKeyRef Description SecretKeySelector selects a key of a Secret. 
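Putting the init container fields above into context, a pod template could declare an init container such as the following sketch; the image, command, and volume name are hypothetical:

```yaml
spec:
  template:
    spec:
      initContainers:
      - name: wait-for-config                          # hypothetical init container
        image: registry.example.com/busybox:1.36       # hypothetical utility image
        command:
        - sh
        - -c
        - until test -f /etc/app/config.yaml; do sleep 2; done   # block until the config file appears
        volumeMounts:
        - name: app-config                             # must match a volume defined in the pod spec
          mountPath: /etc/app
          readOnly: true
```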
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 15.1.166. .spec.template.spec.initContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 15.1.167. .spec.template.spec.initContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. 15.1.168. .spec.template.spec.initContainers[].envFrom[].configMapRef Description ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 15.1.169. .spec.template.spec.initContainers[].envFrom[].secretRef Description SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined 15.1.170. .spec.template.spec.initContainers[].lifecycle Description Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Type object Property Type Description postStart object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. preStop object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. 15.1.171. .spec.template.spec.initContainers[].lifecycle.postStart Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. 
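As a concrete illustration of the exec handler just described, combined with the env valueFrom sources from the preceding sections, the sketch below sets three environment variables and a postStart hook. Every name, key, and command shown is a placeholder:

```yaml
# Sketch only: env valueFrom sources plus an exec-based postStart handler.
# The ConfigMap, Secret, and command shown are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: env-lifecycle-example
spec:
  containers:
  - name: example-app
    image: registry.example.com/example-app:1.0
    env:
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: example-config
          key: log-level
          optional: true
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: API_TOKEN
      valueFrom:
        secretKeyRef:
          name: example-secret
          key: token
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
```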
httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 15.1.172. .spec.template.spec.initContainers[].lifecycle.postStart.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.173. .spec.template.spec.initContainers[].lifecycle.postStart.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.174. .spec.template.spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.175. .spec.template.spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.176. .spec.template.spec.initContainers[].lifecycle.postStart.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.177. .spec.template.spec.initContainers[].lifecycle.preStop Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 15.1.178. .spec.template.spec.initContainers[].lifecycle.preStop.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. 
To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.179. .spec.template.spec.initContainers[].lifecycle.preStop.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.180. .spec.template.spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.181. .spec.template.spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.182. .spec.template.spec.initContainers[].lifecycle.preStop.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.183. .spec.template.spec.initContainers[].livenessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. 
The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 15.1.184. .spec.template.spec.initContainers[].livenessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.185. .spec.template.spec.initContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 15.1.186. .spec.template.spec.initContainers[].livenessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.187. .spec.template.spec.initContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.188. .spec.template.spec.initContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.189. 
.spec.template.spec.initContainers[].livenessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.190. .spec.template.spec.initContainers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 15.1.191. .spec.template.spec.initContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 15.1.192. .spec.template.spec.initContainers[].readinessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. 
Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 15.1.193. .spec.template.spec.initContainers[].readinessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.194. .spec.template.spec.initContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 15.1.195. .spec.template.spec.initContainers[].readinessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.196. .spec.template.spec.initContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.197. .spec.template.spec.initContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.198. .spec.template.spec.initContainers[].readinessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. 
Name must be an IANA_SVC_NAME. 15.1.199. .spec.template.spec.initContainers[].resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 15.1.200. .spec.template.spec.initContainers[].securityContext Description SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object Adds and removes POSIX capabilities from running containers. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. 
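A minimal sketch of a restrictive container-level securityContext built from a subset of the fields above follows; the UID and GID values are placeholders and must match what the container image supports:

```yaml
# Sketch only: a restrictive container securityContext.
# The UID/GID values and image name are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: security-context-example
spec:
  containers:
  - name: example-app
    image: registry.example.com/example-app:1.0
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 1000
      runAsGroup: 1000
      capabilities:
        drop:
        - ALL
      seccompProfile:
        type: RuntimeDefault
```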
windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials. 15.1.201. .spec.template.spec.initContainers[].securityContext.capabilities Description Adds and removes POSIX capabilities from running containers. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 15.1.202. .spec.template.spec.initContainers[].securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 15.1.203. .spec.template.spec.initContainers[].securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 15.1.204. .spec.template.spec.initContainers[].securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 15.1.205. 
.spec.template.spec.initContainers[].startupProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 15.1.206. .spec.template.spec.initContainers[].startupProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.207. .spec.template.spec.initContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 15.1.208. 
.spec.template.spec.initContainers[].startupProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.209. .spec.template.spec.initContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.210. .spec.template.spec.initContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.211. .spec.template.spec.initContainers[].startupProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.212. .spec.template.spec.initContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 15.1.213. .spec.template.spec.initContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required name devicePath Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 15.1.214. .spec.template.spec.initContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 15.1.215. .spec.template.spec.initContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required name mountPath Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. 
Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 15.1.216. .spec.template.spec.os Description PodOS defines the OS parameters of a pod. Type object Required name Property Type Description name string Name is the name of the operating system. The currently supported values are linux and windows. Additional value may be defined in future and can be one of: https://github.com/opencontainers/runtime-spec/blob/master/config.md#platform-specific-configuration Clients should expect to handle additional values and treat unrecognized values in this field as os: null 15.1.217. .spec.template.spec.readinessGates Description If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True" More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates Type array 15.1.218. .spec.template.spec.readinessGates[] Description PodReadinessGate contains the reference to a pod condition Type object Required conditionType Property Type Description conditionType string ConditionType refers to a condition in the pod's condition list with matching type. 15.1.219. .spec.template.spec.securityContext Description PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext. Type object Property Type Description fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. 
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. supplementalGroups array (integer) A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows. sysctls array Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. sysctls[] object Sysctl defines a kernel parameter to be set windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials. 15.1.220. .spec.template.spec.securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 15.1.221. .spec.template.spec.securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 15.1.222. .spec.template.spec.securityContext.sysctls Description Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. Type array 15.1.223. .spec.template.spec.securityContext.sysctls[] Description Sysctl defines a kernel parameter to be set Type object Required name value Property Type Description name string Name of a property to set value string Value of a property to set 15.1.224. .spec.template.spec.securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. 
Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 15.1.225. .spec.template.spec.tolerations Description If specified, the pod's tolerations. Type array 15.1.226. .spec.template.spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. Possible enum values: - "NoExecute" Evict any already-running pods that do not tolerate the taint. Currently enforced by NodeController. - "NoSchedule" Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler. - "PreferNoSchedule" Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. Possible enum values: - "Equal" - "Exists" tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 15.1.227. .spec.template.spec.topologySpreadConstraints Description TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. 
Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. Type array 15.1.228. .spec.template.spec.topologySpreadConstraints[] Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description labelSelector LabelSelector LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule , it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway , it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default). nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. 
Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is a alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is a alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it more imbalanced. It's a required field. Possible enum values: - "DoNotSchedule" instructs the scheduler not to schedule the pod when constraints are not satisfied. - "ScheduleAnyway" instructs the scheduler to schedule the pod even if constraints are not satisfied. 15.1.229. .spec.template.spec.volumes Description List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes Type array 15.1.230. .spec.template.spec.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod. Type object Required name Property Type Description awsElasticBlockStore object Represents a Persistent Disk resource in AWS. An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. azureDisk object AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. 
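Before the remaining volume sources are enumerated, the following sketch shows how the topologySpreadConstraints fields described above combine in a pod spec. The app label is a placeholder; the zone and hostname keys are the standard node labels:

```yaml
# Sketch only: spread matching pods across zones (hard constraint) and
# across nodes (soft constraint). The "app: example-app" label is a placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: spread-example
  labels:
    app: example-app
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: example-app
  - maxSkew: 2
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: example-app
  containers:
  - name: example-app
    image: registry.example.com/example-app:1.0
```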
azureFile object AzureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs object Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. cinder object Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. configMap object Adapts a ConfigMap into a volume. The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. csi object Represents a source location of a volume to mount, managed by an external CSI driver downwardAPI object DownwardAPIVolumeSource represents a volume containing downward API info. Downward API volumes support ownership management and SELinux relabeling. emptyDir object Represents an empty directory for a pod. Empty directory volumes support ownership management and SELinux relabeling. ephemeral object Represents an ephemeral volume that is handled by a normal storage driver. fc object Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. flexVolume object FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. gcePersistentDisk object Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. gitRepo object Represents a volume that is populated with the contents of a git repository. Git repo volumes do not support ownership management. Git repo volumes support SELinux relabeling. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling. hostPath object Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling. iscsi object Represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. name string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling. persistentVolumeClaim object PersistentVolumeClaimVolumeSource references the user's PVC in the same namespace. This volume finds the bound PV and mounts that volume for the pod. 
A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system). photonPersistentDisk object Represents a Photon Controller persistent disk resource. portworxVolume object PortworxVolumeSource represents a Portworx volume resource. projected object Represents a projected volume source quobyte object Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling. rbd object Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. scaleIO object ScaleIOVolumeSource represents a persistent ScaleIO volume secret object Adapts a Secret into a volume. The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. storageos object Represents a StorageOS persistent volume resource. vsphereVolume object Represents a vSphere volume resource. 15.1.231. .spec.template.spec.volumes[].awsElasticBlockStore Description Represents a Persistent Disk resource in AWS. An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 15.1.232. .spec.template.spec.volumes[].azureDisk Description AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 15.1.233. 
.spec.template.spec.volumes[].azureFile Description AzureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 15.1.234. .spec.template.spec.volumes[].cephfs Description Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 15.1.235. .spec.template.spec.volumes[].cephfs.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 15.1.236. .spec.template.spec.volumes[].cinder Description Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 15.1.237. .spec.template.spec.volumes[].cinder.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 15.1.238. .spec.template.spec.volumes[].configMap Description Adapts a ConfigMap into a volume. 
The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional specify whether the ConfigMap or its keys must be defined 15.1.239. .spec.template.spec.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 15.1.240. .spec.template.spec.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 15.1.241. .spec.template.spec.volumes[].csi Description Represents a source location of a volume to mount, managed by an external CSI driver Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. 
nodePublishSecretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 15.1.242. .spec.template.spec.volumes[].csi.nodePublishSecretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 15.1.243. .spec.template.spec.volumes[].downwardAPI Description DownwardAPIVolumeSource represents a volume containing downward API info. Downward API volumes support ownership management and SELinux relabeling. Type object Property Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 15.1.244. .spec.template.spec.volumes[].downwardAPI.items Description Items is a list of downward API volume file Type array 15.1.245. .spec.template.spec.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format 15.1.246. .spec.template.spec.volumes[].downwardAPI.items[].fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 15.1.247.
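As an illustration of the downwardAPI fields above, here is a minimal, hypothetical sketch of a downwardAPI volume entry that projects the pod's labels and a container CPU limit into files; the volume name, file paths, and container name are placeholders, and the resourceFieldRef fields are documented in the next subsection.

volumes:
- name: podinfo                     # hypothetical volume name
  downwardAPI:
    defaultMode: 0644               # default permissions for the created files
    items:
    - path: labels                  # relative path of the file to create
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.labels
    - path: cpu_limit
      resourceFieldRef:
        containerName: app          # container name is required for volumes
        resource: limits.cpu
        divisor: 1m                 # output format of the exposed resource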
.spec.template.spec.volumes[].downwardAPI.items[].resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 15.1.248. .spec.template.spec.volumes[].emptyDir Description Represents an empty directory for a pod. Empty directory volumes support ownership management and SELinux relabeling. Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit Quantity sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: http://kubernetes.io/docs/user-guide/volumes#emptydir 15.1.249. .spec.template.spec.volumes[].ephemeral Description Represents an ephemeral volume that is handled by a normal storage driver. Type object Property Type Description volumeClaimTemplate object PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource. 15.1.250. .spec.template.spec.volumes[].ephemeral.volumeClaimTemplate Description PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource. Type object Required spec Property Type Description metadata ObjectMeta May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes 15.1.251. .spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec Description PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. dataSourceRef object TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. resources object ResourceRequirements describes the compute resource requirements. selector LabelSelector selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. 
volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 15.1.252. .spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 15.1.253. .spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 15.1.254. .spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 15.1.255. .spec.template.spec.volumes[].fc Description Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 15.1.256. .spec.template.spec.volumes[].flexVolume Description FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. 
readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. 15.1.257. .spec.template.spec.volumes[].flexVolume.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 15.1.258. .spec.template.spec.volumes[].flocker Description Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 15.1.259. .spec.template.spec.volumes[].gcePersistentDisk Description Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 15.1.260. .spec.template.spec.volumes[].gitRepo Description Represents a volume that is populated with the contents of a git repository. Git repo volumes do not support ownership management. Git repo volumes support SELinux relabeling. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 15.1.261. 
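Because the gitRepo source above is deprecated, the following hypothetical sketch shows the replacement pattern that the deprecation note describes: an init container clones the repository into an emptyDir volume, which the application container then mounts; the images and repository URL are placeholders.

spec:
  initContainers:
  - name: clone-repo
    image: registry.example.com/git-client:latest    # hypothetical image that provides the git CLI
    command: ["git", "clone", "https://git.example.com/app.git", "/workdir"]   # hypothetical repository
    volumeMounts:
    - name: source
      mountPath: /workdir
  containers:
  - name: app
    image: registry.example.com/app:latest            # hypothetical image
    volumeMounts:
    - name: source
      mountPath: /opt/app/src
      readOnly: true
  volumes:
  - name: source
    emptyDir: {}                                       # holds the cloned repository for the pod's lifetime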
.spec.template.spec.volumes[].glusterfs Description Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling. Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 15.1.262. .spec.template.spec.volumes[].hostPath Description Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath 15.1.263. .spec.template.spec.volumes[].iscsi Description Represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. Type object Required targetPortal iqn lun Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 15.1.264. .spec.template.spec.volumes[].iscsi.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 15.1.265. .spec.template.spec.volumes[].nfs Description Represents an NFS mount that lasts the lifetime of a pod. 
NFS volumes do not support ownership management or SELinux relabeling. Type object Required server path Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 15.1.266. .spec.template.spec.volumes[].persistentVolumeClaim Description PersistentVolumeClaimVolumeSource references the user's PVC in the same namespace. This volume finds the bound PV and mounts that volume for the pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system). Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 15.1.267. .spec.template.spec.volumes[].photonPersistentDisk Description Represents a Photon Controller persistent disk resource. Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 15.1.268. .spec.template.spec.volumes[].portworxVolume Description PortworxVolumeSource represents a Portworx volume resource. Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 15.1.269. .spec.template.spec.volumes[].projected Description Represents a projected volume source Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array sources is the list of volume projections sources[] object Projection that may be projected along with other supported volume types 15.1.270. .spec.template.spec.volumes[].projected.sources Description sources is the list of volume projections Type array 15.1.271. .spec.template.spec.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types Type object Property Type Description configMap object Adapts a ConfigMap into a projected volume. 
The contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. Note that this is identical to a configmap volume source without the default mode. downwardAPI object Represents downward API info for projecting into a projected volume. Note that this is identical to a downwardAPI volume source without the default mode. secret object Adapts a secret into a projected volume. The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode. serviceAccountToken object ServiceAccountTokenProjection represents a projected service account token volume. This projection can be used to insert a service account token into the pods runtime filesystem for use against APIs (Kubernetes API Server or otherwise). 15.1.272. .spec.template.spec.volumes[].projected.sources[].configMap Description Adapts a ConfigMap into a projected volume. The contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. Note that this is identical to a configmap volume source without the default mode. Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional specify whether the ConfigMap or its keys must be defined 15.1.273. .spec.template.spec.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 15.1.274. .spec.template.spec.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. 
path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 15.1.275. .spec.template.spec.volumes[].projected.sources[].downwardAPI Description Represents downward API info for projecting into a projected volume. Note that this is identical to a downwardAPI volume source without the default mode. Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 15.1.276. .spec.template.spec.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 15.1.277. .spec.template.spec.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format 15.1.278. .spec.template.spec.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 15.1.279. .spec.template.spec.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 15.1.280. .spec.template.spec.volumes[].projected.sources[].secret Description Adapts a secret into a projected volume. The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode. Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' 
path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional field specify whether the Secret or its key must be defined 15.1.281. .spec.template.spec.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 15.1.282. .spec.template.spec.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 15.1.283. .spec.template.spec.volumes[].projected.sources[].serviceAccountToken Description ServiceAccountTokenProjection represents a projected service account token volume. This projection can be used to insert a service account token into the pods runtime filesystem for use against APIs (Kubernetes API Server or otherwise). Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 15.1.284. .spec.template.spec.volumes[].quobyte Description Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling. Type object Required registry volume Property Type Description group string group to map volume access to Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false.
registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to Defaults to serviceaccount user volume string volume is a string that references an already created Quobyte volume by name. 15.1.285. .spec.template.spec.volumes[].rbd Description Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. Type object Required monitors image Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 15.1.286. .spec.template.spec.volumes[].rbd.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 15.1.287. .spec.template.spec.volumes[].scaleIO Description ScaleIOVolumeSource represents a persistent ScaleIO volume Type object Required gateway system secretRef Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain.
system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 15.1.288. .spec.template.spec.volumes[].scaleIO.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 15.1.289. .spec.template.spec.volumes[].secret Description Adapts a Secret into a volume. The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 15.1.290. .spec.template.spec.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 15.1.291. .spec.template.spec.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. 
May not start with the string '..'. 15.1.292. .spec.template.spec.volumes[].storageos Description Represents a StorageOS persistent volume resource. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 15.1.293. .spec.template.spec.volumes[].storageos.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 15.1.294. .spec.template.spec.volumes[].vsphereVolume Description Represents a vSphere volume resource. Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 15.1.295. .status Description ReplicationControllerStatus represents the current status of a replication controller. Type object Required replicas Property Type Description availableReplicas integer The number of available replicas (ready for at least minReadySeconds) for this replication controller. conditions array Represents the latest available observations of a replication controller's current state. conditions[] object ReplicationControllerCondition describes the state of a replication controller at a certain point. fullyLabeledReplicas integer The number of pods that have labels matching the labels of the pod template of the replication controller. observedGeneration integer ObservedGeneration reflects the generation of the most recently observed replication controller. readyReplicas integer The number of ready replicas for this replication controller. replicas integer Replicas is the most recently observed number of replicas. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#what-is-a-replicationcontroller 15.1.296. .status.conditions Description Represents the latest available observations of a replication controller's current state. Type array 15.1.297.
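For orientation, here is a hypothetical example of how the status fields described above might look on a ReplicationController that reported a pod creation failure; all values are illustrative, and the condition fields are documented in the next subsection.

status:
  replicas: 3                                  # most recently observed number of replicas
  fullyLabeledReplicas: 3
  readyReplicas: 2
  availableReplicas: 2
  observedGeneration: 4
  conditions:
  - type: ReplicaFailure                       # condition type reported by the controller
    status: "True"
    reason: FailedCreate                       # hypothetical reason
    message: example human-readable message describing the failure
    lastTransitionTime: "2024-01-01T00:00:00Z" # hypothetical timestamp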
.status.conditions[] Description ReplicationControllerCondition describes the state of a replication controller at a certain point. Type object Required type status Property Type Description lastTransitionTime Time The last time the condition transitioned from one status to another. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of replication controller condition. 15.2. API endpoints The following API endpoints are available: /api/v1/replicationcontrollers GET : list or watch objects of kind ReplicationController /api/v1/watch/replicationcontrollers GET : watch individual changes to a list of ReplicationController. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/replicationcontrollers DELETE : delete collection of ReplicationController GET : list or watch objects of kind ReplicationController POST : create a ReplicationController /api/v1/watch/namespaces/{namespace}/replicationcontrollers GET : watch individual changes to a list of ReplicationController. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/replicationcontrollers/{name} DELETE : delete a ReplicationController GET : read the specified ReplicationController PATCH : partially update the specified ReplicationController PUT : replace the specified ReplicationController /api/v1/watch/namespaces/{namespace}/replicationcontrollers/{name} GET : watch changes to an object of kind ReplicationController. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/namespaces/{namespace}/replicationcontrollers/{name}/status GET : read status of the specified ReplicationController PATCH : partially update status of the specified ReplicationController PUT : replace status of the specified ReplicationController 15.2.1. /api/v1/replicationcontrollers Table 15.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind ReplicationController Table 15.2. HTTP responses HTTP code Response body 200 - OK ReplicationControllerList schema 401 - Unauthorized Empty 15.2.2. /api/v1/watch/replicationcontrollers Table 15.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK".
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls.
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ReplicationController. deprecated: use the 'watch' parameter with a list operation instead. Table 15.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 15.2.3. /api/v1/namespaces/{namespace}/replicationcontrollers Table 15.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 15.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ReplicationController Table 15.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 15.8. Body parameters Parameter Type Description body DeleteOptions schema Table 15.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ReplicationController Table 15.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. 
Table 15.11. HTTP responses HTTP code Reponse body 200 - OK ReplicationControllerList schema 401 - Unauthorized Empty HTTP method POST Description create a ReplicationController Table 15.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.13. Body parameters Parameter Type Description body ReplicationController schema Table 15.14. HTTP responses HTTP code Reponse body 200 - OK ReplicationController schema 201 - Created ReplicationController schema 202 - Accepted ReplicationController schema 401 - Unauthorized Empty 15.2.4. /api/v1/watch/namespaces/{namespace}/replicationcontrollers Table 15.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 15.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ReplicationController. deprecated: use the 'watch' parameter with a list operation instead. Table 15.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 15.2.5. /api/v1/namespaces/{namespace}/replicationcontrollers/{name} Table 15.18. 
Global path parameters Parameter Type Description name string name of the ReplicationController namespace string object name and auth scope, such as for teams and projects Table 15.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ReplicationController Table 15.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 15.21. Body parameters Parameter Type Description body DeleteOptions schema Table 15.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ReplicationController Table 15.23. HTTP responses HTTP code Reponse body 200 - OK ReplicationController schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ReplicationController Table 15.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 15.25. Body parameters Parameter Type Description body Patch schema Table 15.26. HTTP responses HTTP code Reponse body 200 - OK ReplicationController schema 201 - Created ReplicationController schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ReplicationController Table 15.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.28. Body parameters Parameter Type Description body ReplicationController schema Table 15.29. HTTP responses HTTP code Reponse body 200 - OK ReplicationController schema 201 - Created ReplicationController schema 401 - Unauthorized Empty 15.2.6. /api/v1/watch/namespaces/{namespace}/replicationcontrollers/{name} Table 15.30. Global path parameters Parameter Type Description name string name of the ReplicationController namespace string object name and auth scope, such as for teams and projects Table 15.31. 
Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind ReplicationController. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 15.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 15.2.7. /api/v1/namespaces/{namespace}/replicationcontrollers/{name}/status Table 15.33. Global path parameters Parameter Type Description name string name of the ReplicationController namespace string object name and auth scope, such as for teams and projects Table 15.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ReplicationController Table 15.35. HTTP responses HTTP code Reponse body 200 - OK ReplicationController schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ReplicationController Table 15.36. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. 
force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 15.37. Body parameters Parameter Type Description body Patch schema Table 15.38. HTTP responses HTTP code Response body 200 - OK ReplicationController schema 201 - Created ReplicationController schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ReplicationController Table 15.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.40. Body parameters Parameter Type Description body ReplicationController schema Table 15.41. HTTP responses HTTP code Response body 200 - OK ReplicationController schema 201 - Created ReplicationController schema 401 - Unauthorized Empty
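The tables above describe the limit, continue, resourceVersion, and watch query parameters only in the abstract. The following is a minimal sketch of how a client might page through ReplicationControllers in chunks and then open a watch from the last returned resourceVersion; the API server URL, bearer-token retrieval, namespace, and the use of curl and jq are illustrative assumptions rather than part of this reference.

# Page through ReplicationControllers 50 at a time, following the continue token.
# APISERVER, TOKEN, and the "default" namespace are placeholders.
APISERVER=https://api.example.com:6443
TOKEN=$(oc whoami -t)
CONTINUE=""
while :; do
  RESPONSE=$(curl -sk -H "Authorization: Bearer $TOKEN" \
    "$APISERVER/api/v1/namespaces/default/replicationcontrollers?limit=50&continue=$CONTINUE")
  echo "$RESPONSE" | jq -r '.items[].metadata.name'
  CONTINUE=$(echo "$RESPONSE" | jq -r '.metadata.continue // empty')
  [ -z "$CONTINUE" ] && break
done

# Start a watch from the resourceVersion of the last list response, using the
# watch parameter on the list endpoint (the /api/v1/watch/... paths are deprecated).
RV=$(echo "$RESPONSE" | jq -r '.metadata.resourceVersion')
curl -skN -H "Authorization: Bearer $TOKEN" \
  "$APISERVER/api/v1/namespaces/default/replicationcontrollers?watch=true&resourceVersion=$RV"

The command-line clients hide this pagination: oc get rc --chunk-size=50 (or kubectl with the same flag) issues the same limit/continue sequence under the hood.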
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/workloads_apis/replicationcontroller-v1
Chapter 34. Relax-and-Recover (ReaR)
Chapter 34. Relax-and-Recover (ReaR) When a software or hardware failure breaks the system, the system administrator faces three tasks to restore it to the fully functioning state on a new hardware environment: booting a rescue system on the new hardware replicating the original storage layout restoring user and system files Most backup software solves only the third problem. To solve the first and second problems, use Relax-and-Recover (ReaR) , a disaster recovery and system migration utility. Backup software creates backups. ReaR complements backup software by creating a rescue system . Booting the rescue system on a new hardware allows you to issue the rear recover command, which starts the recovery process. During this process, ReaR replicates the partition layout and filesystems, prompts for restoring user and system files from the backup created by backup software, and finally installs the boot loader. By default, the rescue system created by ReaR only restores the storage layout and the boot loader, but not the actual user and system files. This chapter describes how to use ReaR. 34.1. Basic ReaR Usage 34.1.1. Installing ReaR Install the rear package by running the following command as root: ~]# yum install rear 34.1.2. Configuring ReaR ReaR is configured in the /etc/rear/local.conf file. Specify the rescue system configuration by adding these lines: Substitute output format with rescue system format, for example, ISO for an ISO disk image or USB for a bootable USB. Substitute output location with where it will be put, for example, file:///mnt/rescue_system/ for a local filesystem directory or sftp://backup:[email protected]/ for an SFTP directory. Example 34.1. Configuring Rescue System Format and Location To configure ReaR to output the rescue system as an ISO image into the /mnt/rescue_system/ directory, add these lines to the /etc/rear/local.conf file: See section "Rescue Image Configuration" of the rear(8) man page for a list of all options. 34.1.3. Creating a Rescue System The following example shows how to create a rescue system with verbose output: With the configuration from Example 34.1, "Configuring Rescue System Format and Location" , ReaR prints the above output. The last two lines confirm that the rescue system has been successfully created and copied to the configured backup location /mnt/rescue_system/ . Because the system's host name is rhel-68 , the backup location now contains directory rhel-68/ with the rescue system and auxiliary files: Transfer the rescue system to an external medium to not lose it in case of a disaster. 34.1.4. Scheduling ReaR To schedule ReaR to regularly create a rescue system using the cron job scheduler, add the following line to the /etc/crontab file: Substitute the above command with the cron time specification (described in detail in Section 27.1.4, "Configuring Cron Jobs" ). Example 34.2. Scheduling ReaR To make ReaR create a rescue system at 22:00 every weekday, add this line to the /etc/crontab file: 34.1.5. Performing a System Rescue To perform a restore or migration: Boot the rescue system on the new hardware. For example, burn the ISO image to a DVD and boot from the DVD. In the console interface, select the "Recover" option: Figure 34.1. Rescue system: menu You are taken to the prompt: Figure 34.2. Rescue system: prompt Warning Once you have started recovery in the step, it probably cannot be undone and you may lose anything stored on the physical disks of the system. 
Run the rear recover command to perform the restore or migration. The rescue system then recreates the partition layout and filesystems: Figure 34.3. Rescue system: running "rear recover" Restore user and system files from the backup into the /mnt/local/ directory. Example 34.3. Restoring User and System Files In this example, the backup file is a tar archive created per the instructions in Section 34.2.1.1, "Configuring the Internal Backup Method" . First, copy the archive from its storage, then unpack the files into /mnt/local/ , then delete the archive: The new storage has to have enough space both for the archive and the extracted files. Verify that the files have been restored: Figure 34.4. Rescue system: restoring user and system files from the backup Ensure that SELinux relabels the files on the next boot: Otherwise, you may be unable to log in to the system, because the /etc/passwd file may have the incorrect SELinux context. Finish the recovery and reboot the system: Figure 34.5. Rescue system: finishing recovery ReaR will then reinstall the boot loader. Upon reboot, SELinux will relabel the whole filesystem. Then you will be able to log in to the recovered system.
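Because the rescue system by itself restores only the storage layout and the boot loader, ReaR is usually paired with a backup method such as the internal method referenced in Section 34.2.1.1. The following /etc/rear/local.conf sketch assumes the internal NETFS backup method writing the archive to an NFS share; the share host and path are placeholders, and the full list of options is described in the rear(8) man page.

OUTPUT=ISO
OUTPUT_URL=file:///mnt/rescue_system/
# Assumed internal backup method: archive the filesystems to an NFS export.
BACKUP=NETFS
BACKUP_URL=nfs://backup.example.com/srv/backup/

With such a configuration, running rear mkbackup creates both the rescue system and the backup archive in one pass.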
[ "OUTPUT= output format OUTPUT_URL= output location", "OUTPUT=ISO OUTPUT_URL=file:///mnt/rescue_system/", "~]# rear -v mkrescue Relax-and-Recover 1.17.2 / Git Using log file: /var/log/rear/rear-rhel68.log mkdir: created directory `/var/lib/rear/output' Creating disk layout Creating root filesystem layout TIP: To login as root via ssh you need to set up /root/.ssh/authorized_keys or SSH_ROOT_PASSWORD in your configuration file Copying files and directories Copying binaries and libraries Copying kernel modules Creating initramfs Making ISO image Wrote ISO image: /var/lib/rear/output/rear-rhel68.iso (82M) Copying resulting files to file location", "~]# ls -lh /mnt/rescue_system/rhel68/ total 82M -rw-------. 1 root root 202 May 9 11:46 README -rw-------. 1 root root 160K May 9 11:46 rear.log -rw-------. 1 root root 82M May 9 11:46 rear-rhel68.iso -rw-------. 1 root root 275 May 9 11:46 VERSION", "minute hour day_of_month month day_of_week root /usr/sbin/rear mkrescue", "0 22 * * 1-5 root /usr/sbin/rear mkrescue", "~]# scp [email protected]:/srv/backup/rhel68/backup.tar.gz /mnt/local/ ~]# tar xf /mnt/local/backup.tar.gz -C /mnt/local/ ~]# rm -f /mnt/local/backup.tar.gz", "~]# ls /mnt/local/", "~]# touch /mnt/local/.autorelabel" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-Relax-and-Recover_ReaR
probe::ioscheduler.elv_completed_request
probe::ioscheduler.elv_completed_request Name probe::ioscheduler.elv_completed_request - Fires when a request is completed Synopsis Values disk_major Disk major number of the request rq Address of the request name Name of the probe point elevator_name The type of I/O elevator currently enabled disk_minor Disk minor number of the request rq_flags Request flags
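As a rough illustration (not taken from this reference page), the probe's values can be printed with a one-line SystemTap invocation; the output format here is arbitrary:

stap -e 'probe ioscheduler.elv_completed_request {
  printf("%s: elevator=%s dev=%d:%d flags=0x%x\n",
         name, elevator_name, disk_major, disk_minor, rq_flags)
}'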
[ "ioscheduler.elv_completed_request" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-ioscheduler-elv-completed-request
Creating CI/CD pipelines
Creating CI/CD pipelines Red Hat OpenShift Pipelines 1.18 Getting started with creating and running tasks and pipelines in OpenShift Pipelines Red Hat OpenShift Documentation Team
https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.18/html/creating_cicd_pipelines/index
Managing OpenStack Identity resources
Managing OpenStack Identity resources Red Hat OpenStack Platform 17.1 Configure users and keystone authentication OpenStack Documentation Team [email protected]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/managing_openstack_identity_resources/index
5.234. perl-GSSAPI
5.234. perl-GSSAPI 5.234.1. RHBA-2012:1340 - perl-GSSAPI bug fix update Updated perl-GSSAPI packages that fix one bug are now available for Red Hat Enterprise Linux 6. The perl-GSSAPI packages provide a Perl extension for GSSAPIv2 access. Bug Fix BZ# 657274 Prior to this update, the perl-GSSAPI specification file referenced a krb5-devel file that had been removed. As a consequence, the perl-GSSAPI package could not be rebuilt. This update modifies the specification file to use the current krb5-devel files. All users of perl-GSSAPI are advised to upgrade to these updated packages, which fix this bug.
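On a Red Hat Enterprise Linux 6 system subscribed to the appropriate channels, the updated package is typically applied with yum; the command below is a generic sketch rather than text from the advisory:

~]# yum update perl-GSSAPI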
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/perl-gssapi
8.49. gcc
8.49. gcc 8.49.1. RHBA-2013:1609 - gcc bug fix and enhancement update Updated gcc packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The gcc packages provide compilers for C, C++, Java, Fortran, Objective C, and Ada 95 GNU, as well as related support libraries. Bug Fixes BZ# 906234 Due to the small local buffer for read tokens, GCC (GNU Compiler Collection) could trigger stack smashing protector when reading digraphs in a program. The buffer has been enlarged, and thus the digraph tokens can be read without harming the memory. BZ# 921758 Previously, GCC could terminate unexpectedly when compiling C++ code that contained a structure with the "va_list" member field. The initialization of such a structure has been fixed, and GCC no longer crashes on such code. BZ# 959564 Prior to this update, the libgcc utility could terminate unexpectedly when unwinding the stack for a function annotated with "__attribute__((ms_abi))". This bug has been fixed by ignoring unwind data for unknown column numbers and libgcc no longer crashes. BZ# 967003 Previously, GCC could terminate unexpectedly when processing debug statements. This bug has been fixed by removing the value bound to the variable in such debug statements, and GCC no longer crashes in the described scenario. Enhancement BZ# 908025 GCC now supports strings with curly braces and vertical bar inside inline assembler code. That is, '{', '}', and '|' can now be prefixed with the '%' sign; in that case they are not handled as dialect delimiters, but are passed directly to the assembler instead. Users of gcc are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/gcc
4.5. Security
4.5. Security TPM TPM (Trusted Platform Module) hardware can create, store, and use RSA keys securely (without the keys ever being exposed in memory), verify a platform's software state using cryptographic hashes, and more. The trousers and tpm-tools packages are considered a Technology Preview. Packages: trousers-0.3.13.2 , tpm-tools-1.3.4-2
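As a brief sketch of trying this Technology Preview (the package names come from the note above; the service and tool invocations are assumptions based on the upstream TrouSerS and tpm-tools projects):

~]# yum install trousers tpm-tools
~]# service tcsd start    # tcsd is the TrouSerS TCS daemon that mediates TPM access
~]# tpm_version           # queries the TPM through tcsd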
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/security_tp
9.12. Updating Certificates and CRLs in a Directory
9.12. Updating Certificates and CRLs in a Directory The Certificate Manager and the publishing directory can become out of sync if certificates are issued or revoked while the Directory Server is down. Certificates that were issued or revoked need to be published or unpublished manually when the Directory Server comes back up. To find certificates that are out of sync with the directory ‐ valid certificates that are not in the directory and revoked or expired certificates that are still in the directory ‐ the Certificate Manager keeps a record of whether a certificate in its internal database has been published to the directory. If the Certificate Manager and the publishing directory become out of sync, use the Update Directory option in the Certificate Manager agent services page to synchronize the publishing directory with the internal database. The following choices are available for synchronizing the directory with the internal database: Search the internal database for certificates that are out of sync and publish or unpublish. Publish certificates that were issued while the Directory Server was down. Similarly, unpublish certificates that were revoked or that expired while Directory Server was down. Publish or unpublish a range of certificates based on serial numbers, from serial number xx to serial number yy . A Certificate Manager's publishing directory can be manually updated by a Certificate Manager agent only. 9.12.1. Manually Updating Certificates in the Directory The Update Directory Server form in the Certificate Manager agent services page can be used to update the directory manually with certificate-related information. This form initiates a combination of the following operations: Update the directory with certificates. Remove expired certificates from the directory. Removing expired certificates from the publishing directory can be automated by scheduling an automated job. For details, see Chapter 13, Setting Automated Jobs . Remove revoked certificates from the directory. Manually update the directory with changes by doing the following: Open the Certificate Manager agent services page. Select the Update Directory Server link. Select the appropriate options, and click Update Directory . The Certificate Manager starts updating the directory with the certificate information in its internal database. If the changes are substantial, updating the directory can take considerable time. During this period, any changes made through the Certificate Manager, including any certificates issued or any certificates revoked, may not be included in the update. If any certificates are issued or revoked while the directory is updated, update the directory again to reflect those changes. When the directory update is complete, the Certificate Manager displays a status report. If the process is interrupted, the server logs an error message. If the Certificate Manager is installed as a root CA, the CA signing certificate may get published using the publishing rule set up for user certificates when using the agent interface to update the directory with valid certificates. This may return an object class violation error or other errors in the mapper. Selecting the appropriate serial number range to exclude the CA signing certificate can avoid this problem. The CA signing certificate is the first certificate a root CA issues. Modify the default publishing rule for user certificates by changing the value of the predicate parameter to profileId!=caCACert . 
Use the LdapCaCertPublisher publisher plug-in module to add another rule, with the predicate parameter set to profileId=caCACert , for publishing subordinate CA certificates. 9.12.2. Manually Updating the CRL in the Directory The Certificate Revocation List form in the Certificate Manager agent services page manually updates the directory with CRL-related information. Manually update the CRL information by doing the following: Open the Certificate Manager agent services page. Select Update Revocation List . Click Update . The Certificate Manager starts updating the directory with the CRL in its internal database. If the CRL is large, updating the directory takes considerable time. During this period, any changes made to the CRL may not be included in the update. When the directory is updated, the Certificate Manager displays a status report. If the process is interrupted, the server logs an error message.
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/Updating_Certificates_and_CRLs_in_a_Directory
9.9. NFS and rpcbind
9.9. NFS and rpcbind Note The following section only applies to NFSv2 or NFSv3 implementations that require the rpcbind service for backward compatibility. The rpcbind [3] utility maps RPC services to the ports on which they listen. RPC processes notify rpcbind when they start, registering the ports they are listening on and the RPC program numbers they expect to serve. The client system then contacts rpcbind on the server with a particular RPC program number. The rpcbind service redirects the client to the proper port number so it can communicate with the requested service. Because RPC-based services rely on rpcbind to make all connections with incoming client requests, rpcbind must be available before any of these services start. The rpcbind service uses TCP wrappers for access control, and access control rules for rpcbind affect all RPC-based services. Alternatively, it is possible to specify access control rules for each of the NFS RPC daemons. The man pages for rpc.mountd and rpc.statd contain information regarding the precise syntax for these rules. 9.9.1. Troubleshooting NFS and rpcbind Because rpcbind [3] provides coordination between RPC services and the port numbers used to communicate with them, it is useful to view the status of current RPC services using rpcbind when troubleshooting. The rpcinfo command shows each RPC-based service with port numbers, an RPC program number, a version number, and an IP protocol type (TCP or UDP). To make sure the proper NFS RPC-based services are enabled for rpcbind , issue the following command: Example 9.7. rpcinfo -p command output The following is sample output from this command: If one of the NFS services does not start up correctly, rpcbind will be unable to map RPC requests from clients for that service to the correct port. In many cases, if NFS is not present in rpcinfo output, restarting NFS causes the service to correctly register with rpcbind and begin working. For more information and a list of options on rpcinfo , refer to its man page.
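For example, if the nfs and mountd entries are missing from the rpcinfo output, restarting the NFS service re-registers them with rpcbind; a minimal sketch using the Red Hat Enterprise Linux 6 init scripts:

~]# service nfs restart
~]# rpcinfo -p | grep -E 'nfs|mountd'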
[ "rpcinfo -p", "program vers proto port service 100021 1 udp 32774 nlockmgr 100021 3 udp 32774 nlockmgr 100021 4 udp 32774 nlockmgr 100021 1 tcp 34437 nlockmgr 100021 3 tcp 34437 nlockmgr 100021 4 tcp 34437 nlockmgr 100011 1 udp 819 rquotad 100011 2 udp 819 rquotad 100011 1 tcp 822 rquotad 100011 2 tcp 822 rquotad 100003 2 udp 2049 nfs 100003 3 udp 2049 nfs 100003 2 tcp 2049 nfs 100003 3 tcp 2049 nfs 100005 1 udp 836 mountd 100005 1 tcp 839 mountd 100005 2 udp 836 mountd 100005 2 tcp 839 mountd 100005 3 udp 836 mountd 100005 3 tcp 839 mountd" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/s2-nfs-methodology-portmap
Chapter 6. Installing an IdM server: Without integrated DNS, with an external CA as the root CA
Chapter 6. Installing an IdM server: Without integrated DNS, with an external CA as the root CA You can install a new Identity Management (IdM) server, without integrated DNS, that uses an external certificate authority (CA) as the root CA. Note Install IdM-integrated DNS for basic usage within the IdM deployment. When the IdM server also manages DNS, there is tight integration between DNS and native IdM tools which enables automating some of the DNS record management. For more details, see Planning your DNS services and host names . 6.1. Options used when installing an IdM CA with an external CA as the root CA You might want to install an Identity Management IdM certificate authority (CA) with an external CA as the root CA if one of the following conditions applies: You are installing a new IdM server or replica by using the ipa-server-install command. You are installing the CA component into an existing IdM server by using the ipa-ca-install command. You can use following options for both commands that you can use for creating a certificate signing request (CSR) during the installation of an IdM CA with an external CA as the root CA. --external-ca-type= TYPE Type of the external CA. Possible values are generic and ms-cs . The default value is generic . Use ms-cs to include a template name required by Microsoft Certificate Services (MS CS) in the generated CSR. To use a non-default profile, use the --external-ca-profile option in conjunction with --external-ca-type=ms-cs . --external-ca-profile= PROFILE_SPEC Specify the certificate profile or template that you want the MS CS to apply when issuing the certificate for your IdM CA. Note that the --external-ca-profile option can only be used if --external-ca-type is ms-cs. You can identify the MS CS template in one of the following ways: <oid>:<majorVersion>[:<minorVersion>] . You can specify a certificate template by its object identifier (OID) and major version. You can optionally also specify the minor version. <name> . You can specify a certificate template by its name. The name cannot contain any : characters and cannot be an OID, otherwise the OID-based template specifier syntax takes precedence. default . If you use this specifier, the template name SubCA is used. In certain scenarios, the Active Directory (AD) administrator can use the Subordinate Certification Authority (SCA) template, which is a built-in template in AD CS, to create a unique template to better suit the needs of the organization. The new template can, for example, have a customized validity period and customized extensions. The associated Object Identifier (OID) can be found in the AD Certificates Template console. If the AD administrator has disabled the original, built-in template, you must specify the OID or name of the new template when requesting a certificate for your IdM CA. Ask your AD administrator to provide you with the name or OID of the new template. If the original SCA AD CS template is still enabled, you can use it by specifying --external-ca-type=ms-cs without additionally using the --external-ca-profile option. In this case, the subCA external CA profile is used, which is the default IdM template corresponding to the SCA AD CS template. 6.2. Interactive installation During the interactive installation using the ipa-server-install utility, you are asked to supply basic configuration of the system, for example the realm, the administrator's password and the Directory Manager's password. 
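The interactive procedure below walks through the individual prompts; as an overview, the installation happens in two phases, shown in the following sketch. Only the --external-ca and --external-ca-type options and the /root/ipa.csr location appear in this chapter; --external-cert-file is the option documented in the ipa-server-install(1) man page for supplying the issued certificate and chain, and the certificate file names are placeholders.

# Phase 1: generate the CSR; the installer stops after writing /root/ipa.csr
ipa-server-install --external-ca --external-ca-type=ms-cs

# Submit /root/ipa.csr to the external CA, then download the issued
# certificate and the full CA certificate chain.

# Phase 2: resume the installation with the externally issued certificates
ipa-server-install \
  --external-cert-file=/root/ipa.crt \
  --external-cert-file=/root/external-ca-chain.crt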
The ipa-server-install installation script creates a log file at /var/log/ipaserver-install.log . If the installation fails, the log can help you identify the problem. Follow this procedure to install a server: Without integrated DNS With an external certificate authority (CA) as the root CA Prerequisites You have determined the type of the external CA to specify with the --external-ca-type option. See the ipa-server-install (1) man page for details. If you are using a Microsoft Certificate Services certificate authority (MS CS CA) as your external CA: you have determined the certificate profile or template to specify with the --external-ca-profile option. By default, the SubCA template is used. For more information about the --external-ca-type and --external-ca-profile options, see Options used when installing an IdM CA with an external CA as the root CA . Procedure Run the ipa-server-install utility with the --external-ca option. If you are using the Microsoft Certificate Services (MS CS) CA, also use the --external-ca-type option and, optionally, the --external-ca-profile option: If you are not using MS CS to generate the signing certificate for your IdM CA, no other option may be necessary: The script prompts to configure an integrated DNS service. Press Enter to select the default no option. The script prompts for several required settings and offers recommended default values in brackets. To accept a default value, press Enter . To provide a custom value, enter the required value. Warning Plan these names carefully. You will not be able to change them after the installation is complete. Enter the passwords for the Directory Server superuser ( cn=Directory Manager ) and for the IdM administration system user account ( admin ). Enter yes to confirm the server configuration. During the configuration of the Certificate System instance, the utility prints the location of the certificate signing request (CSR): /root/ipa.csr : When this happens: Submit the CSR located in /root/ipa.csr to the external CA. The process differs depending on the service to be used as the external CA. Retrieve the issued certificate and the CA certificate chain for the issuing CA in a base 64-encoded blob (either a PEM file or a Base_64 certificate from a Windows CA). Again, the process differs for every certificate service. Usually, a download link on a web page or in the notification email allows the administrator to download all the required certificates. Important Be sure to get the full certificate chain for the CA, not just the CA certificate. Run ipa-server-install again, this time specifying the locations and names of the newly-issued CA certificate and the CA chain files. For example: The installation script now configures the server. Wait for the operation to complete. The installation script produces a file with DNS resource records: the /tmp/ipa.system.records.UFRPto.db file in the example output below. Add these records to the existing external DNS servers. The process of updating the DNS records varies depending on the particular DNS solution. Important The server installation is not complete until you add the DNS records to the existing DNS servers. Additional resources For more information about the DNS resource records you must add to your DNS system, see IdM DNS records for external DNS systems . The ipa-server-install --external-ca command can sometimes fail with the following error: This failure occurs when the *_proxy environmental variables are set. 
For a solution of the problem, see Troubleshooting: External CA installation fails . 6.3. Non-interactive installation This procedure installs a server: Without integrated DNS With an external certificate authority (CA) as the root CA Note The ipa-server-install installation script creates a log file at /var/log/ipaserver-install.log . If the installation fails, the log can help you identify the problem. Prerequisites You have determined the type of the external CA to specify with the --external-ca-type option. See the ipa-server-install (1) man page for details. If you are using a Microsoft Certificate Services certificate authority (MS CS CA) as your external CA: you have determined the certificate profile or template to specify with the --external-ca-profile option. By default, the SubCA template is used. For more information about the --external-ca-type and --external-ca-profile options, see Options used when installing an IdM CA with an external CA as the root CA . Procedure Run the ipa-server-install utility with the options to supply all the required information. The minimum required options for non-interactive installation of an IdM server with an external CA as the root CA are: --external-ca to specify an external CA is the root CA --realm to provide the Kerberos realm name --ds-password to provide the password for the Directory Manager (DM), the Directory Server super user --admin-password to provide the password for admin , the IdM administrator --unattended to let the installation process select default options for the host name and domain name For example: If you are using a Microsoft Certificate Services (MS CS) CA, also use the --external-ca-type option and, optionally, the --external-ca-profile option. For more information, see Options used when installing an IdM CA with an external CA as the root CA . During the configuration of the Certificate System instance, the utility prints the location of the certificate signing request (CSR): /root/ipa.csr : When this happens: Submit the CSR located in /root/ipa.csr to the external CA. The process differs depending on the service to be used as the external CA. Retrieve the issued certificate and the CA certificate chain for the issuing CA in a base 64-encoded blob (either a PEM file or a Base_64 certificate from a Windows CA). Again, the process differs for every certificate service. Usually, a download link on a web page or in the notification email allows the administrator to download all the required certificates. Important Be sure to get the full certificate chain for the CA, not just the CA certificate. Run ipa-server-install again, this time specifying the locations and names of the newly-issued CA certificate and the CA chain files. For example: The installation script now configures the server. Wait for the operation to complete. The installation script produces a file with DNS resource records: the /tmp/ipa.system.records.UFRPto.db file in the example output below. Add these records to the existing external DNS servers. The process of updating the DNS records varies depending on the particular DNS solution. Important The server installation is not complete until you add the DNS records to the existing DNS servers. Additional resources For more information about the DNS resource records you must add to your DNS system, see IdM DNS records for external DNS systems . 6.4. 
IdM DNS records for external DNS systems After installing an IdM server without integrated DNS, you must add LDAP and Kerberos DNS resource records for the IdM server to your external DNS system. The ipa-server-install installation script generates a file containing the list of DNS resource records with a file name in the format /tmp/ipa.system.records. <random_characters> .db and prints instructions to add those records: This is an example of the contents of the file: Note After adding the LDAP and Kerberos DNS resource records for the IdM server to your DNS system, ensure that the DNS management tools have not added PTR records for ipa-ca . The presence of PTR records for ipa-ca in your DNS could cause subsequent IdM replica installations to fail.
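After you add the records to your DNS servers, you can spot-check them with standard DNS tools. A minimal sketch using the example.com names from the sample file (replace the domain and the server IP address with your own values; the reverse lookup helps confirm that no unexpected PTR record points to ipa-ca):
dig +short _ldap._tcp.example.com SRV
dig +short _kerberos._tcp.example.com SRV
dig +short _kerberos.example.com TXT
dig +short -x <server_IP_address>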
[ "ipa-server-install --external-ca --external-ca-type=ms-cs --external-ca-profile= <oid>/<name>/default", "ipa-server-install --external-ca", "Do you want to configure integrated DNS (BIND)? [no]:", "Server host name [ server.idm.example.com ]: Please confirm the domain name [ idm.example.com ]: Please provide a realm name [ IDM.EXAMPLE.COM ]:", "Directory Manager password: IPA admin password:", "Continue to configure the system with these values? [no]: yes", "Configuring certificate server (pki-tomcatd): Estimated time 3 minutes 30 seconds [1/8]: creating certificate server user [2/8]: configuring certificate server instance The next step is to get /root/ipa.csr signed by your CA and re-run /sbin/ipa-server-install as: /sbin/ipa-server-install --external-cert-file=/path/to/signed_certificate --external-cert-file=/path/to/external_ca_certificate", "ipa-server-install --external-cert-file= /tmp/servercert20170601.pem --external-cert-file= /tmp/cacert.pem", "Restarting the KDC Please add records in this file to your DNS system: /tmp/ipa.system.records.UFRBto.db Restarting the web server", "ipa : CRITICAL failed to configure ca instance Command '/usr/sbin/pkispawn -s CA -f /tmp/pass:quotes[ configuration_file ]' returned non-zero exit status 1 Configuration of CA failed", "ipa-server-install --external-ca --realm IDM.EXAMPLE.COM --ds-password DM_password --admin-password admin_password --unattended", "Configuring certificate server (pki-tomcatd). Estimated time: 3 minutes [1/11]: configuring certificate server instance The next step is to get /root/ipa.csr signed by your CA and re-run /usr/sbin/ipa-server-install as: /usr/sbin/ipa-server-install --external-cert-file=/path/to/signed_certificate --external-cert-file=/path/to/external_ca_certificate The ipa-server-install command was successful", "ipa-server-install --external-cert-file= /tmp/servercert20170601.pem --external-cert-file= /tmp/cacert.pem --realm IDM.EXAMPLE.COM --ds-password DM_password --admin-password admin_password --unattended", "Restarting the KDC Please add records in this file to your DNS system: /tmp/ipa.system.records.UFRBto.db Restarting the web server", "Please add records in this file to your DNS system: /tmp/ipa.system.records.6zdjqxh3.db", "_kerberos-master._tcp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos-master._udp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos._tcp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos._udp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos.example.com. 86400 IN TXT \"EXAMPLE.COM\" _kpasswd._tcp.example.com. 86400 IN SRV 0 100 464 server.example.com. _kpasswd._udp.example.com. 86400 IN SRV 0 100 464 server.example.com. _ldap._tcp.example.com. 86400 IN SRV 0 100 389 server.example.com." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_identity_management/assembly_installing-an-ipa-server-without-dns-with-external-ca_installing-identity-management
Chapter 16. Uninstalling Logging
Chapter 16. Uninstalling Logging You can remove logging from your OpenShift Container Platform cluster by removing installed Operators and related custom resources (CRs). 16.1. Uninstalling the logging You can stop aggregating logs by deleting the Red Hat OpenShift Logging Operator and the ClusterLogging custom resource (CR). Prerequisites You have administrator permissions. You have access to the Administrator perspective of the OpenShift Container Platform web console. Procedure Go to the Administration Custom Resource Definitions page, and click ClusterLogging . On the Custom Resource Definition Details page, click Instances . Click the options menu to the instance, and click Delete ClusterLogging . Go to the Administration Custom Resource Definitions page. Click the options menu to ClusterLogging , and select Delete Custom Resource Definition . Warning Deleting the ClusterLogging CR does not remove the persistent volume claims (PVCs). To delete the remaining PVCs, persistent volumes (PVs), and associated data, you must take further action. Releasing or deleting PVCs can delete PVs and cause data loss. If you have created a ClusterLogForwarder CR, click the options menu to ClusterLogForwarder , and then click Delete Custom Resource Definition . Go to the Operators Installed Operators page. Click the options menu to the Red Hat OpenShift Logging Operator, and then click Uninstall Operator . Optional: Delete the openshift-logging project. Warning Deleting the openshift-logging project deletes everything in that namespace, including any persistent volume claims (PVCs). If you want to preserve logging data, do not delete the openshift-logging project. Go to the Home Projects page. Click the options menu to the openshift-logging project, and then click Delete Project . Confirm the deletion by typing openshift-logging in the dialog box, and then click Delete . 16.2. Deleting logging PVCs To keep persistent volume claims (PVCs) for reuse with other pods, keep the labels or PVC names that you need to reclaim the PVCs. If you do not want to keep the PVCs, you can delete them. If you want to recover storage space, you can also delete the persistent volumes (PVs). Prerequisites You have administrator permissions. You have access to the Administrator perspective of the OpenShift Container Platform web console. Procedure Go to the Storage Persistent Volume Claims page. Click the options menu to each PVC, and select Delete Persistent Volume Claim . 16.3. Uninstalling Loki Prerequisites You have administrator permissions. You have access to the Administrator perspective of the OpenShift Container Platform web console. If you have not already removed the Red Hat OpenShift Logging Operator and related resources, you have removed references to LokiStack from the ClusterLogging custom resource. Procedure Go to the Administration Custom Resource Definitions page, and click LokiStack . On the Custom Resource Definition Details page, click Instances . Click the options menu to the instance, and then click Delete LokiStack . Go to the Administration Custom Resource Definitions page. Click the options menu to LokiStack , and select Delete Custom Resource Definition . Delete the object storage secret. Go to the Operators Installed Operators page. Click the options menu to the Loki Operator, and then click Uninstall Operator . Optional: Delete the openshift-operators-redhat project. Important Do not delete the openshift-operators-redhat project if other global Operators are installed in this namespace. 
Go to the Home Projects page. Click the options menu to the openshift-operators-redhat project, and then click Delete Project . Confirm the deletion by typing openshift-operators-redhat in the dialog box, and then click Delete . 16.4. Uninstalling Elasticsearch Prerequisites You have administrator permissions. You have access to the Administrator perspective of the OpenShift Container Platform web console. If you have not already removed the Red Hat OpenShift Logging Operator and related resources, you must remove references to Elasticsearch from the ClusterLogging custom resource. Procedure Go to the Administration Custom Resource Definitions page, and click Elasticsearch . On the Custom Resource Definition Details page, click Instances . Click the options menu to the instance, and then click Delete Elasticsearch . Go to the Administration Custom Resource Definitions page. Click the options menu to Elasticsearch , and select Delete Custom Resource Definition . Delete the object storage secret. Go to the Operators Installed Operators page. Click the options menu to the OpenShift Elasticsearch Operator, and then click Uninstall Operator . Optional: Delete the openshift-operators-redhat project. Important Do not delete the openshift-operators-redhat project if other global Operators are installed in this namespace. Go to the Home Projects page. Click the options menu to the openshift-operators-redhat project, and then click Delete Project . Confirm the deletion by typing openshift-operators-redhat in the dialog box, and then click Delete . 16.5. Deleting Operators from a cluster using the CLI Cluster administrators can delete installed Operators from a selected namespace by using the CLI. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. The OpenShift CLI ( oc ) is installed on your workstation. Procedure Ensure the latest version of the subscribed operator (for example, serverless-operator ) is identified in the currentCSV field. USD oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV Example output currentCSV: serverless-operator.v1.28.0 Delete the subscription (for example, serverless-operator ): USD oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless Example output subscription.operators.coreos.com "serverless-operator" deleted Delete the CSV for the Operator in the target namespace using the currentCSV value from the step: USD oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless Example output clusterserviceversion.operators.coreos.com "serverless-operator.v1.28.0" deleted Additional resources Reclaiming a persistent volume manually
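As a command-line alternative to the console steps for deleting logging PVCs earlier in this chapter, a minimal sketch (the openshift-logging namespace is the one used in this chapter; the PVC name is a placeholder, and deleting a PVC can cause data loss):
oc get pvc -n openshift-logging
oc delete pvc <pvc_name> -n openshift-logging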
[ "oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV", "currentCSV: serverless-operator.v1.28.0", "oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless", "subscription.operators.coreos.com \"serverless-operator\" deleted", "oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless", "clusterserviceversion.operators.coreos.com \"serverless-operator.v1.28.0\" deleted" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/logging/cluster-logging-uninstall
Chapter 3. Configuring IAM for IBM Cloud
Chapter 3. Configuring IAM for IBM Cloud In environments where the cloud identity and access management (IAM) APIs are not reachable, you must put the Cloud Credential Operator (CCO) into manual mode before you install the cluster. 3.1. Alternatives to storing administrator-level secrets in the kube-system project The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). You can configure the CCO to suit the security requirements of your organization by setting different values for the credentialsMode parameter in the install-config.yaml file. Storing an administrator-level credential secret in the cluster kube-system project is not supported for IBM Cloud(R); therefore, you must set the credentialsMode parameter for the CCO to Manual when installing OpenShift Container Platform and manage your cloud credentials manually. Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. You can also use this mode if your environment does not have connectivity to the cloud provider public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. You must also manually supply credentials for every component that requests them. Additional resources About the Cloud Credential Operator 3.2. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. 
Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Rotating API keys for IBM Cloud(R) 3.3. Next steps Installing a cluster on IBM Cloud(R) with customizations 3.4. Additional resources Preparing to update a cluster with manually maintained credentials
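Because IBM Cloud(R) requires manual mode, the install-config.yaml for the cluster sets the credentialsMode parameter described earlier in this chapter; a minimal excerpt (the base domain is a placeholder):
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
The IBM Cloud(R) credentials themselves are then created with the ibmcloud subcommand listed in the help output above. The following sketch assumes the create-service-id flow and its flag names, so confirm the exact syntax with ./ccoctl.rhel9 ibmcloud create-service-id --help before running it:
./ccoctl.rhel9 ibmcloud create-service-id --credentials-requests-dir=<path_to_credentials_requests_directory> --name=<cluster_name>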
[ "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command." ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_ibm_cloud/configuring-iam-ibm-cloud
Chapter 29. KafkaJmxOptions schema reference
Chapter 29. KafkaJmxOptions schema reference Used in: KafkaClusterSpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , ZookeeperClusterSpec Full list of KafkaJmxOptions schema properties Configures JMX connection options. Get JMX metrics from Kafka brokers, ZooKeeper nodes, Kafka Connect, and MirrorMaker 2. by connecting to port 9999. Use the jmxOptions property to configure a password-protected or an unprotected JMX port. Using password protection prevents unauthorized pods from accessing the port. You can then obtain metrics about the component. For example, for each Kafka broker you can obtain bytes-per-second usage data from clients, or the request rate of the network of the broker. To enable security for the JMX port, set the type parameter in the authentication field to password . Example password-protected JMX configuration for Kafka brokers and ZooKeeper nodes apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... jmxOptions: authentication: type: "password" # ... zookeeper: # ... jmxOptions: authentication: type: "password" #... You can then deploy a pod into a cluster and obtain JMX metrics using the headless service by specifying which broker you want to address. For example, to get JMX metrics from broker 0 you specify: " CLUSTER-NAME -kafka-0. CLUSTER-NAME -kafka-brokers" CLUSTER-NAME -kafka-0 is name of the broker pod, and CLUSTER-NAME -kafka-brokers is the name of the headless service to return the IPs of the broker pods. If the JMX port is secured, you can get the username and password by referencing them from the JMX Secret in the deployment of your pod. For an unprotected JMX port, use an empty object {} to open the JMX port on the headless service. You deploy a pod and obtain metrics in the same way as for the protected port, but in this case any pod can read from the JMX port. Example open port JMX configuration for Kafka brokers and ZooKeeper nodes apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... jmxOptions: {} # ... zookeeper: # ... jmxOptions: {} # ... Additional resources For more information on the Kafka component metrics exposed using JMX, see the Apache Kafka documentation . 29.1. KafkaJmxOptions schema properties Property Description authentication Authentication configuration for connecting to the JMX port. The type depends on the value of the authentication.type property within the given object, which must be one of [password]. KafkaJmxAuthenticationPassword
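If the JMX port is password-protected as in the first example, you can read the generated credentials from the JMX Secret before connecting. A minimal sketch (the Secret name my-cluster-kafka-jmx and the jmx-username and jmx-password keys are assumptions based on the my-cluster examples above; verify the actual names in your namespace):
oc get secret my-cluster-kafka-jmx -o jsonpath='{.data.jmx-username}' | base64 -d
oc get secret my-cluster-kafka-jmx -o jsonpath='{.data.jmx-password}' | base64 -d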
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # jmxOptions: authentication: type: \"password\" # zookeeper: # jmxOptions: authentication: type: \"password\" #", "\" CLUSTER-NAME -kafka-0. CLUSTER-NAME -kafka-brokers\"", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # jmxOptions: {} # zookeeper: # jmxOptions: {} #" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaJmxOptions-reference
Chapter 78. KafkaAutoRebalanceStatus schema reference
Chapter 78. KafkaAutoRebalanceStatus schema reference Used in: KafkaStatus Property Property type Description state string (one of [RebalanceOnScaleUp, Idle, RebalanceOnScaleDown]) The current state of an auto-rebalancing operation. Possible values are: Idle as the initial state when an auto-rebalance is requested, or as the final state when it completes or fails. RebalanceOnScaleDown if an auto-rebalance related to a scale-down operation is running. RebalanceOnScaleUp if an auto-rebalance related to a scale-up operation is running. lastTransitionTime string The timestamp of the latest auto-rebalancing state update. modes KafkaAutoRebalanceStatusBrokers array List of modes where an auto-rebalancing operation is either running or queued. Each mode entry ( add-brokers or remove-brokers ) includes one of the following: Broker IDs for a current auto-rebalance. Broker IDs for a queued auto-rebalance (if a rebalance is still in progress).
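To inspect this status on a running cluster, you can view the Kafka resource, for example with oc get kafka my-cluster -o yaml. The following sketch shows how the properties above might appear; the autoRebalance field name and the nested mode and brokers keys are assumptions for illustration, while state, lastTransitionTime, and modes come from the table in this chapter:
status:
  autoRebalance:
    state: RebalanceOnScaleUp
    lastTransitionTime: "2025-01-01T00:00:00Z"
    modes:
      - mode: add-brokers
        brokers: [3, 4]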
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaAutoRebalanceStatus-reference
Preface
Preface Red Hat OpenStack Platform provides the foundation to build a private or public Infrastructure-as-a-Service (IaaS) cloud on top of Red Hat Enterprise Linux. It is a scalable, fault-tolerant platform for the development of cloud-enabled workloads. You can manage most features of the backup service by using either the OpenStack dashboard or the command-line client; however, you must use the command line to execute some of the more advanced procedures. Note For the complete suite of documentation for Red Hat OpenStack Platform, see Red Hat OpenStack Platform Documentation .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/block_storage_backup_guide/pr01
Chapter 1. Preparing to deploy OpenShift Data Foundation
Chapter 1. Preparing to deploy OpenShift Data Foundation When you deploy OpenShift Data Foundation on OpenShift Container Platform using local storage devices, you can create internal cluster resources. This approach internally provisions base services and all applications can access additional storage classes. Before you begin the deployment of Red Hat OpenShift Data Foundation using local storage, ensure that your resource requirements are met. See requirements for installing OpenShift Data Foundation using local storage devices . On the external key management system (KMS): When the Token authentication method is selected for encryption, refer to Enabling cluster-wide encryption with the Token authentication using KMS . Ensure that you are using signed certificates on your Vault servers. After you have addressed the above, follow these steps in the order given: Install the Red Hat OpenShift Data Foundation Operator . Install Local Storage Operator . Find the available storage devices . Create the OpenShift Data Foundation cluster service on IBM Z . 1.1. Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker or infrastructure nodes with locally attached storage devices on each of them. Each of the three selected nodes must have at least one raw block device available. OpenShift Data Foundation uses one or more of the available raw block devices. Note Make sure that each available raw block device has a unique by-id device name. The devices you use must be empty; that is, the disks must not include Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs). For more information, see the Resource requirements section in the Planning guide . 1.2. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully select a unique path name as the backend path that follows the naming convention, because you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict users to performing write or delete operations on the secret: Create a token that matches the above policy:
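Earlier in this chapter, the node requirements state that each raw block device must be empty and have a unique by-id name. To confirm this on a node, a minimal sketch (the device name sdX is a placeholder; wipefs is destructive, so use it only on a device that you are certain holds no needed data):
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
ls -l /dev/disk/by-id/ | grep sdX
wipefs --all /dev/sdX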
[ "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault token create -policy=odf -format json" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_ibm_z/preparing_to_deploy_openshift_data_foundation
Chapter 3. Metrics
Chapter 3. Metrics 3.1. Metrics in the Block and File dashboard You can navigate to the Block and File dashboard in the OpenShift Web Console as follows: Click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. Click the Block and File tab. The following cards on the Block and File dashboard provides the metrics based on deployment mode (internal or external): Details card The Details card shows the following: Service name Cluster name The name of the Provider on which the system runs (example: AWS , VSphere , None for Bare metal) Mode (deployment mode as either Internal or External) OpenShift Data Foundation operator version. In-transit encryption (shows whether the encryption is enabled or disabled) Storage Efficiency card This card shows the compression ratio that represents a compressible data effectiveness metric, which includes all the compression-enabled pools. This card also shows the savings metric that represents the actual disk capacity saved, which includes all the compression-enabled pools and associated replicas. Inventory card The Inventory card shows the total number of active nodes, disks, pools, storage classes, PVCs and deployments backed by OpenShift Data Foundation provisioner. Note For external mode, the number of nodes will be 0 by default as there are no dedicated nodes for OpenShift Data Foundation. Status card This card shows whether the cluster is up and running without any errors or is experiencing some issues. For internal mode, Data Resiliency indicates the status of data re-balancing in Ceph across the replicas. When the internal mode cluster is in a warning or error state, the Alerts section is shown along with the relevant alerts. For external mode, Data Resiliency and alerts are not displayed Raw Capacity card This card shows the total raw storage capacity which includes replication on the cluster. Used legend indicates space used raw storage capacity on the cluster Available legend indicates the available raw storage capacity on the cluster. Note This card is not applicable for external mode clusters. Requested Capacity This card shows the actual amount of non-replicated data stored in the cluster and its distribution. You can choose between Projects, Storage Classes, Pods, and Peristent Volume Claims from the drop-down list on the top of the card. You need to select a namespace for the Persistent Volume Claims option. These options are for filtering the data shown in the graph. The graph displays the requested capacity for only the top five entities based on usage. The aggregate requested capacity of the remaining entities is displayed as Other. Option Display Projects The aggregated capacity of each project which is using the OpenShift Data Foundation and how much is being used. Storage Classes The aggregate capacity which is based on the OpenShift Data Foundation based storage classes. Pods All the pods that are trying to use the PVC that are backed by OpenShift Data Foundation provisioner. PVCs All the PVCs in the namespace that you selected from the dropdown list and that are mounted on to an active pod. PVCs that are not attached to pods are not included. For external mode, see the Capacity breakdown card . Capacity breakdown card This card is only applicable for external mode clusters. In this card, you can view a graphic breakdown of capacity per project, storage classes, and pods. 
You can choose between Projects, Storage Classes and Pods from the drop-down menu on the top of the card. These options are for filtering the data shown in the graph. The graph displays the used capacity for only the top five entities, based on usage. The aggregate usage of the remaining entities is displayed as Other . Utilization card The card shows used capacity, input/output operations per second, latency, throughput, and recovery information for the internal mode cluster. For external mode, this card shows only the used and requested capacity details for that cluster. Activity card This card shows the current and the past activities of the OpenShift Data Foundation cluster. The card is separated into two sections: Ongoing : Displays the progress of ongoing activities related to rebuilding of data resiliency and upgrading of OpenShift Data Foundation operator. Recent Events : Displays the list of events that happened in the openshift-storage namespace. 3.2. Metrics in the Object dashboard You can navigate to the Object dashboard in the OpenShift Web Console as follows: Click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. Click the Object tab. The following metrics are available in the Object dashboard: Details card This card shows the following information: Service Name : The Multicloud Object Gateway (MCG) service name. System Name : The Multicloud Object Gateway and RADOS Object Gateway system names. The Multicloud Object Gateway system name is also a hyperlink to the MCG management user interface. Provider : The name of the provider on which the system runs (example: AWS , VSphere , None for Baremetal) Version : OpenShift Data Foundation operator version. Storage Efficiency card In this card you can view how the MCG optimizes the consumption of the storage backend resources through deduplication and compression and provides you with a calculated efficiency ratio (application data vs logical data) and an estimated savings figure (how many bytes the MCG did not send to the storage provider) based on capacity of bare metal and cloud based storage and egress of cloud based storage. Buckets card Buckets are containers maintained by the MCG and RADOS Object Gateway to store data on behalf of the applications. These buckets are created and accessed through object bucket claims (OBCs). A specific policy can be applied to bucket to customize data placement, data spill-over, data resiliency, capacity quotas, and so on. In this card, information about object buckets (OB) and object bucket claims (OBCs) is shown separately. OB includes all the buckets that are created using S3 or the user interface(UI) and OBC includes all the buckets created using YAMLs or the command line interface (CLI). The number displayed on the left of the bucket type is the total count of OBs or OBCs. The number displayed on the right shows the error count and is visible only when the error count is greater than zero. You can click on the number to see the list of buckets that has the warning or error status. Resource Providers card This card displays a list of all Multicloud Object Gateway and RADOS Object Gateway resources that are currently in use. Those resources are used to store data according to the buckets policies and can be a cloud-based resource or a bare metal resource. Status card This card shows whether the system and its services are running without any issues. 
When the system is in a warning or error state, the alerts section is shown and the relevant alerts are displayed there. Click the alert links beside each alert for more information about the issue. For information about health checks, see Cluster health . If multiple object storage services are available in the cluster, click the service type (such as Object Service or Data Resiliency ) to see the state of the individual services. Data resiliency in the status card indicates if there is any resiliency issue regarding the data stored through the Multicloud Object Gateway and RADOS Object Gateway. Capacity breakdown card In this card you can visualize how applications consume the object storage through the Multicloud Object Gateway and RADOS Object Gateway. You can use the Service Type drop-down to view the capacity breakdown for the Multicloud Gateway and Object Gateway separately. When viewing the Multicloud Object Gateway, you can use the Break By drop-down to filter the results in the graph by either Projects or Bucket Class . Performance card In this card, you can view the performance of the Multicloud Object Gateway or RADOS Object Gateway. Use the Service Type drop-down to choose which you would like to view. For Multicloud Object Gateway accounts, you can view the I/O operations and logical used capacity. For providers, you can view I/O operation, physical and logical usage, and egress. The following tables explain the different metrics that you can view based on your selection from the drop-down menus on the top of the card: Table 3.1. Indicators for Multicloud Object Gateway Consumer types Metrics Chart display Accounts I/O operations Displays read and write I/O operations for the top five consumers. The total reads and writes of all the consumers is displayed at the bottom. This information helps you monitor the throughput demand (IOPS) per application or account. Accounts Logical Used Capacity Displays total logical usage of each account for the top five consumers. This helps you monitor the throughput demand per application or account. Providers I/O operations Displays the count of I/O operations generated by the MCG when accessing the storage backend hosted by the provider. This helps you understand the traffic in the cloud so that you can improve resource allocation according to the I/O pattern, thereby optimizing the cost. Providers Physical vs Logical usage Displays the data consumption in the system by comparing the physical usage with the logical usage per provider. This helps you control the storage resources and devise a placement strategy in line with your usage characteristics and your performance requirements while potentially optimizing your costs. Providers Egress The amount of data the MCG retrieves from each provider (read bandwidth originated with the applications). This helps you understand the traffic in the cloud to improve resource allocation according to the egress pattern, thereby optimizing the cost. For the RADOS Object Gateway, you can use the Metric drop-down to view the Latency or Bandwidth . Latency : Provides a visual indication of the average GET/PUT latency imbalance across RADOS Object Gateway instances. Bandwidth : Provides a visual indication of the sum of GET/PUT bandwidth across RADOS Object Gateway instances. Activity card This card displays what activities are happening or have recently happened in the OpenShift Data Foundation cluster. 
The card is separated into two sections: Ongoing : Displays the progress of ongoing activities related to rebuilding of data resiliency and upgrading of OpenShift Data Foundation operator. Recent Events : Displays the list of events that happened in the openshift-storage namespace. 3.3. Pool metrics The Pool metrics dashboard provides information to ensure efficient data consumption, and how to enable or disable compression if less effective. Viewing pool metrics To view the pool list: Click Storage Data Foundation . In the Storage systems tab, select the storage system and then click BlockPools . When you click on a pool name, the following cards on each Pool dashboard is displayed along with the metrics based on deployment mode (internal or external): Details card The Details card shows the following: Pool Name Volume type Replicas Status card This card shows whether the pool is up and running without any errors or is experiencing some issues. Mirroring card When the mirroring option is enabled, this card shows the mirroring status, image health, and last checked time-stamp. The mirroring metrics are displayed when cluster level mirroring is enabled. The metrics help to prevent disaster recovery failures and notify of any discrepancies so that the data is kept intact. The mirroring card shows high-level information such as: Mirroring state as either enabled or disabled for the particular pool. Status of all images under the pool as replicating successfully or not. Percentage of images that are replicating and not replicating. Inventory card The Inventory card shows the number of storage classes and Persistent Volume Claims. Compression card This card shows the compression status as enabled or disabled as the case may be. It also displays the storage efficiency details as follows: Compression eligibility that indicates what portion of written compression-eligible data is compressible (per ceph parameters) Compression ratio of compression-eligible data Compression savings provides the total savings (including replicas) of compression-eligible data For information on how to enable or disable compression for an existing pool, see Updating an existing pool . Raw Capacity card This card shows the total raw storage capacity which includes replication, on the cluster. Used legend indicates storage capacity used by the pool Available legend indicates the available raw storage capacity on the cluster Performance card In this card, you can view the usage of I/O operations and throughput demand per application or account. The graph indicates the average latency or bandwidth across the instances. 3.4. Network File System metrics The Network File System (NFS) metrics dashboard provides enhanced observability for NFS mounts such as the following: Mount point for any exported NFS shares Number of client mounts A breakdown statistics of the clients that are connected to help determine internal versus the external client mounts Grace period status of the Ganesha server Health statuses of the Ganesha server Prerequisites OpenShift Container Platform is installed and you have administrative access to OpenShift Web Console. Ensure that NFS is enabled. Procedure You can navigate to the Network file system dashboard in the OpenShift Web Console as follows: Click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. Click the Network file system tab. This tab is available only when NFS is enabled. 
Note When you enable or disable NFS from command-line interface, you must perform hard refresh to display or hide the Network file system tab in the dashboard. The following NFS metrics are displayed: Status Card This card shows the status of the server based on the total number of active worker threads. Non-zero threads specify healthy status. Throughput Card This card shows the throughput of the server which is the summation of the total request bytes and total response bytes for both read and write operations of the server. Top client Card This card shows the throughput of clients which is the summation of the total of the response bytes sent by a client and the total request bytes by a client for both read and write operations. It shows the top three of such clients. 3.5. Enabling metadata on RBD and CephFS volumes You can set the persistent volume claim (PVC), persistent volume (PV), and Namespace names in the RADOS block device (RBD) and CephFS volumes for monitoring purposes. This enables you to read the RBD and CephFS metadata to identify the mapping between the OpenShift Container Platform and RBD and CephFS volumes. To enable RADOS block device (RBD) and CephFS volume metadata feature, you need to set the CSI_ENABLE_METADATA variable in the rook-ceph-operator-config configmap . By default, this feature is disabled. If you enable the feature after upgrading from a version, the existing PVCs will not contain the metadata. Also, when you enable the metadata feature, the PVCs that were created before enabling will not have the metadata. Prerequisites Ensure to install ocs_operator and create a storagecluster for the operator. Ensure that the storagecluster is in Ready state. Procedure Edit the rook-ceph operator ConfigMap to mark CSI_ENABLE_METADATA to true . Wait for the respective CSI CephFS plugin provisioner pods and CSI RBD plugin pods to reach the Running state. Note Ensure that the setmetadata variable is automatically set after the metadata feature is enabled. This variable should not be available when the metadata feature is disabled. Verification steps To verify the metadata for RBD PVC: Create a PVC. Check the status of the PVC. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article. There are four metadata on this image: To verify the metadata for RBD clones: Create a clone. Check the status of the clone. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article. To verify the metadata for RBD Snapshots: Create a snapshot. Check the status of the snapshot. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article. Verify the metadata for RBD Restore: Restore a volume snapshot. Check the status of the restored volume snapshot. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article. 
To verify the metadata for CephFS PVC: Create a PVC. Check the status of the PVC. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article. To verify the metadata for CephFS clone: Create a clone. Check the status of the clone. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article. To verify the metadata for CephFS volume snapshot: Create a volume snapshot. Check the status of the volume snapshot. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article. To verify the metadata of the CephFS Restore: Restore a volume snapshot. Check the status of the restored volume snapshot. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article.
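If you need to disable the metadata feature again later, one option is to set the same CSI_ENABLE_METADATA variable back to false. The following sketch uses a JSON merge patch as an alternative form of the enable command shown in this procedure; verify the behavior in your environment before relying on it:
oc patch cm rook-ceph-operator-config -n openshift-storage --type merge -p '{"data":{"CSI_ENABLE_METADATA":"false"}}'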
[ "oc get storagecluster NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-storagecluster 57m Ready 2022-08-30T06:52:58Z 4.12.0", "oc patch cm rook-ceph-operator-config -n openshift-storage -p USD'data:\\n \"CSI_ENABLE_METADATA\": \"true\"' configmap/rook-ceph-operator-config patched", "oc get pods | grep csi csi-cephfsplugin-b8d6c 2/2 Running 0 56m csi-cephfsplugin-bnbg9 2/2 Running 0 56m csi-cephfsplugin-kqdw4 2/2 Running 0 56m csi-cephfsplugin-provisioner-7dcd78bb9b-q6dxb 5/5 Running 0 56m csi-cephfsplugin-provisioner-7dcd78bb9b-zc4q5 5/5 Running 0 56m csi-rbdplugin-776dl 3/3 Running 0 56m csi-rbdplugin-ffl52 3/3 Running 0 56m csi-rbdplugin-jx9mz 3/3 Running 0 56m csi-rbdplugin-provisioner-5f6d766b6c-694fx 6/6 Running 0 56m csi-rbdplugin-provisioner-5f6d766b6c-vzv45 6/6 Running 0 56m", "cat <<EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: rbd-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: ocs-storagecluster-ceph-rbd EOF", "oc get pvc | grep rbd-pvc rbd-pvc Bound pvc-30628fa8-2966-499c-832d-a6a3a8ebc594 1Gi RWO ocs-storagecluster-ceph-rbd 32s", "[sh-4.x]USD rbd ls ocs-storagecluster-cephblockpool csi-vol-7d67bfad-2842-11ed-94bd-0a580a830012 csi-vol-ed5ce27b-2842-11ed-94bd-0a580a830012 [sh-4.x]USD rbd image-meta ls ocs-storagecluster-cephblockpool/csi-vol-ed5ce27b-2842-11ed-94bd-0a580a830012", "Key Value csi.ceph.com/cluster/name 6cd7a18d-7363-4830-ad5c-f7b96927f026 csi.storage.k8s.io/pv/name pvc-30628fa8-2966-499c-832d-a6a3a8ebc594 csi.storage.k8s.io/pvc/name rbd-pvc csi.storage.k8s.io/pvc/namespace openshift-storage", "cat <<EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: rbd-pvc-clone spec: storageClassName: ocs-storagecluster-ceph-rbd dataSource: name: rbd-pvc kind: PersistentVolumeClaim accessModes: - ReadWriteOnce resources: requests: storage: 1Gi EOF", "oc get pvc | grep rbd-pvc rbd-pvc Bound pvc-30628fa8-2966-499c-832d-a6a3a8ebc594 1Gi RWO ocs-storagecluster-ceph-rbd 15m rbd-pvc-clone Bound pvc-0d72afda-f433-4d46-a7f1-a5fcb3d766e0 1Gi RWO ocs-storagecluster-ceph-rbd 52s", "[sh-4.x]USD rbd ls ocs-storagecluster-cephblockpool csi-vol-063b982d-2845-11ed-94bd-0a580a830012 csi-vol-063b982d-2845-11ed-94bd-0a580a830012-temp csi-vol-7d67bfad-2842-11ed-94bd-0a580a830012 csi-vol-ed5ce27b-2842-11ed-94bd-0a580a830012 [sh-4.x]USD rbd image-meta ls ocs-storagecluster-cephblockpool/csi-vol-063b982d-2845-11ed-94bd-0a580a830012 There are 4 metadata on this image: Key Value csi.ceph.com/cluster/name 6cd7a18d-7363-4830-ad5c-f7b96927f026 csi.storage.k8s.io/pv/name pvc-0d72afda-f433-4d46-a7f1-a5fcb3d766e0 csi.storage.k8s.io/pvc/name rbd-pvc-clone csi.storage.k8s.io/pvc/namespace openshift-storage", "cat <<EOF | oc create -f - apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: rbd-pvc-snapshot spec: volumeSnapshotClassName: ocs-storagecluster-rbdplugin-snapclass source: persistentVolumeClaimName: rbd-pvc EOF volumesnapshot.snapshot.storage.k8s.io/rbd-pvc-snapshot created", "oc get volumesnapshot NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE rbd-pvc-snapshot true rbd-pvc 1Gi ocs-storagecluster-rbdplugin-snapclass snapcontent-b992b782-7174-4101-8fe3-e6e478eb2c8f 17s 18s", "[sh-4.x]USD rbd ls ocs-storagecluster-cephblockpool csi-snap-a1e24408-2848-11ed-94bd-0a580a830012 csi-vol-063b982d-2845-11ed-94bd-0a580a830012 csi-vol-063b982d-2845-11ed-94bd-0a580a830012-temp csi-vol-7d67bfad-2842-11ed-94bd-0a580a830012 
csi-vol-ed5ce27b-2842-11ed-94bd-0a580a830012 [sh-4.x]USD rbd image-meta ls ocs-storagecluster-cephblockpool/csi-snap-a1e24408-2848-11ed-94bd-0a580a830012 There are 4 metadata on this image: Key Value csi.ceph.com/cluster/name 6cd7a18d-7363-4830-ad5c-f7b96927f026 csi.storage.k8s.io/volumesnapshot/name rbd-pvc-snapshot csi.storage.k8s.io/volumesnapshot/namespace openshift-storage csi.storage.k8s.io/volumesnapshotcontent/name snapcontent-b992b782-7174-4101-8fe3-e6e478eb2c8f", "cat <<EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: rbd-pvc-restore spec: storageClassName: ocs-storagecluster-ceph-rbd dataSource: name: rbd-pvc-snapshot kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io accessModes: - ReadWriteOnce resources: requests: storage: 1Gi EOF persistentvolumeclaim/rbd-pvc-restore created", "oc get pvc | grep rbd db-noobaa-db-pg-0 Bound pvc-615e2027-78cd-4ea2-a341-fdedd50c5208 50Gi RWO ocs-storagecluster-ceph-rbd 51m rbd-pvc Bound pvc-30628fa8-2966-499c-832d-a6a3a8ebc594 1Gi RWO ocs-storagecluster-ceph-rbd 47m rbd-pvc-clone Bound pvc-0d72afda-f433-4d46-a7f1-a5fcb3d766e0 1Gi RWO ocs-storagecluster-ceph-rbd 32m rbd-pvc-restore Bound pvc-f900e19b-3924-485c-bb47-01b84c559034 1Gi RWO ocs-storagecluster-ceph-rbd 111s", "[sh-4.x]USD rbd ls ocs-storagecluster-cephblockpool csi-snap-a1e24408-2848-11ed-94bd-0a580a830012 csi-vol-063b982d-2845-11ed-94bd-0a580a830012 csi-vol-063b982d-2845-11ed-94bd-0a580a830012-temp csi-vol-5f6e0737-2849-11ed-94bd-0a580a830012 csi-vol-7d67bfad-2842-11ed-94bd-0a580a830012 csi-vol-ed5ce27b-2842-11ed-94bd-0a580a830012 [sh-4.x]USD rbd image-meta ls ocs-storagecluster-cephblockpool/csi-vol-5f6e0737-2849-11ed-94bd-0a580a830012 There are 4 metadata on this image: Key Value csi.ceph.com/cluster/name 6cd7a18d-7363-4830-ad5c-f7b96927f026 csi.storage.k8s.io/pv/name pvc-f900e19b-3924-485c-bb47-01b84c559034 csi.storage.k8s.io/pvc/name rbd-pvc-restore csi.storage.k8s.io/pvc/namespace openshift-storage", "cat <<EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cephfs-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: ocs-storagecluster-cephfs EOF", "get pvc | grep cephfs cephfs-pvc Bound pvc-4151128c-86f0-468b-b6e7-5fdfb51ba1b9 1Gi RWO ocs-storagecluster-cephfs 11s", "ceph fs volume ls [ { \"name\": \"ocs-storagecluster-cephfilesystem\" } ] ceph fs subvolumegroup ls ocs-storagecluster-cephfilesystem [ { \"name\": \"csi\" } ] ceph fs subvolume ls ocs-storagecluster-cephfilesystem --group_name csi [ { \"name\": \"csi-vol-25266061-284c-11ed-95e0-0a580a810215\" } ] ceph fs subvolume metadata ls ocs-storagecluster-cephfilesystem csi-vol-25266061-284c-11ed-95e0-0a580a810215 --group_name=csi --format=json { \"csi.ceph.com/cluster/name\": \"6cd7a18d-7363-4830-ad5c-f7b96927f026\", \"csi.storage.k8s.io/pv/name\": \"pvc-4151128c-86f0-468b-b6e7-5fdfb51ba1b9\", \"csi.storage.k8s.io/pvc/name\": \"cephfs-pvc\", \"csi.storage.k8s.io/pvc/namespace\": \"openshift-storage\" }", "cat <<EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cephfs-pvc-clone spec: storageClassName: ocs-storagecluster-cephfs dataSource: name: cephfs-pvc kind: PersistentVolumeClaim accessModes: - ReadWriteMany resources: requests: storage: 1Gi EOF persistentvolumeclaim/cephfs-pvc-clone created", "oc get pvc | grep cephfs cephfs-pvc Bound pvc-4151128c-86f0-468b-b6e7-5fdfb51ba1b9 1Gi RWO ocs-storagecluster-cephfs 9m5s cephfs-pvc-clone Bound pvc-3d4c4e78-f7d5-456a-aa6e-4da4a05ca4ce 1Gi RWX 
ocs-storagecluster-cephfs 20s", "[rook@rook-ceph-tools-c99fd8dfc-6sdbg /]USD ceph fs subvolume ls ocs-storagecluster-cephfilesystem --group_name csi [ { \"name\": \"csi-vol-5ea23eb0-284d-11ed-95e0-0a580a810215\" }, { \"name\": \"csi-vol-25266061-284c-11ed-95e0-0a580a810215\" } ] [rook@rook-ceph-tools-c99fd8dfc-6sdbg /]USD ceph fs subvolume metadata ls ocs-storagecluster-cephfilesystem csi-vol-5ea23eb0-284d-11ed-95e0-0a580a810215 --group_name=csi --format=json { \"csi.ceph.com/cluster/name\": \"6cd7a18d-7363-4830-ad5c-f7b96927f026\", \"csi.storage.k8s.io/pv/name\": \"pvc-3d4c4e78-f7d5-456a-aa6e-4da4a05ca4ce\", \"csi.storage.k8s.io/pvc/name\": \"cephfs-pvc-clone\", \"csi.storage.k8s.io/pvc/namespace\": \"openshift-storage\" }", "cat <<EOF | oc create -f - apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: cephfs-pvc-snapshot spec: volumeSnapshotClassName: ocs-storagecluster-cephfsplugin-snapclass source: persistentVolumeClaimName: cephfs-pvc EOF volumesnapshot.snapshot.storage.k8s.io/cephfs-pvc-snapshot created", "oc get volumesnapshot NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE cephfs-pvc-snapshot true cephfs-pvc 1Gi ocs-storagecluster-cephfsplugin-snapclass snapcontent-f0f17463-d13b-4e13-b44e-6340bbb3bee0 9s 9s", "ceph fs subvolume snapshot ls ocs-storagecluster-cephfilesystem csi-vol-25266061-284c-11ed-95e0-0a580a810215 --group_name csi [ { \"name\": \"csi-snap-06336f4e-284e-11ed-95e0-0a580a810215\" } ] ceph fs subvolume snapshot metadata ls ocs-storagecluster-cephfilesystem csi-vol-25266061-284c-11ed-95e0-0a580a810215 csi-snap-06336f4e-284e-11ed-95e0-0a580a810215 --group_name=csi --format=json { \"csi.ceph.com/cluster/name\": \"6cd7a18d-7363-4830-ad5c-f7b96927f026\", \"csi.storage.k8s.io/volumesnapshot/name\": \"cephfs-pvc-snapshot\", \"csi.storage.k8s.io/volumesnapshot/namespace\": \"openshift-storage\", \"csi.storage.k8s.io/volumesnapshotcontent/name\": \"snapcontent-f0f17463-d13b-4e13-b44e-6340bbb3bee0\" }", "cat <<EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cephfs-pvc-restore spec: storageClassName: ocs-storagecluster-cephfs dataSource: name: cephfs-pvc-snapshot kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io accessModes: - ReadWriteMany resources: requests: storage: 1Gi EOF persistentvolumeclaim/cephfs-pvc-restore created", "oc get pvc | grep cephfs cephfs-pvc Bound pvc-4151128c-86f0-468b-b6e7-5fdfb51ba1b9 1Gi RWO ocs-storagecluster-cephfs 29m cephfs-pvc-clone Bound pvc-3d4c4e78-f7d5-456a-aa6e-4da4a05ca4ce 1Gi RWX ocs-storagecluster-cephfs 20m cephfs-pvc-restore Bound pvc-43d55ea1-95c0-42c8-8616-4ee70b504445 1Gi RWX ocs-storagecluster-cephfs 21s", "ceph fs subvolume ls ocs-storagecluster-cephfilesystem --group_name csi [ { \"name\": \"csi-vol-3536db13-2850-11ed-95e0-0a580a810215\" }, { \"name\": \"csi-vol-5ea23eb0-284d-11ed-95e0-0a580a810215\" }, { \"name\": \"csi-vol-25266061-284c-11ed-95e0-0a580a810215\" } ] ceph fs subvolume metadata ls ocs-storagecluster-cephfilesystem csi-vol-3536db13-2850-11ed-95e0-0a580a810215 --group_name=csi --format=json { \"csi.ceph.com/cluster/name\": \"6cd7a18d-7363-4830-ad5c-f7b96927f026\", \"csi.storage.k8s.io/pv/name\": \"pvc-43d55ea1-95c0-42c8-8616-4ee70b504445\", \"csi.storage.k8s.io/pvc/name\": \"cephfs-pvc-restore\", \"csi.storage.k8s.io/pvc/namespace\": \"openshift-storage\" }" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/monitoring_openshift_data_foundation/metrics
Chapter 13. Configuring System Purpose using the subscription-manager command-line tool
Chapter 13. Configuring System Purpose using the subscription-manager command-line tool System purpose is a feature of the Red Hat Enterprise Linux installation to help RHEL customers get the benefit of our subscription experience and services offered in the Red Hat Hybrid Cloud Console, a dashboard-based, Software-as-a-Service (SaaS) application that enables you to view subscription usage in your Red Hat account. You can configure system purpose attributes either on the activation keys or by using the subscription manager tool. Prerequisites You have installed and registered your Red Hat Enterprise Linux 8 system, but system purpose is not configured. You are logged in as a root user. Note In the entitlement mode, if your system is registered but has subscriptions that do not satisfy the required purpose, you can run the subscription-manager remove --all command to remove attached subscriptions. You can then use the command-line subscription-manager syspurpose {role, usage, service-level} tools to set the required purpose attributes, and lastly run subscription-manager attach --auto to re-entitle the system with considerations for the updated attributes. Whereas, in the SCA enabled account, you can directly update the system purpose details post registration without making an update to the subscriptions in the system. Procedure From a terminal window, run the following command to set the intended role of the system: Replace VALUE with the role that you want to assign: Red Hat Enterprise Linux Server Red Hat Enterprise Linux Workstation Red Hat Enterprise Linux Compute Node For example: Optional: Before setting a value, see the available roles supported by the subscriptions for your organization: Optional: Run the following command to unset the role: Run the following command to set the intended Service Level Agreement (SLA) of the system: Replace VALUE with the SLA that you want to assign: Premium Standard Self-Support For example: Optional: Before setting a value, see the available service-levels supported by the subscriptions for your organization: Optional: Run the following command to unset the SLA: Run the following command to set the intended usage of the system: Replace VALUE with the usage that you want to assign: Production Disaster Recovery Development/Test For example: Optional: Before setting a value, see the available usages supported by the subscriptions for your organization: Optional: Run the following command to unset the usage: Run the following command to show the current system purpose properties: Optional: For more detailed syntax information run the following command to access the subscription-manager man page and browse to the SYSPURPOSE OPTIONS: Verification To verify the system's subscription status in a system registered with an account having entitlement mode enabled: An overall status Current means that all of the installed products are covered by the subscription(s) attached and entitlements to access their content set repositories has been granted. A system purpose status Matched means that all of the system purpose attributes (role, usage, service-level) that were set on the system are satisfied by the subscription(s) attached. When the status information is not ideal, additional information is displayed to help the system administrator decide what corrections to make to the attached subscriptions to cover the installed products and intended system purpose. 
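For example, to set all three attributes in one pass and then confirm the result, you can run a sequence such as the following (the values are illustrative; substitute the role, SLA, and usage that apply to your organization): subscription-manager syspurpose role --set "Red Hat Enterprise Linux Server" subscription-manager syspurpose service-level --set "Premium" subscription-manager syspurpose usage --set "Production" subscription-manager syspurpose --show In an entitlement-based account, follow this with subscription-manager attach --auto so that the attached subscriptions reflect the updated attributes; in an SCA-enabled account this step is not needed.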
To verify the system's subscription status in a system registered with an account having SCA mode enabled: In SCA mode, subscriptions are no longer required to be attached to individual systems. Hence, both the overall status and system purpose status are displayed as Disabled . However, the technical, business, and operational use cases supplied by system purpose attributes are important to the subscriptions service. Without these attributes, the subscriptions service data is less accurate. Additional resources To learn more about the subscriptions service, see the Getting Started with the Subscriptions Service guide .
[ "subscription-manager syspurpose role --set \"VALUE\"", "subscription-manager syspurpose role --set \"Red Hat Enterprise Linux Server\"", "subscription-manager syspurpose role --list", "subscription-manager syspurpose role --unset", "subscription-manager syspurpose service-level --set \"VALUE\"", "subscription-manager syspurpose service-level --set \"Standard\"", "subscription-manager syspurpose service-level --list", "subscription-manager syspurpose service-level --unset", "subscription-manager syspurpose usage --set \"VALUE\"", "subscription-manager syspurpose usage --set \"Production\"", "subscription-manager syspurpose usage --list", "subscription-manager syspurpose usage --unset", "subscription-manager syspurpose --show", "man subscription-manager", "subscription-manager status +-------------------------------------------+ System Status Details +-------------------------------------------+ Overall Status: Current System Purpose Status: Matched", "subscription-manager status +-------------------------------------------+ System Status Details +-------------------------------------------+ Overall Status: Disabled Content Access Mode is set to Simple Content Access. This host has access to content, regardless of subscription status. System Purpose Status: Disabled" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_from_installation_media/proc_configuring-system-purpose-using-the-subscription-manager-command-line-tool_rhel-installer
Chapter 14. OpenShift SDN default CNI network provider
Chapter 14. OpenShift SDN default CNI network provider 14.1. About the OpenShift SDN default CNI network provider OpenShift Container Platform uses a software-defined networking (SDN) approach to provide a unified cluster network that enables communication between pods across the OpenShift Container Platform cluster. This pod network is established and maintained by the OpenShift SDN, which configures an overlay network using Open vSwitch (OVS). 14.1.1. OpenShift SDN network isolation modes OpenShift SDN provides three SDN modes for configuring the pod network: Network policy mode allows project administrators to configure their own isolation policies using NetworkPolicy objects. Network policy is the default mode in OpenShift Container Platform 4.7. Multitenant mode provides project-level isolation for pods and services. Pods from different projects cannot send packets to or receive packets from pods and services of a different project. You can disable isolation for a project, allowing it to send network traffic to all pods and services in the entire cluster and receive network traffic from those pods and services. Subnet mode provides a flat pod network where every pod can communicate with every other pod and service. The network policy mode provides the same functionality as subnet mode. 14.1.2. Supported default CNI network provider feature matrix OpenShift Container Platform offers two supported choices, OpenShift SDN and OVN-Kubernetes, for the default Container Network Interface (CNI) network provider. The following table summarizes the current feature support for both network providers: Table 14.1. Default CNI network provider feature comparison Feature OpenShift SDN OVN-Kubernetes Egress IPs Supported Supported Egress firewall [1] Supported Supported Egress router Supported Partially supported [3] IPsec encryption Not supported Supported Kubernetes network policy Partially supported [2] Supported Multicast Supported Supported Egress firewall is also known as egress network policy in OpenShift SDN. This is not the same as network policy egress. Network policy for OpenShift SDN does not support egress rules and some ipBlock rules. Egress router for OVN-Kubernetes supports only redirect mode. 14.2. Configuring egress IPs for a project As a cluster administrator, you can configure the OpenShift SDN default Container Network Interface (CNI) network provider to assign one or more egress IP addresses to a project. 14.2.1. Egress IP address assignment for project egress traffic By configuring an egress IP address for a project, all outgoing external connections from the specified project will share the same, fixed source IP address. External resources can recognize traffic from a particular project based on the egress IP address. An egress IP address assigned to a project is different from the egress router, which is used to send traffic to specific destinations. Egress IP addresses are implemented as additional IP addresses on the primary network interface of the node and must be in the same subnet as the node's primary IP address. Important Egress IP addresses must not be configured in any Linux network configuration files, such as ifcfg-eth0 . Egress IPs on Amazon Web Services (AWS), Google Cloud Platform (GCP), and Azure are supported only on OpenShift Container Platform version 4.10 and later. Allowing additional IP addresses on the primary network interface might require extra configuration when using some virtual machines solutions. 
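Because egress IP behavior differs between the two providers, it can help to confirm which default CNI network provider the cluster is running before you plan egress IP addresses. One way to check this is shown below; this is a quick sketch that assumes the cluster-wide Network configuration object is named cluster, which is the default: oc get network.config.openshift.io cluster -o jsonpath='{.status.networkType}{"\n"}' The command prints OpenShiftSDN or OVNKubernetes, which tells you which set of egress IP capabilities from the feature matrix applies.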
You can assign egress IP addresses to namespaces by setting the egressIPs parameter of the NetNamespace object. After an egress IP is associated with a project, OpenShift SDN allows you to assign egress IPs to hosts in two ways: In the automatically assigned approach, an egress IP address range is assigned to a node. In the manually assigned approach, a list of one or more egress IP address is assigned to a node. Namespaces that request an egress IP address are matched with nodes that can host those egress IP addresses, and then the egress IP addresses are assigned to those nodes. If the egressIPs parameter is set on a NetNamespace object, but no node hosts that egress IP address, then egress traffic from the namespace will be dropped. High availability of nodes is automatic. If a node that hosts an egress IP address is unreachable and there are nodes that are able to host that egress IP address, then the egress IP address will move to a new node. When the unreachable node comes back online, the egress IP address automatically moves to balance egress IP addresses across nodes. Important The following limitations apply when using egress IP addresses with the OpenShift SDN cluster network provider: You cannot use manually assigned and automatically assigned egress IP addresses on the same nodes. If you manually assign egress IP addresses from an IP address range, you must not make that range available for automatic IP assignment. You cannot share egress IP addresses across multiple namespaces using the OpenShift SDN egress IP address implementation. If you need to share IP addresses across namespaces, the OVN-Kubernetes cluster network provider egress IP address implementation allows you to span IP addresses across multiple namespaces. Note If you use OpenShift SDN in multitenant mode, you cannot use egress IP addresses with any namespace that is joined to another namespace by the projects that are associated with them. For example, if project1 and project2 are joined by running the oc adm pod-network join-projects --to=project1 project2 command, neither project can use an egress IP address. For more information, see BZ#1645577 . 14.2.1.1. Considerations when using automatically assigned egress IP addresses When using the automatic assignment approach for egress IP addresses the following considerations apply: You set the egressCIDRs parameter of each node's HostSubnet resource to indicate the range of egress IP addresses that can be hosted by a node. OpenShift Container Platform sets the egressIPs parameter of the HostSubnet resource based on the IP address range you specify. Only a single egress IP address per namespace is supported when using the automatic assignment mode. If the node hosting the namespace's egress IP address is unreachable, OpenShift Container Platform will reassign the egress IP address to another node with a compatible egress IP address range. The automatic assignment approach works best for clusters installed in environments with flexibility in associating additional IP addresses with nodes. 14.2.1.2. Considerations when using manually assigned egress IP addresses This approach is used for clusters where there can be limitations on associating additional IP addresses with nodes such as in public cloud environments. When using the manual assignment approach for egress IP addresses the following considerations apply: You set the egressIPs parameter of each node's HostSubnet resource to indicate the IP addresses that can be hosted by a node. 
Multiple egress IP addresses per namespace are supported. When a namespace has multiple egress IP addresses, if the node hosting the first egress IP address is unreachable, OpenShift Container Platform will automatically switch to using the available egress IP address until the first egress IP address is reachable again. 14.2.2. Configuring automatically assigned egress IP addresses for a namespace In OpenShift Container Platform you can enable automatic assignment of an egress IP address for a specific namespace across one or more nodes. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Update the NetNamespace object with the egress IP address using the following JSON: USD oc patch netnamespace <project_name> --type=merge -p \ 1 '{ "egressIPs": [ "<ip_address>" 2 ] }' 1 Specify the name of the project. 2 Specify a single egress IP address. Using multiple IP addresses is not supported. For example, to assign project1 to an IP address of 192.168.1.100 and project2 to an IP address of 192.168.1.101: USD oc patch netnamespace project1 --type=merge -p \ '{"egressIPs": ["192.168.1.100"]}' USD oc patch netnamespace project2 --type=merge -p \ '{"egressIPs": ["192.168.1.101"]}' Note Because OpenShift SDN manages the NetNamespace object, you can make changes only by modifying the existing NetNamespace object. Do not create a new NetNamespace object. Indicate which nodes can host egress IP addresses by setting the egressCIDRs parameter for each host using the following JSON: USD oc patch hostsubnet <node_name> --type=merge -p \ 1 '{ "egressCIDRs": [ "<ip_address_range_1>", "<ip_address_range_2>" 2 ] }' 1 Specify a node name. 2 Specify one or more IP address ranges in CIDR format. For example, to set node1 and node2 to host egress IP addresses in the range 192.168.1.0 to 192.168.1.255: USD oc patch hostsubnet node1 --type=merge -p \ '{"egressCIDRs": ["192.168.1.0/24"]}' USD oc patch hostsubnet node2 --type=merge -p \ '{"egressCIDRs": ["192.168.1.0/24"]}' OpenShift Container Platform automatically assigns specific egress IP addresses to available nodes in a balanced way. In this case, it assigns the egress IP address 192.168.1.100 to node1 and the egress IP address 192.168.1.101 to node2 or vice versa. 14.2.3. Configuring manually assigned egress IP addresses for a namespace In OpenShift Container Platform you can associate one or more egress IP addresses with a namespace. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Update the NetNamespace object by specifying the following JSON object with the desired IP addresses: USD oc patch netnamespace <project> --type=merge -p \ 1 '{ "egressIPs": [ 2 "<ip_address>" ] }' 1 Specify the name of the project. 2 Specify one or more egress IP addresses. The egressIPs parameter is an array. For example, to assign the project1 project to an IP address of 192.168.1.100 : USD oc patch netnamespace project1 --type=merge \ -p '{"egressIPs": ["192.168.1.100"]}' You can set egressIPs to two or more IP addresses on different nodes to provide high availability. If multiple egress IP addresses are set, pods use the first IP in the list for egress, but if the node hosting that IP address fails, pods switch to using the IP in the list after a short delay. Note Because OpenShift SDN manages the NetNamespace object, you can make changes only by modifying the existing NetNamespace object. 
Do not create a new NetNamespace object. Manually assign the egress IP to the node hosts. Set the egressIPs parameter on the HostSubnet object on the node host. Using the following JSON, include as many IPs as you want to assign to that node host: USD oc patch hostsubnet <node_name> --type=merge -p \ 1 '{ "egressIPs": [ 2 "<ip_address_1>", "<ip_address_N>" ] }' 1 Specify the name of the node. 2 Specify one or more egress IP addresses. The egressIPs field is an array. For example, to specify that node1 should have the egress IPs 192.168.1.100 , 192.168.1.101 , and 192.168.1.102 : USD oc patch hostsubnet node1 --type=merge -p \ '{"egressIPs": ["192.168.1.100", "192.168.1.101", "192.168.1.102"]}' In the example, all egress traffic for project1 will be routed to the node hosting the specified egress IP, and then connected (using NAT) to that IP address. 14.3. Configuring an egress firewall for a project As a cluster administrator, you can create an egress firewall for a project that restricts egress traffic leaving your OpenShift Container Platform cluster. 14.3.1. How an egress firewall works in a project As a cluster administrator, you can use an egress firewall to limit the external hosts that some or all pods can access from within the cluster. An egress firewall supports the following scenarios: A pod can only connect to internal hosts and cannot initiate connections to the public Internet. A pod can only connect to the public Internet and cannot initiate connections to internal hosts that are outside the OpenShift Container Platform cluster. A pod cannot reach specified internal subnets or hosts outside the OpenShift Container Platform cluster. A pod can connect to only specific external hosts. For example, you can allow one project access to a specified IP range but deny the same access to a different project. Or you can restrict application developers from updating from Python pip mirrors, and force updates to come only from approved sources. You configure an egress firewall policy by creating an EgressNetworkPolicy custom resource (CR) object. The egress firewall matches network traffic that meets any of the following criteria: An IP address range in CIDR format A DNS name that resolves to an IP address Important If your egress firewall includes a deny rule for 0.0.0.0/0 , access to your OpenShift Container Platform API servers is blocked. To ensure that pods can continue to access the OpenShift Container Platform API servers, you must include the IP address range that the API servers listen on in your egress firewall rules, as in the following example: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow # ... - to: cidrSelector: 0.0.0.0/0 3 type: Deny 1 The namespace for the egress firewall. 2 The IP address range that includes your OpenShift Container Platform API servers. 3 A global deny rule prevents access to the OpenShift Container Platform API servers. To find the IP address for your API servers, run oc get ep kubernetes -n default . For more information, see BZ#1988324 . Important You must have OpenShift SDN configured to use either the network policy or multitenant mode to configure an egress firewall. If you use network policy mode, an egress firewall is compatible with only one policy per namespace and will not work with projects that share a network, such as global projects. 
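Because an egress firewall requires the network policy or multitenant isolation mode, it can be useful to verify which mode the cluster is configured with before you create an EgressNetworkPolicy object. The following is a sketch of one way to check; if the field is empty, the default network policy mode is in use: oc get network.operator.openshift.io cluster -o jsonpath='{.spec.defaultNetwork.openshiftSDNConfig.mode}{"\n"}' The expected values are NetworkPolicy, Multitenant, or Subnet.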
Warning Egress firewall rules do not apply to traffic that goes through routers. Any user with permission to create a Route CR object can bypass egress firewall policy rules by creating a route that points to a forbidden destination. 14.3.1.1. Limitations of an egress firewall An egress firewall has the following limitations: No project can have more than one EgressNetworkPolicy object. A maximum of one EgressNetworkPolicy object with a maximum of 1,000 rules can be defined per project. The default project cannot use an egress firewall. When using the OpenShift SDN default Container Network Interface (CNI) network provider in multitenant mode, the following limitations apply: Global projects cannot use an egress firewall. You can make a project global by using the oc adm pod-network make-projects-global command. Projects merged by using the oc adm pod-network join-projects command cannot use an egress firewall in any of the joined projects. Violating any of these restrictions results in a broken egress firewall for the project, and may cause all external network traffic to be dropped. An Egress Firewall resource can be created in the kube-node-lease, kube-public, kube-system, openshift, and openshift- projects. 14.3.1.2. Matching order for egress firewall policy rules The egress firewall policy rules are evaluated in the order that they are defined, from first to last. The first rule that matches an egress connection from a pod applies. Any subsequent rules are ignored for that connection. 14.3.1.3. How Domain Name Server (DNS) resolution works If you use DNS names in any of your egress firewall policy rules, proper resolution of the domain names is subject to the following restrictions: Domain name updates are polled based on the TTL (time to live) value of the domain returned by the local non-authoritative servers. The pod must resolve the domain from the same local name servers when necessary. Otherwise, the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall might not be enforced consistently. Because the egress firewall controller and pods asynchronously poll the same local name server, the pod might obtain the updated IP address before the egress controller does, which causes a race condition. Due to this current limitation, domain name usage in EgressNetworkPolicy objects is only recommended for domains with infrequent IP address changes. Note The egress firewall always allows pods access to the external interface of the node that the pod is on for DNS resolution. If you use domain names in your egress firewall policy and your DNS resolution is not handled by a DNS server on the local node, then you must add egress firewall rules that allow access to your DNS server's IP addresses if you are using domain names in your pods. 14.3.2. EgressNetworkPolicy custom resource (CR) object You can define one or more rules for an egress firewall. A rule is either an Allow rule or a Deny rule, with a specification for the traffic that the rule applies to. The following YAML describes an EgressNetworkPolicy CR object: EgressNetworkPolicy object apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: <name> 1 spec: egress: 2 ... 1 A name for your egress firewall policy. 2 A collection of one or more egress network policy rules as described in the following section. 14.3.2.1. EgressNetworkPolicy rules The following YAML describes an egress firewall rule object. 
The egress stanza expects an array of one or more objects. Egress policy rule stanza egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4 1 The type of rule. The value must be either Allow or Deny . 2 A stanza describing an egress traffic match rule. A value for either the cidrSelector field or the dnsName field for the rule. You cannot use both fields in the same rule. 3 An IP address range in CIDR format. 4 A domain name. 14.3.2.2. Example EgressNetworkPolicy CR objects The following example defines several egress firewall policy rules: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Allow to: dnsName: www.example.com - type: Deny to: cidrSelector: 0.0.0.0/0 1 A collection of egress firewall policy rule objects. 14.3.3. Creating an egress firewall policy object As a cluster administrator, you can create an egress firewall policy object for a project. Important If the project already has an EgressNetworkPolicy object defined, you must edit the existing policy to make changes to the egress firewall rules. Prerequisites A cluster that uses the OpenShift SDN default Container Network Interface (CNI) network provider plug-in. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Create a policy rule: Create a <policy_name>.yaml file where <policy_name> describes the egress policy rules. In the file you created, define an egress policy object. Enter the following command to create the policy object. Replace <policy_name> with the name of the policy and <project> with the project that the rule applies to. USD oc create -f <policy_name>.yaml -n <project> In the following example, a new EgressNetworkPolicy object is created in a project named project1 : USD oc create -f default.yaml -n project1 Example output egressnetworkpolicy.network.openshift.io/v1 created Optional: Save the <policy_name>.yaml file so that you can make changes later. 14.4. Editing an egress firewall for a project As a cluster administrator, you can modify network traffic rules for an existing egress firewall. 14.4.1. Viewing an EgressNetworkPolicy object You can view an EgressNetworkPolicy object in your cluster. Prerequisites A cluster using the OpenShift SDN default Container Network Interface (CNI) network provider plug-in. Install the OpenShift Command-line Interface (CLI), commonly known as oc . You must log in to the cluster. Procedure Optional: To view the names of the EgressNetworkPolicy objects defined in your cluster, enter the following command: USD oc get egressnetworkpolicy --all-namespaces To inspect a policy, enter the following command. Replace <policy_name> with the name of the policy to inspect. USD oc describe egressnetworkpolicy <policy_name> Example output Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0 14.5. Editing an egress firewall for a project As a cluster administrator, you can modify network traffic rules for an existing egress firewall. 14.5.1. Editing an EgressNetworkPolicy object As a cluster administrator, you can update the egress firewall for a project. Prerequisites A cluster using the OpenShift SDN default Container Network Interface (CNI) network provider plug-in. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. 
Procedure Find the name of the EgressNetworkPolicy object for the project. Replace <project> with the name of the project. USD oc get -n <project> egressnetworkpolicy Optional: If you did not save a copy of the EgressNetworkPolicy object when you created the egress network firewall, enter the following command to create a copy. USD oc get -n <project> egressnetworkpolicy <name> -o yaml > <filename>.yaml Replace <project> with the name of the project. Replace <name> with the name of the object. Replace <filename> with the name of the file to save the YAML to. After making changes to the policy rules, enter the following command to replace the EgressNetworkPolicy object. Replace <filename> with the name of the file containing the updated EgressNetworkPolicy object. USD oc replace -f <filename>.yaml 14.6. Removing an egress firewall from a project As a cluster administrator, you can remove an egress firewall from a project to remove all restrictions on network traffic from the project that leaves the OpenShift Container Platform cluster. 14.6.1. Removing an EgressNetworkPolicy object As a cluster administrator, you can remove an egress firewall from a project. Prerequisites A cluster using the OpenShift SDN default Container Network Interface (CNI) network provider plug-in. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Find the name of the EgressNetworkPolicy object for the project. Replace <project> with the name of the project. USD oc get -n <project> egressnetworkpolicy Enter the following command to delete the EgressNetworkPolicy object. Replace <project> with the name of the project and <name> with the name of the object. USD oc delete -n <project> egressnetworkpolicy <name> 14.7. Considerations for the use of an egress router pod 14.7.1. About an egress router pod The OpenShift Container Platform egress router pod redirects traffic to a specified remote server from a private source IP address that is not used for any other purpose. An egress router pod enables you to send network traffic to servers that are set up to allow access only from specific IP addresses. Note The egress router pod is not intended for every outgoing connection. Creating large numbers of egress router pods can exceed the limits of your network hardware. For example, creating an egress router pod for every project or application could exceed the number of local MAC addresses that the network interface can handle before reverting to filtering MAC addresses in software. Important The egress router image is not compatible with Amazon AWS, Azure Cloud, or any other cloud platform that does not support layer 2 manipulations due to their incompatibility with macvlan traffic. 14.7.1.1. Egress router modes In redirect mode , an egress router pod configures iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be modified to connect to the egress router rather than connecting directly to the destination IP. In HTTP proxy mode , an egress router pod runs as an HTTP proxy on port 8080 . This mode only works for clients that are connecting to HTTP-based or HTTPS-based services, but usually requires fewer changes to the client pods to get them to work. Many programs can be told to use an HTTP proxy by setting an environment variable. 
In DNS proxy mode, an egress router pod runs as a DNS proxy for TCP-based services from its own IP address to one or more destination IP addresses. To make use of the reserved source IP address, client pods must be modified to connect to the egress router pod rather than connecting directly to the destination IP address. This modification ensures that external destinations treat traffic as though it were coming from a known source. Redirect mode works for all services except for HTTP and HTTPS. For HTTP and HTTPS services, use HTTP proxy mode. For TCP-based services with IP addresses or domain names, use DNS proxy mode. 14.7.1.2. Egress router pod implementation The egress router pod setup is performed by an initialization container. That container runs in a privileged context so that it can configure the macvlan interface and set up iptables rules. After the initialization container finishes setting up the iptables rules, it exits. Next, the egress router pod executes the container to handle the egress router traffic. The image used varies depending on the egress router mode. The environment variables determine which addresses the egress-router image uses. The image configures the macvlan interface to use EGRESS_SOURCE as its IP address, with EGRESS_GATEWAY as the IP address for the gateway. Network Address Translation (NAT) rules are set up so that connections to the cluster IP address of the pod on any TCP or UDP port are redirected to the same port on the IP address specified by the EGRESS_DESTINATION variable. If only some of the nodes in your cluster are capable of claiming the specified source IP address and using the specified gateway, you can specify a nodeName or nodeSelector to identify which nodes are acceptable. 14.7.1.3. Deployment considerations An egress router pod adds an additional IP address and MAC address to the primary network interface of the node. As a result, you might need to configure your hypervisor or cloud provider to allow the additional address. Red Hat OpenStack Platform (RHOSP) If you deploy OpenShift Container Platform on RHOSP, you must allow traffic from the IP and MAC addresses of the egress router pod on your OpenStack environment. If you do not allow the traffic, then communication will fail: USD openstack port set --allowed-address \ ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid> Red Hat Virtualization (RHV) If you are using RHV, you must select No Network Filter for the Virtual network interface controller (vNIC). VMware vSphere If you are using VMware vSphere, see the VMware documentation for securing vSphere standard switches. View and change VMware vSphere default settings by selecting the host virtual switch from the vSphere Web Client. Specifically, ensure that the following are enabled: MAC Address Changes Forged Transmits Promiscuous Mode Operation 14.7.1.4. Failover configuration To avoid downtime, you can deploy an egress router pod with a Deployment resource, as in the following example. To create a new Service object for the example deployment, use the oc expose deployment/egress-demo-controller command. apiVersion: apps/v1 kind: Deployment metadata: name: egress-demo-controller spec: replicas: 1 1 selector: matchLabels: name: egress-router template: metadata: name: egress-router labels: name: egress-router annotations: pod.network.openshift.io/assign-macvlan: "true" spec: 2 initContainers: ... containers: ... 1 Ensure that replicas is set to 1, because only one pod can use a given egress source IP address at any time. 
This means that only a single copy of the router runs on a node. 2 Specify the Pod object template for the egress router pod. 14.7.2. Additional resources Deploying an egress router in redirection mode Deploying an egress router in HTTP proxy mode Deploying an egress router in DNS proxy mode 14.8. Deploying an egress router pod in redirect mode As a cluster administrator, you can deploy an egress router pod that is configured to redirect traffic to specified destination IP addresses. 14.8.1. Egress router pod specification for redirect mode Define the configuration for an egress router pod in the Pod object. The following YAML describes the fields for the configuration of an egress router pod in redirect mode: apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: "true" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress_router> - name: EGRESS_GATEWAY 3 value: <egress_gateway> - name: EGRESS_DESTINATION 4 value: <egress_destination> - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod 1 The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1 . 2 IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet. 3 Same value as the default gateway used by the node. 4 External server to direct traffic to. Using this example, connections to the pod are redirected to 203.0.113.25 , with a source IP address of 192.168.12.99 . Example egress router pod specification apiVersion: v1 kind: Pod metadata: name: egress-multi labels: name: egress-multi annotations: pod.network.openshift.io/assign-macvlan: "true" spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE value: 192.168.12.99/24 - name: EGRESS_GATEWAY value: 192.168.12.1 - name: EGRESS_DESTINATION value: | 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod 14.8.2. Egress destination configuration format When an egress router pod is deployed in redirect mode, you can specify redirection rules by using one or more of the following formats: <port> <protocol> <ip_address> - Incoming connections to the given <port> should be redirected to the same port on the given <ip_address> . <protocol> is either tcp or udp . <port> <protocol> <ip_address> <remote_port> - As above, except that the connection is redirected to a different <remote_port> on <ip_address> . 
<ip_address> - If the last line is a single IP address, then any connections on any other port will be redirected to the corresponding port on that IP address. If there is no fallback IP address then connections on other ports are rejected. In the example that follows several rules are defined: The first line redirects traffic from local port 80 to port 80 on 203.0.113.25 . The second and third lines redirect local ports 8080 and 8443 to remote ports 80 and 443 on 203.0.113.26 . The last line matches traffic for any ports not specified in the rules. Example configuration 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 14.8.3. Deploying an egress router pod in redirect mode In redirect mode , an egress router pod sets up iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be modified to connect to the egress router rather than connecting directly to the destination IP. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router pod. To ensure that other pods can find the IP address of the egress router pod, create a service to point to the egress router pod, as in the following example: apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http port: 80 - name: https port: 443 type: ClusterIP selector: name: egress-1 Your pods can now connect to this service. Their connections are redirected to the corresponding ports on the external server, using the reserved egress IP address. 14.8.4. Additional resources Configuring an egress router destination mappings with a ConfigMap 14.9. Deploying an egress router pod in HTTP proxy mode As a cluster administrator, you can deploy an egress router pod configured to proxy traffic to specified HTTP and HTTPS-based services. 14.9.1. Egress router pod specification for HTTP mode Define the configuration for an egress router pod in the Pod object. The following YAML describes the fields for the configuration of an egress router pod in HTTP mode: apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: "true" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: http-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-http-proxy env: - name: EGRESS_HTTP_PROXY_DESTINATION 4 value: |- ... ... 1 The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1 . 2 IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet. 
3 Same value as the default gateway used by the node. 4 A string or YAML multi-line string specifying how to configure the proxy. Note that this is specified as an environment variable in the HTTP proxy container, not with the other environment variables in the init container. 14.9.2. Egress destination configuration format When an egress router pod is deployed in HTTP proxy mode, you can specify redirection rules by using one or more of the following formats. Each line in the configuration specifies one group of connections to allow or deny: An IP address allows connections to that IP address, such as 192.168.1.1 . A CIDR range allows connections to that CIDR range, such as 192.168.1.0/24 . A hostname allows proxying to that host, such as www.example.com . A domain name preceded by *. allows proxying to that domain and all of its subdomains, such as *.example.com . A ! followed by any of the match expressions denies the connection instead. If the last line is * , then anything that is not explicitly denied is allowed. Otherwise, anything that is not allowed is denied. You can also use * to allow connections to all remote destinations. Example configuration !*.example.com !192.168.1.0/24 192.168.2.1 * 14.9.3. Deploying an egress router pod in HTTP proxy mode In HTTP proxy mode , an egress router pod runs as an HTTP proxy on port 8080 . This mode only works for clients that are connecting to HTTP-based or HTTPS-based services, but usually requires fewer changes to the client pods to get them to work. Many programs can be told to use an HTTP proxy by setting an environment variable. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router pod. To ensure that other pods can find the IP address of the egress router pod, create a service to point to the egress router pod, as in the following example: apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http-proxy port: 8080 1 type: ClusterIP selector: name: egress-1 1 Ensure the http port is set to 8080 . To configure the client pod (not the egress proxy pod) to use the HTTP proxy, set the http_proxy or https_proxy variables: apiVersion: v1 kind: Pod metadata: name: app-1 labels: name: app-1 spec: containers: env: - name: http_proxy value: http://egress-1:8080/ 1 - name: https_proxy value: http://egress-1:8080/ ... 1 The service created in the step. Note Using the http_proxy and https_proxy environment variables is not necessary for all setups. If the above does not create a working setup, then consult the documentation for the tool or software you are running in the pod. 14.9.4. Additional resources Configuring an egress router destination mappings with a ConfigMap 14.10. Deploying an egress router pod in DNS proxy mode As a cluster administrator, you can deploy an egress router pod configured to proxy traffic to specified DNS names and IP addresses. 14.10.1. Egress router pod specification for DNS mode Define the configuration for an egress router pod in the Pod object. 
The following YAML describes the fields for the configuration of an egress router pod in DNS mode: apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: "true" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: dns-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-dns-proxy securityContext: privileged: true env: - name: EGRESS_DNS_PROXY_DESTINATION 4 value: |- ... - name: EGRESS_DNS_PROXY_DEBUG 5 value: "1" ... 1 The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1 . 2 IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet. 3 Same value as the default gateway used by the node. 4 Specify a list of one or more proxy destinations. 5 Optional: Specify to output the DNS proxy log output to stdout . 14.10.2. Egress destination configuration format When the router is deployed in DNS proxy mode, you specify a list of port and destination mappings. A destination may be either an IP address or a DNS name. An egress router pod supports the following formats for specifying port and destination mappings: Port and remote address You can specify a source port and a destination host by using the two field format: <port> <remote_address> . The host can be an IP address or a DNS name. If a DNS name is provided, DNS resolution occurs at runtime. For a given host, the proxy connects to the specified source port on the destination host when connecting to the destination host IP address. Port and remote address pair example 80 172.16.12.11 100 example.com Port, remote address, and remote port You can specify a source port, a destination host, and a destination port by using the three field format: <port> <remote_address> <remote_port> . The three field format behaves identically to the two field version, with the exception that the destination port can be different than the source port. Port, remote address, and remote port example 8080 192.168.60.252 80 8443 web.example.com 443 14.10.3. Deploying an egress router pod in DNS proxy mode In DNS proxy mode , an egress router pod acts as a DNS proxy for TCP-based services from its own IP address to one or more destination IP addresses. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router pod. Create a service for the egress router pod: Create a file named egress-router-service.yaml that contains the following YAML. Set spec.ports to the list of ports that you defined previously for the EGRESS_DNS_PROXY_DESTINATION environment variable. 
apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: ... type: ClusterIP selector: name: egress-dns-proxy For example: apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: - name: con1 protocol: TCP port: 80 targetPort: 80 - name: con2 protocol: TCP port: 100 targetPort: 100 type: ClusterIP selector: name: egress-dns-proxy To create the service, enter the following command: USD oc create -f egress-router-service.yaml Pods can now connect to this service. The connections are proxied to the corresponding ports on the external server, using the reserved egress IP address. 14.10.4. Additional resources Configuring an egress router destination mappings with a ConfigMap 14.11. Configuring an egress router pod destination list from a config map As a cluster administrator, you can define a ConfigMap object that specifies destination mappings for an egress router pod. The specific format of the configuration depends on the type of egress router pod. For details on the format, refer to the documentation for the specific egress router pod. 14.11.1. Configuring an egress router destination mappings with a config map For a large or frequently-changing set of destination mappings, you can use a config map to externally maintain the list. An advantage of this approach is that permission to edit the config map can be delegated to users without cluster-admin privileges. Because the egress router pod requires a privileged container, it is not possible for users without cluster-admin privileges to edit the pod definition directly. Note The egress router pod does not automatically update when the config map changes. You must restart the egress router pod to get updates. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a file containing the mapping data for the egress router pod, as in the following example: You can put blank lines and comments into this file. Create a ConfigMap object from the file: USD oc delete configmap egress-routes --ignore-not-found USD oc create configmap egress-routes \ --from-file=destination=my-egress-destination.txt In the command, the egress-routes value is the name of the ConfigMap object to create and my-egress-destination.txt is the name of the file that the data is read from. Create an egress router pod definition and specify the configMapKeyRef stanza for the EGRESS_DESTINATION field in the environment stanza: ... env: - name: EGRESS_DESTINATION valueFrom: configMapKeyRef: name: egress-routes key: destination ... 14.11.2. Additional resources Redirect mode HTTP proxy mode DNS proxy mode 14.12. Enabling multicast for a project 14.12.1. About multicast With IP multicast, data is broadcast to many IP addresses simultaneously. Important At this time, multicast is best used for low-bandwidth coordination or service discovery and not a high-bandwidth solution. Multicast traffic between OpenShift Container Platform pods is disabled by default. If you are using the OpenShift SDN default Container Network Interface (CNI) network provider, you can enable multicast on a per-project basis. When using the OpenShift SDN network plug-in in networkpolicy isolation mode: Multicast packets sent by a pod will be delivered to all other pods in the project, regardless of NetworkPolicy objects. Pods might be able to communicate over multicast even when they cannot communicate over unicast. 
Multicast packets sent by a pod in one project will never be delivered to pods in any other project, even if there are NetworkPolicy objects that allow communication between the projects. When using the OpenShift SDN network plug-in in multitenant isolation mode: Multicast packets sent by a pod will be delivered to all other pods in the project. Multicast packets sent by a pod in one project will be delivered to pods in other projects only if each project is joined together and multicast is enabled in each joined project. 14.12.2. Enabling multicast between pods You can enable multicast between pods for your project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Run the following command to enable multicast for a project. Replace <namespace> with the namespace for the project you want to enable multicast for. USD oc annotate netnamespace <namespace> \ netnamespace.network.openshift.io/multicast-enabled=true Verification To verify that multicast is enabled for a project, complete the following procedure: Change your current project to the project that you enabled multicast for. Replace <project> with the project name. USD oc project <project> Create a pod to act as a multicast receiver: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi8 command: ["/bin/sh", "-c"] args: ["dnf -y install socat hostname && sleep inf"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF Create a pod to act as a multicast sender: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi8 command: ["/bin/sh", "-c"] args: ["dnf -y install socat && sleep inf"] EOF In a new terminal window or tab, start the multicast listener. Get the IP address for the Pod: USD POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}') Start the multicast listener by entering the following command: USD oc exec mlistener -i -t -- \ socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname Start the multicast transmitter. Get the pod network IP address range: USD CIDR=USD(oc get Network.config.openshift.io cluster \ -o jsonpath='{.status.clusterNetwork[0].cidr}') To send a multicast message, enter the following command: USD oc exec msender -i -t -- \ /bin/bash -c "echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64" If multicast is working, the command returns the following output: mlistener 14.13. Disabling multicast for a project 14.13.1. Disabling multicast between pods You can disable multicast between pods for your project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Disable multicast by running the following command: USD oc annotate netnamespace <namespace> \ 1 netnamespace.network.openshift.io/multicast-enabled- 1 The namespace for the project you want to disable multicast for. 14.14. Configuring network isolation using OpenShift SDN When your cluster is configured to use the multitenant isolation mode for the OpenShift SDN CNI plug-in, each project is isolated by default. Network traffic is not allowed between pods or services in different projects in multitenant isolation mode. 
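You can see this isolation reflected in the NETID column when you list the NetNamespace objects: oc get netnamespaces Each isolated project has its own unique network ID, projects that have been joined share a network ID, and projects that have been made global use network ID 0, which is accessible from all other projects.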
You can change the behavior of multitenant isolation for a project in two ways: You can join one or more projects, allowing network traffic between pods and services in different projects. You can disable network isolation for a project. It will be globally accessible, accepting network traffic from pods and services in all other projects. A globally accessible project can access pods and services in all other projects. 14.14.1. Prerequisites You must have a cluster configured to use the OpenShift SDN Container Network Interface (CNI) plug-in in multitenant isolation mode. 14.14.2. Joining projects You can join two or more projects to allow network traffic between pods and services in different projects. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Use the following command to join projects to an existing project network: USD oc adm pod-network join-projects --to=<project1> <project2> <project3> Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label. Optional: Run the following command to view the pod networks that you have joined together: USD oc get netnamespaces Projects in the same pod-network have the same network ID in the NETID column. 14.14.3. Isolating a project You can isolate a project so that pods and services in other projects cannot access its pods and services. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure To isolate the projects in the cluster, run the following command: USD oc adm pod-network isolate-projects <project1> <project2> Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label. 14.14.4. Disabling network isolation for a project You can disable network isolation for a project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Run the following command for the project: USD oc adm pod-network make-projects-global <project1> <project2> Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label. 14.15. Configuring kube-proxy The Kubernetes network proxy (kube-proxy) runs on each node and is managed by the Cluster Network Operator (CNO). kube-proxy maintains network rules for forwarding connections for endpoints associated with services. 14.15.1. About iptables rules synchronization The synchronization period determines how frequently the Kubernetes network proxy (kube-proxy) syncs the iptables rules on a node. A sync begins when either of the following events occurs: An event occurs, such as service or endpoint is added to or removed from the cluster. The time since the last sync exceeds the sync period defined for kube-proxy. 14.15.2. kube-proxy configuration parameters You can modify the following kubeProxyConfig parameters. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. Table 14.2. Parameters Parameter Description Values Default iptablesSyncPeriod The refresh period for iptables rules. A time interval, such as 30s or 2m . 
Valid suffixes include s , m , and h and are described in the Go time package documentation. 30s proxyArguments.iptables-min-sync-period The minimum duration before refreshing iptables rules. This parameter ensures that the refresh does not happen too frequently. By default, a refresh starts as soon as a change that affects iptables rules occurs. A time interval, such as 30s or 2m . Valid suffixes include s , m , and h and are described in the Go time package 0s 14.15.3. Modifying the kube-proxy configuration You can modify the Kubernetes network proxy configuration for your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to a running cluster with the cluster-admin role. Procedure Edit the Network.operator.openshift.io custom resource (CR) by running the following command: USD oc edit network.operator.openshift.io cluster Modify the kubeProxyConfig parameter in the CR with your changes to the kube-proxy configuration, such as in the following example CR: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: ["30s"] Save the file and exit the text editor. The syntax is validated by the oc command when you save the file and exit the editor. If your modifications contain a syntax error, the editor opens the file and displays an error message. Enter the following command to confirm the configuration update: USD oc get networks.operator.openshift.io -o yaml Example output apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: type: OpenShiftSDN kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: - 30s serviceNetwork: - 172.30.0.0/16 status: {} kind: List Optional: Enter the following command to confirm that the Cluster Network Operator accepted the configuration change: USD oc get clusteroperator network Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.1.0-0.9 True False False 1m The AVAILABLE field is True when the configuration update is applied successfully.
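If you prefer to apply the same kube-proxy change non-interactively, for example from a script, a merge patch against the same Network.operator.openshift.io resource is one option. The following is only a sketch, not part of the procedure above; the 60s sync period and 30s minimum period are illustrative values to replace with the intervals you actually want:

# Apply the kube-proxy sync settings without opening an editor (illustrative values)
oc patch network.operator.openshift.io cluster --type=merge \
  -p '{"spec":{"kubeProxyConfig":{"iptablesSyncPeriod":"60s","proxyArguments":{"iptables-min-sync-period":["30s"]}}}}'
# Confirm the change was accepted
oc get network.operator.openshift.io cluster -o jsonpath='{.spec.kubeProxyConfig}'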
[ "oc patch netnamespace <project_name> --type=merge -p \\ 1 '{ \"egressIPs\": [ \"<ip_address>\" 2 ] }'", "oc patch netnamespace project1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\"]}' oc patch netnamespace project2 --type=merge -p '{\"egressIPs\": [\"192.168.1.101\"]}'", "oc patch hostsubnet <node_name> --type=merge -p \\ 1 '{ \"egressCIDRs\": [ \"<ip_address_range_1>\", \"<ip_address_range_2>\" 2 ] }'", "oc patch hostsubnet node1 --type=merge -p '{\"egressCIDRs\": [\"192.168.1.0/24\"]}' oc patch hostsubnet node2 --type=merge -p '{\"egressCIDRs\": [\"192.168.1.0/24\"]}'", "oc patch netnamespace <project> --type=merge -p \\ 1 '{ \"egressIPs\": [ 2 \"<ip_address>\" ] }'", "oc patch netnamespace project1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\"]}'", "oc patch hostsubnet <node_name> --type=merge -p \\ 1 '{ \"egressIPs\": [ 2 \"<ip_address_1>\", \"<ip_address_N>\" ] }'", "oc patch hostsubnet node1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\", \"192.168.1.101\", \"192.168.1.102\"]}'", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow - to: cidrSelector: 0.0.0.0/0 3 type: Deny", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: <name> 1 spec: egress: 2", "egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Allow to: dnsName: www.example.com - type: Deny to: cidrSelector: 0.0.0.0/0", "oc create -f <policy_name>.yaml -n <project>", "oc create -f default.yaml -n project1", "egressnetworkpolicy.network.openshift.io/v1 created", "oc get egressnetworkpolicy --all-namespaces", "oc describe egressnetworkpolicy <policy_name>", "Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0", "oc get -n <project> egressnetworkpolicy", "oc get -n <project> egressnetworkpolicy <name> -o yaml > <filename>.yaml", "oc replace -f <filename>.yaml", "oc get -n <project> egressnetworkpolicy", "oc delete -n <project> egressnetworkpolicy <name>", "openstack port set --allowed-address ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid>", "apiVersion: apps/v1 kind: Deployment metadata: name: egress-demo-controller spec: replicas: 1 1 selector: matchLabels: name: egress-router template: metadata: name: egress-router labels: name: egress-router annotations: pod.network.openshift.io/assign-macvlan: \"true\" spec: 2 initContainers: containers:", "apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress_router> - name: EGRESS_GATEWAY 3 value: <egress_gateway> - name: EGRESS_DESTINATION 4 value: <egress_destination> - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod", "apiVersion: v1 kind: Pod metadata: name: egress-multi labels: name: egress-multi annotations: pod.network.openshift.io/assign-macvlan: \"true\" spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router 
securityContext: privileged: true env: - name: EGRESS_SOURCE value: 192.168.12.99/24 - name: EGRESS_GATEWAY value: 192.168.12.1 - name: EGRESS_DESTINATION value: | 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod", "80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27", "apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http port: 80 - name: https port: 443 type: ClusterIP selector: name: egress-1", "apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: http-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-http-proxy env: - name: EGRESS_HTTP_PROXY_DESTINATION 4 value: |-", "!*.example.com !192.168.1.0/24 192.168.2.1 *", "apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http-proxy port: 8080 1 type: ClusterIP selector: name: egress-1", "apiVersion: v1 kind: Pod metadata: name: app-1 labels: name: app-1 spec: containers: env: - name: http_proxy value: http://egress-1:8080/ 1 - name: https_proxy value: http://egress-1:8080/", "apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: dns-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-dns-proxy securityContext: privileged: true env: - name: EGRESS_DNS_PROXY_DESTINATION 4 value: |- - name: EGRESS_DNS_PROXY_DEBUG 5 value: \"1\"", "80 172.16.12.11 100 example.com", "8080 192.168.60.252 80 8443 web.example.com 443", "apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: type: ClusterIP selector: name: egress-dns-proxy", "apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: - name: con1 protocol: TCP port: 80 targetPort: 80 - name: con2 protocol: TCP port: 100 targetPort: 100 type: ClusterIP selector: name: egress-dns-proxy", "oc create -f egress-router-service.yaml", "Egress routes for Project \"Test\", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 Fallback 203.0.113.27", "oc delete configmap egress-routes --ignore-not-found", "oc create configmap egress-routes --from-file=destination=my-egress-destination.txt", "env: - name: EGRESS_DESTINATION valueFrom: configMapKeyRef: name: egress-routes key: destination", "oc annotate netnamespace <namespace> netnamespace.network.openshift.io/multicast-enabled=true", "oc project <project>", "cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi8 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat hostname && sleep inf\"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF", "cat 
<<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi8 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat && sleep inf\"] EOF", "POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}')", "oc exec mlistener -i -t -- socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname", "CIDR=USD(oc get Network.config.openshift.io cluster -o jsonpath='{.status.clusterNetwork[0].cidr}')", "oc exec msender -i -t -- /bin/bash -c \"echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64\"", "mlistener", "oc annotate netnamespace <namespace> \\ 1 netnamespace.network.openshift.io/multicast-enabled-", "oc adm pod-network join-projects --to=<project1> <project2> <project3>", "oc get netnamespaces", "oc adm pod-network isolate-projects <project1> <project2>", "oc adm pod-network make-projects-global <project1> <project2>", "oc edit network.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: [\"30s\"]", "oc get networks.operator.openshift.io -o yaml", "apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: type: OpenShiftSDN kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: - 30s serviceNetwork: - 172.30.0.0/16 status: {} kind: List", "oc get clusteroperator network", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.1.0-0.9 True False False 1m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/networking/openshift-sdn-default-cni-network-provider
21.3.2. Red Hat Documentation
21.3.2. Red Hat Documentation Red Hat SELinux Guide - Explains what SELinux is and how to work with SELinux.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-SELinux-resources-RH
Chapter 6. Using Metering
Chapter 6. Using Metering Important Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. 6.1. Prerequisites Install Metering Review the details about the available options that can be configured for a report and how they function. 6.2. Writing Reports Writing a report is the way to process and analyze data using metering. To write a report, you must define a Report resource in a YAML file, specify the required parameters, and create it in the openshift-metering namespace. Prerequisites Metering is installed. Procedure Change to the openshift-metering project: USD oc project openshift-metering Create a Report resource as a YAML file: Create a YAML file with the following content: apiVersion: metering.openshift.io/v1 kind: Report metadata: name: namespace-cpu-request-2019 1 namespace: openshift-metering spec: reportingStart: '2019-01-01T00:00:00Z' reportingEnd: '2019-12-30T23:59:59Z' query: namespace-cpu-request 2 runImmediately: true 3 2 The query specifies the ReportQuery resources used to generate the report. Change this based on what you want to report on. For a list of options, run oc get reportqueries | grep -v raw . 1 Use a descriptive name about what the report does for metadata.name . A good name describes the query, and the schedule or period you used. 3 Set runImmediately to true for it to run with whatever data is available, or set it to false if you want it to wait for reportingEnd to pass. Run the following command to create the Report resource: USD oc create -f <file-name>.yaml Example output report.metering.openshift.io/namespace-cpu-request-2019 created You can list reports and their Running status with the following command: USD oc get reports Example output NAME QUERY SCHEDULE RUNNING FAILED LAST REPORT TIME AGE namespace-cpu-request-2019 namespace-cpu-request Finished 2019-12-30T23:59:59Z 26s 6.3. Viewing report results Viewing a report's results involves querying the reporting API route and authenticating to the API using your OpenShift Container Platform credentials. Reports can be retrieved as JSON , CSV , or Tabular formats. Prerequisites Metering is installed. To access report results, you must either be a cluster administrator, or you need to be granted access using the report-exporter role in the openshift-metering namespace. 
Procedure Change to the openshift-metering project: USD oc project openshift-metering Query the reporting API for results: Create a variable for the metering reporting-api route then get the route: USD meteringRoute="USD(oc get routes metering -o jsonpath='{.spec.host}')" USD echo "USDmeteringRoute" Get the token of your current user to be used in the request: USD token="USD(oc whoami -t)" Set reportName to the name of the report you created: USD reportName=namespace-cpu-request-2019 Set reportFormat to one of csv , json , or tabular to specify the output format of the API response: USD reportFormat=csv To get the results, use curl to make a request to the reporting API for your report: USD curl --insecure -H "Authorization: Bearer USD{token}" "https://USD{meteringRoute}/api/v1/reports/get?name=USD{reportName}&namespace=openshift-metering&format=USDreportFormat" Example output with reportName=namespace-cpu-request-2019 and reportFormat=csv period_start,period_end,namespace,pod_request_cpu_core_seconds 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-apiserver,11745.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-apiserver-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-authentication,522.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-authentication-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-cloud-credential-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-cluster-machine-approver,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-cluster-node-tuning-operator,3385.800000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-cluster-samples-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-cluster-version,522.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-console,522.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-console-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-controller-manager,7830.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-controller-manager-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-dns,34372.800000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-dns-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-etcd,23490.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-image-registry,5993.400000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-ingress,5220.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-ingress-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-kube-apiserver,12528.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-kube-apiserver-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-kube-controller-manager,8613.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-kube-controller-manager-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-machine-api,1305.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-machine-config-operator,9637.800000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 
23:59:59 +0000 UTC,openshift-metering,19575.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-monitoring,6256.800000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-network-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-sdn,94503.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-service-ca,783.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-service-ca-operator,261.000000
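Once the CSV output is saved to a file, it can be post-processed with ordinary command-line tools. The following sketch is not part of the metering product itself; it reuses the token, meteringRoute, and reportName variables set in the earlier steps and assumes the four-column layout shown above (period_start, period_end, namespace, pod_request_cpu_core_seconds), summing the final column:

# Save the report, then total the CPU core-seconds across all namespaces
curl --insecure -H "Authorization: Bearer ${token}" \
  "https://${meteringRoute}/api/v1/reports/get?name=${reportName}&namespace=openshift-metering&format=csv" \
  -o report.csv
awk -F, 'NR > 1 { total += $4 } END { printf "total pod_request_cpu_core_seconds: %.2f\n", total }' report.csv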
[ "oc project openshift-metering", "apiVersion: metering.openshift.io/v1 kind: Report metadata: name: namespace-cpu-request-2019 1 namespace: openshift-metering spec: reportingStart: '2019-01-01T00:00:00Z' reportingEnd: '2019-12-30T23:59:59Z' query: namespace-cpu-request 2 runImmediately: true 3", "oc create -f <file-name>.yaml", "report.metering.openshift.io/namespace-cpu-request-2019 created", "oc get reports", "NAME QUERY SCHEDULE RUNNING FAILED LAST REPORT TIME AGE namespace-cpu-request-2019 namespace-cpu-request Finished 2019-12-30T23:59:59Z 26s", "oc project openshift-metering", "meteringRoute=\"USD(oc get routes metering -o jsonpath='{.spec.host}')\"", "echo \"USDmeteringRoute\"", "token=\"USD(oc whoami -t)\"", "reportName=namespace-cpu-request-2019", "reportFormat=csv", "curl --insecure -H \"Authorization: Bearer USD{token}\" \"https://USD{meteringRoute}/api/v1/reports/get?name=USD{reportName}&namespace=openshift-metering&format=USDreportFormat\"", "period_start,period_end,namespace,pod_request_cpu_core_seconds 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-apiserver,11745.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-apiserver-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-authentication,522.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-authentication-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-cloud-credential-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-cluster-machine-approver,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-cluster-node-tuning-operator,3385.800000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-cluster-samples-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-cluster-version,522.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-console,522.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-console-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-controller-manager,7830.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-controller-manager-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-dns,34372.800000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-dns-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-etcd,23490.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-image-registry,5993.400000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-ingress,5220.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-ingress-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-kube-apiserver,12528.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-kube-apiserver-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-kube-controller-manager,8613.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-kube-controller-manager-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-machine-api,1305.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-machine-config-operator,9637.800000 2019-01-01 00:00:00 
+0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-metering,19575.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-monitoring,6256.800000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-network-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-sdn,94503.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-service-ca,783.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-service-ca-operator,261.000000" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/metering/using-metering
Chapter 2. Introduction to JDK Mission Control
Chapter 2. Introduction to JDK Mission Control JDK Mission Control (JMC) is a collection of tools to read and analyze JFR files. JMC includes detailed views and graphs that plot JFR events. With JFR analysis, JMC also consists of the following components: JMX Console MBean Historical analysis via flight recordings and hprof files (as of JMC 7.1.0) HPROF-dump analyzer JMC is based on the Eclipse platform. You can extend JMC by adding plug-ins using the Eclipse RCP API and other specific APIs. You can use JMC and its plug-ins on either Red Hat Enterprise Linux or Microsoft Windows. For Red Hat Enterprise Linux, the CodeReady Linux Builder (CRB) repository with RHEL 9 provides the JMC package. Note The CRB repository is also known as the Builder repository. You must enable the CRB repository on RHEL 9, so that you can install JMC on RHEL. CRB packages are built with the Source Red Hat Package Manager (SRPM) as productized RHEL packages, so CRB packages receive regular updates. The CRB is a developer repository that is disabled on RHEL by default. The CRB contains parts of the buildroot root file system that are shipped to your RHEL user account. The buildroot root file system contains developer-level build dependencies for building applications. For more information about the CRB repository, see The CodeReady Linux Builder repository (Package manifest). 2.1. Downloading and installing JMC Red Hat build of OpenJDK distributions for Red Hat Enterprise Linux and Microsoft Windows include a version of JDK Mission Control (JMC). For Red Hat Enterprise Linux, you can use the Red Hat Subscription Manager tool to download and install JMC on your local operating system. On Microsoft Windows, the JMC package is included with the archive file that you can download from the Red Hat Customer Portal. After you download and install Red Hat build of OpenJDK 8 on Microsoft Windows, you can navigate to the directory that contains the jmc.exe file and then issue the jmc command. 2.1.1. Downloading and installing JMC on RHEL 9 You can download and install JDK Mission Control (JMC) on your local Red Hat Enterprise Linux (RHEL) 9 operating system by using the Red Hat Subscription Manager (RHSM) tool. Prerequisites Downloaded and installed Red Hat build of OpenJDK 8.0.412 on RHEL 9. Logged in as the root user on your operating system. Registered an account on the Red Hat Customer Portal . Registered an RHSM account that has an active subscription for providing you access to the Red Hat build of OpenJDK 8 repository. For more information about registering your system to your RHSM account, see Registering a system using Red Hat Subscription Management (Using Red Hat Subscription Management). Procedure Enable the CodeReady Linux Builder (CRB) repository on RHEL, so that you can install the downloaded JMC package on RHEL. You can enable the CRB repository by completing the following actions: To enable the CRB repository on RHEL, issue the following RHSM command: To check the list of modules in the CRB repository, issue the following command: The following example output shows a javapackages-tools module that is defined in the common profile of the repository: The example also shows a virt-devel module that is not assigned to any profile. Install your target package. 
For example, to install a package called xz-java , issue the following command and ensure that you follow any CLI command prompts: To start the JMC console on your operating system, choose one of the following options: Navigate to the directory that contains the JMC executable file, and then issue the following command: Use your system's file explorer application to navigate to the JDK Mission Control directory, such as /usr/bin/jmc , and then double-click the JMC executable file. Additional resources Installing and using Red Hat build of OpenJDK 8 on RHEL Installing and using Red Hat build of OpenJDK 8 for Microsoft Windows 2.1.2. Downloading and installing JMC on RHEL 7 or RHEL 8 You can download and install JDK Mission Control (JMC) on your local Red Hat Enterprise Linux(RHEL) 7 or RHEL 8 operating system by using the Red Hat Subscription Manager (RHSM) tool. Prerequisites Downloaded and installed Red Hat build of OpenJDK 8.0.412 on your version of RHEL (either RHEL 7 or RHEL 8). Logged in as the root user on your operating system. Registered an account on the Red Hat Customer Portal . Registered an RHSM account that has an active subscription for providing you access to the Red Hat build of OpenJDK 8 repository. For more information about registering your system to your RHSM account, see Registering a system using Red Hat Subscription Management (Using Red Hat Subscription Management). Procedure To download the JMC package on your version of RHEL, issue the following command. RHEL 8: RHEL 7: The command uses the Red Hat Subscription Management tool to download the JMC package to your RHEL operating system. This JMC package is available in the jmc module stream of the Red Hat Subscription Management service. To start the JMC console on your operating system, choose one of the following options: Navigate to the directory that contains the JMC executable file and then issue the following command: Use your system's file explorer application to navigate to the JDK Mission Control directory, such as /usr/bin/jmc , and then double-click the JMC executable file. Additional resources Installing and using Red Hat build of OpenJDK 8 on RHEL Installing and using Red Hat build of OpenJDK 8 for Microsoft Windows 2.2. JDK Mission Control (JMC) Agent You can use the JMC Agent to add JDK Flight Recorder (JFR) functionality to a running application. You can also use the JMC Agent to add a custom flight recorder event into a running Java Virtual Machine (JVM). The JMC Agent includes the following capabilities: Better control of enabling or disabling generated events when using JFR templates. Efficient timestamp capturing when using the Timestamp class. Low memory consumption when generating flight recordings. The Red Hat build of OpenJDK 8.0.412 installation files for Red Hat Enterprise Linux and Microsoft Windows do not include the JMC Agent with the JMC package. You must download and install a third-party version of the JMC Agent, and then check its compatibility with the JMC package for the Red Hat build of OpenJDK on your chosen platform. Important Third-party applications, such as the JMC Agent, are not supported by Red Hat. Before you decide to use any third-party applications with Red Hat products, ensure you test the security and trustworthiness of the downloaded software. 
Note The graphical user interface (GUI) for the JMC Agent displays similarly on both Red Hat Enterprise Linux and Microsoft Windows, except for graphical changes introduced by the Standard Widget Toolkit (SWT) for Java that is specific to either platform. When you have built the JMC Agent, and you have a JMC Agent JAR file, you can access the JMC Agent Plugin in the JVM Browser panel of your JMC console. With this plug-in you can use the JMC Agent functionality on the JMC console, such as configuring the JMC Agent or managing how the JMC Agent interacts with JFR data. 2.3. Starting the JDK Mission Control (JMC) Agent You can start the JMC Agent by using the JMC Agent Plugin. Red Hat Enterprise Linux and Microsoft Windows support the use of this plug-in. After you start your JMC Agent, you can configure the agent or manage how the agent interacts with your JFR data. Prerequisites Downloaded and installed the jmc package on either Red Hat Enterprise Linux or Microsoft Windows Downloaded the Eclipse Adoptium Agent JAR file. See adoptium/jmc-build (GitHub) . Started your Java application with the --add-opens=java.base/jdk.internal.misc=ALL-UNNAMED flag. For example, ./<your_application> --add-opens=java.base/jdk.internal.misc=ALL-UNNAMED . Note Eclipse Adoptium is a community-supported project. Using the agent.jar file from Eclipse Adoptium is not supported with Red Hat production service level agreements (SLAs). Procedure Depending on your operating system, choose one of the following methods to start your JMC console: On Red Hat Enterprise Linux, navigate to the directory that contains the executable file, and then issue the ./jmc command. On Microsoft Windows, navigate to the directory that contains the jmc.exe file, and then issue the jmc command. Note You can also start your JMC application on either operating system by using your system's file explorer application to navigate to the JDK Mission Control directory, and then double-click the JMC executable file. Navigate to the JVM Browser navigation panel. On this panel, you can view any available JVM connections. Expand your target JVM instance, such as [11.0.13] The JVM Running Mission Control , in the JVM Browser panel. A list of items displays under your target JVM instance. Double-click the JMC Agent item in the navigation panel. A Start JMC Agent window opens in your JMC console: Figure 2.1. Start JMC Agent window Use the Browse button to add your JMC Agent's JAR file to the Agent JAR field. The Agent XML field is optional. Note You do not need to enter a value in the Target JVM field, because JMC automatically adds a value based on your selected target JVM instance. Click the Start button. JMC adds the Agent Plugin item under your target JVM instance in the JVM Browser navigation panel. The JMC console automatically opens the Agent Live Config pane. Figure 2.2. Agent Live Config pane You can now configure your JMC Agent or manage interactions between the JMC Agent and your JFR data. After you generate an XML configuration and then upload it to the JMC console, the Agent Live Config pane displays metadata associated with that XML file. Figure 2.3. Example of an XML configuration file that has been added to the JMC console 2.4. Creating presets with the JMC Agent You can configure your JMC Agent instance in the JMC console. The JMC console provides the following JMC Agent configuration options, to name but a few: Create customized presets with the Agent Preset Manager option. Import XML configurations into your JMC Agent preset. 
Use the defineEventProbes function to add an XML description of custom JFR events. Store active custom JFR events as a preset, so you can retrieve them at a later stage. Prerequisites Started a JMC Agent instance on your JMC console. Procedure You can create a new preset by clicking Window from the menu bar, and then clicking the JMC Agent Preset Manager menu item. A JMC Agent Configuration Preset Manager wizard opens in your JMC console. Click the Add button to access the Edit Preset Global Configurations window. Figure 2.4. Edit Preset Global Configurations window From this window, you can enter a name for your preset. Optionally, you can enter a class prefix for any events that you want to inject into your target JVM. You can also select the AllowtoString check box and the Allow Converter check box. Click the button. An Add or Remove Preset Events window opens. From this window, you can add new events, edit events, or remove events for your preset. Figure 2.5. Add or Remove Preset Events Follow the wizard's instructions, where you can complete the following steps: Edit Event Configurations Edit a Parameter or Return Value step Edit a Parameter or Return Value Capturing Tip You can select any of the available buttons on each wizard step to complete your desired configuration, such as Add , Remove , and so on. You can click the Back button at any stage to edit a wizard step. Click the Finish button to return to the Add or Remove Preset Events window. Click . A Preview Preset Output window opens. Review the generated XML data before clicking the Finish button: Figure 2.6. Preview Preset Output Click the Load preset button on the top-right side of the JMC console window, and then upload your preset to the JMC application. On the JMC Agent Configuration Preset Manager window, click the OK button to load your preset into your target JVM. The Agent Live Present panel on your JMC console shows your active agent configuration and any of its injected events. For example: Figure 2.7. Example output on the Agent Live Present pane Additional resources For information about JMC XML attributes, see JMC Agent Plugin attributes . 2.5. JMC Agent Plugin attributes The JMC console supports many attributes in the form of buttons, drop-down lists, text fields, and so on. You can use specific JMC Agent attributes to configure your agent. The following tables outline categories of attributes that you can use to configure your JMC Agent, so that you can use the agent to monitor JFR data specific to your needs. Table 2.1. List of configuration attributes for use with your JMC Agent. Attribute Description <allowconverter> Determines if the JMC Agent can use converters. With converters enabled, you can convert custom data types or objects to JFR content types. JFR can then record these types alongside the custom events. <allowtostring> Determines if the JMC Agent can record arrays and object parameters as strings. Note: Check that the toString method supports JMC Agent array elements and objects. Otherwise, the toString method's behavior might cause issues for your JMC Agent. <classPrefix> Determines the prefix for injected events. For example: __JFR_EVENT <config> Contains the configuration options for the JMC Agent. <jfragent> Begins the event definition. The <jfragent> attribute is the parent attribute of all other configuration attributes. Table 2.2. List of event type attributes for use with your JMC Agent. Attribute Description <class> Defines the class that receives event types from the method. 
<description> Describes the event type. <events> Lists the set of events that the agent injects into a defined method.The event tag requires an ID. The JFR uses the event tag for the custom event. <label> Defines the name of the event type. <location> Determines the location in the method that receives injected events. For example: ENTRY , EXIT , WRAP , and so on. <path> Path that points to the location that stores custom events. This path relates to any events listed under the JVM Browser navigation panel on the JMC console. <method> Defines the method that receives injected events. The method attribute requires that you define the following two values: name : name of the method descriptor : formal method descriptor. Takes the form of (ParameterDescriptors)ReturnDescriptor <stacktrace> Determines whether the event type records a stack trace. Table 2.3. List of custom caption attributes for use with your JMC Agent. Attribute Description <converter> Qualified name of the converter class that converts an attribute to a JFR data type. <contenttype> Defines the JFR content type that the converter attribute receives. <description> The description of the custom caption attribute. <parameters> Optional attribute. Lists method parameters based on the index value assigned to a parameter tag. <name> The name of the custom caption attribute. Table 2.4. List of field capturing attributes for use with your JMC Agent. Attribute Description <description> The description of the field that you want to capture. <expression> Defines an expression that the agent analyzes to locate a defined field. <fields> Determines class field values that the JMC Agent captures and emits with any defined event types. <name> The name of the class field capturing attribute .
[ "subscription-manager repos --enable codeready-builder-for-rhel-9-x86_64-rpms", "yum module list --disablerepo=* --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms", "yum module list --disablerepo=* --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms Updating Subscription Management repositories. Last metadata expiration check: 0:40:08 ago on Tue 02 May 2023 08:49:29 AM EDT. Red Hat CodeReady Linux Builder for RHEL 9 x86_64 (RPMs) Name Stream Profiles Summary javapackages-tools 201801 common Tools and macros for Java packaging support virt-devel rhel Virtualization module Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled", "yum install xz-java", "jmc -vm /usr/lib/jvm/java-11/bin/java", "sudo yum module install jmc:rhel8/common", "sudo yum module install jmc:rhel7/common", "jmc -vm /usr/lib/jvm/java-11/bin/java" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/using_jdk_flight_recorder_with_red_hat_build_of_openjdk/overview-jmc
probe::nfs.proc.read_setup
probe::nfs.proc.read_setup Name probe::nfs.proc.read_setup - NFS client setting up a read RPC task Synopsis nfs.proc.read_setup Values offset the file offset server_ip IP address of server prot transfer protocol version NFS version count read bytes in this execution size read bytes in this execution Description The read_setup function is used to set up a read RPC task. It does not perform the actual read operation.
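As a usage illustration only (not part of the tapset reference itself), the probe point can be attached from the command line. The one-liner below assumes the numeric values listed above are available in your tapset version and must be run as root:

# Print details of each read RPC task the NFS client sets up
stap -e 'probe nfs.proc.read_setup {
  printf("read_setup: version=%d count=%d offset=%d\n", version, count, offset)
}'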
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfs-proc-read-setup
8.3. SSSD
8.3. SSSD SSSD (System Security Services Daemon) offers access to remote identity and authentication mechanisms, referred to as providers . SSSD allows these providers to be configured as SSSD back-ends, abstracting the actual (local and network) identity and authentication sources. It also allows any kind of identity data provider to be plugged in. A domain is a database containing user information, which can serve as the source of a provider's identity information. Multiple identity providers are supported, allowing two or more identity servers to act as separate user namespaces. Collected information is available to applications on the front-end through standard PAM and NSS interfaces. SSSD runs as a suite of services, independent of the applications that use it. Those applications therefore no longer need to make their own connections to remote domains, or even be aware of which one is being used. Robust local caching of identity and group membership information allows operations regardless of where the identity information comes from (e.g., LDAP, NIS, IPA, DB, Samba, and so on), offers improved performance, and allows authentication to be performed even when the system is offline and online authentication is unavailable. SSSD also allows the use of multiple providers of the same type (e.g., multiple LDAP providers) and allows domain-qualified identity requests to be resolved by those different providers. Further details can be found in the Red Hat Enterprise Linux 6 Deployment Guide.
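As a concrete illustration of the concepts above (an identity provider configured as an SSSD back-end, with local caching enabled so authentication can continue offline), a minimal configuration might look like the following sketch. The server name and search base are placeholders, not values taken from this guide:

# Minimal /etc/sssd/sssd.conf sketch with a single LDAP domain (placeholder values)
cat > /etc/sssd/sssd.conf <<'EOF'
[sssd]
services = nss, pam
domains = EXAMPLE

[domain/EXAMPLE]
id_provider = ldap
auth_provider = ldap
ldap_uri = ldap://ldap.example.com
ldap_search_base = dc=example,dc=com
cache_credentials = true
EOF
# SSSD refuses to start unless its configuration file is readable only by root
chmod 600 /etc/sssd/sssd.conf
service sssd restart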
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/sect-migration_guide-security_authentication-sssd
Part I. Overview of Red Hat Identity Management
Part I. Overview of Red Hat Identity Management This part explains the purpose of Red Hat Identity Management . It also provides basic information about the Identity Management domain, including the client and server machines that are part of the domain.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/p.overview
Chapter 6. Finding More Information
Chapter 6. Finding More Information The following table includes additional Red Hat documentation for reference: The Red Hat OpenStack Platform documentation suite can be found here: Red Hat OpenStack Platform Documentation Suite Table 6.1. List of Available Documentation Component Reference Red Hat Enterprise Linux Red Hat OpenStack Platform is supported on Red Hat Enterprise Linux 8.0. For information on installing Red Hat Enterprise Linux, see the corresponding installation guide at: Red Hat Enterprise Linux Documentation Suite . Red Hat OpenStack Platform To install OpenStack components and their dependencies, use the Red Hat OpenStack Platform director. The director uses a basic OpenStack installation as the undercloud to install, configure, and manage the OpenStack nodes in the final overcloud . Ensure that you have one extra host machine for the installation of the undercloud, in addition to the environment necessary for the deployed overcloud. For detailed instructions, see Red Hat OpenStack Platform Director Installation and Usage . For information on configuring advanced features for a Red Hat OpenStack Platform enterprise environment using the Red Hat OpenStack Platform director, such as network isolation, storage configuration, SSL communication, and general configuration method, see Advanced Overcloud Customization . NFV Documentation For more details on planning and configuring your Red Hat OpenStack Platform deployment with single root I/O virtualization (SR-IOV) and Open vSwitch with Data Plane Development Kit (OVS-DPDK), see Network Function Virtualization Planning and Configuration Guide .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/network_functions_virtualization_product_guide/ch-more_information
System Administrator's Guide
System Administrator's Guide Red Hat Enterprise Linux 7 Deployment, configuration, and administration of RHEL 7 Abstract The System Administrator's Guide documents relevant information regarding the deployment, configuration, and administration of Red Hat Enterprise Linux 7. It is oriented towards system administrators with a basic understanding of the system. Note To expand your expertise, you might also be interested in the Red Hat System Administration I (RH124) , Red Hat System Administration II (RH134) , Red Hat System Administration III (RH254) , or RHCSA Rapid Track (RH199) training courses. If you want to use Red Hat Enterprise Linux 7 with the Linux Containers functionality, see Product Documentation for Red Hat Enterprise Linux Atomic Host . For an overview of general Linux Containers concept and their current capabilities implemented in Red Hat Enterprise Linux 7, see Overview of Containers in Red Hat Systems . The topics related to containers management and administration are described in the Red Hat Enterprise Linux Atomic Host 7 Managing Containers guide.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/index
9.8.2. NFS security with AUTH_GSS
9.8.2. NFS security with AUTH_GSS The release of NFSv4 brought a revolution to NFS security by mandating the implementation of RPCSEC_GSS and the Kerberos version 5 GSS-API mechanism. However, RPCSEC_GSS and the Kerberos mechanism are also available for all versions of NFS. In FIPS mode, only FIPS-approved algorithms can be used. With the RPCSEC_GSS Kerberos mechanism, the server no longer depends on the client to correctly represent which user is accessing the file, as is the case with AUTH_SYS. Instead, it uses cryptography to authenticate users to the server, preventing a malicious client from impersonating a user without having that user's kerberos credentials. Note It is assumed that a Kerberos ticket-granting server (KDC) is installed and configured correctly, prior to configuring an NFSv4 server. Kerberos is a network authentication system which allows clients and servers to authenticate to each other through use of symmetric encryption and a trusted third party, the KDC. For more information on Kerberos see Red Hat's Identity Management Guide . To set up RPCSEC_GSS, use the following procedure: Procedure 9.4. Set up RPCSEC_GSS Create nfs/client. mydomain @ MYREALM and nfs/server. mydomain @ MYREALM principals. Add the corresponding keys to keytabs for the client and server. On the server side, add sec=krb5,krb5i,krb5p to the export. To continue allowing AUTH_SYS, add sec=sys,krb5,krb5i,krb5p instead. On the client side, add sec=krb5 (or sec=krb5i , or sec=krb5p depending on the set up) to the mount options. For more information, such as the difference between krb5 , krb5i , and krb5p , refer to the exports and nfs man pages or to Section 9.5, "Common NFS Mount Options" . For more information on the RPCSEC_GSS framework, including how rpc.svcgssd and rpc.gssd inter-operate, refer to http://www.citi.umich.edu/projects/nfsv4/gssd/ . 9.8.2.1. NFS security with NFSv4 NFSv4 includes ACL support based on the Microsoft Windows NT model, not the POSIX model, because of the former's features and wide deployment. Another important security feature of NFSv4 is the removal of the use of the MOUNT protocol for mounting file systems. This protocol presented possible security holes because of the way that it processed file handles.
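To make the RPCSEC_GSS procedure more concrete, the export and mount options described above might be written as in the sketch below. The host names and paths are placeholders, and note that in the /etc/exports syntax multiple security flavors are separated by colons:

# Server side: export allowing AUTH_SYS plus the Kerberos flavors (placeholder values)
echo '/export  *.mydomain(rw,sec=sys:krb5:krb5i:krb5p)' >> /etc/exports
exportfs -ra
# Client side: mount with Kerberos integrity checking
mount -t nfs4 -o sec=krb5i server.mydomain:/export /mnt/secure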
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/s3-nfs-security-hosts-nfsv4
Chapter 14. Managing containers with Ansible
Chapter 14. Managing containers with Ansible Note This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . Red Hat OpenStack Platform 16.2 uses Paunch to manage containers. However, you can also use the Ansible role tripleo-container-manage to perform management operations on your containers. If you want to use the tripleo-container-manage role, you must first disable Paunch. With Paunch disabled, director uses the Ansible role automatically, and you can also write custom playbooks to perform specific container management operations: Collect the container configuration data that heat generates. The tripleo-container-manage role uses this data to orchestrate container deployment. Start containers. Stop containers. Update containers. Delete containers. Run a container with a specific configuration. Although director performs container management automatically, you might want to customize a container configuration, or apply a hotfix to a container without redeploying the overcloud. Note This role supports only Podman container management. Prerequisites A successful undercloud installation. For more information, see Section 4.8, "Installing director" . 14.1. Enabling the tripleo-container-manage Ansible role on the undercloud Note This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . Paunch is the default container management mechanism in Red Hat OpenStack Platform 16.2. However, you can also use the tripleo-container-manage Ansible role. If you want to use this role, you must disable Paunch. Prerequisites A host machine with a base operating system and the python3-tripleoclient package installed. For more information, see Chapter 3, Preparing for director installation . Procedure Log in to the undercloud host as the stack user. Set the undercloud_enable_paunch parameter to false in the undercloud.conf file: Run the openstack undercloud install command: 14.2. Enabling the tripleo-container-manage Ansible role on the overcloud Note This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . Paunch is the default container management mechanism in Red Hat OpenStack Platform 16.2. However, you can also use the tripleo-container-manage Ansible role. If you want to use this role, you must disable Paunch. Prerequisites A successful undercloud installation. For more information, see Chapter 4, Installing director on the undercloud . Procedure Log in to the undercloud host as the stack user. Source the stackrc credentials file: Include the /usr/share/openstack-tripleo-heat-templates/environments/disable-paunch.yaml file in the overcloud deployment command, along with any other environment files that are relevant for your deployment: 14.3. 
Performing operations on a single container Note This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . You can use the tripleo-container-manage role to manage all containers, or a specific container. If you want to manage a specific container, you must identify the container deployment step and the name of the container configuration JSON file so that you can target the specific container with a custom Ansible playbook. Prerequisites A successful undercloud installation. For more information, see Chapter 4, Installing director on the undercloud . Procedure Log in to the undercloud as the stack user. Source the overcloudrc credential file: Identify the container deployment step. You can find the container configuration for each step in the /var/lib/tripleo-config/container-startup-config/step_{1,2,3,4,5,6} directory. Identify the JSON configuration file for the container. You can find the container configuration file in the relevant step_* directory. For example, the configuration file for the HAProxy container in step 1 is /var/lib/tripleo-config/container-startup-config/step_1/haproxy.json . Write a suitable Ansible playbook. For example, to replace the HAProxy container image, use the following sample playbook: For more information about the variables that you can use with the tripleo-container-manage role, see Section 14.4, "tripleo-container-manage role variables" . Run the playbook: If you want to execute the playbook without applying any changes, include the --check option in the ansible-playbook command: If you want to identify the changes that your playbook makes to your containers without applying the changes, include the --check and --diff options in the ansible-playbook command: 14.4. tripleo-container-manage role variables Note This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . The tripleo-container-manage Ansible role contains the following variables: Table 14.1. Role variables Name Default value Description tripleo_container_manage_check_puppet_config false Use this variable if you want Ansible to check Puppet container configurations. Ansible can identify updated container configuration using the configuration hash. If a container has a new configuration from Puppet, set this variable to true so that Ansible can detect the new configuration and add the container to the list of containers that Ansible must restart. tripleo_container_manage_cli podman Use this variable to set the command line interface that you want to use to manage containers. The tripleo-container-manage role supports only Podman. tripleo_container_manage_concurrency 1 Use this variable to set the number of containers that you want to manage concurrently. tripleo_container_manage_config /var/lib/tripleo-config/ Use this variable to set the path to the container configuration directory. tripleo_container_manage_config_id tripleo Use this variable to set the ID of a specific configuration step. For example, set this value to tripleo_step2 to manage containers for step two of the deployment. 
tripleo_container_manage_config_patterns *.json Use this variable to set the bash regular expression that identifies configuration files in the container configuration directory. tripleo_container_manage_debug false Use this variable to enable or disable debug mode. Run the tripleo-container-manage role in debug mode if you want to run a container with a specific one-time configuration, to output the container commands that manage the lifecycle of containers, or to run no-op container management operations for testing and verification purposes. tripleo_container_manage_healthcheck_disable false Use this variable to enable or disable healthchecks. tripleo_container_manage_log_path /var/log/containers/stdouts Use this variable to set the stdout log path for containers. tripleo_container_manage_systemd_order false Use this variable to enable or disable systemd shutdown ordering with Ansible. tripleo_container_manage_systemd_teardown true Use this variable to trigger the cleanup of obsolete containers. tripleo_container_manage_config_overrides {} Use this variable to override any container configuration. This variable takes a dictionary of values where each key is the container name and the parameters that you want to override, for example, the container image or user. This variable does not write custom overrides to the JSON container configuration files and any new container deployments, updates, or upgrades revert to the content of the JSON configuration file. tripleo_container_manage_valid_exit_code [] Use this variable to check if a container returns an exit code. This value must be a list, for example, [0,3] .
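Because Ansible extra variables take precedence over values set inside a playbook, several of the role variables in this table can also be flipped at run time without editing the playbook itself. The following invocation is only a sketch that reuses the hypothetical <custom_playbook>.yaml from the earlier procedure and assumes the role coerces the string values passed on the command line:

# Dry run with debug output enabled and two containers managed concurrently
ansible-playbook <custom_playbook>.yaml \
  -e tripleo_container_manage_debug=true \
  -e tripleo_container_manage_concurrency=2 \
  --check --diff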
[ "undercloud_enable_paunch: false", "openstack undercloud install", "source ~/stackrc", "(undercloud) [stack@director ~]USD openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/disable-paunch.yaml -e <other_environment_files>", "source ~/overcloudrc", "- hosts: localhost become: true tasks: - name: Manage step_1 containers using tripleo-ansible block: - name: \"Manage HAproxy container at step 1 with tripleo-ansible\" include_role: name: tripleo-container-manage vars: tripleo_container_manage_systemd_order: true tripleo_container_manage_config_patterns: 'haproxy.json' tripleo_container_manage_config: \"/var/lib/tripleo-config/container-startup-config/step_1\" tripleo_container_manage_config_id: \"tripleo_step1\" tripleo_container_manage_config_overrides: haproxy: image: registry.redhat.io/tripleomaster/<HAProxy-container>:hotfix", "(overcloud) [stack@director]USD ansible-playbook <custom_playbook>.yaml", "(overcloud) [stack@director]USD ansible-playbook <custom_playbook>.yaml --check", "(overcloud) [stack@director]USD ansible-playbook <custom_playbook>.yaml --check --diff" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/director_installation_and_usage/assembly_managing-containers-with-ansible
Appendix D. ACL reference
Appendix D. ACL reference This section describes what each resource controls, lists the possible operations describing the outcome of those operations, and provides the default ACIs for each ACL resource defined. Each subsystem contains only those ACLs that are relevant to that subsystem. D.1. About ACL configuration files Access control is the method to set rules on who can access part of a server and the operations that user can perform. The four subsystems which depend on the LDAP directory service and use a Java console - the CA, KRA, OCSP, and TKS - all implement LDAP-style access control to access their resources. These access control lists (ACL) are located in the /var/lib/pki/instance_name/conf/subsystem/acl.ldif file. NOTE This section provides only a very brief overview of access control concepts. Access control is described in much more detail in the Managing Access Control chapter in the Red Hat Directory Server Administration Guide . The Certificate System ACL files are LDIF files that are loaded by the internal database. The individual ACLs are defined as resourceACLS attributes which identify the area of the subsystem being protected and then a list of all of the specific access controls being set. Each rule which allows or denies access to a resource is called an access control instruction (ACI). (The sum of all of the ACIs for a resource is an access control list.) Before defining the actual ACI, the ACL attribute is first applied to a specific plugin class used by the Certificate System subsystem. This focuses each ACL to a specific function performed by the subsystem, providing both more security for the instance and better control over applying ACLs. Example D.1. Default ACL to list certificate profiles Because each subsystem (CA, KRA, OCSP, and TKS) has different resources for its operations, each subsystem instance has its own acl.ldif file and its own defined ACLs. Each ACI defines what access or behavior can be done (the right ) and who the ACI applies to (the target ). The basic format of an ACI is, then: Rights are types of operations that the ACI allows a user to perform. For LDAP ACIs, there is a relatively limited list of rights to directory entries, like search, read, write, and delete. The Certificate System uses additional rights that cover common PKI tasks, like revoke, submit, and assign. If an operation is not explicitly allowed in an ACI, then it is implicitly denied. If an operation is explicitly denied in one ACI, then it trumps any ACI which explicitly allows it. Deny rules are always superior to allow rules to provide additional security. Each ACI has to apply to specific users or groups. This is set using a couple of common conditions, usually user= or group= , though there are other options, like ipaddress= which defines client-based access rather than entry-based access. If there is more than one condition, the conditions can be composed using the double pipe (||) operator, signifying logical disjunction ("or"), and the double ampersand (&&) operator, signifying logical conjunction ("and"). For example, group="group1" || "group2" . Each area of the resourceACLS attribute value is defined in the below table. Table D.1. Sections of the ACL attribute value Value Description class_name The plugin class to which the ACI is applied. all operations The list of every operation covered in the ACI definition. There can be multiple operations in a single ACI and multiple ACIs in a single resourceACLS attribute. 
allow|deny Whether the action is being allowed for the target user or group or denied to the target user or group. ( operations ) The operations being allowed or denied. type=target The target to identify who this applies to. This is commonly a user (such as user= "name" ) or a group ( group= "group" ). If there is more than one condition, the conditions can be composed using the double pipe (||) operator (logical "or") and the double ampersand (&&) operator (logical "and"). For example, group="group1" || "group2" . description A description of what the ACL is doing. D.2. Common ACLs This section covers the default access control configuration that is common for all four subsystem types. These access control rules manage access to basic and common configuration settings, such as logging and adding users and groups. IMPORTANT These ACLs are common in that the same ACLs occur in each subsystem instance's acl.ldif file. These are not shared ACLs in the sense that the configuration files or settings are held in common by all subsystem instances. As with all other instance configuration, these ACLs are maintained independently of other subsystem instances, in the instance-specific acl.ldif file. D.2.1. certServer.acl.configuration Controls operations to the ACL configuration. The default configuration is: Table D.2. certServer.acl.configuration ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read View ACL resources and list ACL resources, ACL listing evaluators, and ACL evaluator types. Allow Administrators Agents Auditors modify Add, delete, and update ACL evaluators. Allow Administrators D.2.2. certServer.admin.certificate Controls which users can import a certificate through a Certificate Manager. By default, this operation is allowed to everyone. The default configuration is: NOTE This entry is associated with the CA administration web interface which is used to configure the instance. This ACL is only available during instance configuration and is unavailable after the CA is running. Table D.3. certServer.admin.certificate ACL summary Operations Description Allow/Deny Access Targeted Users/Groups import Import a CA administrator certificate, and retrieve certificates by serial number. Allow Anyone D.2.3. certServer.auth.configuration Controls operations on the authentication configuration. Table D.4. certServer.auth.configuration ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read View authentication plugins, authentication type, configured authentication manager plugins, and authentication instances. List authentication manager plugins and authentication manager instances. Allow Administrators Agents Auditors modify Add or delete authentication plugins and authentication instances. Modify authentication instances. Allow Administrators D.2.4. certServer.clone.configuration Controls who can read and modify the configuration information used in cloning. The default setting is: Table D.5. certServer.clone.configuration ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read View original instance configuration. Allow Enterprise Administrators modify Modify original instance configuration. Allow Enterprise Administrators D.2.5. certServer.general.configuration Controls access to the general configuration of the subsystem instance, including who can view and edit the CA's settings. Table D.6. 
certServer.general.configuration ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read View the operating environment, LDAP configuration, SMTP configuration, server statistics, encryption, token names, subject name of certificates, certificate nicknames, all subsystems loaded by the server, CA certificates, and all certificates for management. Allow Administrators Agents Auditors modify Modify the settings for the LDAP database, SMTP, and encryption. Issue import certificates, install certificates, trust and untrust CA certificates, import cross-pair certificates, and delete certificates. Perform server restart and stop operations. Log in all tokens and check token status. Run self-tests on demand. Get certificate information. Process the certificate subject name. Validate the certificate subject name, certificate key length, and certificate extension. Allow Administrators D.2.6. certServer.log.configuration Controls access to the log configuration for the Certificate Manager, including changing the log settings. Table D.7. certServer.log.configuration ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read View log plugin information, log plugin configuration, and log instance configuration. List log plugins and log instances (excluding NTEventLog). Allow Administrators Agents Auditors modify Add and delete log plugins and log instances. Modify log instances, including log rollover parameters and log level. Allow Administrators D.2.7. certServer.log.configuration.fileName Restricts access to change the file name of a log for the instance. Table D.8. certServer.log.configuration.fileName ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read View the value of the fileName parameter for a log instance. Allow Administrators Agents Auditors modify Change the value of the fileName parameter for a log instance. Deny Anyone D.2.8. certServer.log.content.signedAudit Controls who has access to the signed audit logs. The default setting is: Table D.9. certServer.log.content.signedAudit ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read View log content. List logs. Allow Auditors D.2.9. certServer.registry.configuration Controls access to the administration registry, the file that is used to register plugin modules. Currently, this is only used to register certificate profile plugins. Table D.10. certServer.registry.configuration ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read View the administration registry, supported policy constraints, profile plugin configuration, and the list of profile plugins. Allow Administrators Agents Auditors modify Register individual profile implementation plugins. Allow Administrators D.3. Certificate manager-specific ACLs This section covers the default access control configuration attributes which are set specifically for the Certificate Manager. The CA ACL configuration also includes all of the common ACLs listed in Section D.2, "Common ACLs" . There are access control rules set for each of the CA's interfaces (administrative console and agents and end-entities services pages) and for common operations like listing and downloading certificates. D.3.1. certServer.admin.ocsp Limits access to the Certificate Manager's OCSP configuration to members of the enterprise OCSP administrators group. Table D.11. 
certServer.admin.ocsp ACL summary Operations Description Allow/Deny Access Targeted Users/Groups modify Modify the OCSP configuration, OCSP stores configuration, and default OCSP store. Allow Enterprise OCSP Administrators read Read the OCSP configuration. Allow Enterprise OCSP Administrators D.3.2. certServer.ca.certificate Controls basic management operations for certificates in the agents services interface, including importing and revoking certificates. The default configuration is: Table D.12. certServer.ca.certificate ACL summary Operations Description Allow/Deny Access Targeted Users/Groups import Retrieve a certificate by serial number. Allow Certificate Manager Agents unrevoke Change the status of a certificate from revoked. Allow Certificate Manager Agents revoke Change the status of a certificate to revoked. Allow Certificate Manager Agents read Retrieve certificates based on the request ID, and display certificate details based on the request ID or serial number. Allow Certificate Manager Agents D.3.3. certServer.ca.certificates Controls operations for listing or revoking certificates through the agent services interface. The default configuration is: Table D.13. certServer.ca.certificates ACL summary Operations Description Allow/Deny Access Targeted Users/Groups revoke Revoke certificates, or approve certificate revocation requests. Revoke a certificate from the TPS. Prompt users for additional data about a revocation request. Allow Certificate Manager Agents Registration Manager Agents list List certificates based on a search. Retrieve details about a range of certificates based on a range of serial numbers. Allow Certificate Manager Agents Registration Manager Agents D.3.4. certServer.ca.configuration Controls operations on the general configuration for a Certificate Manager. The default configuration is: Table D.14. certServer.ca.configuration ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read View CRL plugin information, general CA configuration, CA connector configuration, CRL issuing points configuration, CRL profile configuration, request notification configuration, revocation notification configuration, request in queue notification configuration, and CRL extensions configuration. List CRL extensions configuration and CRL issuing points configuration. Allow Administrators Agents Auditors modify Add and delete CRL issuing points. Modify general CA settings, CA connector configuration, CRL issuing points configuration, CRL configuration, request notification configuration, revocation notification configuration, request in queue notification configuration, and CRL extensions configuration. Allow Administrators D.3.5. certServer.ca.connector Controls operations to submit requests over a special connector to the CA. The default configuration is: Table D.15. certServer.ca.connector ACL summary Operations Description Allow/Deny Access Targeted Users/Groups submit Submit requests from remote trusted managers. Allow Trusted Managers D.3.6. certServer.ca.connectorInfo Controls access to the connector information to manage trusted relationships between a CA and KRA. These trust relationships are special configurations which allow a CA and KRA to automatically connect to perform key archival and recovery operations. These trust relationships are configured through special connector plugins. Table D.16. certServer.ca.connectorInfo ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read Read connector plugin settings.
Allow Enterprise KRA Administrators modify Modify connector plugin settings. Allow Enterprise KRA Administrators Subsystem Group D.3.7. certServer.ca.crl Controls access to read or update CRLs through the agent services interface. The default setting is: Table D.17. certServer.ca.crl ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read Display CRLs and get detailed information about CA CRL processing. Allow Certificate Manager Agents update Update CRLs. Allow Certificate Manager Agents D.3.8. certServer.ca.directory Controls access to the LDAP directory used for publishing certificates and CRLs. Table D.18. certServer.ca.directory ACL summary Operations Description Allow/Deny Access Targeted Users/Groups update Publish CA certificates, CRLs, and user certificates to the LDAP directory. Allow Certificate Manager Agents D.3.9. certServer.ca.group Controls access to the internal database for adding users and groups for the Certificate Manager instance. Table D.19. certServer.ca.group ACL summary Operations Description Allow/Deny Access Targeted Users/Groups modify Create, edit, or delete user and group entries for the instance. Add or modify a user certificate within attributes. Allow Administrators read View user and group entries for the instance. Allow Administrators D.3.10. certServer.ca.ocsp Controls the ability to access and read OCSP information, such as usage statistics, through the agent services interface. Table D.20. certServer.ca.ocsp ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read Retrieve OCSP usage statistics. Allow Certificate Manager Agents D.3.11. certServer.ca.profile Controls access to certificate profile configuration in the agent services pages. Table D.21. certServer.ca.profile ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read View the details of the certificate profiles. Allow Certificate Manager Agents approve Approve and enable certificate profiles. Allow Certificate Manager Agents D.3.12. certServer.ca.profiles Controls access to list certificate profiles in the agent services interface. Table D.22. certServer.ca.profiles ACL summary Operations Description Allow/Deny Access Targeted Users/Groups list List certificate profiles. Allow Certificate Manager Agents D.3.13. certServer.ca.registerUser Defines which group or user can create an agent user for the instance. The default configuration is: Table D.23. certServer.ca.registerUser ACL summary Operations Description Allow/Deny Access Targeted Users/Groups modify Register a new agent. Allow Enterprise Administrators read Read existing agent information. Allow Enterprise Administrators D.3.14. certServer.ca.request.enrollment Controls how the enrollment requests are handled and assigned. The default setting is: Table D.24. certServer.ca.request.enrollment ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read View an enrollment request. Allow Certificate Manager Agents execute Modify the approval state of a request. Allow Certificate Manager Agents submit Submit a request. Allow Anybody assign Assign a request to a Certificate Manager agent. Allow Certificate Manager Agents unassign Change the assignment of a request. Allow Certificate Manager Agents D.3.15. certServer.ca.request.profile Controls the handling of certificate profile-based requests. The default setting is: Table D.25.
certServer.ca.request.profile ACL summary Operations Description Allow/Deny Access Targeted Users/Groups approve Modify the approval state of a certificate profile-based certificate request. Allow Certificate Manager Agents read View a certificate profile-based certificate request. Allow Certificate Manager Agents D.3.16. certServer.ca.requests Controls who can list certificate requests in the agents services interface. Table D.26. certServer.ca.requests ACL summary Operations Description Allow/Deny Access Targeted Users/Groups list Retrieve details on a range of requests, and search for certificates using a complex filter. Allow Certificate Manager Agents Registration Manager Agents D.3.17. certServer.ca.systemstatus Controls who can view the statistics for the Certificate Manager instance. Table D.27. certServer.ca.systemstatus ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read View statistics. Allow Certificate Manager Agents D.3.18. certServer.ee.certchain Controls who can access the CA certificate chain in the end-entities page. Table D.28. certServer.ee.certchain ACL summary Operations Description Allow/Deny Access Targeted Users/Groups download Download the CA's certificate chain. Allow Anyone read View the CA's certificate chain. Allow Anyone D.3.19. certServer.ee.certificate Controls who can access certificates, for most operations like importing or revoking certificates, through the end-entities page. Table D.29. certServer.ee.certificate ACL summary Operations Description Allow/Deny Access Targeted Users/Groups renew Submit a request to renew an existing certificate. Allow Anyone revoke Submit a revocation request for a user certificate. Allow Anyone read Retrieve and view certificates based on the certificate serial number or request ID. Allow Anyone import Import a certificate based on serial number. Allow Anyone D.3.20. certServer.ee.certificates Controls who can list revoked certificates or submit a revocation request in the end-entities page. Table D.30. certServer.ee.certificates ACL summary Operations Description Allow/Deny Access Targeted Users/Groups revoke Submit a list of certificates to revoke. Allow Subject of Certificate to be Revoked must match Certificate presented to authenticate to the CA. list Search for certificates matching specified criteria. Allow Anyone D.3.21. certServer.ee.crl Controls access to CRLs through the end-entities page. Table D.31. certServer.ee.crl ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read Retrieve and view the certificate revocation list. Allow Anyone add Add CRLs to the OCSP server. Allow Anyone D.3.22. certServer.ee.profile Controls some access to certificate profiles in the end-entities page, including who can view details about a profile or submit a request through the profile. Table D.32. certServer.ee.profile ACL summary Operations Description Allow/Deny Access Targeted Users/Groups submit Submit a certificate request through a certificate profile. Allow Anyone read Displaying details of a certificate profile. Allow Anyone D.3.23. certServer.ee.profiles Controls who can list active certificate profiles in the end-entities page. Table D.33. certServer.ee.profiles ACL summary Operations Description Allow/Deny Access Targeted Users/Groups list List certificate profiles. Allow Anyone D.3.24. certServer.ee.request.ocsp Controls access, based on IP address, on which clients submit OCSP requests. Table D.34. 
certServer.ee.request.ocsp ACL summary Operations Description Allow/Deny Access Targeted Users/Groups submit Submit OCSP requests. Allow All IP addresses D.3.25. certServer.ee.request.revocation Controls what users can submit certificate revocation requests in the end-entities page. Table D.35. certServer.ee.request.revocation ACL summary Operations Description Allow/Deny Access Targeted Users/Groups submit Submit a request to revoke a certificate. Allow Anyone D.3.26. certServer.ee.requestStatus Controls who can view the status for a certificate request in the end-entities page. Table D.36. certServer.ee.requestStatus ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read Retrieve the status of a request and serial numbers of any certificates that have been issued against that request. Allow Anyone D.3.27. certServer.job.configuration Controls who can configure jobs for the Certificate Manager. Table D.37. certServer.job.configuration ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read View basic job settings, job instance settings, and job plugin settings. List job plugins and job instances. Allow Administrators Agents Auditors modify Add and delete job plugins and job instances. Modify job plugins and job instances. Allow Administrators D.3.28. certServer.profile.configuration Controls access to the certificate profile configuration. The default setting is: Table D.38. certServer.profile.configuration ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read View certificate profile defaults and constraints, input, output, input configuration, output configuration, default configuration, policy constraints configuration, and certificate profile instance configuration. List certificate profile plugins and certificate profile instances. Allow Administrators Agents Auditors modify Add, modify, and delete certificate profile defaults and constraints, input, output, and certificate profile instances. Add and modify default policy constraints configuration. Allow Administrators D.3.29. certServer.publisher.configuration Controls who can view and edit the publishing configuration for the Certificate Manager. The default configuration is: Table D.39. certServer.publisher.configuration ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read View LDAP server destination information, publisher plugin configuration, publisher instance configuration, mapper plugin configuration, mapper instance configuration, rules plugin configuration, and rules instance configuration. List publisher plugins and instances, rules plugins and instances, and mapper plugins and instances. Allow Administrators Agents Auditors modify Add and delete publisher plugins, publisher instances, mapper plugins, mapper instances, rules plugins, and rules instances. Modify publisher instances, mapper instances, rules instances, and LDAP server destination information. Allow Administrators D.3.30. certServer.securitydomain.domainxml Controls access to the security domain information maintained in a registry by the domain host Certificate Manager. The security domain configuration is directly accessed and modified by subsystem instances during configuration, so appropriate access must always be allowed to subsystems, or configuration could fail. Table D.40. certServer.securitydomain.domainxml ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read View the security domain configuration. 
Allow Anybody modify Modify the security domain configuration by changing instance information and adding and removing instances. Allow Subsystem Groups Enterprise Administrators D.4. Key Recovery Authority-specific ACLs This section covers the default access control configuration which apply specifically to the KRA. The KRA ACL configuration also includes all of the common ACLs listed in Section D.2, "Common ACLs" . There are access control rules set for each of the KRA's interfaces (administrative console and agents and end-entities services pages) and for common operations like listing and downloading keys. D.4.1. certServer.job.configuration Controls who can configure jobs for the KRA. Table D.41. certServer.job.configuration ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read View basic job settings, job instance settings, and job plugin settings. List job plugins and job instances. Allow Administrators Agents Auditors modify Add and delete job plugins and job instances. Modify job plugins and job instances. Allow Administrators D.4.2. certServer.kra.certificate.transport Controls who can view the transport certificate for the KRA. Table D.42. certServer.kra.certificate.transport ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read View the transport certificate for the KRA instance. Allow Anyone D.4.3. certServer.kra.configuration Controls who can configure and manage the setup for the KRA. Table D.43. certServer.kra.configuration ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read Read the number of required recovery agent approvals. Allow Administrators Agents Auditors modify Change the number of required recovery agent approvals. Allow Administrators D.4.4. certServer.kra.connector Controls what entities can submit requests over a special connector configured on the CA to connect to the KRA. The default configuration is: Table D.44. certServer.kra.connector ACL summary Operations Description Allow/Deny Access Targeted Users/Groups submit Submit a new key archival request (for non-TMS only). Allow Trusted Managers D.4.5. certServer.kra.GenerateKeyPair Controls who can submit key recovery requests to the KRA. The default configuration is: Table D.45. certServer.kra.GenerateKeyPair ACL summary Operations Description Allow/Deny Access Targeted Users/Groups Execute Execute server-side key generation (TMS only). Allow KRA Agents D.4.6. certServer.kra.getTransportCert Controls who can submit key recovery requests to the KRA. The default configuration is: Table D.46. certServer.kra.getTransportCert ACL summary Operations Description Allow/Deny Access Targeted Users/Groups download Retrieve KRA transport certificate. Allow Enterprise Administrators D.4.7. certServer.kra.group Controls access to the internal database for adding users and groups for the KRA instance. Table D.47. certServer.kra.group ACL summary Operations Description Allow/Deny Access Targeted Users/Groups modify Create, edit, or delete user and group entries for the instance. Allow Administrators read View user and group entries for the instance. Allow Administrators D.4.8. certServer.kra.key Controls who can access key information through viewing, recovering, or downloading keys. The default configuration is: Table D.48. certServer.kra.key ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read Display public information about key archival record. 
Allow KRA Agents recover Retrieve key information from the database to perform a recovery operation. Allow KRA Agents download Download key information through the agent services pages. Allow KRA Agents D.4.9. certServer.kra.keys Controls who can list archived keys through the agent services pages. Table D.49. certServer.kra.keys ACL summary Operations Description Allow/Deny Access Targeted Users/Groups list Search for and list a range of archived keys. Allow KRA Agents D.4.10. certServer.kra.registerUser Defines which group or user can create an agent user for the instance. The default configuration is: Table D.50. certServer.kra.registerUser ACL summary Operations Description Allow/Deny Access Targeted Users/Groups modify Register a new user. Allow Enterprise Administrators read Read existing user info. Allow Enterprise Administrators D.4.11. certServer.kra.request Controls who can view key archival and recovery requests in the agents services interface. Table D.51. certServer.kra.request ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read View a key archival or recovery request. Allow KRA Agents D.4.12. certServer.kra.request.status Controls who can view the status for a key recovery request in the end-entities page. Table D.52. certServer.kra.request.status ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read Retrieve the status of a key recovery request in the agents services pages. Allow KRA Agents D.4.13. certServer.kra.requests Controls who can list key archival and recovery requests in the agents services interface. Table D.53. certServer.kra.requests ACL summary Operations Description Allow/Deny Access Targeted Users/Groups list Retrieve details on a range of key archival and recovery requests. Allow KRA Agents D.4.14. certServer.kra.systemstatus Controls who can view the statistics for the KRA instance. Table D.54. certServer.kra.systemstatus ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read View statistics. Allow KRA Agents D.4.15. certServer.kra.TokenKeyRecovery Controls who can submit key recovery requests for a token to the KRA. This is a common request for replacing a lost token. The default configuration is: Table D.55. certServer.kra.TokenKeyRecovery ACL summary Operations Description Allow/Deny Access Targeted Users/Groups submit Submit or initiate key recovery requests for a token recovery. Allow KRA Agents D.5. Online Certificate Status Manager-specific ACLs This section covers the default access control configuration attributes which are set specifically for the Online Certificate Status Manager. The OCSP responder's ACL configuration also includes all of the common ACLs listed in Section D.2, "Common ACLs" . There are access control rules set for each of the OCSP's interfaces (administrative console and agents and end-entities services pages) and for common operations like listing and downloading CRLs. D.5.1. certServer.ee.crl Controls access to CRLs through the end-entities page. Table D.56. certServer.ee.crl ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read Retrieve and view the certificate revocation list. Allow Anyone D.5.2. certServer.ee.request.ocsp Controls access, based on IP address, on which clients submit OCSP requests. Table D.57. certServer.ee.request.ocsp ACL summary Operations Description Allow/Deny Access Targeted Users/Groups submit Submit OCSP requests. Allow All IP addresses D.5.3. 
certServer.ocsp.ca Controls who can instruct the OCSP responder. The default setting is: Table D.58. certServer.ocsp.ca ACL summary Operations Description Allow/Deny Access Targeted Users/Groups Add Instruct the OCSP responder to respond to OCSP requests for a new CA. Allow OCSP Manager Agents D.5.4. certServer.ocsp.cas Controls who can list, in the agent services interface, all of the Certificate Managers which publish CRLs to the Online Certificate Status Manager. The default setting is: Table D.59. certServer.ocsp.cas ACL summary Operations Description Allow/Deny Access Targeted Users/Groups list Lists all of the Certificate Managers which publish CRLs to the OCSP responder. Allow Agents D.5.5. certServer.ocsp.certificate Controls who can validate the status of a certificate. The default setting is: Table D.60. certServer.ocsp.certificate ACL summary Operations Description Allow/Deny Access Targeted Users/Groups validate Verifies the status of a specified certificate. Allow OCSP Agents D.5.6. certServer.ocsp.configuration Controls who can access, view, or modify the configuration for the Certificate Manager's OCSP services. The default configuration is: Table D.61. certServer.ocsp.configuration ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read View OCSP plugin information, OCSP configuration, and OCSP stores configuration. List OCSP stores configuration. Allow Administrators Online Certificate Status Manager Agents Auditors modify Modify the OCSP configuration, OCSP stores configuration, and default OCSP store. Allow Administrators D.5.7. certServer.ocsp.crl Controls access to read or update CRLs through the agent services interface. The default setting is: Table D.62. certServer.ocsp.crl ACL summary Operations Description Allow/Deny Access Targeted Users/Groups add Add new CRLs to those managed by the OCSP responder. Allow OCSP Agents Trusted Managers D.5.8. certServer.ocsp.group Controls access to the internal database for adding users and groups for the Online Certificate Status Manager instance. Table D.63. certServer.ocsp.group ACL summary Operations Description Allow/Deny Access Targeted Users/Groups modify Create, edit or delete user and group entries for the instance. Allow Administrators read View user and group entries for the instance. Allow Administrators D.5.9. certServer.ocsp.info Controls who can read information about the OCSP responder. Table D.64. certServer.ocsp.info ACL summary Operations Description Allow/Deny Access Targeted Users/Groups read View OCSP responder information. Allow OCSP Agents D.6. Token Key Service-specific ACLs This section covers the default access control configuration attributes which are set specifically for the Token Key Service (TKS). The TKS ACL configuration also includes all of the common ACLs listed in Section D.2, "Common ACLs" . There are access control rules set for the TKS's administrative console and for access by other subsystems to the TKS. D.6.1. certServer.tks.encrypteddata Controls who can encrypt data. Table D.65. certServer.tks.encrypteddata ACL summary Operations Description Allow/Deny Access Targeted Users/Groups Execute Encrypted data stored in the TKS. Allow TKS Agents D.6.2. certServer.tks.group Controls access to the internal database for adding users and groups for the TKS instance. Table D.66. certServer.tks.group ACL summary Operations Description Allow/Deny Access Targeted Users/Groups modify Create, edit, or delete user and group entries for the instance. 
Allow Administrators read View user and group entries for the instance. Allow Administrators D.6.3. certServer.tks.importTransportCert Controls who can import the transport certificate used by the TKS to deliver keys. Table D.67. certServer.tks.importTransportCert ACL summary Operations Description Allow/Deny Access Targeted Users/Groups modify Update the transport certificate. Allow Enterprise Administrators read Import the transport certificate. Allow Enterprise Administrators D.6.4. certServer.tks.keysetdata Controls who can view information about key sets derived and stored by the TKS. Table D.68. certServer.tks.keysetdata ACL summary Operations Description Allow/Deny Access Targeted Users/Groups Execute Create diversified key set data. Allow TKS Agents D.6.5. certServer.tks.registerUser Defines which group or user can create an agent user for the instance. The default configuration is: Table D.69. certServer.tks.registerUser ACL summary Operations Description Allow/Deny Access Targeted Users/Groups modify Register a new agent. Allow Enterprise Administrators read Read existing agent information. Allow Enterprise Administrators D.6.6. certServer.tks.sessionkey Controls who can create the session keys used by the TKS instance for connections to the TPS. Table D.70. certServer.tks.sessionkey ACL summary Operations Description Allow/Deny Access Targeted Users/Groups Execute Create session keys generated by the TKS. Allow TKS Agents D.6.7. certServer.tks.randomdata Controls who can create random data. Table D.71. certServer.tks.randomdata ACL summary Operations Description Allow/Deny Access Targeted Users/Groups Execute Generate random data. Allow TKS Agents
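As a point of reference for the resourceACLS format described in Section D.1, "About ACL configuration files", the following entry is a hypothetical sketch (the resource name, operations, and description are invented for illustration) that shows how multiple ACIs and composed group conditions fit into a single attribute value:
resourceACLS: certServer.example.resource:read,modify:allow (read) group="Administrators" || group="Auditors";deny (modify) user="anybody":Hypothetical ACL that allows reads by administrators or auditors and denies modify operations to everyone
Because deny rules always take precedence over allow rules, the deny (modify) ACI in this sketch would override any other ACI that allowed the modify operation on the same resource.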
[ "resourceACLS: class_name:all rights: allow|deny (rights) type=target description", "resourceACLS: certServer.ca.profiles:list:allow (list) group=\"Certificate Manager Agents\":Certificate Manager agents may list profiles", "allow|deny (rights) user|group", "allow (read) group=\"Administrators\" || group=\"Certificate Manager Agents\" || group=\"Registration Manager Agents\" || group=\"Key Recovery Authority Agents\" || group=\"Online Certificate Status Manager Agents\" || group=\"Auditors\";allow (modify) group=\"Administrators\"", "allow (import) user=\"anybody\"", "allow (read) group=\"Administrators\" || group=\"Certificate Manager Agents\" || group=\"Registration Manager Agents\" || group=\"Key Recovery Authority Agents\" || group=\"Online Certificate Status Manager Agents\" || group=\"Auditors\";allow (modify) group=\"Administrators", "allow (modify,read) group=\"Enterprise CA Administrators\" || group=\"Enterprise KRA Administrators\" || group=\"Enterprise OCSP Administrators\" || group=\"Enterprise TKS Administrators\"", "allow (read) group=\"Administrators\" || group=\"Auditors\" || group=\"Certificate Manager Agents\" || group=\"Registration Manager Agents\" || group=\"Key Recovery Authority Agents\" || group=\"Online Certificate Status Manager Agents\";allow (modify) group=\"Administrators\"", "allow (read) group=\"Administrators\" || group=\"Auditors\" || group=\"Certificate Manager Agents\" || group=\"Registration Manager Agents\" || group=\"Key Recovery Authority Agents\" || group=\"Online Certificate Status Manager Agents\";allow (modify) group=\"Administrators\"", "allow (read) group=\"Administrators\" || group=\"Auditors\" || group=\"Certificate Manager Agents\" || group=\"Registration Manager Agents\" || group=\"Key Recovery Authority Agents\" || group=\"Online Certificate Status Manager Agents\";deny (modify) user=anybody", "allow (read) group=\"Auditors\"", "allow (read) group=\"Administrators\" || group=\"Certificate Manager Agents\" || group=\"Registration Manager Agents\" || group=\"Key Recovery Authority Agents\" || group=\"Online Certificate Status Manager Agents\" || group=\"Auditors\";allow (modify) group=\"Administrators\"", "allow (modify,read) group=\"Enterprise OCSP Administrators\"", "allow (import,unrevoke,revoke,read) group=\"Certificate Manager Agents\"", "allow (revoke,list) group=\"Certificate Manager Agents\"|| group=\"Registration Manager Agents\"", "allow (read) group=\"Administrators\" || group=\"Certificate Manager Agents\" || group=\"Registration Manager Agents\" || group=\"Key Recovery Authority Agents\" || group=\"Online Certificate Status Manager Agents\" || group=\"Auditors\";allow (modify) group=\"Administrators\"", "allow (submit) group=\"Trusted Managers\"", "allow (read) group=\"Enterprise KRA Administrators\";allow (modify) group=\"Enterprise KRA Administrators\" || group=\"Subsystem Group\"", "allow (read,update) group=\"Certificate Manager Agents\"", "allow (update) group=\"Certificate Manager Agents\"", "allow (modify,read) group=\"Administrators\"", "allow (read) group=\"Certificate Manager Agents\"", "allow (read,approve) group=\"Certificate Manager Agents\"", "allow (list) group=\"Certificate Manager Agents\"", "allow (modify,read) group=\"Enterprise CA Administrators\" || group=\"Enterprise KRA Administrators\" || group=\"Enterprise OCSP Administrators\" || group=\"Enterprise TKS Administrators\" || group=\"Enterprise TPS Administrators\"", "allow (submit) user=\"anybody\";allow (read,execute,assign,unassign) group=\"Certificate 
Manager Agents\"", "allow (approve,read) group=\"Certificate Manager Agents\"", "allow (list) group=\"Certificate Manager Agents\"|| group=\"Registration Manager Agents\"", "allow (read) group=\"Certificate Manager Agents\"", "allow (download,read) user=\"anybody\"", "allow (renew,revoke,read,import) user=\"anybody\"", "allow (revoke,list) user=\"anybody\"", "allow (read,add) user=\"anybody\"", "allow (submit,read) user=\"anybody\"", "allow (list) user=\"anybody\"", "allow (submit) ipaddress=\".*\"", "allow (submit) user=\"anybody\"", "allow (read) user=\"anybody\"", "allow (read) group=\"Administrators\" || group=\"Certificate Manager Agents\" || group=\"Registration Manager Agents\" || group=\"Key Recovery Authority Agents\" || group=\"Online Certificate Status Manager Agents\" || group=\"Auditors\";allow (modify) group=\"Administrators\"", "allow (read) group=\"Administrators\" || group=\"Certificate Manager Agents\" || group=\"Registration Manager Agents\" || group=\"Key Recovery Authority Agents\" || group=\"Online Certificate Status Manager Agents\" || group=\"Auditors\";allow (modify) group=\"Administrators\"", "allow (read) group=\"Administrators\" || group=\"Auditors\" || group=\"Certificate Manager Agents\" || group=\"Registration Manager Agents\" || group=\"Key Recovery Authority Agents\" || group=\"Online Certificate Status Manager Agents\";allow (modify) group=\"Administrators\"", "allow (read) user=\"anybody\";allow (modify) group=\"Subsystem Group\"", "allow (read) group=\"Administrators\" || group=\"Key Recovery Authority Agents\" || group=\"Auditors\";allow (modify) group=\"Administrators\"", "allow (read) user=\"anybody\"", "allow (read) group=\"Administrators\" || group=\"Auditors\" || group=\"Key Recovery Authority Agents\" || allow (modify) group=\"Administrators\"", "allow (submit) group=\"Trusted Managers\"", "allow (execute) group=\"Key Recovery Authority Agents\"", "allow (download) group=\"Enterprise CA Administrators\" || group=\"Enterprise KRA Administrators\" || group=\"Enterprise OCSP Administrators\" || group=\"Enterprise TKS Administrators\" || group=\"Enterprise TPS Administrators\"", "allow (modify,read) group=\"Administrators\"", "allow (read,recover,download) group=\"Key Recovery Authority Agents\"", "allow (list) group=\"Key Recovery Authority Agents\"", "allow (modify,read) group=\"Enterprise CA Administrators\" || group=\"Enterprise KRA Administrators\" || group=\"Enterprise OCSP Administrators\" || group=\"Enterprise TKS Administrators\" || group=\"Enterprise TPS Administrators\"", "allow (read) group=\"Key Recovery Authority Agents\"", "allow (read) group=\"Key Recovery Authority Agents\"", "allow (list) group=\"Key Recovery Authority Agents\"", "allow (read) group=\"Key Recovery Authority Agents\"", "allow (submit) group=\"Key Recovery Authority Agents\"", "allow (read) user=\"anybody\"", "allow (submit) ipaddress=\".*\"", "allow (add) group=\"Online Certificate Status Manager Agents\"", "allow (list) group=\"Online Certificate Status Manager Agents\"", "allow (validate) group=\"Online Certificate Status Manager Agents\"", "allow (read) group=\"Administrators\" || group=\"Online Certificate Status Manager Agents\" || group=\"Auditors\";allow (modify) group=\"Administrators\"", "allow (add) group=\"Online Certificate Status Manager Agents\" || group=\"Trusted Managers\"", "allow (modify,read) group=\"Administrators\"", "allow (read) group=\"Online Certificate Status Manager Agents\"", "allow(execute) group=\"Token Key Service Manager Agents\"", 
"allow (modify,read) group=\"Administrators\"", "allow (modify,read) group=\"Enterprise CA Administrators\" || group=\"Enterprise KRA Administrators\" || group=\"Enterprise OCSP Administrators\" || group=\"Enterprise TKS Administrators\" || group=\"Enterprise TPS Administrators\"", "allow (execute) group=\"Token Key Service Manager Agents\"", "allow (modify,read) group=\"Enterprise CA Administrators\" || group=\"Enterprise KRA Administrators\" || group=\"Enterprise OCSP Administrators\" || group=\"Enterprise TKS Administrators\" || group=\"Enterprise TPS Administrators\"", "allow (execute) group=\"Token Key Service Manager Agents\"", "allow (execute) group=\"Token Key Service Manager Agents\"" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide_common_criteria_edition/aclref
Chapter 5. Disaster recovery with stretch cluster for OpenShift Data Foundation
Chapter 5. Disaster recovery with stretch cluster for OpenShift Data Foundation Red Hat OpenShift Data Foundation deployment can be stretched between two different geographical locations to provide the storage infrastructure with disaster recovery capabilities. When faced with a disaster, such as when one of the two locations is partially or totally unavailable, OpenShift Data Foundation deployed on OpenShift Container Platform must be able to survive. This solution is available only for metropolitan spanned data centers with specific latency requirements between the servers of the infrastructure. Note The stretch cluster solution is designed for deployments where latencies do not exceed 10 ms maximum round-trip time (RTT) between the zones containing data volumes. For Arbiter nodes, follow the latency requirements specified for etcd, see Guidance for Red Hat OpenShift Container Platform Clusters - Deployments Spanning Multiple Sites (Data Centers/Regions) . Contact Red Hat Customer Support if you are planning to deploy with higher latencies. The following diagram shows the simplest deployment for a stretched cluster: OpenShift nodes and OpenShift Data Foundation daemons In the diagram, the OpenShift Data Foundation monitor pod deployed in the Arbiter zone has a built-in tolerance for the master nodes. The diagram shows the master nodes in each Data Zone which are required for a highly available OpenShift Container Platform control plane. Also, it is important that the OpenShift Container Platform nodes in one of the zones have network connectivity with the OpenShift Container Platform nodes in the other two zones. Important You can now easily set up disaster recovery with stretch cluster for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see OpenShift Virtualization in OpenShift Container Platform guide. 5.1. Requirements for enabling stretch cluster Ensure you have addressed OpenShift Container Platform requirements for deployments spanning multiple sites. For more information, see knowledgebase article on cluster deployments spanning multiple sites . Ensure that you have at least three OpenShift Container Platform master nodes in three different zones. One master node in each of the three zones. Ensure that you have at least four OpenShift Container Platform worker nodes evenly distributed across the two Data Zones. For stretch clusters on bare metal, use the SSD drive as the root drive for OpenShift Container Platform master nodes. Ensure that each node is pre-labeled with its zone label. For more information, see the Applying topology zone labels to OpenShift Container Platform nodes section. The stretch cluster solution is designed for deployments where latencies do not exceed 10 ms between zones. Contact Red Hat Customer Support if you are planning to deploy with higher latencies. Note Flexible scaling and Arbiter cannot both be enabled at the same time because they have conflicting scaling logic. With Flexible scaling, you can add one node at a time to your OpenShift Data Foundation cluster. Whereas in an Arbiter cluster, you need to add at least one node in each of the two data zones. 5.2. Applying topology zone labels to OpenShift Container Platform nodes During a site outage, the zone that has the arbiter function makes use of the arbiter label. These labels are arbitrary and must be unique for the three locations.
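Before the step-by-step procedure that follows, here is a minimal command-line sketch of how such labels might be applied and checked. The label key topology.kubernetes.io/zone and the zone names data-1, data-2, and arbiter are assumptions used only for illustration; substitute your own node names and zone labels.
# Apply one zone label per node (repeat for every node in each zone)
oc label node <node-in-data-zone-1> topology.kubernetes.io/zone=data-1
oc label node <node-in-data-zone-2> topology.kubernetes.io/zone=data-2
oc label node <node-in-arbiter-zone> topology.kubernetes.io/zone=arbiter
# Confirm which nodes carry a given zone label
oc get nodes -l topology.kubernetes.io/zone=arbiter -o name
# List all nodes together with their zone label
oc get nodes -L topology.kubernetes.io/zone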
For example, you can label the nodes as follows: To apply the labels to the node: <NODENAME> Is the name of the node <LABEL> Is the topology zone label To validate the labels using the example labels for the three zones: <LABEL> Is the topology zone label Alternatively, you can run a single command to see all the nodes with its zone. The stretch cluster topology zone labels are now applied to the appropriate OpenShift Container Platform nodes to define the three locations. step Install the local storage operator from the OpenShift Container Platform web console . 5.3. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 5.4. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least four worker nodes evenly distributed across two data centers in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see Planning your deployment . Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in command-line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to search for the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.15 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you selected Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . 
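As a reference for the prerequisites above, the following is a minimal sketch of the commands for creating the openshift-storage namespace with a blank node selector and for tainting a dedicated storage node. The taint key shown is an assumption based on the referenced Managing and Allocating Storage Resources guide; verify it against that guide before use.
# Create the namespace and override the cluster-wide default node selector with an empty value
oc create namespace openshift-storage
oc annotate namespace openshift-storage openshift.io/node-selector=
# Taint a node so that only OpenShift Data Foundation resources are scheduled on it (assumed taint key)
oc adm taint nodes <node-name> node.ocs.openshift.io/storage=true:NoSchedule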
Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. steps Create an OpenShift Data Foundation cluster . 5.5. Creating OpenShift Data Foundation cluster Prerequisites Ensure that you have met all the requirements in Requirements for enabling stretch cluster section. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the Create a new StorageClass using the local storage devices option. Click . Important You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Choose one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on selected nodes. Important If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Select SSD or NVMe to build a supported configuration. You can select HDDs for unsupported test installations. Expand the Advanced section and set the following options: Volume Mode Block is selected by default. Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. Select Enable arbiter checkbox if you want to use the stretch clusters. This option is available only when all the prerequisites for arbiter are fulfilled and the selected nodes are populated. For more information, see Arbiter stretch cluster requirements in Requirements for enabling stretch cluster . Select the arbiter zone from the dropdown list. Choose a performance profile for Configure performance . You can also configure the performance profile after the deployment using the Configure performance option from the options menu of the StorageSystems tab. 
Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Click . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . Select one of the following Encryption level : Cluster-wide encryption to encrypt the entire cluster (block and file). StorageClass encryption to create encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the next step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Network is set to Default (OVN) if you are using a single network. You can switch to Custom (Multus) if you are using multiple network interfaces and then choose any one of the following: Select a Public Network Interface from the dropdown. Select a Cluster Network Interface from the dropdown.
Note If you are using only one additional network interface, select the single NetworkAttachementDefinition , that is, ocs-public-cluster for the Public Network Interface, and leave the Cluster Network Interface blank. Click . In the Data Protection page, click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark to it. For arbiter mode of deployment: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources ocs-storagecluster . In the YAML tab, search for the arbiter key in the spec section and ensure enable is set to true . To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation installation . 5.6. Verifying OpenShift Data Foundation deployment To verify that OpenShift Data Foundation is deployed correctly: Verify the state of the pods . Verify that the OpenShift Data Foundation cluster is healthy . Verify that the Multicloud Object Gateway is healthy . Verify that the OpenShift Data Foundation specific storage classes exist . 5.6.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information about the expected number of pods for each component and how it varies depending on the number of nodes, see Table 5.1, "Pods corresponding to OpenShift Data Foundation cluster" . Click the Running and Completed tabs to verify that the following pods are in Running and Completed state: Table 5.1. 
Pods corresponding to OpenShift Data Foundation cluster Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) csi-addons-controller-manager-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (5 pods are distributed across 3 zones, 2 per data-center zones and 1 in arbiter zone) MGR rook-ceph-mgr-* (2 pods on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods are distributed across 2 data-center zones) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (2 pods are distributed across 2 data-center zones) CSI cephfs csi-cephfsplugin-* (1 pod on each worker node) csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes) rbd csi-rbdplugin-* (1 pod on each worker node) csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node and 1 pod in arbiter zone) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 5.6.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 5.6.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means if NooBaa DB PVC gets corrupted and we are unable to recover it, can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgabase article . 5.6.4. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw 5.7. 
Install Zone Aware Sample Application Deploy a zone aware sample application to validate whether an OpenShift Data Foundation, stretch cluster setup is configured correctly. Important With latency between the data zones, you can expect to see performance degradation compared to an OpenShift cluster with low latency between nodes and zones (for example, all nodes in the same location). The rate of or amount of performance degradation depends on the latency between the zones and on the application behavior using the storage (such as heavy write traffic). Ensure that you test the critical applications with stretch cluster configuration to ensure sufficient application performance for the required service levels. A ReadWriteMany (RWX) Persistent Volume Claim (PVC) is created using the ocs-storagecluster-cephfs storage class. Multiple pods use the newly created RWX PVC at the same time. The application used is called File Uploader. Demonstration on how an application is spread across topology zones so that it is still available in the event of a site outage: Note This demonstration is possible since this application shares the same RWX volume for storing files. It works for persistent data access as well because Red Hat OpenShift Data Foundation is configured as a stretched cluster with zone awareness and high availability. Create a new project. Deploy the example PHP application called file-uploader. Example Output: View the build log and wait until the application is deployed. Example Output: The command prompt returns out of the tail mode after you see Push successful . Note The new-app command deploys the application directly from the git repository and does not use the OpenShift template, hence the OpenShift route resource is not created by default. You need to create the route manually. 5.7.1. Scaling the application after installation Procedure Scale the application to four replicas and expose its services to make the application zone aware and available. You should have four file-uploader pods in a few minutes. Repeat the above command until there are 4 file-uploader pods in the Running status. Create a PVC and attach it into an application. This command: Creates a PVC. Updates the application deployment to include a volume definition. Updates the application deployment to attach a volume mount into the specified mount-path. Creates a new deployment with the four application pods. Check the result of adding the volume. Example Output: Notice the ACCESS MODE is set to RWX. All the four file-uploader pods are using the same RWX volume. Without this access mode, OpenShift does not attempt to attach multiple pods to the same Persistent Volume (PV) reliably. If you attempt to scale up the deployments that are using ReadWriteOnce (RWO) PV, the pods may get colocated on the same node. 5.7.2. Modify Deployment to be Zone Aware Currently, the file-uploader Deployment is not zone aware and can schedule all the pods in the same zone. In this case, if there is a site outage then the application is unavailable. For more information, see Controlling pod placement by using pod topology spread constraints . Add the pod placement rule in the application deployment configuration to make the application zone aware. Run the following command, and review the output: Example Output: Edit the deployment to use the topology zone labels. Add add the following new lines between the Start and End (shown in the output in the step): Example output: Scale down the deployment to zero pods and then back to four pods. 
This is needed because the deployment changed in terms of pod placement. Scaling down to zero pods Example output: Scaling up to four pods Example output: Verify that the four pods are spread across the four nodes in datacenter1 and datacenter2 zones. Example output: Search for the zone labels used. Example output: Use the file-uploader web application using your browser to upload new files. Find the route that is created. Example Output: Point your browser to the web application using the route in the step. The web application lists all the uploaded files and offers the ability to upload new ones as well as you download the existing data. Right now, there is nothing. Select an arbitrary file from your local machine and upload it to the application. Click Choose file to select an arbitrary file. Click Upload . Figure 5.1. A simple PHP-based file upload tool Click List uploaded files to see the list of all currently uploaded files. Note The OpenShift Container Platform image registry, ingress routing, and monitoring services are not zone aware. 5.8. Recovering OpenShift Data Foundation stretch cluster Given that the stretch cluster disaster recovery solution is to provide resiliency in the face of a complete or partial site outage, it is important to understand the different methods of recovery for applications and their storage. How the application is architected determines how soon it becomes available again on the active zone. There are different methods of recovery for applications and their storage depending on the site outage. The recovery time depends on the application architecture. The different methods of recovery are as follows: Recovering zone-aware HA applications with RWX storage . Recovering HA applications with RWX storage . Recovering applications with RWO storage . Recovering StatefulSet pods . 5.8.1. Understanding zone failure For the purpose of this section, zone failure is considered as a failure where all OpenShift Container Platform, master and worker nodes in a zone are no longer communicating with the resources in the second data zone (for example, powered down nodes). If communication between the data zones is still partially working (intermittently up or down), the cluster, storage, and network admins should disconnect the communication path between the data zones for recovery to succeed. Important When you install the sample application, power off the OpenShift Container Platform nodes (at least the nodes with OpenShift Data Foundation devices) to test the failure of a data zone in order to validate that your file-uploader application is available, and you can upload new files. 5.8.2. Recovering zone-aware HA applications with RWX storage Applications that are deployed with topologyKey: topology.kubernetes.io/zone have one or more replicas scheduled in each data zone, and are using shared storage, that is, ReadWriteMany (RWX) CephFS volume, terminate themselves in the failed zone after few minutes and new pods are rolled in and stuck in pending state until the zones are recovered. An example of this type of application is detailed in the Install Zone Aware Sample Application section. Important During zone recovery if application pods go into CrashLoopBackOff (CLBO) state with permission denied error while mounting the CephFS volume, then restart the nodes where the pods are scheduled. Wait for some time and then check if the pods are running again. 5.8.3. 
Recovering HA applications with RWX storage Applications that are using topologyKey: kubernetes.io/hostname or no topology configuration have no protection against all of the application replicas being in the same zone. Note This can happen even with podAntiAffinity and topologyKey: kubernetes.io/hostname in the Pod spec because this anti-affinity rule is host-based and not zone-based. If this happens and all replicas are located in the zone that fails, the application using ReadWriteMany (RWX) storage takes 6-8 minutes to recover on the active zone. This pause is for the OpenShift Container Platform nodes in the failed zone to become NotReady (60 seconds) and then for the default pod eviction timeout to expire (300 seconds). 5.8.4. Recovering applications with RWO storage Applications that use ReadWriteOnce (RWO) storage have a known behavior described in this Kubernetes issue . Because of this issue, if there is a data zone failure, any application pods in that zone mounting RWO volumes (for example, cephrbd based volumes) are stuck with Terminating status after 6-8 minutes and are not re-created on the active zone without manual intervention. Check the OpenShift Container Platform nodes with a status of NotReady . There may be an issue that prevents the nodes from communicating with the OpenShift control plane. However, the nodes may still be performing I/O operations against Persistent Volumes (PVs). If two pods are concurrently writing to the same RWO volume, there is a risk of data corruption. Ensure that processes on the NotReady node are either terminated or blocked until they are terminated. Example solutions: Use an out-of-band management system to power off a node, with confirmation, to ensure process termination. Withdraw a network route that is used by nodes at a failed site to communicate with storage. Note Before restoring service to the failed zone or nodes, confirm that all the pods with PVs have terminated successfully. To get the Terminating pods to recreate on the active zone, you can either force delete the pod or delete the finalizer on the associated PV. Once one of these two actions is completed, the application pod should recreate on the active zone and successfully mount its RWO storage. Force deleting the pod Force deletions do not wait for confirmation from the kubelet that the pod has been terminated. <PODNAME> Is the name of the pod <NAMESPACE> Is the project namespace Warning OpenShift Data Foundation does not support taints relating to non-graceful node shutdown for automated pod eviction and volume detachment operations. For information, see Non-graceful node shutdown handling . It is mandatory to ensure that the node is shut down, or the network route to the node is withdrawn, prior to force deleting pods. 5.8.5. Recovering StatefulSet pods Pods that are part of a StatefulSet have a similar issue as pods mounting ReadWriteOnce (RWO) volumes. More information is referenced in the Kubernetes resource StatefulSet considerations . To get the pods that are part of a StatefulSet to re-create on the active zone after 6-8 minutes, you need to force delete the pod with the same requirements (that is, OpenShift Container Platform node powered off or communication disconnected) as pods with RWO volumes.
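The two recovery options above can be illustrated with a short command sketch. This sketch is not part of the original procedure and is hedged: <PODNAME>, <NAMESPACE>, <PVC_NAME>, and <PV_NAME> are placeholders, and you should confirm which resource actually carries the blocking finalizer in your cluster before clearing it.

# Find application pods stuck in Terminating state in the affected namespace.
oc get pods -n <NAMESPACE> -o wide | grep Terminating

# Option 1: force delete the stuck pod (does not wait for kubelet confirmation).
oc delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>

# Option 2: clear the finalizers on the PV bound to the pod's claim so the volume can be released.
oc get pv | grep <PVC_NAME>
oc patch pv <PV_NAME> --type=merge -p '{"metadata":{"finalizers":null}}'

Once the pod is removed by either method, the replacement pod should be scheduled in the active zone and mount its RWO volume.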
[ "topology.kubernetes.io/zone=arbiter for Master0 topology.kubernetes.io/zone=datacenter1 for Master1, Worker1, Worker2 topology.kubernetes.io/zone=datacenter2 for Master2, Worker3, Worker4", "oc label node <NODENAME> topology.kubernetes.io/zone= <LABEL>", "oc get nodes -l topology.kubernetes.io/zone= <LABEL> -o name", "oc get nodes -L topology.kubernetes.io/zone", "oc annotate namespace openshift-storage openshift.io/node-selector=", "spec: arbiter: enable: true [..] nodeTopologies: arbiterLocation: arbiter #arbiter zone storageDeviceSets: - config: {} count: 1 [..] replica: 4 status: conditions: [..] failureDomain: zone", "oc new-project my-shared-storage", "oc new-app openshift/php:latest~https://github.com/mashetty330/openshift-php-upload-demo --name=file-uploader", "Found image 4f2dcc0 (9 days old) in image stream \"openshift/php\" under tag \"7.2-ubi8\" for \"openshift/php:7.2- ubi8\" Apache 2.4 with PHP 7.2 ----------------------- PHP 7.2 available as container is a base platform for building and running various PHP 7.2 applications and frameworks. PHP is an HTML-embedded scripting language. PHP attempts to make it easy for developers to write dynamically generated web pages. PHP also offers built-in database integration for several commercial and non-commercial database management systems, so writing a database-enabled webpage with PHP is fairly simple. The most common use of PHP coding is probably as a replacement for CGI scripts. Tags: builder, php, php72, php-72 * A source build using source code from https://github.com/christianh814/openshift-php-upload-demo will be cr eated * The resulting image will be pushed to image stream tag \"file-uploader:latest\" * Use 'oc start-build' to trigger a new build --> Creating resources imagestream.image.openshift.io \"file-uploader\" created buildconfig.build.openshift.io \"file-uploader\" created deployment.apps \"file-uploader\" created service \"file-uploader\" created --> Success Build scheduled, use 'oc logs -f buildconfig/file-uploader' to track its progress. Application is not exposed. You can expose services to the outside world by executing one or more of the commands below: 'oc expose service/file-uploader' Run 'oc status' to view your app.", "oc logs -f bc/file-uploader -n my-shared-storage", "Cloning \"https://github.com/christianh814/openshift-php-upload-demo\" [...] 
Generating dockerfile with builder image image-registry.openshift-image-regis try.svc:5000/openshift/php@sha256:d97466f33999951739a76bce922ab17088885db610c 0e05b593844b41d5494ea STEP 1: FROM image-registry.openshift-image-registry.svc:5000/openshift/php@s ha256:d97466f33999951739a76bce922ab17088885db610c0e05b593844b41d5494ea STEP 2: LABEL \"io.openshift.build.commit.author\"=\"Christian Hernandez <christ [email protected]>\" \"io.openshift.build.commit.date\"=\"Sun Oct 1 1 7:15:09 2017 -0700\" \"io.openshift.build.commit.id\"=\"288eda3dff43b02f7f7 b6b6b6f93396ffdf34cb2\" \"io.openshift.build.commit.ref\"=\"master\" \" io.openshift.build.commit.message\"=\"trying to modularize\" \"io.openshift .build.source-location\"=\"https://github.com/christianh814/openshift-php-uploa d-demo\" \"io.openshift.build.image\"=\"image-registry.openshift-image-regi stry.svc:5000/openshift/php@sha256:d97466f33999951739a76bce922ab17088885db610 c0e05b593844b41d5494ea\" STEP 3: ENV OPENSHIFT_BUILD_NAME=\"file-uploader-1\" OPENSHIFT_BUILD_NAMESP ACE=\"my-shared-storage\" OPENSHIFT_BUILD_SOURCE=\"https://github.com/christ ianh814/openshift-php-upload-demo\" OPENSHIFT_BUILD_COMMIT=\"288eda3dff43b0 2f7f7b6b6b6f93396ffdf34cb2\" STEP 4: USER root STEP 5: COPY upload/src /tmp/src STEP 6: RUN chown -R 1001:0 /tmp/src STEP 7: USER 1001 STEP 8: RUN /usr/libexec/s2i/assemble ---> Installing application source => sourcing 20-copy-config.sh ---> 17:24:39 Processing additional arbitrary httpd configuration provide d by s2i => sourcing 00-documentroot.conf => sourcing 50-mpm-tuning.conf => sourcing 40-ssl-certs.sh STEP 9: CMD /usr/libexec/s2i/run STEP 10: COMMIT temp.builder.openshift.io/my-shared-storage/file-uploader-1:3 b83e447 Getting image source signatures [...]", "oc expose svc/file-uploader -n my-shared-storage", "oc scale --replicas=4 deploy/file-uploader -n my-shared-storage", "oc get pods -o wide -n my-shared-storage", "oc set volume deploy/file-uploader --add --name=my-shared-storage -t pvc --claim-mode=ReadWriteMany --claim-size=10Gi --claim-name=my-shared-storage --claim-class=ocs-storagecluster-cephfs --mount-path=/opt/app-root/src/uploaded -n my-shared-storage", "oc get pvc -n my-shared-storage", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE my-shared-storage Bound pvc-5402cc8a-e874-4d7e-af76-1eb05bd2e7c7 10Gi RWX ocs-storagecluster-cephfs 52s", "oc get deployment file-uploader -o yaml -n my-shared-storage | less", "[...] spec: progressDeadlineSeconds: 600 replicas: 4 revisionHistoryLimit: 10 selector: matchLabels: deployment: file-uploader strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: annotations: openshift.io/generated-by: OpenShiftNewApp creationTimestamp: null labels: deployment: file-uploader spec: # <-- Start inserted lines after here containers: # <-- End inserted lines before here - image: image-registry.openshift-image-registry.svc:5000/my-shared-storage/file-uploader@sha256:a458ea62f990e431ad7d5f84c89e2fa27bdebdd5e29c5418c70c56eb81f0a26b imagePullPolicy: IfNotPresent name: file-uploader [...]", "oc edit deployment file-uploader -n my-shared-storage", "[...] 
spec: topologySpreadConstraints: - labelSelector: matchLabels: deployment: file-uploader maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: deployment: file-uploader maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: ScheduleAnyway nodeSelector: node-role.kubernetes.io/worker: \"\" containers: [...]", "deployment.apps/file-uploader edited", "oc scale deployment file-uploader --replicas=0 -n my-shared-storage", "deployment.apps/file-uploader scaled", "oc scale deployment file-uploader --replicas=4 -n my-shared-storage", "deployment.apps/file-uploader scaled", "oc get pods -o wide -n my-shared-storage | egrep '^file-uploader'| grep -v build | awk '{print USD7}' | sort | uniq -c", "1 perf1-mz8bt-worker-d2hdm 1 perf1-mz8bt-worker-k68rv 1 perf1-mz8bt-worker-ntkp8 1 perf1-mz8bt-worker-qpwsr", "oc get nodes -L topology.kubernetes.io/zone | grep datacenter | grep -v master", "perf1-mz8bt-worker-d2hdm Ready worker 35d v1.20.0+5fbfd19 datacenter1 perf1-mz8bt-worker-k68rv Ready worker 35d v1.20.0+5fbfd19 datacenter1 perf1-mz8bt-worker-ntkp8 Ready worker 35d v1.20.0+5fbfd19 datacenter2 perf1-mz8bt-worker-qpwsr Ready worker 35d v1.20.0+5fbfd19 datacenter2", "oc get route file-uploader -n my-shared-storage -o jsonpath --template=\"http://{.spec.host}{'\\n'}\"", "http://file-uploader-my-shared-storage.apps.cluster-ocs4-abdf.ocs4-abdf.sandbox744.opentlc.com", "oc delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/introduction-to-stretch-cluster-disaster-recovery_stretch-cluster
Chapter 12. Configuring the node port service range
Chapter 12. Configuring the node port service range As a cluster administrator, you can expand the available node port range. If your cluster uses a large number of node ports, you might need to increase the number of available ports. The default port range is 30000-32767 . You can never reduce the port range, even if you first expand it beyond the default range. 12.1. Prerequisites Your cluster infrastructure must allow access to the ports that you specify within the expanded range. For example, if you expand the node port range to 30000-32900 , the inclusive port range of 32768-32900 must be allowed by your firewall or packet filtering configuration. 12.2. Expanding the node port range You can expand the node port range for the cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster as a user with cluster-admin privileges. Procedure To expand the node port range, enter the following command. Replace <port> with the largest port number in the new range. USD oc patch network.config.openshift.io cluster --type=merge -p \ '{ "spec": { "serviceNodePortRange": "30000-<port>" } }' Tip You can alternatively apply the following YAML to update the node port range: apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: serviceNodePortRange: "30000-<port>" Example output network.config.openshift.io/cluster patched To confirm that the configuration is active, enter the following command. It can take several minutes for the update to apply. USD oc get configmaps -n openshift-kube-apiserver config \ -o jsonpath="{.data['config\.yaml']}" | \ grep -Eo '"service-node-port-range":["[[:digit:]]+-[[:digit:]]+"]' Example output "service-node-port-range":["30000-33000"] 12.3. Additional resources Configuring ingress cluster traffic using a NodePort Network [config.openshift.io/v1 ] Service [core/v1 ]
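To confirm that the expanded range is usable, you can request a node port above the old 32767 ceiling. The following sketch is illustrative only and is not part of the original procedure: it assumes the range was expanded to 30000-32900 and that a workload labelled app: hello already exists in the default namespace; the service name and selector are hypothetical.

cat <<EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-nodeport
  namespace: default
spec:
  type: NodePort
  selector:
    app: hello
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 32900
EOF

oc get service hello-nodeport -n default -o jsonpath='{.spec.ports[0].nodePort}'

If the range has not been expanded, the apply is rejected with a validation error stating that the requested node port is outside the allowed range.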
[ "oc patch network.config.openshift.io cluster --type=merge -p '{ \"spec\": { \"serviceNodePortRange\": \"30000-<port>\" } }'", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: serviceNodePortRange: \"30000-<port>\"", "network.config.openshift.io/cluster patched", "oc get configmaps -n openshift-kube-apiserver config -o jsonpath=\"{.data['config\\.yaml']}\" | grep -Eo '\"service-node-port-range\":[\"[[:digit:]]+-[[:digit:]]+\"]'", "\"service-node-port-range\":[\"30000-33000\"]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/networking/configuring-node-port-service-range
Deploying a high availability automation hub
Deploying a high availability automation hub Red Hat Ansible Automation Platform 2.3 Overview of the requirements and procedures for a high availability deployment of automation hub. Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/deploying_a_high_availability_automation_hub/index
Chapter 12. Using a service account as an OAuth client
Chapter 12. Using a service account as an OAuth client 12.1. Service accounts as OAuth clients You can use a service account as a constrained form of OAuth client. Service accounts can request only a subset of scopes that allow access to some basic user information and role-based power inside of the service account's own namespace: user:info user:check-access role:<any_role>:<service_account_namespace> role:<any_role>:<service_account_namespace>:! When using a service account as an OAuth client: client_id is system:serviceaccount:<service_account_namespace>:<service_account_name> . client_secret can be any of the API tokens for that service account. For example: USD oc sa get-token <service_account_name> To get WWW-Authenticate challenges, set an serviceaccounts.openshift.io/oauth-want-challenges annotation on the service account to true . redirect_uri must match an annotation on the service account. 12.1.1. Redirect URIs for service accounts as OAuth clients Annotation keys must have the prefix serviceaccounts.openshift.io/oauth-redirecturi. or serviceaccounts.openshift.io/oauth-redirectreference. such as: In its simplest form, the annotation can be used to directly specify valid redirect URIs. For example: The first and second postfixes in the above example are used to separate the two valid redirect URIs. In more complex configurations, static redirect URIs may not be enough. For example, perhaps you want all Ingresses for a route to be considered valid. This is where dynamic redirect URIs via the serviceaccounts.openshift.io/oauth-redirectreference. prefix come into play. For example: Since the value for this annotation contains serialized JSON data, it is easier to see in an expanded format: { "kind": "OAuthRedirectReference", "apiVersion": "v1", "reference": { "kind": "Route", "name": "jenkins" } } Now you can see that an OAuthRedirectReference allows us to reference the route named jenkins . Thus, all Ingresses for that route will now be considered valid. The full specification for an OAuthRedirectReference is: { "kind": "OAuthRedirectReference", "apiVersion": "v1", "reference": { "kind": ..., 1 "name": ..., 2 "group": ... 3 } } 1 kind refers to the type of the object being referenced. Currently, only route is supported. 2 name refers to the name of the object. The object must be in the same namespace as the service account. 3 group refers to the group of the object. Leave this blank, as the group for a route is the empty string. Both annotation prefixes can be combined to override the data provided by the reference object. For example: The first postfix is used to tie the annotations together. Assuming that the jenkins route had an Ingress of https://example.com , now https://example.com/custompath is considered valid, but https://example.com is not. The format for partially supplying override data is as follows: Type Syntax Scheme "https://" Hostname "//website.com" Port "//:8000" Path "examplepath" Note Specifying a hostname override will replace the hostname data from the referenced object, which is not likely to be desired behavior. Any combination of the above syntax can be combined using the following format: <scheme:>//<hostname><:port>/<path> The same object can be referenced more than once for more flexibility: Assuming that the route named jenkins has an Ingress of https://example.com , then both https://example.com:8000 and https://example.com/custompath are considered valid. Static and dynamic annotations can be used at the same time to achieve the desired behavior:
[ "oc sa get-token <service_account_name>", "serviceaccounts.openshift.io/oauth-redirecturi.<name>", "\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"https://example.com\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\"", "\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"", "{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": \"Route\", \"name\": \"jenkins\" } }", "{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": ..., 1 \"name\": ..., 2 \"group\": ... 3 } }", "\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"", "\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"//:8000\" \"serviceaccounts.openshift.io/oauth-redirectreference.second\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"", "\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/authentication_and_authorization/using-service-accounts-as-oauth-client
B.12. Guest is Unable to Start with Error: warning: could not open /dev/net/tun
B.12. Guest is Unable to Start with Error: warning: could not open /dev/net/tun Symptom The guest virtual machine does not start after configuring a type='ethernet' (also known as 'generic ethernet') interface in the host system. An error appears either in libvirtd.log , /var/log/libvirt/qemu/ name_of_guest .log , or in both, similar to the below message: Investigation Use of the generic ethernet interface type ( <interface type='ethernet'> ) is discouraged, because using it requires lowering the level of host protection against potential security flaws in QEMU and its guests. However, it is sometimes necessary to use this type of interface to take advantage of some other facility that is not yet supported directly in libvirt . For example, openvswitch was not supported in libvirt until libvirt-0.9.11 , so in older versions of libvirt , <interface type='ethernet'> was the only way to connect a guest to an openvswitch bridge. However, if you configure a <interface type='ethernet'> interface without making any other changes to the host system, the guest virtual machine will not start successfully. The reason for this failure is that for this type of interface, a script called by QEMU needs to manipulate the tap device. However, with type='ethernet' configured, in an attempt to lock down QEMU , libvirt and SELinux have put in place several checks to prevent this. (Normally, libvirt performs all of the tap device creation and manipulation, and passes an open file descriptor for the tap device to QEMU .) Solution Reconfigure the host system to be compatible with the generic ethernet interface. Procedure B.4. Reconfiguring the host system to use the generic ethernet interface Set SELinux to permissive by configuring SELINUX=permissive in /etc/selinux/config : From a root shell, run the command setenforce permissive . In /etc/libvirt/qemu.conf add or edit the following lines: Restart libvirtd . Important Since each of these steps significantly decreases the host's security protections against QEMU guest domains, this configuration should only be used if there is no alternative to using <interface type='ethernet'> . Note For more information on SELinux, refer to the Red Hat Enterprise Linux 6 Security-Enhanced Linux User Guide .
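A condensed, hedged sketch of the host-side changes described above follows. The service command reflects the Red Hat Enterprise Linux 6 init system used in this guide, and the interface XML is illustrative: the script path /etc/my-qemu-ifup matches the error message in the Symptom section, while the virtio model is an assumption.

# Switch SELinux to permissive for the running system (persist it by setting
# SELINUX=permissive in /etc/selinux/config as described above).
setenforce permissive

# After editing /etc/libvirt/qemu.conf as described above, restart libvirtd:
service libvirtd restart

A guest that uses the generic ethernet interface might then define it as:

<interface type='ethernet'>
  <script path='/etc/my-qemu-ifup'/>
  <model type='virtio'/>
</interface>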
[ "warning: could not open /dev/net/tun: no virtual network emulation qemu-kvm: -netdev tap,script=/etc/my-qemu-ifup,id=hostnet0: Device 'tap' could not be initialized", "This file controls the state of SELinux on the system. SELINUX= can take one of these three values: enforcing - SELinux security policy is enforced. permissive - SELinux prints warnings instead of enforcing. disabled - No SELinux policy is loaded. SELINUX=permissive SELINUXTYPE= can take one of these two values: targeted - Targeted processes are protected, mls - Multi Level Security protection. SELINUXTYPE=targeted", "clear_emulator_capabilities = 0", "user = \"root\"", "group = \"root\"", "cgroup_device_acl = [ \"/dev/null\", \"/dev/full\", \"/dev/zero\", \"/dev/random\", \"/dev/urandom\", \"/dev/ptmx\", \"/dev/kvm\", \"/dev/kqemu\", \"/dev/rtc\", \"/dev/hpet\", \"/dev/net/tun\"," ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/App_Generic_Ethernet
19.6. Virtualization
19.6. Virtualization Virtualization Getting Started Guide The Virtualization Getting Started Guide is an introduction to virtualization on Red Hat Enterprise Linux 7. Virtualization Deployment and Administration Guide The Virtualization Deployment and Administration Guide provides information on installing, configuring, and managing virtualization on Red Hat Enterprise Linux 7. Virtualization Security Guide The Virtualization Security Guide provides an overview of virtualization security technologies provided by Red Hat, and provides recommendations for securing virtualization hosts, guests, and shared infrastructure and resources in virtualized environments. Virtualization Tuning and Optimization Guide The Virtualization Tuning and Optimization Guide covers KVM and virtualization performance. Within this guide you can find tips and suggestions for making full use of KVM performance features and options for your host systems and virtualized guests.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/sect-red_hat_enterprise_linux-7.0_release_notes-documentation-virtualization
Use Red Hat Quay
Use Red Hat Quay Red Hat Quay 3.10 Use Red Hat Quay Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/use_red_hat_quay/index
Chapter 6. Bug fixes
Chapter 6. Bug fixes This section describes the notable bug fixes introduced in Red Hat OpenShift Data Foundation 4.13. 6.1. Multicloud Object Gateway Reconcile of disableLoadBalancerService field is ignored in OpenShift Data Foundation operator Previously, any change to the disableLoadBalancerService field for Multicloud Object Gateway (MCG) was overridden due to the OpenShift Data Foundation operator reconciliation. With this fix, reconcile of disableLoadBalancerService field is ignored in OpenShift Data Foundation operator and, as a result, any value set for this field in NooBaa CR is retained and not overridden. ( BZ#2186171 ) Performance improvement for non optimized database related flows on deletions Previously, non optimized database related flows on deletions caused Multicloud Object Gateway to spike in CPU usage and perform slowly on mass delete scenarios. For example, reclaiming a deleted object bucket claim (OBC). With this fix, indexes for the bucket reclaimer process are optimized, a new index is added to the database to speed up the database cleaner flows, and bucket reclaimer changes are introduced to work on batches of objects. ( BZ#2181535 ) OpenShift generated certificates used for MCG internal flows to avoid errors Previously, there were errors in some of the Multicloud Object Gateway (MCG) internal flows due to self-signed certificate that resulted in failed client operations. This was due to the use of self-signed certification in internal communication between MCG components. With this fix, OpenShift Container Platform generated certificate is used for internal communications between MCG components, thereby avoiding the errors in the internal flows. ( BZ#2168867 ) Metric for number of bytes used by Multicloud Object Gateway bucket Previously, there was no metric to show the number of bytes used by Multicloud Object Gateway bucket. With this fix, a new metric NooBaa_bucket_used_bytes is added, which shows the number of bytes used by the Multicloud Object Gateway bucket. ( BZ#2168010 ) Public access disabled for Microsoft Azure blob storage Previously, the default container created in Microsoft Azure was with public access enabled and caused security concerns. With this fix, the default container created will not have the public access enabled by default which means AllowBlobPublicAccess is set to false. ( BZ#2167821 ) Multicloud Object Gateway bucket buckets are deleted even when the replication rules are set Previously, if replication rules were set for a Multicloud Object Gateway bucket, the bucket was not considered to be eligible for deletion, thereby the buckets would stay without getting deleted. With this fix, the replication rules on a specific bucket are updated when the bucket is being deleted and as a result the bucket is deleted. ( BZ#2168788 ) Database init container ownership replaced with Kubernetes FSGroup Previously, Multicloud Object Gateway (MCG) failed to come up and serve when init container for MCG database (DB) pod failed to change ownership. With this fix, DB init container ownership is replaced with Kubernetes FSGroup. ( BZ#2115616 ) 6.2. CephFS cephfs-top is able to display more than 100 clients Previously, when you tried to load more than 100 clients to cephfs-top , in a few instances, it showed a blank screen and went into hung state as cephfs-top could not accommodate the clients in the display due to less or no space. 
Because the clients were displayed based on x_coord_map calculations, cephfs-top could not accommodate more clients in the display. This issue is fixed as a part of another BZ in Ceph when ncurses scrolling and a new way of displaying clients were introduced in cephfs-top . The x_coord_map calculation was also dropped. So, cephfs-top now displays 200 or more clients. ( BZ#2067168 ) 6.3. Ceph container storage interface (CSI) RBD Filesystem PVC expands even when the StagingTargetPath is missing Previously, the RADOS block device (RBD) Filesystem persistent volume claim (PVC) expansion was not successful when the StagingTartgetPath was missing in the NodeExpandVolume remote procedure call (RPC) and Ceph CSI was not able to get the device details to expand. With this fix, Ceph CSI goes through all the mount references to identify the StageingTargetPath where the RBD image is mounted. As a result, RBD Filesystem PVC expands successfully even when the StagingTargetPath is missing. ( BZ#2164617 ) Default memory and CPU resource limit increased Previously, odf-csi-addons-operator had low memory resource limit and as a result the odf-csi-addons-operator pod was OOMKilled (out of memory). With this fix, the default memory and the CPU resource limit has been increased and odf-csi-addons-operator OOMKills are not observed. ( BZ#2172365 ) 6.4. OpenShift Data Foundation operator Two separate routes for secure and insecure ports Previously, http request failures occured as route ended up using the secure port because the port in RGW service for its openshiftroute was not defined. With this fix, insecure port for the existing OpenShift for RGW are defined properly and a new route with secure port is created, thereby avoiding the http request failures. Now, two routes are available for RGW, the existing route uses the insecure port and the new separate route uses the secure port. ( BZ#2104148 ) Reflects correct state of the Ceph cluster in external mode Previously, when OpenShift Data Foundation is deployed in external mode with a Ceph cluster, the negative conditions such as storagecluster ExternalClusterStateConnected were not cleared from the storage cluster even when the associated Ceph cluster was in a good state. With this fix, the negative conditions are removed from the storage cluster when the Ceph cluster is in a positive state, thereby reflecting the correct state of the Ceph cluster. ( BZ#2172189 ) nginx configurations are added through the ConfigMap Previously, when IPv6 was disabled at node's kernel level, IPv6 listen directive of nginx configuration for the odf-console pod gave an error. As a result, OpenShift Data Foundation was stuck with odf-console not available and odf-console is in CrashLoopBackOff errors. With this fix, all the nginx configurations are added through the ConfigMap created by the odf-operator . ( BZ#2173161 ) 6.5. OpenShift Data Foundation console User interface correctly passes the PVC name to the CR Previously, while creating NamespaceStore in the user interface (UI) using file system, the UI would pass the entire persistent volume claim (PVC) object to the CR instead of just the PVC name that is required to be passed to the CR's spec.nsfs.pvcName field. As a result, an error was seen on the UI. With this fix, only the PVC name is passed to the CR instead of the entire PVC object. 
( BZ#2158922 ) Refresh popup is shown when OpenShfit Data Foundation is upgraded Previously, when OpenShfit Data Foundation was upgraded, OpenShift Container Platform did not show the Refresh button due to lack of awareness about the changes. OpenShift used to not perform checks to know the changes in the version field of the plugin-manifest.json file present in the odf-console pod. With this fix, OpenShift Container Platform and OpenShift Data Foundation are configured to poll the manifest for OpenShift Data Foundation user interface. Based on the change in version a Refresh popup is shown. ( BZ#2157876 ) 6.6. Rook StorageClasses are created even if the RGW endpoint is not reachable Previously, in OpenShift Data Foundation external mode deployment, if the RADOS gateway (RGW) endpoints were not reachable and Rook fails to configure the CephObjectStore, creation of RADOS block device (RBD) and CephFS also would fail as these were tightly coupled in the python script, create-external-cluster-resources.py . With this fix, the issues in the python script was fixed to make separate calls instead of failing or showing errors and the StorageClasses are created. ( BZ#2139451 )
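To inspect the new NooBaa_bucket_used_bytes metric mentioned above, one option is to query it through the cluster monitoring stack. This is a hedged sketch and is not part of the release notes: it assumes you are logged in as a user that is allowed to query cluster metrics and that the thanos-querier route exists in the openshift-monitoring namespace.

TOKEN=$(oc whoami -t)
HOST=$(oc get route thanos-querier -n openshift-monitoring -o jsonpath='{.spec.host}')
curl -skG -H "Authorization: Bearer ${TOKEN}" \
  "https://${HOST}/api/v1/query" --data-urlencode 'query=NooBaa_bucket_used_bytes'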
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/4.13_release_notes/bug_fixes
Chapter 1. Remediations overview
Chapter 1. Remediations overview After identifying the highest remediation priorities in your Red Hat Enterprise Linux (RHEL) infrastructure, you can create remediation playbooks that fix those issues. Subscription requirements Red Hat Insights for Red Hat Enterprise Linux is included with every RHEL subscription. No additional subscriptions are required to use Insights remediation features. User requirements All Insights users will automatically have access to read, create, and manage remediation playbooks. 1.1. User Access settings in the Red Hat Hybrid Cloud Console All users on your account have access to most of the data in Insights for Red Hat Enterprise Linux. 1.1.1. Predefined User Access groups and roles To make groups and roles easier to manage, Red Hat provides two predefined groups and a set of predefined roles. 1.1.1.1. Predefined groups The Default access group contains all users in your organization. Many predefined roles are assigned to this group. It is automatically updated by Red Hat. Note If the Organization Administrator makes changes to the Default access group its name changes to Custom default access group and it is no longer updated by Red Hat. The Default admin access group contains only users who have Organization Administrator permissions. This group is automatically maintained and users and roles in this group cannot be changed. 1.1.2. User Access roles for remediations users The Remediations user role enables standard or enhanced access to remediations features in Insights for Red Hat Enterprise Linux. The Remediations user role is included in the Default access group and permits access to view existing playbooks and to create new playbooks. Remediations users cannot execute playbooks on systems.
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/red_hat_insights_remediations_guide_with_fedramp/remediations-overview_red-hat-insights-remediation-guide
Chapter 7. Tutorial: Using AWS WAF and AWS ALBs to protect ROSA workloads
Chapter 7. Tutorial: Using AWS WAF and AWS ALBs to protect ROSA workloads AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to your protected web application resources. You can use an AWS Application Load Balancer (ALB) to add a Web Application Firewall (WAF) to your Red Hat OpenShift Service on AWS (ROSA) workloads. Using an external solution protects ROSA resources from experiencing denial of service due to handling the WAF. Important It is recommended that you use the more flexible CloudFront method unless you absolutely must use an ALB based solution. 7.1. Prerequisites Multiple availability zone (AZ) ROSA (HCP or Classic) cluster. Note AWS ALBs require at least two public subnets across AZs, per the AWS documentation . For this reason, only multiple AZ ROSA clusters can be used with ALBs. You have access to the OpenShift CLI ( oc ). You have access to the AWS CLI ( aws ). 7.1.1. Environment setup Prepare the environment variables: USD export AWS_PAGER="" USD export CLUSTER=USD(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}") USD export REGION=USD(oc get infrastructure cluster -o=jsonpath="{.status.platformStatus.aws.region}") USD export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') USD export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) USD export SCRATCH="/tmp/USD{CLUSTER}/alb-waf" USD mkdir -p USD{SCRATCH} USD echo "Cluster: USD(echo USD{CLUSTER} | sed 's/-[a-z0-9]\{5\}USD//'), Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}" 7.1.2. AWS VPC and subnets Note This section only applies to clusters that were deployed into existing VPCs. If you did not deploy your cluster into an existing VPC, skip this section and proceed to the installation section below. Set the below variables to the proper values for your ROSA deployment: USD export VPC_ID=<vpc-id> 1 USD export PUBLIC_SUBNET_IDS=(<space-separated-list-of-ids>) 2 USD export PRIVATE_SUBNET_IDS=(<space-separated-list-of-ids>) 3 1 Replace with the VPC ID of the cluster, for example: export VPC_ID=vpc-04c429b7dbc4680ba . 2 Replace with a space-separated list of the private subnet IDs of the cluster, making sure to preserve the () . For example: export PUBLIC_SUBNET_IDS=(subnet-056fd6861ad332ba2 subnet-08ce3b4ec753fe74c subnet-071aa28228664972f) . 3 Replace with a space-separated list of the private subnet IDs of the cluster, making sure to preserve the () . For example: export PRIVATE_SUBNET_IDS=(subnet-0b933d72a8d72c36a subnet-0817eb72070f1d3c2 subnet-0806e64159b66665a) . Add a tag to your cluster's VPC with the cluster identifier: USD aws ec2 create-tags --resources USD{VPC_ID} \ --tags Key=kubernetes.io/cluster/USD{CLUSTER},Value=shared --region USD{REGION} Add a tag to your public subnets: USD aws ec2 create-tags \ --resources USD{PUBLIC_SUBNET_IDS} \ --tags Key=kubernetes.io/role/elb,Value='1' \ Key=kubernetes.io/cluster/USD{CLUSTER},Value=shared \ --region USD{REGION} Add a tag to your private subnets: USD aws ec2 create-tags \ --resources USD{PRIVATE_SUBNET_IDS} \ --tags Key=kubernetes.io/role/internal-elb,Value='1' \ Key=kubernetes.io/cluster/USD{CLUSTER},Value=shared \ --region USD{REGION} 7.2. Deploy the AWS Load Balancer Operator The AWS Load Balancer Operator is used to used to install, manage and configure an instance of aws-load-balancer-controller in a ROSA cluster. 
To deploy ALBs in ROSA, we need to first deploy the AWS Load Balancer Operator. Create a new project to deploy the AWS Load Balancer Operator into by running the following command: USD oc new-project aws-load-balancer-operator Create an AWS IAM policy for the AWS Load Balancer Controller if one does not already exist by running the following command: Note The policy is sourced from the upstream AWS Load Balancer Controller policy . This is required by the operator to function. USD POLICY_ARN=USD(aws iam list-policies --query \ "Policies[?PolicyName=='aws-load-balancer-operator-policy'].{ARN:Arn}" \ --output text) USD if [[ -z "USD{POLICY_ARN}" ]]; then wget -O "USD{SCRATCH}/load-balancer-operator-policy.json" \ https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json POLICY_ARN=USD(aws --region "USDREGION" --query Policy.Arn \ --output text iam create-policy \ --policy-name aws-load-balancer-operator-policy \ --policy-document "file://USD{SCRATCH}/load-balancer-operator-policy.json") fi Create an AWS IAM trust policy for AWS Load Balancer Operator: USD cat <<EOF > "USD{SCRATCH}/trust-policy.json" { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Condition": { "StringEquals" : { "USD{OIDC_ENDPOINT}:sub": ["system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-operator-controller-manager", "system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-cluster"] } }, "Principal": { "Federated": "arn:aws:iam::USDAWS_ACCOUNT_ID:oidc-provider/USD{OIDC_ENDPOINT}" }, "Action": "sts:AssumeRoleWithWebIdentity" } ] } EOF Create an AWS IAM role for the AWS Load Balancer Operator: USD ROLE_ARN=USD(aws iam create-role --role-name "USD{CLUSTER}-alb-operator" \ --assume-role-policy-document "file://USD{SCRATCH}/trust-policy.json" \ --query Role.Arn --output text) Attach the AWS Load Balancer Operator policy to the IAM role we created previously by running the following command: USD aws iam attach-role-policy --role-name "USD{CLUSTER}-alb-operator" \ --policy-arn USD{POLICY_ARN} Create a secret for the AWS Load Balancer Operator to assume our newly created AWS IAM role: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: aws-load-balancer-operator namespace: aws-load-balancer-operator stringData: credentials: | [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token EOF Install the AWS Load Balancer Operator: USD cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: aws-load-balancer-operator namespace: aws-load-balancer-operator spec: upgradeStrategy: Default --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: aws-load-balancer-operator namespace: aws-load-balancer-operator spec: channel: stable-v1.0 installPlanApproval: Automatic name: aws-load-balancer-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: aws-load-balancer-operator.v1.0.0 EOF Deploy an instance of the AWS Load Balancer Controller using the operator: Note If you get an error here wait a minute and try again, it means the Operator has not completed installing yet. 
USD cat << EOF | oc apply -f - apiVersion: networking.olm.openshift.io/v1 kind: AWSLoadBalancerController metadata: name: cluster spec: credentials: name: aws-load-balancer-operator enabledAddons: - AWSWAFv2 EOF Check the that the operator and controller pods are both running: USD oc -n aws-load-balancer-operator get pods You should see the following, if not wait a moment and retry: NAME READY STATUS RESTARTS AGE aws-load-balancer-controller-cluster-6ddf658785-pdp5d 1/1 Running 0 99s aws-load-balancer-operator-controller-manager-577d9ffcb9-w6zqn 2/2 Running 0 2m4s 7.3. Deploy a sample application Create a new project for our sample application: USD oc new-project hello-world Deploy a hello world application: USD oc new-app -n hello-world --image=docker.io/openshift/hello-openshift Convert the pre-created service resource to a NodePort service type: USD oc -n hello-world patch service hello-openshift -p '{"spec":{"type":"NodePort"}}' Deploy an AWS ALB using the AWS Load Balancer Operator: USD cat << EOF | oc apply -f - apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: hello-openshift-alb namespace: hello-world annotations: alb.ingress.kubernetes.io/scheme: internet-facing spec: ingressClassName: alb rules: - http: paths: - path: / pathType: Exact backend: service: name: hello-openshift port: number: 8080 EOF Curl the AWS ALB Ingress endpoint to verify the hello world application is accessible: Note AWS ALB provisioning takes a few minutes. If you receive an error that says curl: (6) Could not resolve host , please wait and try again. USD INGRESS=USD(oc -n hello-world get ingress hello-openshift-alb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') USD curl "http://USD{INGRESS}" Example output Hello OpenShift! 7.3.1. Configure the AWS WAF The AWS WAF service is a web application firewall that lets you monitor, protect, and control the HTTP and HTTPS requests that are forwarded to your protected web application resources, like ROSA. Create a AWS WAF rules file to apply to our web ACL: USD cat << EOF > USD{SCRATCH}/waf-rules.json [ { "Name": "AWS-AWSManagedRulesCommonRuleSet", "Priority": 0, "Statement": { "ManagedRuleGroupStatement": { "VendorName": "AWS", "Name": "AWSManagedRulesCommonRuleSet" } }, "OverrideAction": { "None": {} }, "VisibilityConfig": { "SampledRequestsEnabled": true, "CloudWatchMetricsEnabled": true, "MetricName": "AWS-AWSManagedRulesCommonRuleSet" } }, { "Name": "AWS-AWSManagedRulesSQLiRuleSet", "Priority": 1, "Statement": { "ManagedRuleGroupStatement": { "VendorName": "AWS", "Name": "AWSManagedRulesSQLiRuleSet" } }, "OverrideAction": { "None": {} }, "VisibilityConfig": { "SampledRequestsEnabled": true, "CloudWatchMetricsEnabled": true, "MetricName": "AWS-AWSManagedRulesSQLiRuleSet" } } ] EOF This will enable the Core (Common) and SQL AWS Managed Rule Sets. 
Create an AWS WAF Web ACL using the rules we specified above: USD WAF_ARN=USD(aws wafv2 create-web-acl \ --name USD{CLUSTER}-waf \ --region USD{REGION} \ --default-action Allow={} \ --scope REGIONAL \ --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=USD{CLUSTER}-waf-metrics \ --rules file://USD{SCRATCH}/waf-rules.json \ --query 'Summary.ARN' \ --output text) Annotate the Ingress resource with the AWS WAF Web ACL ARN: USD oc annotate -n hello-world ingress.networking.k8s.io/hello-openshift-alb \ alb.ingress.kubernetes.io/wafv2-acl-arn=USD{WAF_ARN} Wait for 10 seconds for the rules to propagate and test that the app still works: USD curl "http://USD{INGRESS}" Example output Hello OpenShift! Test that the WAF denies a bad request: USD curl -X POST "http://USD{INGRESS}" \ -F "user='<script><alert>Hello></alert></script>'" Example output <html> <head><title>403 Forbidden</title></head> <body> <center><h1>403 Forbidden</h1></center> </body> </html Note Activation of the AWS WAF integration can sometimes take several minutes. If you do not receive a 403 Forbidden error, please wait a few seconds and try again. The expected result is a 403 Forbidden error, which means the AWS WAF is protecting your application. 7.4. Additional resources Adding Extra Security with AWS WAF, CloudFront and ROSA | Amazon Web Services on YouTube
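As an optional final check that is not part of the original tutorial, you can confirm on the AWS side that the Web ACL is associated with the ALB provisioned for the Ingress. This sketch assumes the WAF_ARN and REGION variables from the earlier steps are still set in your shell.

aws wafv2 list-resources-for-web-acl \
  --web-acl-arn ${WAF_ARN} \
  --resource-type APPLICATION_LOAD_BALANCER \
  --region ${REGION}

The output should list the ARN of the ALB that the AWS Load Balancer Controller created for the hello-openshift-alb Ingress.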
[ "export AWS_PAGER=\"\" export CLUSTER=USD(oc get infrastructure cluster -o=jsonpath=\"{.status.infrastructureName}\") export REGION=USD(oc get infrastructure cluster -o=jsonpath=\"{.status.platformStatus.aws.region}\") export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export SCRATCH=\"/tmp/USD{CLUSTER}/alb-waf\" mkdir -p USD{SCRATCH} echo \"Cluster: USD(echo USD{CLUSTER} | sed 's/-[a-z0-9]\\{5\\}USD//'), Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}\"", "export VPC_ID=<vpc-id> 1 export PUBLIC_SUBNET_IDS=(<space-separated-list-of-ids>) 2 export PRIVATE_SUBNET_IDS=(<space-separated-list-of-ids>) 3", "aws ec2 create-tags --resources USD{VPC_ID} --tags Key=kubernetes.io/cluster/USD{CLUSTER},Value=shared --region USD{REGION}", "aws ec2 create-tags --resources USD{PUBLIC_SUBNET_IDS} --tags Key=kubernetes.io/role/elb,Value='1' Key=kubernetes.io/cluster/USD{CLUSTER},Value=shared --region USD{REGION}", "aws ec2 create-tags --resources USD{PRIVATE_SUBNET_IDS} --tags Key=kubernetes.io/role/internal-elb,Value='1' Key=kubernetes.io/cluster/USD{CLUSTER},Value=shared --region USD{REGION}", "oc new-project aws-load-balancer-operator", "POLICY_ARN=USD(aws iam list-policies --query \"Policies[?PolicyName=='aws-load-balancer-operator-policy'].{ARN:Arn}\" --output text)", "if [[ -z \"USD{POLICY_ARN}\" ]]; then wget -O \"USD{SCRATCH}/load-balancer-operator-policy.json\" https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json POLICY_ARN=USD(aws --region \"USDREGION\" --query Policy.Arn --output text iam create-policy --policy-name aws-load-balancer-operator-policy --policy-document \"file://USD{SCRATCH}/load-balancer-operator-policy.json\") fi", "cat <<EOF > \"USD{SCRATCH}/trust-policy.json\" { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Condition\": { \"StringEquals\" : { \"USD{OIDC_ENDPOINT}:sub\": [\"system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-operator-controller-manager\", \"system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-cluster\"] } }, \"Principal\": { \"Federated\": \"arn:aws:iam::USDAWS_ACCOUNT_ID:oidc-provider/USD{OIDC_ENDPOINT}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\" } ] } EOF", "ROLE_ARN=USD(aws iam create-role --role-name \"USD{CLUSTER}-alb-operator\" --assume-role-policy-document \"file://USD{SCRATCH}/trust-policy.json\" --query Role.Arn --output text)", "aws iam attach-role-policy --role-name \"USD{CLUSTER}-alb-operator\" --policy-arn USD{POLICY_ARN}", "cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: aws-load-balancer-operator namespace: aws-load-balancer-operator stringData: credentials: | [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token EOF", "cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: aws-load-balancer-operator namespace: aws-load-balancer-operator spec: upgradeStrategy: Default --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: aws-load-balancer-operator namespace: aws-load-balancer-operator spec: channel: stable-v1.0 installPlanApproval: Automatic name: aws-load-balancer-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: 
aws-load-balancer-operator.v1.0.0 EOF", "cat << EOF | oc apply -f - apiVersion: networking.olm.openshift.io/v1 kind: AWSLoadBalancerController metadata: name: cluster spec: credentials: name: aws-load-balancer-operator enabledAddons: - AWSWAFv2 EOF", "oc -n aws-load-balancer-operator get pods", "NAME READY STATUS RESTARTS AGE aws-load-balancer-controller-cluster-6ddf658785-pdp5d 1/1 Running 0 99s aws-load-balancer-operator-controller-manager-577d9ffcb9-w6zqn 2/2 Running 0 2m4s", "oc new-project hello-world", "oc new-app -n hello-world --image=docker.io/openshift/hello-openshift", "oc -n hello-world patch service hello-openshift -p '{\"spec\":{\"type\":\"NodePort\"}}'", "cat << EOF | oc apply -f - apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: hello-openshift-alb namespace: hello-world annotations: alb.ingress.kubernetes.io/scheme: internet-facing spec: ingressClassName: alb rules: - http: paths: - path: / pathType: Exact backend: service: name: hello-openshift port: number: 8080 EOF", "INGRESS=USD(oc -n hello-world get ingress hello-openshift-alb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') curl \"http://USD{INGRESS}\"", "Hello OpenShift!", "cat << EOF > USD{SCRATCH}/waf-rules.json [ { \"Name\": \"AWS-AWSManagedRulesCommonRuleSet\", \"Priority\": 0, \"Statement\": { \"ManagedRuleGroupStatement\": { \"VendorName\": \"AWS\", \"Name\": \"AWSManagedRulesCommonRuleSet\" } }, \"OverrideAction\": { \"None\": {} }, \"VisibilityConfig\": { \"SampledRequestsEnabled\": true, \"CloudWatchMetricsEnabled\": true, \"MetricName\": \"AWS-AWSManagedRulesCommonRuleSet\" } }, { \"Name\": \"AWS-AWSManagedRulesSQLiRuleSet\", \"Priority\": 1, \"Statement\": { \"ManagedRuleGroupStatement\": { \"VendorName\": \"AWS\", \"Name\": \"AWSManagedRulesSQLiRuleSet\" } }, \"OverrideAction\": { \"None\": {} }, \"VisibilityConfig\": { \"SampledRequestsEnabled\": true, \"CloudWatchMetricsEnabled\": true, \"MetricName\": \"AWS-AWSManagedRulesSQLiRuleSet\" } } ] EOF", "WAF_ARN=USD(aws wafv2 create-web-acl --name USD{CLUSTER}-waf --region USD{REGION} --default-action Allow={} --scope REGIONAL --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=USD{CLUSTER}-waf-metrics --rules file://USD{SCRATCH}/waf-rules.json --query 'Summary.ARN' --output text)", "oc annotate -n hello-world ingress.networking.k8s.io/hello-openshift-alb alb.ingress.kubernetes.io/wafv2-acl-arn=USD{WAF_ARN}", "curl \"http://USD{INGRESS}\"", "Hello OpenShift!", "curl -X POST \"http://USD{INGRESS}\" -F \"user='<script><alert>Hello></alert></script>'\"", "<html> <head><title>403 Forbidden</title></head> <body> <center><h1>403 Forbidden</h1></center> </body> </html" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/tutorials/cloud-experts-using-alb-and-waf
Upgrading Guide
Upgrading Guide Red Hat build of Keycloak 22.0 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/upgrading_guide/index
Chapter 4. Security for Cluster Traffic
Chapter 4. Security for Cluster Traffic 4.1. Node Authentication and Authorization (Remote Client-Server Mode) Security can be enabled at the node level via the SASL protocol, which enables node authentication against a security realm. This requires nodes to authenticate each other when joining or merging with a cluster. For detailed information about security realms, see Section 2.6.1, "About Security Realms" . The following example depicts the <sasl /> element, which leverages the SASL protocol. Both the DIGEST-MD5 and GSSAPI mechanisms are currently supported. Example 4.1. Configure SASL Authentication In the provided example, the nodes use the DIGEST-MD5 mechanism to authenticate against the ClusterRealm . In order to join, nodes must have the cluster role. The cluster-role attribute determines the role all nodes must belong to in the security realm in order to JOIN or MERGE with the cluster. If it is not specified, the cluster-role attribute defaults to the name of the clustered <cache-container> . Each node identifies itself using the client-name property. If none is specified, the hostname on which the server is running will be used. This name can also be overridden by specifying the jboss.node.name system property on the command line. For example: Note The JGroups AUTH protocol is not integrated with security realms, and its use is not advocated for Red Hat JBoss Data Grid. 4.1.1. Configure Node Authentication for Cluster Security (DIGEST-MD5) The following example demonstrates how to use DIGEST-MD5 with a properties-based security realm, with a dedicated realm for cluster nodes. Example 4.2. Using the DIGEST-MD5 Mechanism In the provided example, supposing the hostnames of the various nodes are node001 , node002 , node003 , the cluster-users.properties will contain: node001=/<node001passwordhash>/ node002=/<node002passwordhash>/ node003=/<node003passwordhash>/ The cluster-roles.properties will contain: node001=clustered node002=clustered node003=clustered To generate these values, the following add-user.sh script can be used: The MD5 password hash of the node must also be placed in the " client_password " property of the <sasl/> element. Note To increase security, it is recommended that this password be stored using a Vault. For more information about vault expressions, see the Red Hat Enterprise Application Platform Security Guide . Once node security has been set up as discussed here, the cluster coordinator will validate each JOIN ing and MERGE ing node's credentials against the realm before letting the node become part of the cluster view. 4.1.2. Configure Node Authentication for Cluster Security (GSSAPI/Kerberos) When using the GSSAPI mechanism, the client_name is used as the name of a Kerberos-enabled login module defined within the security domain subsystem. For a full procedure on how to do this, see Section 2.6.7.1, "Configure Hot Rod Authentication (GSSAPI/Kerberos)" . Example 4.3. Using the Kerberos Login Module The following property must be set in the <sasl/> element to reference it: As a result, the authentication section of the security realm is ignored, as the nodes will be validated against the Kerberos Domain Controller. The authorization configuration is still required, as the node principal must belong to the required cluster-role. In all cases, it is recommended that a shared authorization database, such as LDAP, be used to validate node membership in order to simplify administration.
By default, the principal of the joining node must be in the following format:
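For the GSSAPI mechanism, each node needs a Kerberos principal in the format shown above and a keytab it can read. The following is only a sketch assuming an MIT Kerberos KDC; the realm EXAMPLE.COM, the node name node0, the cache container name clustered, and the keytab path are hypothetical example values, not values from this guide.
# On the KDC, create the node principal and export its keytab:
kadmin.local -q "addprinc -randkey jgroups/node0/clustered@EXAMPLE.COM"
kadmin.local -q "ktadd -k /tmp/jgroups_node0_clustered.keytab jgroups/node0/clustered@EXAMPLE.COM"
# Copy the keytab to the node and reference it from the keyTab option of the Kerberos login module shown in Example 4.3.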
[ "<management> <security-realms> <!-- Additional configuration information here --> <security-realm name=\"ClusterRealm\"> <authentication> <properties path=\"cluster-users.properties\" relative-to=\"jboss.server.config.dir\"/> </authentication> <authorization> <properties path=\"cluster-roles.properties\" relative-to=\"jboss.server.config.dir\"/> </authorization> </security-realm> </security-realms> <!-- Additional configuration information here --> </security-realms> </management> <stack name=\"udp\"> <!-- Additional configuration information here --> <sasl mech=\"DIGEST-MD5\" security-realm=\"ClusterRealm\" cluster-role=\"cluster\"> <property name=\"client_name\">node1</property> <property name=\"client_password\">password</property> </sasl> <!-- Additional configuration information here --> </stack>", "clustered.sh -Djboss.node.name=node001", "<management> <security-realms> <security-realm name=\"ClusterRealm\"> <authentication> <properties path=\"cluster-users.properties\" relative-to=\"jboss.server.config.dir\"/> </authentication> <authorization> <properties path=\"cluster-roles.properties\" relative-to=\"jboss.server.config.dir\"/> </authorization> </security-realm> </security-realms> </management> <subsystem xmlns=\"urn:infinispa:server:jgroups:6.1\" default-stack=\"USD{jboss.default.jgroups.stack:udp}\"> <stack name=\"udp\"> <transport type=\"UDP\" socket-binding=\"jgroups-udp\"/> <protocol type=\"PING\"/> <protocol type=\"MERGE2\"/> <protocol type=\"FD_SOCK\" socket-binding=\"jgroups-udp-fd\"/> <protocol type=\"FD_ALL\"/> <protocol type=\"pbcast.NAKACK\"/> <protocol type=\"UNICAST2\"/> <protocol type=\"pbcast.STABLE\"/> <protocol type=\"pbcast.GMS\"/> <protocol type=\"UFC\"/> <protocol type=\"MFC\"/> <protocol type=\"FRAG2\"/> <protocol type=\"RSVP\"/> <sasl security-realm=\"ClusterRealm\" mech=\"DIGEST-MD5\"> <property name=\"client_password>...</property> </sasl> </stack> </subsystem> <subsystem xmlns=\"urn:infinispan:server:core:6.1\" default-cache-container=\"clustered\"> <cache-container name=\"clustered\" default-cache=\"default\"> <transport executor=\"infinispan-transport\" lock-timeout=\"60000\" stack=\"udp\"/> <!-- various clustered cache definitions here --> </cache-container> </subsystem>", "add-user.sh -up cluster-users.properties -gp cluster-roles.properties -r ClusterRealm -u node001 -g clustered -p <password>", "<property name=\"client_password>...</property>", "<security-domain name=\"krb-node0\" cache-type=\"default\"> <authentication> <login-module code=\"Kerberos\" flag=\"required\"> <module-option name=\"storeKey\" value=\"true\"/> <module-option name=\"useKeyTab\" value=\"true\"/> <module-option name=\"refreshKrb5Config\" value=\"true\"/> <module-option name=\"principal\" value=\"jgroups/node0/[email protected]\"/> <module-option name=\"keyTab\" value=\"USD{jboss.server.config.dir}/keytabs/jgroups_node0_clustered.keytab\"/> <module-option name=\"doNotPrompt\" value=\"true\"/> </login-module> </authentication> </security-domain>", "<sasl <!-- Additional configuration information here --> > <property name=\"login_module_name\"> <!-- Additional configuration information here --> </property> </sasl>", "jgroups/USDNODE_NAME/USDCACHE_CONTAINER_NAME@REALM" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/security_guide/chap-security_for_cluster_traffic
6.4 Release Notes
6.4 Release Notes Red Hat Enterprise Linux 6 Release Notes for Red Hat Enterprise Linux 6.4 Edition 4 Red Hat Engineering Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_release_notes/index
Preface
Preface To train complex machine-learning models or process data more quickly, you can use the distributed workloads feature to run your jobs on multiple OpenShift worker nodes in parallel. This approach significantly reduces the task completion time, and enables the use of larger datasets and more complex models.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/working_with_distributed_workloads/pr01
B.45.2. RHSA-2011:0479 - Moderate: libvirt security and bug fix update
B.45.2. RHSA-2011:0479 - Moderate: libvirt security and bug fix update Updated libvirt packages that fix one security issue and one bug are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The libvirt library is a C API for managing and interacting with the virtualization capabilities of Linux and other operating systems. In addition, libvirt provides tools for remotely managing virtualized systems. CVE-2011-1486 A flaw was found in the way libvirtd handled error reporting for concurrent connections. A remote attacker able to establish read-only connections to libvirtd on a server could use this flaw to crash libvirtd. Bug Fix BZ# 668692 Previously, running qemu under a different UID prevented it from accessing files with mode 0660 permissions that were owned by a different user, but by a group that qemu was a member of. All libvirt users are advised to upgrade to these updated packages, which contain backported patches to resolve these issues. After installing the updated packages, libvirtd must be restarted ("service libvirtd restart") for this update to take effect.
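As a minimal sketch of applying this advisory on Red Hat Enterprise Linux 6 (assuming the system is subscribed to the appropriate update channel), the package can be updated and the daemon restarted as the advisory instructs:
# Update libvirt and restart the daemon so the fix takes effect:
yum update libvirt
service libvirtd restart
# Optionally confirm the installed package version afterwards:
rpm -q libvirt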
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhsa-2011-0479
Cluster APIs
Cluster APIs OpenShift Container Platform 4.18 Reference guide for cluster APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/cluster_apis/index
Preface
Preface Open Java Development Kit (OpenJDK) is a free and open-source implementation of the Java Platform, Standard Edition (Java SE). Eclipse Temurin is available in three LTS versions: OpenJDK 8u, OpenJDK 11u, and OpenJDK 17u. Binary files for Eclipse Temurin are available for macOS, Microsoft Windows, and multiple Linux x86 Operating Systems including Red Hat Enterprise Linux and Ubuntu.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.20/pr01
40.8. Graphical Interface
40.8. Graphical Interface Some OProfile preferences can be set with a graphical interface. To start it, execute the oprof_start command as root at a shell prompt. After changing any of the options, save them by clicking the Save and quit button. The preferences are written to /root/.oprofile/daemonrc , and the application exits. Exiting the application does not stop OProfile from sampling. On the Setup tab, to set events for the processor counters as discussed in Section 40.2.2, "Setting Events to Monitor" , select the counter from the pulldown menu and select the event from the list. A brief description of the event appears in the text box below the list. Only events available for the specific counter and the specific architecture are displayed. The interface also displays whether the profiler is running and some brief statistics about it. Figure 40.1. OProfile Setup On the right side of the tab, select the Profile kernel option to count events in kernel mode for the currently selected event, as discussed in Section 40.2.3, "Separating Kernel and User-space Profiles" . If this option is unselected, no samples are collected for the kernel. Select the Profile user binaries option to count events in user mode for the currently selected event, as discussed in Section 40.2.3, "Separating Kernel and User-space Profiles" . If this option is unselected, no samples are collected for user applications. Use the Count text field to set the sampling rate for the currently selected event as discussed in Section 40.2.2.1, "Sampling Rate" . If any unit masks are available for the currently selected event, as discussed in Section 40.2.2.2, "Unit Masks" , they are displayed in the Unit Masks area on the right side of the Setup tab. Select the checkbox beside the unit mask to enable it for the event. On the Configuration tab, to profile the kernel, enter the name and location of the vmlinux file for the kernel to monitor in the Kernel image file text field. To configure OProfile not to monitor the kernel, select No kernel image . Figure 40.2. OProfile Configuration If the Verbose option is selected, the oprofiled daemon log includes more information. If Per-application kernel samples files is selected, OProfile generates per-application profiles for the kernel and kernel modules as discussed in Section 40.2.3, "Separating Kernel and User-space Profiles" . This is equivalent to the opcontrol --separate=kernel command. If Per-application shared libs samples files is selected, OProfile generates per-application profiles for libraries. This is equivalent to the opcontrol --separate=library command. To force data to be written to samples files as discussed in Section 40.5, "Analyzing the Data" , click the Flush profiler data button. This is equivalent to the opcontrol --dump command. To start OProfile from the graphical interface, click Start profiler . To stop the profiler, click Stop profiler . Exiting the application does not stop OProfile from sampling.
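Because each GUI option above maps to an opcontrol flag, the same setup can be reproduced from the command line. This is only a sketch: the vmlinux path is an assumption (use the debuginfo vmlinux that matches your running kernel, or --no-vmlinux to skip kernel profiling), and if your opcontrol build does not accept a comma-separated --separate list, pass the two separators as described in the text above.
# Command-line equivalent of the GUI options described in this section:
opcontrol --setup --vmlinux=/usr/lib/debug/lib/modules/`uname -r`/vmlinux --separate=kernel,library
opcontrol --start
# ... run the workload you want to profile ...
opcontrol --dump      # flush profiler data, same as the "Flush profiler data" button
opcontrol --shutdown  # stop the profiler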
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/OProfile-Graphical_Interface
How To Set Up SSO with SAML v2
How To Set Up SSO with SAML v2 Red Hat JBoss Enterprise Application Platform 7.4 Instructions for configuring and managing single sign-on user access to Red Hat JBoss Enterprise Application Platform using SAML 2.0. Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/how_to_set_up_sso_with_saml_v2/index
Builds using Shipwright
Builds using Shipwright OpenShift Dedicated 4 An extensible build framework to build container images on an OpenShift cluster Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html-single/builds_using_shipwright/index
Developing process services in Red Hat Process Automation Manager
Developing process services in Red Hat Process Automation Manager Red Hat Process Automation Manager 7.13
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/index
3.12. Partial Results Mode
3.12. Partial Results Mode JBoss Data Virtualization supports a partial results query mode. This mode changes the behavior of the query processor so the server returns results even when some data sources are unavailable. For example, suppose that two data sources exist for different suppliers and your data designers have created a virtual group that creates a union between the information from the two suppliers. If your application submits a query without using partial results query mode and one of the suppliers' databases is down, the query against the virtual group returns an exception. However, if your application runs the same query in partial results query mode, the server returns data from the running data source and no data from the data source that is down. When using partial results mode, if a source throws an exception during processing it does not cause the user's query to fail. Rather, that source is treated as returning no more rows after the failure point. Most commonly, that source will return 0 rows. This behavior is most useful when using UNION or OUTER JOIN queries as these operations handle missing information in a useful way. Most other kinds of queries will simply return 0 rows to the user when used in partial results mode and the source is unavailable. Note In some instances (typically when you are using JDBC sources), if the source is not available initially, its absence will prevent Teiid from automatically determining the appropriate set of source capabilities. If you see an exception indicating that the capabilities for an unavailable source are not valid in partial results mode, then it may be necessary to manually set the database version or similar property on the translator to ensure that the capabilities are known even if the source is not available.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/Partial_Results_Mode1
Chapter 12. Storage Pools
Chapter 12. Storage Pools This chapter includes instructions on creating storage pools of assorted types. A storage pool is a quantity of storage set aside by an administrator, often a dedicated storage administrator, for use by virtual machines. Storage pools are often divided into storage volumes either by the storage administrator or the system administrator, and the volumes are assigned to guest virtual machines as block devices. Example 12.1. NFS storage pool Suppose a storage administrator responsible for an NFS server creates a share to store guest virtual machines' data. The system administrator defines a pool on the host physical machine with the details of the share (nfs.example.com: /path/to/share should be mounted on /vm_data ). When the pool is started, libvirt mounts the share on the specified directory, just as if the system administrator logged in and executed mount nfs.example.com:/path/to/share /vmdata . If the pool is configured to autostart, libvirt ensures that the NFS share is mounted on the directory specified when libvirt is started. Once the pool starts, the files that the NFS share, are reported as volumes, and the storage volumes' paths are then queried using the libvirt APIs. The volumes' paths can then be copied into the section of a guest virtual machine's XML definition file describing the source storage for the guest virtual machine's block devices. With NFS, applications using the libvirt APIs can create and delete volumes in the pool (files within the NFS share) up to the limit of the size of the pool (the maximum storage capacity of the share). Not all pool types support creating and deleting volumes. Stopping the pool negates the start operation, in this case, unmounts the NFS share. The data on the share is not modified by the destroy operation, despite the name. See man virsh for more details. Note Storage pools and volumes are not required for the proper operation of guest virtual machines. Pools and volumes provide a way for libvirt to ensure that a particular piece of storage will be available for a guest virtual machine, but some administrators will prefer to manage their own storage and guest virtual machines will operate properly without any pools or volumes defined. On systems that do not use pools, system administrators must ensure the availability of the guest virtual machines' storage using whatever tools they prefer, for example, adding the NFS share to the host physical machine's fstab so that the share is mounted at boot time. Warning When creating storage pools on a guest, make sure to follow security considerations. This information is discussed in more detail in the Red Hat Enterprise Linux Virtualization Security Guide which can be found at https://access.redhat.com/site/documentation/ . 12.1. Disk-based Storage Pools This section covers creating disk based storage devices for guest virtual machines. Warning Guests should not be given write access to whole disks or block devices (for example, /dev/sdb ). Use partitions (for example, /dev/sdb1 ) or LVM volumes. If you pass an entire block device to the guest, the guest will likely partition it or create its own LVM groups on it. This can cause the host physical machine to detect these partitions or LVM groups and cause errors. 12.1.1. Creating a Disk-based Storage Pool Using virsh This procedure creates a new storage pool using a disk device with the virsh command. Warning Dedicating a disk to a storage pool will reformat and erase all data presently stored on the disk device. 
It is strongly recommended to back up the storage device before commencing with the following procedure. Create a GPT disk label on the disk The disk must be relabeled with a GUID Partition Table (GPT) disk label. GPT disk labels allow for creating a large number of partitions, up to 128 partitions, on each device. GPT partition tables can store partition data for far more partitions than the MS-DOS partition table. Create the storage pool configuration file Create a temporary XML text file containing the storage pool information required for the new device. The file must be in the format shown below, and contain the following fields: <name>guest_images_disk</name> The name parameter determines the name of the storage pool. This example uses the name guest_images_disk in the example below. <device path=' /dev/sdb '/> The device parameter with the path attribute specifies the device path of the storage device. This example uses the device /dev/sdb . <target> <path> /dev </path></target> The file system target parameter with the path sub-parameter determines the location on the host physical machine file system to attach volumes created with this storage pool. For example, sdb1, sdb2, sdb3. Using /dev/ , as in the example below, means volumes created from this storage pool can be accessed as /dev/sdb1, /dev/sdb2, /dev/sdb3. <format type=' gpt '/> The format parameter specifies the partition table type. This example uses gpt in the example below, to match the GPT disk label type created in the previous step. Create the XML file for the storage pool device with a text editor. Example 12.2. Disk based storage device storage pool Attach the device Add the storage pool definition using the virsh pool-define command with the XML configuration file created in the previous step. Start the storage pool Start the storage pool with the virsh pool-start command. Verify the pool is started with the virsh pool-list --all command. Turn on autostart Turn on autostart for the storage pool. Autostart configures the libvirtd service to start the storage pool when the service starts. Verify the storage pool configuration Verify the storage pool was created correctly, the sizes reported correctly, and the state reports as running . Optional: Remove the temporary configuration file Remove the temporary storage pool XML configuration file if it is not needed. A disk based storage pool is now available. 12.1.2. Deleting a Storage Pool Using virsh The following demonstrates how to delete a storage pool using virsh: To avoid any issues with other guest virtual machines using the same pool, it is best to stop the storage pool and release any resources in use by it. Remove the storage pool's definition
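Before stopping and undefining a pool, you may also want to check whether it still contains volumes and remove them first. The following is a sketch rather than part of the official procedure; the volume name volume1 is a hypothetical example, and for a disk-type pool each volume corresponds to a partition on the underlying device.
# List and delete any remaining volumes in the pool, then stop and undefine it:
virsh vol-list guest_images_disk
virsh vol-delete --pool guest_images_disk volume1
virsh pool-destroy guest_images_disk
virsh pool-undefine guest_images_disk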
[ "parted /dev/sdb GNU Parted 2.1 Using /dev/sdb Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) mklabel New disk label type? gpt (parted) quit Information: You may need to update /etc/fstab. #", "<pool type='disk'> <name> guest_images_disk </name> <source> <device path=' /dev/sdb '/> <format type=' gpt '/> </source> <target> <path> /dev </path> </target> </pool>", "virsh pool-define ~/guest_images_disk.xml Pool guest_images_disk defined from /root/guest_images_disk.xml virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_disk inactive no", "virsh pool-start guest_images_disk Pool guest_images_disk started virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_disk active no", "virsh pool-autostart guest_images_disk Pool guest_images_disk marked as autostarted virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_disk active yes", "virsh pool-info guest_images_disk Name: guest_images_disk UUID: 551a67c8-5f2a-012c-3844-df29b167431c State: running Capacity: 465.76 GB Allocation: 0.00 Available: 465.76 GB ls -la /dev/sdb brw-rw----. 1 root disk 8, 16 May 30 14:08 /dev/sdb virsh vol-list guest_images_disk Name Path -----------------------------------------", "rm ~/ guest_images_disk .xml", "virsh pool-destroy guest_images_disk", "virsh pool-undefine guest_images_disk" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/chap-virtualization_administration_guide-storage_pools-storage_pools
Chapter 5. Deprecated functionalities
Chapter 5. Deprecated functionalities None.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.19/html/release_notes_and_known_issues/deprecated-functionalities
5.2. Creating a Striped Logical Volume
5.2. Creating a Striped Logical Volume This example procedure creates an LVM striped logical volume called striped_logical_volume that stripes data across the disks at /dev/sda1 , /dev/sdb1 , and /dev/sdc1 . Label the disks you will use in the volume group as LVM physical volumes with the pvcreate command. Warning This command destroys any data on /dev/sda1 , /dev/sdb1 , and /dev/sdc1 . Create the volume group volgroup01 . The following command creates the volume group volgroup01 . You can use the vgs command to display the attributes of the new volume group. Create a striped logical volume from the volume group you have created. The following command creates the striped logical volume striped_logical_volume from the volume group volgroup01 . This example creates a logical volume that is 2 gigabytes in size, with three stripes and a stripe size of 4 kilobytes. Create a file system on the striped logical volume. The following command creates a GFS2 file system on the logical volume. The following commands mount the logical volume and report the file system disk space usage.
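In addition to mounting the volume and checking disk usage (shown in the commands below), you can optionally confirm that the data is actually striped across the three devices. This verification step is not part of the original example, but uses standard LVM reporting commands:
# Show the segment layout, stripe count, and backing devices for the new logical volume:
lvs --segments -o +devices volgroup01
lvdisplay -m /dev/volgroup01/striped_logical_volume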
[ "pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1 Physical volume \"/dev/sda1\" successfully created Physical volume \"/dev/sdb1\" successfully created Physical volume \"/dev/sdc1\" successfully created", "vgcreate volgroup01 /dev/sda1 /dev/sdb1 /dev/sdc1 Volume group \"volgroup01\" successfully created", "vgs VG #PV #LV #SN Attr VSize VFree volgroup01 3 0 0 wz--n- 51.45G 51.45G", "lvcreate -i 3 -I 4 -L 2G -n striped_logical_volume volgroup01 Rounding size (512 extents) up to stripe boundary size (513 extents) Logical volume \"striped_logical_volume\" created", "mkfs.gfs2 -p lock_nolock -j 1 /dev/volgroup01/striped_logical_volume This will destroy any data on /dev/volgroup01/striped_logical_volume. Are you sure you want to proceed? [y/n] y Device: /dev/volgroup01/striped_logical_volume Blocksize: 4096 Filesystem Size: 492484 Journals: 1 Resource Groups: 8 Locking Protocol: lock_nolock Lock Table: Syncing All Done", "mount /dev/volgroup01/striped_logical_volume /mnt df Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/VolGroup00-LogVol00 13902624 1656776 11528232 13% / /dev/hda1 101086 10787 85080 12% /boot tmpfs 127880 0 127880 0% /dev/shm /dev/volgroup01/striped_logical_volume 1969936 20 1969916 1% /mnt" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/stripe_create_ex
Chapter 9. Read-only fields
Chapter 9. Read-only fields Certain fields in the REST API are marked read-only. These usually include the URL of a resource, the ID, and occasionally some internal fields. For example, the 'created_by' attribute of each object indicates which user created the resource, and you cannot edit this. If you post some values and notice that they are not changing, these fields might be read-only.
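As an illustration only, with a placeholder hostname, credentials, and resource ID that are not part of this guide, you can compare a GET of a resource with an OPTIONS request on the same endpoint; the OPTIONS response describes the fields the endpoint accepts, which helps identify fields such as the ID, URL, and creation metadata that the API manages for you and silently ignores if you try to change them.
# Fetch a resource and inspect its fields (placeholders: host, credentials, resource ID):
curl -s -k -u admin:password https://controller.example.com/api/v2/job_templates/1/ | python3 -m json.tool
# Inspect the field metadata the endpoint advertises:
curl -s -k -u admin:password -X OPTIONS https://controller.example.com/api/v2/job_templates/1/ | python3 -m json.tool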
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_api_overview/controller-api-readonly-fields
4.6. Using Function Modifiers
4.6. Using Function Modifiers In some cases you may need to translate the function differently or even insert additional function calls above or below the function being translated. The JDBC translator provides an abstract class FunctionModifier for this purpose. During the start method a modifier instance can be registered against a given function name via a call to JDBCExecutionFactory.registerFunctionModifier . The FunctionModifier has a method called translate . Use the translate method to change the way the function is represented. An example of overriding the translate method to change the MOD(a, b) function into an infix operator for Sybase (a % b). The translate method returns a list of strings and language objects that will be assembled by the translator into a final string. The strings will be used as is and the language objects will be further processed by the translator. public class ModFunctionModifier extends FunctionModifier { public List translate(Function function) { List parts = new ArrayList(); parts.add("("); Expression[] args = function.getParameters().toArray(new Expression[0]); parts.add(args[0]); parts.add(" % "); parts.add(args[1]); parts.add(")"); return parts; } } In addition to building your own FunctionModifiers, there are a number of pre-built generic function modifiers that are provided with the translator. Table 4.2. Common Modifiers Modifier Description AliasModifier Handles renaming a function ("ucase" to "upper" for example) EscapeSyntaxModifier Wraps a function in the standard JDBC escape syntax for functions: {fn xxxx()} To register the function modifiers for your supported functions, you must call the ExecutionFactory.registerFunctionModifier(String name, FunctionModifier modifier) method. public class ExtendedJDBCExecutionFactory extends JDBCExecutionFactory { @Override public void start() { super.start(); // register functions. registerFunctionModifier("abs", new MyAbsModifier()); registerFunctionModifier("concat", new AliasModifier("concat2")); } } Support for the two functions being registered ("abs" and "concat") must be declared in the capabilities as well. Functions that do not have modifiers registered will be translated as usual.
[ "public class ModFunctionModifier extends FunctionModifier { public List translate(Function function) { List parts = new ArrayList(); parts.add(\"(\"); Expression[] args = function.getParameters().toArray(new Expression[0]); parts.add(args[0]); parts.add(\" % \"); parts.add(args[1]); parts.add(\")\"); return parts; } }", "public class ExtendedJDBCExecutionFactory extends JDBCExecutionFactory { @Override public void start() { super.start(); // register functions. registerFunctionModifier(\"abs\", new MyAbsModifier()); registerFunctionModifier(\"concat\", new AliasModifier(\"concat2\")); } }" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/using_function_modifiers
Appendix C. Using your subscription
Appendix C. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. C.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. C.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. C.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component. C.4. Registering your system for packages To install RPM packages for this product on Red Hat Enterprise Linux, your system must be registered. If you are using downloaded release files, this step is not required. Procedure Go to access.redhat.com . Navigate to Registration Assistant . Select your OS version and continue to the next page. Use the listed command in your system terminal to complete the registration. For more information about registering your system, see one of the following resources: Red Hat Enterprise Linux 6 - Registering the system and managing subscriptions Red Hat Enterprise Linux 7 - Registering the system and managing subscriptions Red Hat Enterprise Linux 8 - Registering the system and managing subscriptions
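The Registration Assistant produces the exact command for your operating system version; as a rough sketch of what that registration typically looks like on Red Hat Enterprise Linux (the username and password are placeholders), the following subscription-manager commands register the system and attach a subscription:
# Register the system with your Customer Portal account and auto-attach a subscription:
subscription-manager register --username <customer-portal-user> --password <password>
subscription-manager attach --auto
# Confirm which repositories are enabled:
subscription-manager repos --list-enabled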
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_.net_client/using_your_subscription
Release Notes for AMQ Streams 1.8 on RHEL
Release Notes for AMQ Streams 1.8 on RHEL Red Hat AMQ 2021.q3 For use with AMQ Streams on Red Hat Enterprise Linux
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/release_notes_for_amq_streams_1.8_on_rhel/index
Chapter 1. Release notes for Red Hat OpenShift Logging
Chapter 1. Release notes for Red Hat OpenShift Logging 1.1. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see Red Hat CTO Chris Wright's message . 1.2. Supported Versions Table 1.1. OpenShift Container Platform version support for Red Hat OpenShift Logging (RHOL) 4.7 4.8 4.9 RHOL 5.1 X X RHOL 5.2 X X X RHOL 5.3 X X 1.2.1. OpenShift Logging 5.1.0 This release includes RHSA-2021:2112 OpenShift Logging Bug Fix Release 5.1.0 . 1.2.1.1. New features and enhancements OpenShift Logging 5.1 now supports OpenShift Container Platform 4.7 and later running on: IBM Power Systems IBM Z and LinuxONE This release adds improvements related to the following components and concepts. As a cluster administrator, you can use Kubernetes pod labels to gather log data from an application and send it to a specific log store. You can gather log data by configuring the inputs[].application.selector.matchLabels element in the ClusterLogForwarder custom resource (CR) YAML file. You can also filter the gathered log data by namespace. ( LOG-883 ) This release adds the following new ElasticsearchNodeDiskWatermarkReached warnings to the OpenShift Elasticsearch Operator (EO): Elasticsearch Node Disk Low Watermark Reached Elasticsearch Node Disk High Watermark Reached Elasticsearch Node Disk Flood Watermark Reached The alert applies the past several warnings when it predicts that an Elasticsearch node will reach the Disk Low Watermark , Disk High Watermark , or Disk Flood Stage Watermark thresholds in the 6 hours. This warning period gives you time to respond before the node reaches the disk watermark thresholds. The warning messages also provide links to the troubleshooting steps, which you can follow to help mitigate the issue. The EO applies the past several hours of disk space data to a linear model to generate these warnings. ( LOG-1100 ) JSON logs can now be forwarded as JSON objects, rather than quoted strings, to either Red Hat's managed Elasticsearch cluster or any of the other supported third-party systems. Additionally, you can now query individual fields from a JSON log message inside Kibana which increases the discoverability of specific logs. ( LOG-785 , LOG-1148 ) 1.2.1.2. Deprecated and removed features Some features available in releases have been deprecated or removed. Deprecated functionality is still included in OpenShift Logging and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. 1.2.1.2.1. Elasticsearch Curator has been removed With this update, the Elasticsearch Curator has been removed and is no longer supported. Elasticsearch Curator helped you curate or manage your indices on OpenShift Container Platform 4.4 and earlier. Instead of using Elasticsearch Curator, configure the log retention time. 1.2.1.2.2. Forwarding logs using the legacy Fluentd and legacy syslog methods have been deprecated From OpenShift Container Platform 4.6 to the present, forwarding logs by using the legacy Fluentd and legacy syslog methods have been deprecated and will be removed in a future release. Use the standard non-legacy methods instead. 1.2.1.3. 
Bug fixes Before this update, the ClusterLogForwarder CR did not show the input[].selector element after it had been created. With this update, when you specify a selector in the ClusterLogForwarder CR, it remains. Fixing this bug was necessary for LOG-883 , which enables using pod label selectors to forward application log data. ( LOG-1338 ) Before this update, an update in the cluster service version (CSV) accidentally introduced resources and limits for the OpenShift Elasticsearch Operator container. Under specific conditions, this caused an out-of-memory condition that terminated the Elasticsearch Operator pod. This update fixes the issue by removing the CSV resources and limits for the Operator container. The Operator gets scheduled without issues. ( LOG-1254 ) Before this update, forwarding logs to Kafka using chained certificates failed with the following error message: state=error: certificate verify failed (unable to get local issuer certificate) Logs could not be forwarded to a Kafka broker with a certificate signed by an intermediate CA. This happened because the Fluentd Kafka plug-in could only handle a single CA certificate supplied in the ca-bundle.crt entry of the corresponding secret. This update fixes the issue by enabling the Fluentd Kafka plug-in to handle multiple CA certificates supplied in the ca-bundle.crt entry of the corresponding secret. Now, logs can be forwarded to a Kafka broker with a certificate signed by an intermediate CA. ( LOG-1218 , LOG-1216 ) Before this update, while under load, Elasticsearch responded to some requests with an HTTP 500 error, even though there was nothing wrong with the cluster. Retrying the request was successful. This update fixes the issue by updating the index management cron jobs to be more resilient when they encounter temporary HTTP 500 errors. The updated index management cron jobs will first retry a request multiple times before failing. ( LOG-1215 ) Before this update, if you did not set the .proxy value in the cluster installation configuration, and then configured a global proxy on the installed cluster, a bug prevented Fluentd from forwarding logs to Elasticsearch. To work around this issue, in the proxy or cluster configuration, set the no_proxy value to .svc.cluster.local so it skips internal traffic. This update fixes the proxy configuration issue. If you configure the global proxy after installing an OpenShift Container Platform cluster, Fluentd forwards logs to Elasticsearch. ( LOG-1187 , BZ#1915448 ) Before this update, the logging collector created more socket connections than necessary. With this update, the logging collector reuses the existing socket connection to send logs. ( LOG-1186 ) Before this update, if a cluster administrator tried to add or remove storage from an Elasticsearch cluster, the OpenShift Elasticsearch Operator (EO) incorrectly tried to upgrade the Elasticsearch cluster, displaying scheduledUpgrade: "True" , shardAllocationEnabled: primaries , and change the volumes. With this update, the EO does not try to upgrade the Elasticsearch cluster. The EO status displays the following new status information to indicate when you have tried to make an unsupported change to the Elasticsearch storage that it has ignored: StorageStructureChangeIgnored when you try to change between using ephemeral and persistent storage structures. StorageClassNameChangeIgnored when you try to change the storage class name. StorageSizeChangeIgnored when you try to change the storage size. 
Note If you configure the ClusterLogging custom resource (CR) to switch from ephemeral to persistent storage, the EO creates a persistent volume claim (PVC) but does not create a persistent volume (PV). To clear the StorageStructureChangeIgnored status, you must revert the change to the ClusterLogging CR and delete the persistent volume claim (PVC). ( LOG-1351 ) Before this update, if you redeployed a full Elasticsearch cluster, it got stuck in an unhealthy state, with one non-data node running and all other data nodes shut down. This issue happened because new certificates prevented the Elasticsearch Operator from scaling down the non-data nodes of the Elasticsearch cluster. With this update, Elasticsearch Operator can scale all the data and non-data nodes down and then back up again, so they load the new certificates. The Elasticsearch Operator can reach the new nodes after they load the new certificates. ( LOG-1536 ) 1.2.2. OpenShift Logging 5.0.9 This release includes RHBA-2021:3705 - Bug Fix Advisory. OpenShift Logging Bug Fix Release (5.0.9) . 1.2.2.1. Bug fixes This release includes the following bug fixes: Before this update, some log entries had unrecognized UTF-8 bytes, which caused Elasticsearch to reject messages and block the entire buffered payload. This update resolves the issue: rejected payloads drop the invalid log entries and resubmit the remaining entries. ( LOG-1574 ) Before this update, editing the ClusterLogging custom resource (CR) did not apply the value of totalLimitSize to the Fluentd total_limit_size field, which limits the size of the buffer plugin instance. As a result, Fluentd applied the default values. With this update, the CR applies the value of totalLimitSize to the Fluentd total_limit_size field. Fluentd uses the value of the total_limit_size field or the default value, whichever is less. ( LOG-1736 ) 1.2.2.2. CVEs CVE-2020-25648 CVE-2021-22922 CVE-2021-22923 CVE-2021-22924 CVE-2021-36222 CVE-2021-37576 CVE-2021-37750 CVE-2021-38201 1.2.3. OpenShift Logging 5.0.8 This release includes RHBA-2021:3526 - Bug Fix Advisory. OpenShift Logging Bug Fix Release (5.0.8) . 1.2.3.1. Bug fixes This release also includes the following bug fixes: Due to an issue in the release pipeline scripts, the value of the olm.skipRange field remained unchanged at 5.2.0 and was not updated when the z-stream number, 0 , increased. The current release fixes the pipeline scripts to update the value of this field when the release numbers change. ( LOG-1741 ) 1.2.4. OpenShift Logging 5.0.7 This release includes RHBA-2021:2884 - Bug Fix Advisory. OpenShift Logging Bug Fix Release (5.0.7) . 1.2.4.1. Bug fixes This release also includes the following bug fixes: LOG-1594 - Vendored viaq/logerr dependency is missing a license file 1.2.4.2. 
CVEs CVE-2016-10228 CVE-2017-14502 CVE-2018-25011 CVE-2019-2708 CVE-2019-3842 CVE-2019-9169 CVE-2019-13012 CVE-2019-18276 CVE-2019-18811 CVE-2019-19523 CVE-2019-19528 CVE-2019-25013 CVE-2020-0431 CVE-2020-8231 CVE-2020-8284 CVE-2020-8285 CVE-2020-8286 CVE-2020-8927 CVE-2020-9948 CVE-2020-9951 CVE-2020-9983 CVE-2020-10543 CVE-2020-10878 CVE-2020-11608 CVE-2020-12114 CVE-2020-12362 CVE-2020-12363 CVE-2020-12364 CVE-2020-12464 CVE-2020-13434 CVE-2020-13543 CVE-2020-13584 CVE-2020-13776 CVE-2020-14314 CVE-2020-14344 CVE-2020-14345 CVE-2020-14346 CVE-2020-14347 CVE-2020-14356 CVE-2020-14360 CVE-2020-14361 CVE-2020-14362 CVE-2020-14363 CVE-2020-15358 CVE-2020-15437 CVE-2020-24394 CVE-2020-24977 CVE-2020-25212 CVE-2020-25284 CVE-2020-25285 CVE-2020-25643 CVE-2020-25704 CVE-2020-25712 CVE-2020-26116 CVE-2020-26137 CVE-2020-26541 CVE-2020-27618 CVE-2020-27619 CVE-2020-27786 CVE-2020-27835 CVE-2020-28196 CVE-2020-28974 CVE-2020-29361 CVE-2020-29362 CVE-2020-29363 CVE-2020-35508 CVE-2020-36322 CVE-2020-36328 CVE-2020-36329 CVE-2021-0342 CVE-2021-0605 CVE-2021-3177 CVE-2021-3326 CVE-2021-3501 CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 CVE-2021-3537 CVE-2021-3541 CVE-2021-3543 CVE-2021-20271 CVE-2021-23336 CVE-2021-27219 CVE-2021-33034 1.2.5. OpenShift Logging 5.0.6 This release includes RHBA-2021:2655 - Bug Fix Advisory. OpenShift Logging Bug Fix Release (5.0.6) . 1.2.5.1. Bug fixes This release also includes the following bug fixes: LOG-1451 - [1927249] fieldmanager.go:186] [SHOULD NOT HAPPEN] failed to update managedFields... duplicate entries for key [name="POLICY_MAPPING"] ( LOG-1451 ) LOG-1537 - Full Cluster Cert Redeploy is broken when the ES clusters includes non-data nodes( LOG-1537 ) LOG-1430 - eventrouter raising "Observed a panic: &runtime.TypeAssertionError" ( LOG-1430 ) LOG-1461 - The index management job status is always Completed even when there has an error in the job log. ( LOG-1461 ) LOG-1459 - Operators missing disconnected annotation ( LOG-1459 ) LOG-1572 - Bug 1981579: Fix built-in application behavior to collect all of logs ( LOG-1572 ) 1.2.5.2. CVEs CVE-2016-10228 CVE-2017-14502 CVE-2018-25011 CVE-2019-2708 CVE-2019-9169 CVE-2019-25013 CVE-2020-8231 CVE-2020-8284 CVE-2020-8285 CVE-2020-8286 CVE-2020-8927 CVE-2020-10543 CVE-2020-10878 CVE-2020-13434 CVE-2020-14344 CVE-2020-14345 CVE-2020-14346 CVE-2020-14347 CVE-2020-14360 CVE-2020-14361 CVE-2020-14362 CVE-2020-14363 CVE-2020-15358 CVE-2020-25712 CVE-2020-26116 CVE-2020-26137 CVE-2020-26541 CVE-2020-27618 CVE-2020-27619 CVE-2020-28196 CVE-2020-29361 CVE-2020-29362 CVE-2020-29363 CVE-2020-36328 CVE-2020-36329 CVE-2021-3177 CVE-2021-3326 CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 CVE-2021-3537 CVE-2021-3541 CVE-2021-20271 CVE-2021-23336 CVE-2021-27219 CVE-2021-33034 1.2.6. OpenShift Logging 5.0.5 This release includes RHSA-2021:2374 - Security Advisory. Moderate: Openshift Logging Bug Fix Release (5.0.5) . 1.2.6.1. Security fixes gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation. 
( CVE-2021-3121 ) glib: integer overflow in g_bytes_new function on 64-bit platforms due to an implicit cast from 64 bits to 32 bits( CVE-2021-27219 ) The following issues relate to the above CVEs: BZ#1921650 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation( BZ#1921650 ) LOG-1361 CVE-2021-3121 elasticsearch-operator-container: gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation [openshift-logging-5]( LOG-1361 ) LOG-1362 CVE-2021-3121 elasticsearch-proxy-container: gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation [openshift-logging-5]( LOG-1362 ) LOG-1363 CVE-2021-3121 logging-eventrouter-container: gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation [openshift-logging-5]( LOG-1363 ) 1.2.7. OpenShift Logging 5.0.4 This release includes RHSA-2021:2136 - Security Advisory. Moderate: Openshift Logging security and bugs update (5.0.4) . 1.2.7.1. Security fixes gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation. ( CVE-2021-3121 ) The following Jira issues contain the above CVEs: LOG-1364 CVE-2021-3121 cluster-logging-operator-container: gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation [openshift-logging-5]. ( LOG-1364 ) 1.2.7.2. Bug fixes This release also includes the following bug fixes: LOG-1328 Port fix to 5.0.z for BZ-1945168. ( LOG-1328 ) 1.2.8. OpenShift Logging 5.0.3 This release includes RHSA-2021:1515 - Security Advisory. Important OpenShift Logging Bug Fix Release (5.0.3) . 1.2.8.1. Security fixes jackson-databind: arbitrary code execution in slf4j-ext class ( CVE-2018-14718 ) jackson-databind: arbitrary code execution in blaze-ds-opt and blaze-ds-core classes ( CVE-2018-14719 ) jackson-databind: exfiltration/XXE in some JDK classes ( CVE-2018-14720 ) jackson-databind: server-side request forgery (SSRF) in axis2-jaxws class ( CVE-2018-14721 ) jackson-databind: improper polymorphic deserialization in axis2-transport-jms class ( CVE-2018-19360 ) jackson-databind: improper polymorphic deserialization in openjpa class ( CVE-2018-19361 ) jackson-databind: improper polymorphic deserialization in jboss-common-core class ( CVE-2018-19362 ) jackson-databind: default typing mishandling leading to remote code execution ( CVE-2019-14379 ) jackson-databind: serialization gadgets in com.pastdev.httpcomponents.configuration.JndiConfiguration ( CVE-2020-24750 ) jackson-databind: mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.dbcp2.datasources.PerUserPoolDataSource ( CVE-2020-35490 ) jackson-databind: mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.dbcp2.datasources.SharedPoolDataSource ( CVE-2020-35491 ) jackson-databind: mishandles the interaction between serialization gadgets and typing, related to com.oracle.wls.shaded.org.apache.xalan.lib.sql.JNDIConnectionPool ( CVE-2020-35728 ) jackson-databind: mishandles the interaction between serialization gadgets and typing, related to oadd.org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS ( CVE-2020-36179 ) jackson-databind: mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.dbcp2.cpdsadapter.DriverAdapterCPDS ( CVE-2020-36180 ) jackson-databind: mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.cpdsadapter.DriverAdapterCPDS ( CVE-2020-36181 ) jackson-databind: mishandles the interaction between 
serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp2.cpdsadapter.DriverAdapterCPDS ( CVE-2020-36182 ) jackson-databind: mishandles the interaction between serialization gadgets and typing, related to org.docx4j.org.apache.xalan.lib.sql.JNDIConnectionPool ( CVE-2020-36183 ) jackson-databind: mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp2.datasources.PerUserPoolDataSource ( CVE-2020-36184 ) jackson-databind: mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp2.datasources.SharedPoolDataSource ( CVE-2020-36185 ) jackson-databind: mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.datasources.PerUserPoolDataSource ( CVE-2020-36186 ) jackson-databind: mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.datasources.SharedPoolDataSource ( CVE-2020-36187 ) jackson-databind: mishandles the interaction between serialization gadgets and typing, related to com.newrelic.agent.deps.ch.qos.logback.core.db.JNDIConnectionSource ( CVE-2020-36188 ) jackson-databind: mishandles the interaction between serialization gadgets and typing, related to com.newrelic.agent.deps.ch.qos.logback.core.db.DriverManagerConnectionSource ( CVE-2020-36189 ) jackson-databind: mishandles the interaction between serialization gadgets and typing, related to javax.swing ( CVE-2021-20190 ) golang: data race in certain net/http servers including ReverseProxy can lead to DoS ( CVE-2020-15586 ) golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs ( CVE-2020-16845 ) OpenJDK: Incomplete enforcement of JAR signing disabled algorithms (Libraries, 8249906) ( CVE-2021-2163 ) The following Jira issues contain the above CVEs: LOG-1234 CVE-2020-15586 CVE-2020-16845 openshift-eventrouter: various flaws [openshift-4]. ( LOG-1234 ) LOG-1243 CVE-2018-14718 CVE-2018-14719 CVE-2018-14720 CVE-2018-14721 CVE-2018-19360 CVE-2018-19361 CVE-2018-19362 CVE-2019-14379 CVE-2020-35490 CVE-2020-35491 CVE-2020-35728... logging-elasticsearch6-container: various flaws [openshift-logging-5.0]. ( LOG-1243 ) 1.2.8.2. Bug fixes This release also includes the following bug fixes: LOG-1224 Release 5.0 - ClusterLogForwarder namespace-specific log forwarding does not work as expected. ( LOG-1224 ) LOG-1232 5.0 - Bug 1859004 - Sometimes the eventrouter couldn't gather event logs. ( LOG-1232 ) LOG-1299 Release 5.0 - Forwarding logs to Kafka using Chained certificates fails with error "state=error: certificate verify failed (unable to get local issuer certificate)". ( LOG-1299 ) 1.2.9. OpenShift Logging 5.0.2 This release includes RHBA-2021:1167 - Bug Fix Advisory. OpenShift Logging Bug Fix Release (5.0.2) . 1.2.9.1. Bug fixes If you did not set .proxy in the cluster installation configuration, and then configured a global proxy on the installed cluster, a bug prevented Fluentd from forwarding logs to Elasticsearch. To work around this issue, in the proxy/cluster configuration, set no_proxy to .svc.cluster.local so it skips internal traffic. The current release fixes the proxy configuration issue. Now, if you configure the global proxy after installing an OpenShift cluster, Fluentd forwards logs to Elasticsearch. ( LOG-1187 ) Previously, forwarding logs to Kafka using chained certificates failed with error "state=error: certificate verify failed (unable to get local issuer certificate)." 
Logs could not be forwarded to a Kafka broker with a certificate signed by an intermediate CA. This happened because fluentd Kafka plugin could only handle a single CA certificate supplied in the ca-bundle.crt entry of the corresponding secret. The current release fixes this issue by enabling the fluentd Kafka plugin to handle multiple CA certificates supplied in the ca-bundle.crt entry of the corresponding secret. Now, logs can be forwarded to a Kafka broker with a certificate signed by an intermediate CA. ( LOG-1216 , LOG-1218 ) Previously, an update in the cluster service version (CSV) accidentally introduced resources and limits for the OpenShift Elasticsearch operator container. Under specific conditions, this caused an out-of-memory condition that terminated the Elasticsearch operator pod. The current release fixes this issue by removing the CSV resources and limits for the operator container. Now, the operator gets scheduled without issues. ( LOG-1254 ) 1.2.10. OpenShift Logging 5.0.1 This release includes RHBA-2021:0963 - Bug Fix Advisory. OpenShift Logging Bug Fix Release (5.0.1) . 1.2.10.1. Bug fixes Previously, if you enabled legacy log forwarding, logs were not sent to managed storage. This issue occurred because the generated log forwarding configuration improperly chose between either log forwarding or legacy log forwarding. The current release fixes this issue. If the ClusterLogging CR defines a logstore , logs are sent to managed storage. Additionally, if legacy log forwarding is enabled, logs are sent to legacy log forwarding regardless of whether managed storage is enabled. ( LOG-1172 ) Previously, while under load, Elasticsearch responded to some requests with an HTTP 500 error, even though there was nothing wrong with the cluster. Retrying the request was successful. This release fixes the issue by updating the cron jobs to be more resilient when encountering temporary HTTP 500 errors. Now, they will retry a request multiple times first before failing. ( LOG-1215 ) 1.2.11. OpenShift Logging 5.0.0 This release includes RHBA-2021:0652 - Bug Fix Advisory. Errata Advisory for Openshift Logging 5.0.0 . 1.2.11.1. New features and enhancements This release adds improvements related to the following concepts. Cluster Logging becomes Red Hat OpenShift Logging With this release, Cluster Logging becomes Red Hat OpenShift Logging 5.0. Maximum five primary shards per index With this release, the OpenShift Elasticsearch Operator (EO) sets the number of primary shards for an index between one and five, depending on the number of data nodes defined for a cluster. Previously, the EO set the number of shards for an index to the number of data nodes. When an index in Elasticsearch was configured with a number of replicas, it created that many replicas for each primary shard, not per index. Therefore, as the index sharded, a greater number of replica shards existed in the cluster, which created a lot of overhead for the cluster to replicate and keep in sync. Updated OpenShift Elasticsearch Operator name and maturity level This release updates the display name of the OpenShift Elasticsearch Operator and operator maturity level. The new display name and clarified specific use for the OpenShift Elasticsearch Operator are updated in Operator Hub. OpenShift Elasticsearch Operator reports on CSV success This release adds reporting metrics to indicate that installing or upgrading the ClusterServiceVersion (CSV) object for the OpenShift Elasticsearch Operator was successful. 
Previously, there was no way to determine, or generate an alert, if the installing or upgrading the CSV failed. Now, an alert is provided as part of the OpenShift Elasticsearch Operator. Reduce Elasticsearch pod certificate permission warnings Previously, when the Elasticsearch pod started, it generated certificate permission warnings, which misled some users to troubleshoot their clusters. The current release fixes these permissions issues to reduce these types of notifications. New links from alerts to explanations and troubleshooting This release adds a link from the alerts that an Elasticsearch cluster generates to a page of explanations and troubleshooting steps for that alert. New connection timeout for deletion jobs The current release adds a connection timeout for deletion jobs, which helps prevent pods from occasionally hanging when they query Elasticsearch to delete indices. Now, if the underlying 'curl' call does not connect before the timeout period elapses, the timeout terminates the call. Minimize updates to rollover index templates With this enhancement, the OpenShift Elasticsearch Operator only updates its rollover index templates if they have different field values. Index templates have a higher priority than indices. When the template is updated, the cluster prioritizes distributing them over the index shards, impacting performance. To minimize Elasticsearch cluster operations, the operator only updates the templates when the number of primary shards or replica shards changes from what is currently configured. 1.2.11.2. Technology Preview features Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features: Technology Preview Features Support Scope In the table below, features are marked with the following statuses: TP : Technology Preview GA : General Availability - : Not Available Table 1.2. Technology Preview tracker Feature OCP 4.5 OCP 4.6 Logging 5.0 Log forwarding TP GA GA 1.2.11.3. Deprecated and removed features Some features available in releases have been deprecated or removed. Deprecated functionality is still included in OpenShift Logging and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. 1.2.11.3.1. Elasticsearch Curator has been deprecated The Elasticsearch Curator has been deprecated and will be removed in a future release. Elasticsearch Curator helped you curate or manage your indices on OpenShift Container Platform 4.4 and earlier. Instead of using Elasticsearch Curator, configure the log retention time. 1.2.11.3.2. Forwarding logs using the legacy Fluentd and legacy syslog methods have been deprecated From OpenShift Container Platform 4.6 to the present, forwarding logs by using the legacy Fluentd and legacy syslog methods have been deprecated and will be removed in a future release. Use the standard non-legacy methods instead. 1.2.11.4. Bug fixes Previously, Elasticsearch rejected HTTP requests whose headers exceeded the default max header size, 8 KB. Now, the max header size is 128 KB, and Elasticsearch no longer rejects HTTP requests for exceeding the max header size. ( BZ#1845293 ) Previously, nodes did not recover from Pending status because a software bug did not correctly update their statuses in the Elasticsearch custom resource (CR). 
The current release fixes this issue, so the nodes can recover when their status is Pending. ( BZ#1887357 ) Previously, when the Cluster Logging Operator (CLO) scaled down the number of Elasticsearch nodes in the clusterlogging CR to three nodes, it omitted previously-created nodes that had unique IDs. The OpenShift Elasticsearch Operator rejected the update because it has safeguards that prevent nodes with unique IDs from being removed. Now, when the CLO scales down the number of nodes and updates the Elasticsearch CR, it marks nodes with unique IDs as count 0 instead of omitting them. As a result, users can scale down their cluster to 3 nodes by using the clusterlogging CR. ( BZ#1879150 ) Note In OpenShift Logging 5.0 and later, the Cluster Logging Operator is called Red Hat OpenShift Logging Operator. Previously, the Fluentd collector pod went into a crash loop when the ClusterLogForwarder had an incorrectly-configured secret. The current release fixes this issue. Now, the ClusterLogForwarder validates the secrets and reports any errors in its status field. As a result, it does not cause the Fluentd collector pod to crash. ( BZ#1888943 ) Previously, if you updated the Kibana resource configuration in the clusterlogging instance to resource{} , the resulting nil map caused a panic and changed the status of the OpenShift Elasticsearch Operator to CrashLoopBackOff . The current release fixes this issue by initializing the map. ( BZ#1889573 ) Previously, the fluentd collector pod went into a crash loop when the ClusterLogForwarder had multiple outputs using the same secret. The current release fixes this issue. Now, multiple outputs can share a secret. ( BZ#1890072 ) Previously, if you deleted a Kibana route, the Cluster Logging Operator (CLO) could not recover or recreate it. Now, the CLO watches the route, and if you delete the route, the OpenShift Elasticsearch Operator can reconcile or recreate it. ( BZ#1890825 ) Previously, the Cluster Logging Operator (CLO) would attempt to reconcile the Elasticsearch resource, which depended upon the Red Hat-provided Elastic Custom Resource Definition (CRD). Attempts to list an unknown kind caused the CLO to exit its reconciliation loop. This happened because the CLO tried to reconcile all of its managed resources whether they were defined or not. The current release fixes this issue. The CLO only reconciles types provided by the OpenShift Elasticsearch Operator if a user defines managed storage. As a result, users can create collector-only deployments of cluster logging by deploying the CLO. ( BZ#1891738 ) Previously, because of an LF GA syslog implementation for RFC 3164, logs sent to remote syslog were not compatible with the legacy behavior. The current release fixes this issue. AddLogSource adds details about log's source details to the "message" field. Now, logs sent to remote syslog are compatible with the legacy behavior. ( BZ#1891886 ) Previously, the Elasticsearch rollover pods failed with a resource_already_exists_exception error. Within the Elasticsearch rollover API, when the index was created, the *-write alias was not updated to point to it. As a result, the time the rollover API endpoint was triggered for that particular index, it received an error that the resource already existed. The current release fixes this issue. Now, when a rollover occurs in the indexmanagement cronjobs, if a new index was created, it verifies that the alias points to the new index. This behavior prevents the error. 
If the cluster is already receiving this error, a cronjob fixes the issue so that subsequent runs work as expected. Now, performing rollovers no longer produces the exception. ( BZ#1893992 ) Previously, Fluent stopped sending logs even though the logging stack seemed functional. Logs were not shipped to an endpoint for an extended period even when an endpoint came back up. This happened if the max backoff time was too long and the endpoint was down. The current release fixes this issue by lowering the max backoff time, so the logs are shipped sooner. ( BZ#1894634 ) Previously, omitting the Storage size of the Elasticsearch node caused panic in the OpenShift Elasticsearch Operator code. This panic appeared in the logs as: Observed a panic: "invalid memory address or nil pointer dereference" The panic happened because although Storage size is a required field, the software didn't check for it. The current release fixes this issue, so there is no panic if the storage size is omitted. Instead, the storage defaults to ephemeral storage and generates a log message for the user. ( BZ#1899589 ) Previously, elasticsearch-rollover and elasticsearch-delete pods remained in the Invalid JSON: or ValueError: No JSON object could be decoded error states. This exception was raised because there was no exception handler for invalid JSON input. The current release fixes this issue by providing a handler for invalid JSON input. As a result, the handler outputs an error message instead of an exception traceback, and the elasticsearch-rollover and elasticsearch-delete jobs do not remain those error states. ( BZ#1899905 ) Previously, when deploying Fluentd as a stand-alone, a Kibana pod was created even if the value of replicas was 0 . This happened because Kibana defaulted to 1 pod even when there were no Elasticsearch nodes. The current release fixes this. Now, a Kibana only defaults to 1 when there are one or more Elasticsearch nodes. ( BZ#1901424 ) Previously, if you deleted the secret, it was not recreated. Even though the certificates were on a disk local to the operator, they weren't rewritten because they hadn't changed. That is, certificates were only written if they changed. The current release fixes this issue. It rewrites the secret if the certificate changes or is not found. Now, if you delete the master-certs, they are replaced. ( BZ#1901869 ) Previously, if a cluster had multiple custom resources with the same name, the resource would get selected alphabetically when not fully qualified with the API group. As a result, if you installed both Red Hat's OpenShift Elasticsearch Operator alongside the OpenShift Elasticsearch Operator, you would see failures when collected data via a must-gather report. The current release fixes this issue by ensuring must-gathers now use the full API group when gathering information about the cluster's custom resources. ( BZ#1897731 ) An earlier bug fix to address issues related to certificate generation introduced an error. Trying to read the certificates caused them to be regenerated because they were recognized as missing. This, in turn, triggered the OpenShift Elasticsearch Operator to perform a rolling upgrade on the Elasticsearch cluster and, potentially, to have mismatched certificates. This bug was caused by the operator incorrectly writing certificates to the working directory. The current release fixes this issue. Now the operator consistently reads and writes certificates to the same working directory, and the certificates are only regenerated if needed. 
( BZ#1905910 ) Previously, queries to the root endpoint to retrieve the Elasticsearch version received a 403 response. The 403 response broke any services that used this endpoint in prior releases. This error happened because non-administrative users did not have the monitor permission required to query the root endpoint and retrieve the Elasticsearch version. Now, non-administrative users can query the root endpoint for the deployed version of Elasticsearch. ( BZ#1906765 ) Previously, in some bulk insertion situations, the Elasticsearch proxy timed out connections between fluentd and Elasticsearch. As a result, fluentd failed to deliver messages and logged a Server returned nothing (no headers, no data) error. The current release fixes this issue: It increases the default HTTP read and write timeouts in the Elasticsearch proxy from five seconds to one minute. It also provides command-line options in the Elasticsearch proxy to control HTTP timeouts in the field. ( BZ#1908707 ) Previously, in some cases, the {ProductName}/Elasticsearch dashboard was missing from the OpenShift Container Platform monitoring dashboard because the dashboard configuration resource referred to a different namespace owner and caused the OpenShift Container Platform to garbage-collect that resource. Now, the ownership reference is removed from the OpenShift Elasticsearch Operator reconciler configuration, and the logging dashboard appears in the console. ( BZ#1910259 ) Previously, the code that uses environment variables to replace values in the Kibana configuration file did not consider commented lines. This prevented users from overriding the default value of server.maxPayloadBytes. The current release fixes this issue by uncommenting the default value of server.maxPayloadByteswithin. Now, users can override the value by using environment variables, as documented. ( BZ#1918876 ) Previously, the Kibana log level was increased not to suppress instructions to delete indices that failed to migrate, which also caused the display of GET requests at the INFO level that contained the Kibana user's email address and OAuth token. The current release fixes this issue by masking these fields, so the Kibana logs do not display them. ( BZ#1925081 ) 1.2.11.5. Known issues Fluentd pods with the ruby-kafka-1.1.0 and fluent-plugin-kafka-0.13.1 gems are not compatible with Apache Kafka version 0.10.1.0. As a result, log forwarding to Kafka fails with a message: error_class=Kafka::DeliveryFailed error="Failed to send messages to flux-openshift-v4/1" The ruby-kafka-0.7 gem dropped support for Kafka 0.10 in favor of native support for Kafka 0.11. The ruby-kafka-1.0.0 gem added support for Kafka 2.3 and 2.4. The current version of OpenShift Logging tests and therefore supports Kafka version 2.4.1. To work around this issue, upgrade to a supported version of Apache Kafka. ( BZ#1907370 )
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/logging/cluster-logging-release-notes
Chapter 4. Evaluating the model
Chapter 4. Evaluating the model If you want to measure the improvements of your new model, you can compare its performance to the base model with the evaluation process. You can also chat with the model directly to qualitatively identify whether the new model has learned the knowledge you created. If you want more quantitative results of the model improvements, you can run the evaluation process in the RHEL AI CLI. 4.1. Evaluating your new model If you want to measure the improvements of your new model, you can compare its performance to the base model with the evaluation process. You can also chat with the model directly to qualitatively identify whether the new model has learned the knowledge you created. If you want more quantitative results of the model improvements, you can run the evaluation process in the RHEL AI CLI with the following procedure. Prerequisites You installed RHEL AI with the bootable container image. You created a custom qna.yaml file with skills or knowledge. You ran the synthetic data generation process. You trained the model using the RHEL AI training process. You downloaded the prometheus-8x7b-v2-0 judge model. You have root user access on your machine. Procedure Navigate to your working Git branch where you created your qna.yaml file. You can now run the evaluation process on different benchmarks. Each command needs the path to the trained samples model to evaluate. You can access these checkpoints in your ~/.local/share/instructlab/checkpoints folder. MMLU_BRANCH benchmark - If you want to measure how your knowledge contributions have impacted your model, run the mmlu_branch benchmark by executing the following command: USD ilab model evaluate --benchmark mmlu_branch --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint> \ --tasks-dir ~/.local/share/instructlab/datasets/<node-dataset> \ --base-model ~/.cache/instructlab/models/granite-7b-starter where <checkpoint> Specify the best scored checkpoint file generated during multi-phase training. <node-dataset> Specify the node_datasets directory, in the ~/.local/share/instructlab/datasets/ directory, with the same timestamps as the .jsonl files used for training the model. Example output # KNOWLEDGE EVALUATION REPORT ## BASE MODEL (SCORE) /home/user/.cache/instructlab/models/instructlab/granite-7b-lab/ (0.74/1.0) ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(0.78/1.0) ### IMPROVEMENTS (0.0 to 1.0): 1. tonsils: 0.74 -> 0.78 (+0.04) Optional: MT_BENCH_BRANCH benchmark - If you want to measure how your skills contributions have impacted your model, run the mt_bench_branch benchmark by executing the following command: USD ilab model evaluate \ --benchmark mt_bench_branch \ --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint> \ --judge-model ~/.cache/instructlab/models/prometheus-8x7b-v2-0 \ --branch <worker-branch> \ --base-branch <worker-branch> where <checkpoint> Specify the best scored checkpoint file generated during multi-phase training. <worker-branch> Specify the branch you used when adding data to your taxonomy tree. <num-gpus> Specify the number of GPUs you want to use for evaluation. Note Customizing skills is not currently supported on Red Hat Enterprise Linux AI version 1.2.
Example output # SKILL EVALUATION REPORT ## BASE MODEL (SCORE) /home/user/.cache/instructlab/models/instructlab/granite-7b-lab (5.78/10.0) ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(6.00/10.0) ### IMPROVEMENTS (0.0 to 10.0): 1. foundational_skills/reasoning/linguistics_reasoning/object_identification/qna.yaml: 4.0 -> 6.67 (+2.67) 2. foundational_skills/reasoning/theory_of_mind/qna.yaml: 3.12 -> 4.0 (+0.88) 3. foundational_skills/reasoning/linguistics_reasoning/logical_sequence_of_words/qna.yaml: 9.33 -> 10.0 (+0.67) 4. foundational_skills/reasoning/logical_reasoning/tabular/qna.yaml: 5.67 -> 6.33 (+0.67) 5. foundational_skills/reasoning/common_sense_reasoning/qna.yaml: 1.67 -> 2.33 (+0.67) 6. foundational_skills/reasoning/logical_reasoning/causal/qna.yaml: 5.67 -> 6.0 (+0.33) 7. foundational_skills/reasoning/logical_reasoning/general/qna.yaml: 6.6 -> 6.8 (+0.2) 8. compositional_skills/writing/grounded/editing/content/qna.yaml: 6.8 -> 7.0 (+0.2) 9. compositional_skills/general/synonyms/qna.yaml: 4.5 -> 4.67 (+0.17) ### REGRESSIONS (0.0 to 10.0): 1. foundational_skills/reasoning/unconventional_reasoning/lower_score_wins/qna.yaml: 5.67 -> 4.0 (-1.67) 2. foundational_skills/reasoning/mathematical_reasoning/qna.yaml: 7.33 -> 6.0 (-1.33) 3. foundational_skills/reasoning/temporal_reasoning/qna.yaml: 5.67 -> 4.67 (-1.0) ### NO CHANGE (0.0 to 10.0): 1. foundational_skills/reasoning/linguistics_reasoning/odd_one_out/qna.yaml (9.33) 2. compositional_skills/grounded/linguistics/inclusion/qna.yaml (6.5) Optional: You can manually evaluate each checkpoint using the MMLU and MT_BENCH benchmarks. You can evaluate any model against the standardized set of knowledge or skills, allowing you to compare the scores of your own model against other LLMs. If you do run multi-phase training, this process is done with single-phase training. MMLU - If you want to see the evaluation score of your new model against a standardized set of knowledge data, set the mmlu benchmark by running the following command: USD ilab model evaluate --benchmark mmlu --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_665 where <checkpoint> Specify one of the checkpoint files generated during multi-phase training. Example output # KNOWLEDGE EVALUATION REPORT ## MODEL (SCORE) /home/user/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_665 ### SCORES (0.0 to 1.0): mmlu_abstract_algebra - 0.31 mmlu_anatomy - 0.46 mmlu_astronomy - 0.52 mmlu_business_ethics - 0.55 mmlu_clinical_knowledge - 0.57 mmlu_college_biology - 0.56 mmlu_college_chemistry - 0.38 mmlu_college_computer_science - 0.46 ... MT_BENCH - If you want to see the evaluation score of your new model against a standardized set of skills, set the mt_bench benchmark by running the following command: USD ilab model evaluate --benchmark mt_bench --model ~/.local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665 where <checkpoint> Specify one of the checkpoint files generated during multi-phase training. Example output # SKILL EVALUATION REPORT ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(7.27/10.0) ### TURN ONE (0.0 to 10.0): 7.48 ### TURN TWO (0.0 to 10.0): 7.05
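If you want to compare several checkpoints, the loop below is a minimal shell sketch that runs the mmlu benchmark against each phase 2 checkpoint and keeps the reports. The loop itself and the samples_* naming pattern are assumptions based on the example checkpoint names shown above (such as samples_665), not part of the documented procedure; only the ilab model evaluate options shown in this chapter are used.

# Evaluate every phase 2 checkpoint with the mmlu benchmark and save each report
for ckpt in ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_*; do
  echo "Evaluating $ckpt"
  ilab model evaluate --benchmark mmlu --model "$ckpt" | tee "$(basename "$ckpt")_mmlu_report.txt"
done

You can then compare the saved KNOWLEDGE EVALUATION REPORT outputs to pick the best-scoring checkpoint before running the branch benchmarks against it.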
[ "ilab model evaluate --benchmark mmlu_branch --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint> --tasks-dir ~/.local/share/instructlab/datasets/<node-dataset> --base-model ~/.cache/instructlab/models/granite-7b-starter", "KNOWLEDGE EVALUATION REPORT ## BASE MODEL (SCORE) /home/user/.cache/instructlab/models/instructlab/granite-7b-lab/ (0.74/1.0) ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(0.78/1.0) ### IMPROVEMENTS (0.0 to 1.0): 1. tonsils: 0.74 -> 0.78 (+0.04)", "ilab model evaluate --benchmark mt_bench_branch --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint> --judge-model ~/.cache/instructlab/models/prometheus-8x7b-v2-0 --branch <worker-branch> --base-branch <worker-branch>", "SKILL EVALUATION REPORT ## BASE MODEL (SCORE) /home/user/.cache/instructlab/models/instructlab/granite-7b-lab (5.78/10.0) ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(6.00/10.0) ### IMPROVEMENTS (0.0 to 10.0): 1. foundational_skills/reasoning/linguistics_reasoning/object_identification/qna.yaml: 4.0 -> 6.67 (+2.67) 2. foundational_skills/reasoning/theory_of_mind/qna.yaml: 3.12 -> 4.0 (+0.88) 3. foundational_skills/reasoning/linguistics_reasoning/logical_sequence_of_words/qna.yaml: 9.33 -> 10.0 (+0.67) 4. foundational_skills/reasoning/logical_reasoning/tabular/qna.yaml: 5.67 -> 6.33 (+0.67) 5. foundational_skills/reasoning/common_sense_reasoning/qna.yaml: 1.67 -> 2.33 (+0.67) 6. foundational_skills/reasoning/logical_reasoning/causal/qna.yaml: 5.67 -> 6.0 (+0.33) 7. foundational_skills/reasoning/logical_reasoning/general/qna.yaml: 6.6 -> 6.8 (+0.2) 8. compositional_skills/writing/grounded/editing/content/qna.yaml: 6.8 -> 7.0 (+0.2) 9. compositional_skills/general/synonyms/qna.yaml: 4.5 -> 4.67 (+0.17) ### REGRESSIONS (0.0 to 10.0): 1. foundational_skills/reasoning/unconventional_reasoning/lower_score_wins/qna.yaml: 5.67 -> 4.0 (-1.67) 2. foundational_skills/reasoning/mathematical_reasoning/qna.yaml: 7.33 -> 6.0 (-1.33) 3. foundational_skills/reasoning/temporal_reasoning/qna.yaml: 5.67 -> 4.67 (-1.0) ### NO CHANGE (0.0 to 10.0): 1. foundational_skills/reasoning/linguistics_reasoning/odd_one_out/qna.yaml (9.33) 2. compositional_skills/grounded/linguistics/inclusion/qna.yaml (6.5)", "ilab model evaluate --benchmark mmlu --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_665", "KNOWLEDGE EVALUATION REPORT ## MODEL (SCORE) /home/user/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_665 ### SCORES (0.0 to 1.0): mmlu_abstract_algebra - 0.31 mmlu_anatomy - 0.46 mmlu_astronomy - 0.52 mmlu_business_ethics - 0.55 mmlu_clinical_knowledge - 0.57 mmlu_college_biology - 0.56 mmlu_college_chemistry - 0.38 mmlu_college_computer_science - 0.46", "ilab model evaluate --benchmark mt_bench --model ~/.local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665", "SKILL EVALUATION REPORT ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(7.27/10.0) ### TURN ONE (0.0 to 10.0): 7.48 ### TURN TWO (0.0 to 10.0): 7.05" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.2/html/creating_a_custom_llm_using_rhel_ai/evaluating_model
33.2. Supported DNS Zone Types
33.2. Supported DNS Zone Types IdM supports two DNS zone types: master and forward zones. Note This guide uses the BIND terminology for zone types which is different from the terminology used for Microsoft Windows DNS. Master zones in BIND serve the same purpose as forward lookup zones and reverse lookup zones in Microsoft Windows DNS. Forward zones in BIND serve the same purpose as conditional forwarders in Microsoft Windows DNS. Master DNS zones Master DNS zones contain authoritative DNS data and can accept dynamic DNS updates. This behavior is equivalent to the type master setting in standard BIND configuration. Master zones are managed using the ipa dnszone-* commands. In compliance with standard DNS rules, every master zone must contain SOA and NS records. IdM generates these records automatically when the DNS zone is created, but the NS records must be manually copied to the parent zone to create proper delegation. In accordance with standard BIND behavior, forwarding configuration specified for master zones only affects queries for names for which the server is not authoritative. Example 33.1. Example Scenario for DNS Forwarding The IdM server contains the test.example. master zone. This zone contains an NS delegation record for the sub.test.example. name. In addition, the test.example. zone is configured with the 192.0.2.254 forwarder IP address. A client querying the name nonexistent.test.example. receives the NXDomain answer, and no forwarding occurs because the IdM server is authoritative for this name. On the other hand, querying for the sub.test.example. name is forwarded to the configured forwarder 192.0.2.254 because the IdM server is not authoritative for this name. Forward DNS zones Forward DNS zones do not contain any authoritative data. All queries for names belonging to a forward DNS zone are forwarded to a specified forwarder. This behavior is equivalent to the type forward setting in standard BIND configuration. Forward zones are managed using the ipa dnsforwardzone-* commands.
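The following shell sketch shows how the scenario in Example 33.1 might be created with the ipa dnszone-* and ipa dnsforwardzone-* commands named above. It is illustrative only: the zone name forward.example., the delegation target ns.sub.test.example., and the forward policy values are assumptions, and exact option names can vary between IdM versions.

# Create the authoritative (master) zone and configure its forwarder
ipa dnszone-add test.example.
ipa dnszone-mod test.example. --forwarder=192.0.2.254 --forward-policy=first
# Delegate sub.test.example. by adding an NS record to the parent zone
ipa dnsrecord-add test.example. sub --ns-rec=ns.sub.test.example.
# Create a forward-only zone that sends all of its queries to the forwarder
ipa dnsforwardzone-add forward.example. --forwarder=192.0.2.254 --forward-policy=only

Because the forwarder configured on a master zone only affects names the server is not authoritative for, queries for sub.test.example. are forwarded to 192.0.2.254, while queries for names directly in test.example. are answered locally.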
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/supported-dns-zone-types
Chapter 8. OperatorGroup [operators.coreos.com/v1]
Chapter 8. OperatorGroup [operators.coreos.com/v1] Description OperatorGroup is the unit of multitenancy for OLM managed operators. It constrains the installation of operators in its namespace to a specified set of target namespaces. Type object Required metadata 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OperatorGroupSpec is the spec for an OperatorGroup resource. status object OperatorGroupStatus is the status for an OperatorGroupResource. 8.1.1. .spec Description OperatorGroupSpec is the spec for an OperatorGroup resource. Type object Property Type Description selector object Selector selects the OperatorGroup's target namespaces. serviceAccountName string ServiceAccountName is the admin specified service account which will be used to deploy operator(s) in this operator group. staticProvidedAPIs boolean Static tells OLM not to update the OperatorGroup's providedAPIs annotation targetNamespaces array (string) TargetNamespaces is an explicit set of namespaces to target. If it is set, Selector is ignored. upgradeStrategy string UpgradeStrategy defines the upgrade strategy for operators in the namespace. There are currently two supported upgrade strategies: Default: OLM will only allow clusterServiceVersions to move to the replacing phase from the succeeded phase. This effectively means that OLM will not allow operators to move to the version if an installation or upgrade has failed. TechPreviewUnsafeFailForward: OLM will allow clusterServiceVersions to move to the replacing phase from the succeeded phase or from the failed phase. Additionally, OLM will generate new installPlans when a subscription references a failed installPlan and the catalog has been updated with a new upgrade for the existing set of operators. WARNING: The TechPreviewUnsafeFailForward upgrade strategy is unsafe and may result in unexpected behavior or unrecoverable data loss unless you have deep understanding of the set of operators being managed in the namespace. 8.1.2. .spec.selector Description Selector selects the OperatorGroup's target namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.3. .spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. 
Type array 8.1.4. .spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.5. .status Description OperatorGroupStatus is the status for an OperatorGroupResource. Type object Required lastUpdated Property Type Description conditions array Conditions is an array of the OperatorGroup's conditions. conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } lastUpdated string LastUpdated is a timestamp of the last time the OperatorGroup's status was Updated. namespaces array (string) Namespaces is the set of target namespaces for the OperatorGroup. serviceAccountRef object ServiceAccountRef references the service account object specified. 8.1.6. .status.conditions Description Conditions is an array of the OperatorGroup's conditions. Type array 8.1.7. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. 
Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 8.1.8. .status.serviceAccountRef Description ServiceAccountRef references the service account object specified. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 8.2. API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1/operatorgroups GET : list objects of kind OperatorGroup /apis/operators.coreos.com/v1/namespaces/{namespace}/operatorgroups DELETE : delete collection of OperatorGroup GET : list objects of kind OperatorGroup POST : create an OperatorGroup /apis/operators.coreos.com/v1/namespaces/{namespace}/operatorgroups/{name} DELETE : delete an OperatorGroup GET : read the specified OperatorGroup PATCH : partially update the specified OperatorGroup PUT : replace the specified OperatorGroup /apis/operators.coreos.com/v1/namespaces/{namespace}/operatorgroups/{name}/status GET : read status of the specified OperatorGroup PATCH : partially update status of the specified OperatorGroup PUT : replace status of the specified OperatorGroup 8.2.1. /apis/operators.coreos.com/v1/operatorgroups Table 8.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. 
If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. 
This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind OperatorGroup Table 8.2. HTTP responses HTTP code Reponse body 200 - OK OperatorGroupList schema 401 - Unauthorized Empty 8.2.2. /apis/operators.coreos.com/v1/namespaces/{namespace}/operatorgroups Table 8.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 8.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of OperatorGroup Table 8.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OperatorGroup Table 8.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.8. HTTP responses HTTP code Reponse body 200 - OK OperatorGroupList schema 401 - Unauthorized Empty HTTP method POST Description create an OperatorGroup Table 8.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.10. Body parameters Parameter Type Description body OperatorGroup schema Table 8.11. HTTP responses HTTP code Reponse body 200 - OK OperatorGroup schema 201 - Created OperatorGroup schema 202 - Accepted OperatorGroup schema 401 - Unauthorized Empty 8.2.3. /apis/operators.coreos.com/v1/namespaces/{namespace}/operatorgroups/{name} Table 8.12. Global path parameters Parameter Type Description name string name of the OperatorGroup namespace string object name and auth scope, such as for teams and projects Table 8.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an OperatorGroup Table 8.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 8.15. Body parameters Parameter Type Description body DeleteOptions schema Table 8.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OperatorGroup Table 8.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 8.18. HTTP responses HTTP code Reponse body 200 - OK OperatorGroup schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OperatorGroup Table 8.19. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.20. Body parameters Parameter Type Description body Patch schema Table 8.21. HTTP responses HTTP code Response body 200 - OK OperatorGroup schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OperatorGroup Table 8.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.23. Body parameters Parameter Type Description body OperatorGroup schema Table 8.24. HTTP responses HTTP code Response body 200 - OK OperatorGroup schema 201 - Created OperatorGroup schema 401 - Unauthorized Empty 8.2.4. /apis/operators.coreos.com/v1/namespaces/{namespace}/operatorgroups/{name}/status Table 8.25. Global path parameters Parameter Type Description name string name of the OperatorGroup namespace string object name and auth scope, such as for teams and projects Table 8.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified OperatorGroup Table 8.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset. Table 8.28. HTTP responses HTTP code Response body 200 - OK OperatorGroup schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OperatorGroup Table 8.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.30. Body parameters Parameter Type Description body Patch schema Table 8.31. HTTP responses HTTP code Response body 200 - OK OperatorGroup schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OperatorGroup Table 8.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.33. Body parameters Parameter Type Description body OperatorGroup schema Table 8.34. HTTP responses HTTP code Response body 200 - OK OperatorGroup schema 201 - Created OperatorGroup schema 401 - Unauthorized Empty
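For readers who want to exercise these endpoints directly, a minimal shell sketch follows. The namespace my-namespace, the OperatorGroup name my-operatorgroup, and the manifest file my-operatorgroup.yaml are illustrative placeholders rather than values taken from this reference; the list request simply reuses the item path documented in section 8.2.3 with the trailing /{name} segment removed.
# List OperatorGroups in a namespace, requesting at most 5 items per page.
# If the response metadata carries a continue token, pass it back with the same
# limit, for example ?limit=5&continue=<token>, to fetch the next page.
oc get --raw "/apis/operators.coreos.com/v1/namespaces/my-namespace/operatorgroups?limit=5"
# Validate a create request without persisting it (a server-side dry run,
# corresponding to the dryRun query parameter on the POST method).
oc create -f my-operatorgroup.yaml --dry-run=server
# Read only the /status subresource of a single OperatorGroup (section 8.2.4).
oc get --raw "/apis/operators.coreos.com/v1/namespaces/my-namespace/operatorgroups/my-operatorgroup/status"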
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operatorhub_apis/operatorgroup-operators-coreos-com-v1
A.2.2. The dmsetup ls Command
A.2.2. The dmsetup ls Command You can list the device names of mapped devices with the dmsetup ls command. You can list devices that have at least one target of a specified type with the dmsetup ls --target target_type command. For other options of the dmsetup ls command, see the dmsetup man page. The following example shows the command to list the device names of currently configured mapped devices. The following example shows the command to list the device names of currently configured mirror mappings. LVM configurations that are stacked on multipath or other device mapper devices can be complex to sort out. The dmsetup ls command provides a --tree option that displays dependencies between devices as a tree, as in the following example.
[ "dmsetup ls testgfsvg-testgfslv3 (253:4) testgfsvg-testgfslv2 (253:3) testgfsvg-testgfslv1 (253:2) VolGroup00-LogVol01 (253:1) VolGroup00-LogVol00 (253:0)", "dmsetup ls --target mirror lock_stress-grant--02.1722 (253, 34) lock_stress-grant--01.1720 (253, 18) lock_stress-grant--03.1718 (253, 52) lock_stress-grant--02.1716 (253, 40) lock_stress-grant--03.1713 (253, 47) lock_stress-grant--02.1709 (253, 23) lock_stress-grant--01.1707 (253, 8) lock_stress-grant--01.1724 (253, 14) lock_stress-grant--03.1711 (253, 27)", "dmsetup ls --tree vgtest-lvmir (253:13) ├─vgtest-lvmir_mimage_1 (253:12) │ └─mpathep1 (253:8) │ └─mpathe (253:5) │ ├─ (8:112) │ └─ (8:64) ├─vgtest-lvmir_mimage_0 (253:11) │ └─mpathcp1 (253:3) │ └─mpathc (253:2) │ ├─ (8:32) │ └─ (8:16) └─vgtest-lvmir_mlog (253:4) └─mpathfp1 (253:10) └─mpathf (253:6) ├─ (8:128) └─ (8:80)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/dmsetup-ls
function::qs_wait
function::qs_wait Name function::qs_wait - Function to record enqueue requests Synopsis Arguments qname the name of the queue on which the request is enqueued Description This function records that a new request was enqueued for the given queue name.
[ "qs_wait(qname:string)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-qs-wait
Chapter 4. Advisories related to this release
Chapter 4. Advisories related to this release The following advisories have been issued to document bug fixes and CVE fixes included in this release: RHSA-2024:8116 RHSA-2024:8117 RHSA-2024:8118 RHSA-2024:8119 Revised on 2024-10-18 15:09:04 UTC
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.432/openjdk8-432-advisory_openjdk