title | content | commands | url |
---|---|---|---|
Chapter 6. Getting support for your clusters | Chapter 6. Getting support for your clusters 6.1. OpenShift Container Platform support For help with your Red Hat OpenShift Container Platform clusters, contact Red Hat Support . From here, you can: Open a new support case. Also see Submitting a support case in the OpenShift Container Platform documentation for instructions. View your open support cases: https://access.redhat.com/support/cases/#/case/list Open a live chat with support engineers Call or email a Red Hat Support expert Additional resources See Getting support in the OpenShift Container Platform documentation for more information. 6.2. OpenShift Dedicated support For questions about your existing Red Hat OpenShift Dedicated clusters, contact Red Hat Support . From here, you can: Open a new support case: https://access.redhat.com/support/cases/#/case/ View open support cases: https://access.redhat.com/support/cases/#/case/list Open a live chat with support engineers Call or email a Red Hat Support expert See Support in the OpenShift Dedicated documentation for more information. 6.3. Red Hat OpenShift Service on AWS (ROSA) support For questions about your existing Red Hat OpenShift Service on AWS (ROSA) clusters, contact Red Hat Support . From here, you can: Open a new support case: https://access.redhat.com/support/cases/#/case/ View open support cases: https://access.redhat.com/support/cases/#/case/list Open a live chat with support engineers Call or email a Red Hat Support expert See Getting support for Red Hat OpenShift Service on AWS in the ROSA documentation for more information. | null | https://docs.redhat.com/en/documentation/openshift_cluster_manager/1-latest/html/managing_clusters/assembly-getting-support |
CLI tools | CLI tools OpenShift Container Platform 4.10 Learning how to use the command-line tools for OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/cli_tools/index |
Chapter 7. Streams for Apache Kafka Proxy overview | Chapter 7. Streams for Apache Kafka Proxy overview Streams for Apache Kafka Proxy is an Apache Kafka protocol-aware ("Layer 7") proxy designed to enhance Kafka-based systems. Through its filter mechanism, it allows additional behavior to be introduced into a Kafka-based system without requiring changes to either your applications or the Kafka cluster itself. Built-in filters are provided as part of the solution. Functioning as an intermediary, the Streams for Apache Kafka Proxy mediates communication between a Kafka cluster and its clients. It takes on the responsibility of receiving, filtering, and forwarding messages. An API provides a convenient means for implementing custom logic within the proxy. Important This feature is a technology preview and is not intended for a production environment. For more information, see the release notes . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_on_openshift_overview/con-proxy-overview-str |
Chapter 1. Preparing to install on IBM Power Virtual Server | Chapter 1. Preparing to install on IBM Power Virtual Server The installation workflows documented in this section are for IBM Power(R) Virtual Server infrastructure environments. 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Requirements for installing OpenShift Container Platform on IBM Power Virtual Server Before installing OpenShift Container Platform on IBM Power(R) Virtual Server you must create a service account and configure an IBM Cloud(R) account. See Configuring an IBM Cloud(R) account for details about creating an account, configuring DNS and supported IBM Power(R) Virtual Server regions. You must manually manage your cloud credentials when installing a cluster to IBM Power(R) Virtual Server. Do this by configuring the Cloud Credential Operator (CCO) for manual mode before you install the cluster. 1.3. Choosing a method to install OpenShift Container Platform on IBM Power Virtual Server You can install OpenShift Container Platform on IBM Power(R) Virtual Server using installer-provisioned infrastructure. This process involves using an installation program to provision the underlying infrastructure for your cluster. Installing OpenShift Container Platform on IBM Power(R) Virtual Server using user-provisioned infrastructure is not supported at this time. See Installation process for more information about installer-provisioned installation processes. 1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on IBM Power(R) Virtual Server infrastructure that is provisioned by the OpenShift Container Platform installation program by using one of the following methods: Installing a customized cluster on IBM Power(R) Virtual Server : You can install a customized cluster on IBM Power(R) Virtual Server infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on IBM Power(R) Virtual Server into an existing VPC : You can install OpenShift Container Platform on IBM Power(R) Virtual Server into an existing Virtual Private Cloud (VPC). You can use this installation method if you have constraints set by the guidelines of your company, such as limits when creating new accounts or infrastructure. Installing a private cluster on IBM Power(R) Virtual Server : You can install a private cluster on IBM Power(R) Virtual Server. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet. Installing a cluster on IBM Power(R) Virtual Server in a restricted network : You can install OpenShift Container Platform on IBM Power(R) Virtual Server on installer-provisioned infrastructure by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. 1.4. Configuring the Cloud Credential Operator utility The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). To install a cluster on IBM Power(R) Virtual Server, you must set the CCO to manual mode as part of the installation process. 
To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Set a variable for the OpenShift Container Platform release image by running the following command: $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: $ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: $ oc image extract $CCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: $ chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: $ ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Rotating API keys 1.5. Next steps Configuring an IBM Cloud(R) account | [
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command."
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_ibm_power_virtual_server/preparing-to-install-on-ibm-power-vs |
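The ccoctl extraction steps above can be strung together into a single shell session. The following sketch only restates the documented commands with the prompts removed; it assumes, as the procedure does, that openshift-install and oc are in the current directory or on the PATH and that your pull secret is at ~/.pull-secret.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Resolve the release image from the installer binary (step 1 above)
RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')

# Look up the cloud-credential-operator image in that release (step 2 above)
CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' "$RELEASE_IMAGE" -a ~/.pull-secret)

# Extract the ccoctl binary and make it executable (steps 3 and 4 above)
oc image extract "$CCO_IMAGE" --file="/usr/bin/ccoctl" -a ~/.pull-secret
chmod 775 ccoctl

# Verification: the help output should list the ibmcloud subcommand
./ccoctl --help
```

If the help text does not appear, check that the architecture of the release image matches your workstation, as the note in the procedure points out.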
Introduction | Introduction This document provides information about installing, configuring, and managing the Load Balancer Add-On components. The Load Balancer Add-On provides load balancing through specialized routing techniques that dispatch traffic to a pool of servers. The audience of this document should have advanced working knowledge of Red Hat Enterprise Linux and understand the concepts of clusters, storage, and server computing. This document is organized as follows: Chapter 1, Load Balancer Add-On Overview Chapter 2, Initial Load Balancer Add-On Configuration Chapter 3, Setting Up Load Balancer Add-On Chapter 4, Configuring the Load Balancer Add-On with Piranha Configuration Tool Appendix A, Using the Load Balancer Add-On with the High Availability Add-On For more information about Red Hat Enterprise Linux 6, see the following resources: Red Hat Enterprise Linux Installation Guide - Provides information regarding installation of Red Hat Enterprise Linux 6. Red Hat Enterprise Linux Deployment Guide - Provides information regarding the deployment, configuration and administration of Red Hat Enterprise Linux 6. For more information about the Load Balancer Add-On and related products for Red Hat Enterprise Linux 6, see the following resources: High Availability Add-On Overview - Provides a high-level overview of the High Availability Add-On, Resilient Storage Add-On, and the Load Balancer Add-On. Configuring and Managing the High Availability Add-On Provides information about configuring and managing the High Availability Add-On (also known as Red Hat Cluster) for Red Hat Enterprise Linux 6. Logical Volume Manager Administration - Provides a description of the Logical Volume Manager (LVM), including information on running LVM in a clustered environment. Global File System 2: Configuration and Administration - Provides information about installing, configuring, and maintaining the Red Hat Resilient Storage Add-On (also known as Red Hat Global File System 2). DM Multipath - Provides information about using the Device-Mapper Multipath feature of Red Hat Enterprise Linux 6. Release Notes - Provides information about the current release of Red Hat products. This document and other Red Hat documents are available in HTML, PDF, and RPM versions online at https://access.redhat.com/documentation/en/red-hat-enterprise-linux/ . 1. Feedback If you spot a typo, or if you have thought of a way to make this manual better, we would love to hear from you. Please submit a report in Bugzilla ( http://bugzilla.redhat.com/bugzilla/ ) against the Product Red Hat Enterprise Linux 6 and the component doc-Load_Balancer_Administration . If you have a suggestion for improving the documentation, try to be as specific as possible. If you have found an error, include the section number and some of the surrounding text so we can find it easily. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/ch-intro-vsa |
Installation Guide | Installation Guide Red Hat JBoss Enterprise Application Platform 7.4 For Use with Red Hat JBoss Enterprise Application Platform 7.4 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/installation_guide/index |
Chapter 3. Installing Satellite Server | Chapter 3. Installing Satellite Server When you install Satellite Server from a connected network, you can obtain packages and receive updates directly from the Red Hat Content Delivery Network. Note You cannot register Satellite Server to itself. Use the following procedures to install Satellite Server, perform the initial configuration, and import subscription manifests. For more information on subscription manifests, see Managing Red Hat Subscriptions in the Content Management Guide . Note that the Satellite installation script is based on Puppet, which means that if you run the installation script more than once, it might overwrite any manual configuration changes. To avoid this and determine which future changes apply, use the --noop argument when you run the installation script. This argument ensures that no actual changes are made. Potential changes are written to /var/log/foreman-installer/satellite.log . Files are always backed up and so you can revert any unwanted changes. For example, in the foreman-installer logs, you can see an entry similar to the following about Filebucket: You can restore the file as follows: 3.1. Configuring the HTTP Proxy to Connect to Red Hat CDN Prerequisites Your network gateway and the HTTP proxy must allow access to the following hosts: Host name Port Protocol subscription.rhsm.redhat.com 443 HTTPS cdn.redhat.com 443 HTTPS *.akamaiedge.net 443 HTTPS cert.console.redhat.com (if using Red Hat Insights) 443 HTTPS api.access.redhat.com (if using Red Hat Insights) 443 HTTPS cert-api.access.redhat.com (if using Red Hat Insights) 443 HTTPS Satellite Server uses SSL to communicate with the Red Hat CDN securely. Use of an SSL interception proxy interferes with this communication. These hosts must be whitelisted on the proxy. For a list of IP addresses used by the Red Hat CDN (cdn.redhat.com), see the Knowledgebase article Public CIDR Lists for Red Hat on the Red Hat Customer Portal. To configure the subscription-manager with the HTTP proxy, follow the procedure below. Procedure On Satellite Server, complete the following details in the /etc/rhsm/rhsm.conf file: 3.2. Registering to Red Hat Subscription Management Registering the host to Red Hat Subscription Management enables the host to subscribe to and consume content for any subscriptions available to the user. This includes content such as Red Hat Enterprise Linux and Red Hat Satellite. For Red Hat Enterprise Linux 7, it also provides access to Red Hat Software Collections (RHSCL). Procedure Register your system with the Red Hat Content Delivery Network, entering your Customer Portal user name and password when prompted: The command displays output similar to the following: 3.3. Attaching the Satellite Infrastructure Subscription Note Skip this step if you have SCA enabled on Red Hat Customer Portal. There is no requirement of attaching the Red Hat Satellite Infrastructure Subscription to the Satellite Server using subscription-manager. For more information about SCA, see Simple Content Access . After you have registered Satellite Server, you must identify your subscription Pool ID and attach an available subscription. The Red Hat Satellite Infrastructure subscription provides access to the Red Hat Satellite and Red Hat Enterprise Linux content. For Red Hat Enterprise Linux 7, it also provides access to Red Hat Software Collections (RHSCL). This is the only subscription required. 
Red Hat Satellite Infrastructure is included with all subscriptions that include Satellite, formerly known as Smart Management. For more information, see Satellite Infrastructure Subscriptions MCT3718 MCT3719 in the Red Hat Knowledgebase . Subscriptions are classified as available if they are not already attached to a system. If you are unable to find an available Satellite subscription, see the Red Hat Knowledgebase solution How do I figure out which subscriptions have been consumed by clients registered under Red Hat Subscription Manager? to run a script to see if another system is consuming your subscription. Procedure Identify the Pool ID of the Satellite Infrastructure subscription: The command displays output similar to the following: Make a note of the subscription Pool ID. Your subscription Pool ID is different from the example provided. Attach the Satellite Infrastructure subscription to the base operating system that your Satellite Server is running on. If SCA is enabled on Satellite Server, you can skip this step: The command displays output similar to the following: Optional: Verify that the Satellite Infrastructure subscription is attached: 3.4. Configuring Repositories Use this procedure to enable the repositories that are required to install Satellite Server. Choose from the available list which operating system and version you are installing on: Red Hat Enterprise Linux 8 Red Hat Enterprise Linux 7 3.4.1. Red Hat Enterprise Linux 8 Disable all repositories: Enable the following repositories: Enable the module: Note Enablement of the module satellite:el8 warns about a conflict with postgresql:10 and ruby:2.5 as these modules are set to the default module versions on Red Hat Enterprise Linux 8. The module satellite:el8 has a dependency for the modules postgresql:12 and ruby:2.7 that will be enabled with the satellite:el8 module. These warnings do not cause installation process failure, hence can be ignored safely. For more information about modules and lifecycle streams on Red Hat Enterprise Linux 8, see Red Hat Enterprise Linux Application Streams Life Cycle . 3.4.2. Red Hat Enterprise Linux 7 Disable all repositories: Enable the following repositories: Note If you are installing Satellite Server as a virtual machine hosted on Red Hat Virtualization, you must also enable the Red Hat Common repository, and install Red Hat Virtualization guest agents and drivers. For more information, see Installing the Guest Agents and Drivers on Red Hat Enterprise Linux in the Virtual Machine Management Guide . 3.5. Installing Satellite Server Packages Red Hat Enterprise Linux 8 Red Hat Enterprise Linux 7 3.5.1. Red Hat Enterprise Linux 8 Procedure Update all packages: Install Satellite Server packages: 3.5.2. Red Hat Enterprise Linux 7 Update all packages: Install Satellite Server packages: 3.6. Synchronizing the System Clock With chronyd To minimize the effects of time drift, you must synchronize the system clock on the base operating system on which you want to install Satellite Server with Network Time Protocol (NTP) servers. If the base operating system clock is configured incorrectly, certificate verification might fail. For more information about the chrony suite, see Using the Chrony suite to configure NTP in Red Hat Enterprise Linux 8 Configuring basic system settings , and Configuring NTP Using the chrony Suite in the Red Hat Enterprise Linux 7 System Administrator's Guide . Procedure Install the chrony package: Start and enable the chronyd service: 3.7. 
Installing the SOS Package on the Base Operating System Install the sos package on the base operating system so that you can collect configuration and diagnostic information from a Red Hat Enterprise Linux system. You can also use it to provide the initial system analysis, which is required when opening a service request with Red Hat Technical Support. For more information on using sos , see the Knowledgebase solution What is a sosreport and how to create one in Red Hat Enterprise Linux 4.6 and later? on the Red Hat Customer Portal. Procedure Install the sos package: 3.8. Configuring Satellite Server Install Satellite Server using the satellite-installer installation script. This method is performed by running the installation script with one or more command options. The command options override the corresponding default initial configuration options and are recorded in the Satellite answer file. You can run the script as often as needed to configure any necessary options. Note Depending on the options that you use when running the Satellite installer, the configuration can take several minutes to complete. 3.8.1. Configuring Satellite Installation This initial configuration procedure creates an organization, location, user name, and password. After the initial configuration, you can create additional organizations and locations if required. The initial configuration also installs PostgreSQL databases on the same server. The installation process can take tens of minutes to complete. If you are connecting remotely to the system, use a utility such as tmux that allows suspending and reattaching a communication session so that you can check the installation progress in case you become disconnected from the remote system. If you lose connection to the shell where the installation command is running, see the log at /var/log/foreman-installer/satellite.log to determine if the process completed successfully. Considerations Use the satellite-installer --scenario satellite --help command to display the available options and any default values. If you do not specify any values, the default values are used. Specify a meaningful value for the option: --foreman-initial-organization . This can be your company name. An internal label that matches the value is also created and cannot be changed afterwards. If you do not specify a value, an organization called Default Organization with the label Default_Organization is created. You can rename the organization name but not the label. Remote Execution is the primary method of managing packages on Content Hosts. If you want to use the deprecated Katello Agent instead of Remote Execution SSH, use the --foreman-proxy-content-enable-katello-agent=true option to enable it. The same option should be given on any Capsule Server as well as Satellite Server. By default, all configuration files configured by the installer are managed by Puppet. When satellite-installer runs, it overwrites any manual changes to the Puppet managed files with the initial values. If you want to manage DNS files and DHCP files manually, use the --foreman-proxy-dns-managed=false and --foreman-proxy-dhcp-managed=false options so that Puppet does not manage the files related to the respective services. For more information on how to apply custom configuration on other services, see Applying Custom Configuration to Satellite . 
Procedure Enter the following command with any additional options that you want to use: The script displays its progress and writes logs to /var/log/foreman-installer/satellite.log . 3.9. Importing a Red Hat Subscription Manifest into Satellite Server Use the following procedure to import a Red Hat subscription manifest into Satellite Server. Prerequisites You must have a Red Hat subscription manifest file exported from the Customer Portal. For more information, see Creating and Managing Manifests in Using Red Hat Subscription Management . Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Content > Subscriptions and click Manage Manifest . In the Manage Manifest window, click Browse . Navigate to the location that contains the Red Hat subscription manifest file, then click Open . If the Manage Manifest window does not close automatically, click Close to return to the Subscriptions window. CLI procedure Copy the Red Hat subscription manifest file from your client to Satellite Server: Log in to Satellite Server as the root user and import the Red Hat subscription manifest file: You can now enable repositories and import Red Hat content. For more information, see Importing Content in the Content Management guide. | [
"/Stage[main]/Dhcp/File[/etc/dhcp/dhcpd.conf]: Filebucketed /etc/dhcp/dhcpd.conf to puppet with sum 622d9820b8e764ab124367c68f5fa3a1",
"puppet filebucket -l restore /etc/dhcp/dhcpd.conf 622d9820b8e764ab124367c68f5fa3a1",
"an http proxy server to use (enter server FQDN) proxy_hostname = myproxy.example.com port for http proxy server proxy_port = 8080 user name for authenticating to an http proxy, if needed proxy_user = password for basic http proxy auth, if needed proxy_password =",
"subscription-manager register",
"subscription-manager register Username: user_name Password: The system has been registered with ID: 541084ff2-44cab-4eb1-9fa1-7683431bcf9a",
"subscription-manager list --all --available --matches 'Red Hat Satellite Infrastructure Subscription'",
"Subscription Name: Red Hat Satellite Infrastructure Subscription Provides: Red Hat Satellite Red Hat Software Collections (for RHEL Server) Red Hat CodeReady Linux Builder for x86_64 Red Hat Ansible Engine Red Hat Enterprise Linux Load Balancer (for RHEL Server) Red Hat Red Hat Software Collections (for RHEL Server) Red Hat Enterprise Linux Server Red Hat Satellite Capsule Red Hat Enterprise Linux for x86_64 Red Hat Enterprise Linux High Availability for x86_64 Red Hat Satellite Red Hat Satellite 5 Managed DB Red Hat Satellite 6 Red Hat Discovery SKU: MCT3719 Contract: 11878983 Pool ID: 8a85f99968b92c3701694ee998cf03b8 Provides Management: No Available: 1 Suggested: 1 Service Level: Premium Service Type: L1-L3 Subscription Type: Standard Ends: 03/04/2020 System Type: Physical",
"subscription-manager attach --pool= pool_id",
"Successfully attached a subscription for: Red Hat Satellite Infrastructure Subscription",
"subscription-manager list --consumed",
"subscription-manager repos --disable \"*\"",
"subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms --enable=satellite-6.11-for-rhel-8-x86_64-rpms --enable=satellite-maintenance-6.11-for-rhel-8-x86_64-rpms",
"dnf module enable satellite:el8",
"subscription-manager repos --disable \"*\"",
"subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-server-rhscl-7-rpms --enable=rhel-7-server-ansible-2.9-rpms --enable=rhel-7-server-satellite-6.11-rpms --enable=rhel-7-server-satellite-maintenance-6.11-rpms",
"dnf update",
"dnf install satellite",
"yum update",
"yum install satellite",
"yum install chrony",
"systemctl start chronyd systemctl enable chronyd",
"yum install sos",
"satellite-installer --scenario satellite --foreman-initial-organization \" My_Organization \" --foreman-initial-location \" My_Location \" --foreman-initial-admin-username admin_user_name --foreman-initial-admin-password admin_password",
"scp ~/ manifest_file .zip root@ satellite.example.com :~/.",
"hammer subscription upload --file ~/ manifest_file .zip --organization \" My_Organization \""
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/installing_satellite_server_in_a_connected_network_environment/installing_server_connected_satellite |
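The Satellite chapter above explains that the installer is Puppet-based and that the --noop argument records potential changes without applying them. A small sketch of that dry-run-and-restore workflow, reusing only commands and paths already shown in this chapter (the filebucket hash is the example value from the log excerpt):

```bash
# Re-run the installer in no-op mode; nothing is changed on disk
satellite-installer --scenario satellite --noop

# Potential changes are written to the installer log
less /var/log/foreman-installer/satellite.log

# If a manual edit was overwritten on a real run, restore it from the Puppet filebucket
puppet filebucket -l restore /etc/dhcp/dhcpd.conf 622d9820b8e764ab124367c68f5fa3a1
```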
Chapter 10. Run Red Hat JBoss Data Grid in Library Mode (Single-Node Setup) | Chapter 10. Run Red Hat JBoss Data Grid in Library Mode (Single-Node Setup) 10.1. Create a Main Method in the Quickstart Class Create a new Quickstart class by following the outlined steps: Prerequisites These quickstarts use the Infinispan quickstarts located at https://github.com/infinispan/infinispan-quickstart . The following procedure uses the infinispan-quickstart/embedded-cache quickstart. Procedure 10.1. Create a Main Method in the Quickstart Class Create the Quickstart.java File Create a file called Quickstart.java at your project's location. Add the Quickstart Class Add the following class and method to the Quickstart.java file: Copy Dependencies and Compile Java Classes Use the following command to copy all project dependencies to a directory and compile the Java classes from your project: Run the Main Method Use the following command to run the main method: | [
"package com.mycompany.app; import org.infinispan.manager.DefaultCacheManager import org.infinispan.Cache public class Quickstart { public static void main(String args[]) throws Exception { Cache<Object, Object> cache = new DefaultCacheManager().getCache(); } }",
"mvn clean compile dependency:copy-dependencies -DstripVersion",
"java -cp target/classes/:target/dependency/* com.mycompany.app.Quickstart"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/chap-run_red_hat_jboss_data_grid_in_library_mode_single-node_setup |
Chapter 6. Configuring the system and running tests by using Cockpit | Chapter 6. Configuring the system and running tests by using Cockpit To run the certification tests by using Cockpit you need to upload the test plan to the HUT first. After running the tests, download the results and review them. This chapter contains the following topics: Section 6.1, "Setting up the Cockpit server" Section 6.2, "Adding the host under test to Cockpit" Section 6.3, "Getting authorization on the Red Hat SSO network" Section 6.4, "Downloading test plans in Cockpit from Red Hat certification portal" Section 6.5, "Using the test plan to prepare the host under test for testing" Section 6.6, "Running the certification tests using Cockpit" Section 6.7, "Reviewing and downloading the test results file" Section 6.8, "Submitting the test results from Cockpit to the Red Hat Certification Portal" Section 6.9, "Uploading the results file of the executed test plan to Red Hat Certification portal" 6.1. Setting up the Cockpit server Cockpit is a RHEL tool that lets you change the configuration of your systems as well as monitor their resources from a user-friendly web-based interface. Note You must set up Cockpit on a new system, which is separate from the host under test. Ensure that the Cockpit has access to the host under test. For more information on installing and configuring Cockpit, see Getting Started using the RHEL web console on RHEL 8, Getting Started using the RHEL web console on RHEL 9 and Introducing Cockpit . Prerequisites The Cockpit server has RHEL version 8 or 9 installed. You have installed the Cockpit plugin on your system. You have enabled the Cockpit service. Procedure Log in to the system where you installed Cockpit. Install the Cockpit RPM provided by the Red Hat Certification team. You must run Cockpit on port 9090. 6.2. Adding the host under test to Cockpit Adding the host under test (HUT) to Cockpit lets the two systems communicate by using passwordless SSH. Prerequisites You have the IP address or hostname of the HUT. Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser to launch the Cockpit web application. Enter the username and password, and then click Login . Click the down-arrow on the logged-in cockpit user name-> Add new host . The dialog box displays. In the Host field, enter the IP address or hostname of the system. In the User name field, enter the name you want to assign to this system. Optional: Select the predefined color or select a new color of your choice for the host added. Click Add . Click Accept key and connect to let Cockpit communicate with the system through passwordless SSH. Enter the Password . Select the Authorize SSH Key checkbox. Click Log in . Verification On the left panel, click Tools -> Red Hat Certification . Verify that the system you just added displays under the Hosts section on the right. 6.3. Getting authorization on the Red Hat SSO network Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. On the Cockpit homepage, click Authorize , to establish connectivity with the Red Hat system. The Log in to your Red Hat account page displays. Enter your credentials and click . The Grant access to rhcert-cwe page displays. Click Grant access . A confirmation message displays a successful device login. You are now connected to the Cockpit web application. 6.4. 
Downloading test plans in Cockpit from Red Hat certification portal For Non-authorized or limited access users: To download the test plan, see Downloading the test plan from Red Hat Certification portal . For authorized users: Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. Click the Test Plans tab. A list of Recent Certification Support Cases will appear. Click Download Test Plan . A message displays confirming the successful addition of the test plan. The downloaded test plan will be listed under the File Name of the Test Plan Files section. 6.5. Using the test plan to prepare the host under test for testing Provisioning the host under test performs a number of operations, such as setting up passwordless SSH communication with the cockpit, installing the required packages on your system based on the certification type, and creating a final test plan to run, which is a list of common tests taken from both the test plan provided by Red Hat and tests generated on discovering the system requirements. For instance, required hardware packages will be installed if the test plan is designed for certifying a hardware product. Prerequisites You have downloaded the test plan provided by Red Hat . Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. Click the Hosts tab, and then click the host under test on which you want to run the tests. Click Provision . A dialog box appears. Click Upload, and then select the new test plan .xml file. Then, click . A successful upload message is displayed. Optionally, if you want to reuse the previously uploaded test plan, then select it again to reupload. Note During the certification process, if you receive a redesigned test plan for the ongoing product certification, then you can upload it following the step. However, you must run rhcert-clean all in the Terminal tab before proceeding. In the Role field, select Host under test and click Submit . By default, the file is uploaded to path:`/var/rhcert/plans/<testplanfile.xml>` 6.6. Running the certification tests using Cockpit Prerequisites You have prepared the host under test . Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and click Login . Select Tools Red Hat Certification in the left panel. Click the Hosts tab and click on the host on which you want to run the tests. Click the Terminal tab and select Run. A list of recommended tests based on the test plan uploaded displays. The final test plan to run is a list of common tests taken from both the test plan provided by Red Hat and tests generated on discovering the system requirements. When prompted, choose whether to run each test by typing yes or no . You can also run particular tests from the list by typing select . 6.7. Reviewing and downloading the test results file Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. Click the Result Files tab to view the test results generated. 
Optional: Click Preview to view the results of each test. Click Download beside the result files. By default, the result file is saved as /var/rhcert/save/hostname-date-time.xml . 6.8. Submitting the test results from Cockpit to the Red Hat Certification Portal Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. Click the Result Files tab and select the case number from the displayed list. For authorized users, click Submit . A message displays confirming the successful upload of the test result file. For non-authorized users, see Uploading the results file of the executed test plan to Red Hat Certification portal . The test result file of the executed test plan will be uploaded to the Red Hat Certification portal. 6.9. Uploading the results file of the executed test plan to Red Hat Certification portal Prerequisites You have downloaded the test results file from either Cockpit or the HUT directly. Procedure Log in to Red Hat Certification portal . On the homepage, enter the product case number in the search bar. Select the case number from the list that is displayed. On the Summary tab, under the Files section, click Upload . Next steps Red Hat will review the results file you submitted and suggest the next steps. For more information, visit Red Hat Certification portal . | [
"yum install redhat-certification-cockpit"
] | https://docs.redhat.com/en/documentation/red_hat_certified_cloud_and_service_provider_certification/2025/html/red_hat_certified_cloud_and_service_provider_certification_workflow_guide/assembly_cloud-wf-configuring-system-and-running-tests-by-using-Cockpit_cloud-instance-wf-setting-test-environment |
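Section 6.1 requires Cockpit to run on port 9090 on a system separate from the host under test. The sketch below shows one way to stand that server up on a stock RHEL 8 or 9 machine; the certification plugin package name comes from the procedure above, while the firewalld step is an assumption that applies only if a default firewall is active.

```bash
# Install and start the web console (cockpit ships in the RHEL base repositories)
dnf install cockpit
systemctl enable --now cockpit.socket

# Assumption: firewalld with the default zone is running on the Cockpit server
firewall-cmd --permanent --add-service=cockpit
firewall-cmd --reload

# Install the Red Hat certification plugin named in the procedure above
dnf install redhat-certification-cockpit
```

After this, http://<Cockpit_system_IP>:9090/ should serve the login page used throughout the rest of the chapter.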
Part III. Technology Previews | Part III. Technology Previews This part provides a list of all Technology Previews available in Red Hat Enterprise Linux 7.5. For information on Red Hat scope of support for Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/technology-previews |
Chapter 3. Endpoint authentication mechanisms | Chapter 3. Endpoint authentication mechanisms Data Grid Server can use custom SASL and HTTP authentication mechanisms for Hot Rod and REST endpoints. 3.1. Data Grid Server authentication Authentication restricts user access to endpoints as well as the Data Grid Console and Command Line Interface (CLI). Data Grid Server includes a "default" security realm that enforces user authentication. Default authentication uses a property realm with user credentials stored in the server/conf/users.properties file. Data Grid Server also enables security authorization by default so you must assign users with permissions stored in the server/conf/groups.properties file. Tip Use the user create command with the Command Line Interface (CLI) to add users and assign permissions. Run user create --help for examples and more information. 3.2. Configuring Data Grid Server authentication mechanisms You can explicitly configure Hot Rod and REST endpoints to use specific authentication mechanisms. Configuring authentication mechanisms is required only if you need to explicitly override the default mechanisms for a security realm. Note Each endpoint section in your configuration must include hotrod-connector and rest-connector elements or fields. For example, if you explicitly declare a hotrod-connector you must also declare a rest-connector even if it does not configure an authentication mechanism. Prerequisites Add security realms to your Data Grid Server configuration as required. Procedure Open your Data Grid Server configuration for editing. Add an endpoint element or field and specify the security realm that it uses with the security-realm attribute. Add a hotrod-connector element or field to configure the Hot Rod endpoint. Add an authentication element or field. Specify SASL authentication mechanisms for the Hot Rod endpoint to use with the sasl mechanisms attribute. If applicable, specify SASL quality of protection settings with the qop attribute. Specify the Data Grid Server identity with the server-name attribute if necessary. Add a rest-connector element or field to configure the REST endpoint. Add an authentication element or field. Specify HTTP authentication mechanisms for the REST endpoint to use with the mechanisms attribute. Save the changes to your configuration. 
Authentication mechanism configuration The following configuration specifies SASL mechanisms for the Hot Rod endpoint to use for authentication: XML <server xmlns="urn:infinispan:server:15.0"> <endpoints> <endpoint socket-binding="default" security-realm="my-realm"> <hotrod-connector> <authentication> <sasl mechanisms="SCRAM-SHA-512 SCRAM-SHA-384 SCRAM-SHA-256 SCRAM-SHA-1 DIGEST-SHA-512 DIGEST-SHA-384 DIGEST-SHA-256 DIGEST-SHA DIGEST-MD5 PLAIN" server-name="infinispan" qop="auth"/> </authentication> </hotrod-connector> <rest-connector> <authentication mechanisms="DIGEST BASIC"/> </rest-connector> </endpoint> </endpoints> </server> JSON { "server": { "endpoints": { "endpoint": { "socket-binding": "default", "security-realm": "my-realm", "hotrod-connector": { "authentication": { "security-realm": "default", "sasl": { "server-name": "infinispan", "mechanisms": ["SCRAM-SHA-512", "SCRAM-SHA-384", "SCRAM-SHA-256", "SCRAM-SHA-1", "DIGEST-SHA-512", "DIGEST-SHA-384", "DIGEST-SHA-256", "DIGEST-SHA", "DIGEST-MD5", "PLAIN"], "qop": ["auth"] } } }, "rest-connector": { "authentication": { "mechanisms": ["DIGEST", "BASIC"], "security-realm": "default" } } } } } } YAML server: endpoints: endpoint: socketBinding: "default" securityRealm: "my-realm" hotrodConnector: authentication: securityRealm: "default" sasl: serverName: "infinispan" mechanisms: - "SCRAM-SHA-512" - "SCRAM-SHA-384" - "SCRAM-SHA-256" - "SCRAM-SHA-1" - "DIGEST-SHA-512" - "DIGEST-SHA-384" - "DIGEST-SHA-256" - "DIGEST-SHA" - "DIGEST-MD5" - "PLAIN" qop: - "auth" restConnector: authentication: mechanisms: - "DIGEST" - "BASIC" securityRealm: "default" 3.2.1. Disabling authentication In local development environments or on isolated networks you can configure Data Grid to allow unauthenticated client requests. When you disable user authentication you should also disable authorization in your Data Grid security configuration. Procedure Open your Data Grid Server configuration for editing. Remove the security-realm attribute from the endpoints element or field. Remove any authorization elements from the security configuration for the cache-container and each cache configuration. Save the changes to your configuration. XML <server xmlns="urn:infinispan:server:15.0"> <endpoints socket-binding="default"/> </server> JSON { "server": { "endpoints": { "endpoint": { "socket-binding": "default" } } } } YAML server: endpoints: endpoint: socketBinding: "default" 3.3. Data Grid Server authentication mechanisms Data Grid Server automatically configures endpoints with authentication mechanisms that match your security realm configuration. For example, if you add a Kerberos security realm then Data Grid Server enables the GSSAPI and GS2-KRB5 authentication mechanisms for the Hot Rod endpoint. Important Currently, you cannot use the Lightweight Directory Access Protocol (LDAP) protocol with the DIGEST or SCRAM authentication mechanisms, because these mechanisms require access to specific hashed passwords. 
Hot Rod endpoints Data Grid Server enables the following SASL authentication mechanisms for Hot Rod endpoints when your configuration includes the corresponding security realm: Security realm SASL authentication mechanism Property realms and LDAP realms SCRAM , DIGEST Token realms OAUTHBEARER Trust realms EXTERNAL Kerberos identities GSSAPI , GS2-KRB5 SSL/TLS identities PLAIN REST endpoints Data Grid Server enables the following HTTP authentication mechanisms for REST endpoints when your configuration includes the corresponding security realm: Security realm HTTP authentication mechanism Property realms and LDAP realms DIGEST Token realms BEARER_TOKEN Trust realms CLIENT_CERT Kerberos identities SPNEGO SSL/TLS identities BASIC Memcached endpoints Data Grid Server enables the following SASL authentication mechanisms for Memcached binary protocol endpoints when your configuration includes the corresponding security realm: Security realm SASL authentication mechanism Property realms and LDAP realms SCRAM , DIGEST Token realms OAUTHBEARER Trust realms EXTERNAL Kerberos identities GSSAPI, GS2-KRB5 SSL/TLS identities PLAIN Data Grid Server enables authentication on Memcached text protocol endpoints only with security realms which support password-based authentication: Security realm Memcached text authentication Property realms and LDAP realms Yes Token realms No Trust realms No Kerberos identities No SSL/TLS identities No RESP endpoints Data Grid Server enables authentication on RESP endpoints only with security realms which support password-based authentication: Security realm RESP authentication Property realms and LDAP realms Yes Token realms No Trust realms No Kerberos identities No SSL/TLS identities No 3.3.1. SASL authentication mechanisms Data Grid Server supports the following SASL authentications mechanisms with Hot Rod and Memcached binary protocol endpoints: Authentication mechanism Description Security realm type Related details PLAIN Uses credentials in plain-text format. You should use PLAIN authentication with encrypted connections only. Property realms and LDAP realms Similar to the BASIC HTTP mechanism. DIGEST-* Uses hashing algorithms and nonce values. Hot Rod connectors support DIGEST-MD5 , DIGEST-SHA , DIGEST-SHA-256 , DIGEST-SHA-384 , and DIGEST-SHA-512 hashing algorithms, in order of strength. Property realms and LDAP realms Similar to the Digest HTTP mechanism. SCRAM-* Uses salt values in addition to hashing algorithms and nonce values. Hot Rod connectors support SCRAM-SHA , SCRAM-SHA-256 , SCRAM-SHA-384 , and SCRAM-SHA-512 hashing algorithms, in order of strength. Property realms and LDAP realms Similar to the Digest HTTP mechanism. GSSAPI Uses Kerberos tickets and requires a Kerberos Domain Controller. You must add a corresponding kerberos server identity in the realm configuration. In most cases, you also specify an ldap-realm to provide user membership information. Kerberos realms Similar to the SPNEGO HTTP mechanism. GS2-KRB5 Uses Kerberos tickets and requires a Kerberos Domain Controller. You must add a corresponding kerberos server identity in the realm configuration. In most cases, you also specify an ldap-realm to provide user membership information. Kerberos realms Similar to the SPNEGO HTTP mechanism. EXTERNAL Uses client certificates. Trust store realms Similar to the CLIENT_CERT HTTP mechanism. OAUTHBEARER Uses OAuth tokens and requires a token-realm configuration. Token realms Similar to the BEARER_TOKEN HTTP mechanism. 3.3.2. 
SASL quality of protection (QoP) If SASL mechanisms support integrity and privacy protection (QoP) settings, you can add them to your Hot Rod and Memcached endpoint configuration with the qop attribute. QoP setting Description auth Authentication only. auth-int Authentication with integrity protection. auth-conf Authentication with integrity and privacy protection. 3.3.3. SASL policies SASL policies provide fine-grain control over Hot Rod and Memcached authentication mechanisms. Tip Data Grid cache authorization restricts access to caches based on roles and permissions. Configure cache authorization and then set <no-anonymous value=false /> to allow anonymous login and delegate access logic to cache authorization. Policy Description Default value forward-secrecy Use only SASL mechanisms that support forward secrecy between sessions. This means that breaking into one session does not automatically provide information for breaking into future sessions. false pass-credentials Use only SASL mechanisms that require client credentials. false no-plain-text Do not use SASL mechanisms that are susceptible to simple plain passive attacks. false no-active Do not use SASL mechanisms that are susceptible to active, non-dictionary, attacks. false no-dictionary Do not use SASL mechanisms that are susceptible to passive dictionary attacks. false no-anonymous Do not use SASL mechanisms that accept anonymous logins. true SASL policy configuration In the following configuration the Hot Rod endpoint uses the GSSAPI mechanism for authentication because it is the only mechanism that complies with all SASL policies: XML <server xmlns="urn:infinispan:server:15.0"> <endpoints> <endpoint socket-binding="default" security-realm="default"> <hotrod-connector> <authentication> <sasl mechanisms="PLAIN DIGEST-MD5 GSSAPI EXTERNAL" server-name="infinispan" qop="auth" policy="no-active no-plain-text"/> </authentication> </hotrod-connector> <rest-connector/> </endpoint> </endpoints> </server> JSON { "server": { "endpoints" : { "endpoint" : { "socket-binding" : "default", "security-realm" : "default", "hotrod-connector" : { "authentication" : { "sasl" : { "server-name" : "infinispan", "mechanisms" : [ "PLAIN","DIGEST-MD5","GSSAPI","EXTERNAL" ], "qop" : [ "auth" ], "policy" : [ "no-active","no-plain-text" ] } } }, "rest-connector" : "" } } } } YAML server: endpoints: endpoint: socketBinding: "default" securityRealm: "default" hotrodConnector: authentication: sasl: serverName: "infinispan" mechanisms: - "PLAIN" - "DIGEST-MD5" - "GSSAPI" - "EXTERNAL" qop: - "auth" policy: - "no-active" - "no-plain-text" restConnector: ~ 3.3.4. HTTP authentication mechanisms Data Grid Server supports the following HTTP authentication mechanisms with REST endpoints: Authentication mechanism Description Security realm type Related details BASIC Uses credentials in plain-text format. You should use BASIC authentication with encrypted connections only. Property realms and LDAP realms Corresponds to the Basic HTTP authentication scheme and is similar to the PLAIN SASL mechanism. DIGEST Uses hashing algorithms and nonce values. REST connectors support SHA-512 , SHA-256 and MD5 hashing algorithms. Property realms and LDAP realms Corresponds to the Digest HTTP authentication scheme and is similar to DIGEST-* SASL mechanisms. SPNEGO Uses Kerberos tickets and requires a Kerberos Domain Controller. You must add a corresponding kerberos server identity in the realm configuration. 
In most cases, you also specify an ldap-realm to provide user membership information. Kerberos realms Corresponds to the Negotiate HTTP authentication scheme and is similar to the GSSAPI and GS2-KRB5 SASL mechanisms. BEARER_TOKEN Uses OAuth tokens and requires a token-realm configuration. Token realms Corresponds to the Bearer HTTP authentication scheme and is similar to OAUTHBEARER SASL mechanism. CLIENT_CERT Uses client certificates. Trust store realms Similar to the EXTERNAL SASL mechanism. | [
"<server xmlns=\"urn:infinispan:server:15.0\"> <endpoints> <endpoint socket-binding=\"default\" security-realm=\"my-realm\"> <hotrod-connector> <authentication> <sasl mechanisms=\"SCRAM-SHA-512 SCRAM-SHA-384 SCRAM-SHA-256 SCRAM-SHA-1 DIGEST-SHA-512 DIGEST-SHA-384 DIGEST-SHA-256 DIGEST-SHA DIGEST-MD5 PLAIN\" server-name=\"infinispan\" qop=\"auth\"/> </authentication> </hotrod-connector> <rest-connector> <authentication mechanisms=\"DIGEST BASIC\"/> </rest-connector> </endpoint> </endpoints> </server>",
"{ \"server\": { \"endpoints\": { \"endpoint\": { \"socket-binding\": \"default\", \"security-realm\": \"my-realm\", \"hotrod-connector\": { \"authentication\": { \"security-realm\": \"default\", \"sasl\": { \"server-name\": \"infinispan\", \"mechanisms\": [\"SCRAM-SHA-512\", \"SCRAM-SHA-384\", \"SCRAM-SHA-256\", \"SCRAM-SHA-1\", \"DIGEST-SHA-512\", \"DIGEST-SHA-384\", \"DIGEST-SHA-256\", \"DIGEST-SHA\", \"DIGEST-MD5\", \"PLAIN\"], \"qop\": [\"auth\"] } } }, \"rest-connector\": { \"authentication\": { \"mechanisms\": [\"DIGEST\", \"BASIC\"], \"security-realm\": \"default\" } } } } } }",
"server: endpoints: endpoint: socketBinding: \"default\" securityRealm: \"my-realm\" hotrodConnector: authentication: securityRealm: \"default\" sasl: serverName: \"infinispan\" mechanisms: - \"SCRAM-SHA-512\" - \"SCRAM-SHA-384\" - \"SCRAM-SHA-256\" - \"SCRAM-SHA-1\" - \"DIGEST-SHA-512\" - \"DIGEST-SHA-384\" - \"DIGEST-SHA-256\" - \"DIGEST-SHA\" - \"DIGEST-MD5\" - \"PLAIN\" qop: - \"auth\" restConnector: authentication: mechanisms: - \"DIGEST\" - \"BASIC\" securityRealm: \"default\"",
"<server xmlns=\"urn:infinispan:server:15.0\"> <endpoints socket-binding=\"default\"/> </server>",
"{ \"server\": { \"endpoints\": { \"endpoint\": { \"socket-binding\": \"default\" } } } }",
"server: endpoints: endpoint: socketBinding: \"default\"",
"<server xmlns=\"urn:infinispan:server:15.0\"> <endpoints> <endpoint socket-binding=\"default\" security-realm=\"default\"> <hotrod-connector> <authentication> <sasl mechanisms=\"PLAIN DIGEST-MD5 GSSAPI EXTERNAL\" server-name=\"infinispan\" qop=\"auth\" policy=\"no-active no-plain-text\"/> </authentication> </hotrod-connector> <rest-connector/> </endpoint> </endpoints> </server>",
"{ \"server\": { \"endpoints\" : { \"endpoint\" : { \"socket-binding\" : \"default\", \"security-realm\" : \"default\", \"hotrod-connector\" : { \"authentication\" : { \"sasl\" : { \"server-name\" : \"infinispan\", \"mechanisms\" : [ \"PLAIN\",\"DIGEST-MD5\",\"GSSAPI\",\"EXTERNAL\" ], \"qop\" : [ \"auth\" ], \"policy\" : [ \"no-active\",\"no-plain-text\" ] } } }, \"rest-connector\" : \"\" } } } }",
"server: endpoints: endpoint: socketBinding: \"default\" securityRealm: \"default\" hotrodConnector: authentication: sasl: serverName: \"infinispan\" mechanisms: - \"PLAIN\" - \"DIGEST-MD5\" - \"GSSAPI\" - \"EXTERNAL\" qop: - \"auth\" policy: - \"no-active\" - \"no-plain-text\" restConnector: ~"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_security_guide/authentication-mechanisms |
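Section 3.1 notes that the default property realm keeps credentials in server/conf/users.properties and permissions in server/conf/groups.properties, and the tip points to the CLI user create command. A hedged sketch of creating a user for that realm follows; the user name, password, and group are placeholders, and the flag spellings are assumptions to confirm with user create --help as the tip recommends.

```bash
# Create a user and place it in the admin group of the default property realm
# (flag names are assumptions; verify with: bin/cli.sh user create --help)
bin/cli.sh user create myuser --password "changeme" --groups admin

# The new entries land in the property files mentioned in section 3.1
cat server/conf/users.properties
cat server/conf/groups.properties
```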
Chapter 49. Ensuring the presence of host-based access control rules in IdM using Ansible playbooks | Chapter 49. Ensuring the presence of host-based access control rules in IdM using Ansible playbooks Ansible is an automation tool used to configure systems, deploy software, and perform rolling updates. It includes support for Identity Management (IdM). Learn more about Identity Management (IdM) host-based access policies and how to define them using Ansible . 49.1. Host-based access control rules in IdM Host-based access control (HBAC) rules define which users or user groups can access which hosts or host groups by using which services or services in a service group. As a system administrator, you can use HBAC rules to achieve the following goals: Limit access to a specified system in your domain to members of a specific user group. Allow only a specific service to be used to access systems in your domain. By default, IdM is configured with a default HBAC rule named allow_all , which means universal access to every host for every user via every relevant service in the entire IdM domain. You can fine-tune access to different hosts by replacing the default allow_all rule with your own set of HBAC rules. For centralized and simplified access control management, you can apply HBAC rules to user groups, host groups, or service groups instead of individual users, hosts, or services. 49.2. Ensuring the presence of an HBAC rule in IdM using an Ansible playbook Follow this procedure to ensure the presence of a host-based access control (HBAC) rule in Identity Management (IdM) using an Ansible playbook. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The users and user groups you want to use for your HBAC rule exist in IdM. See Managing user accounts using Ansible playbooks and Ensuring the presence of IdM groups and group members using Ansible playbooks for details. The hosts and host groups to which you want to apply your HBAC rule exist in IdM. See Managing hosts using Ansible playbooks and Managing host groups using Ansible playbooks for details. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create your Ansible playbook file that defines the HBAC policy whose presence you want to ensure. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/hbacrule/ensure-hbacrule-allhosts-present.yml file: Run the playbook: Verification Log in to the IdM Web UI as administrator. Navigate to Policy Host-Based-Access-Control HBAC Test . In the Who tab, select idm_user. In the Accessing tab, select client.idm.example.com . In the Via service tab, select sshd . In the Rules tab, select login . In the Run test tab, click the Run test button. If you see ACCESS GRANTED, the HBAC rule is implemented successfully. Additional resources See the README-hbacsvc.md , README-hbacsvcgroup.md , and README-hbacrule.md files in the /usr/share/doc/ansible-freeipa directory. 
See the playbooks in the subdirectories of the /usr/share/doc/ansible-freeipa/playbooks directory. | [
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hbacrules hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure idm_user can access client.idm.example.com via the sshd service - ipahbacrule: ipaadmin_password: \"{{ ipaadmin_password }}\" name: login user: idm_user host: client.idm.example.com hbacsvc: - sshd state: present",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-new-hbacrule-present.yml"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/ensuring-the-presence-of-host-based-access-control-rules-in-idm-using-ansible-playbooks_managing-users-groups-hosts |
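The same host-based access check can also be run from the command line on an IdM-enrolled system instead of the Web UI HBAC test. The following is a minimal sketch, not part of the original procedure, and it assumes the example names used above (user idm_user, host client.idm.example.com, service sshd, rule login):
# Obtain a Kerberos ticket as an administrative or test user
kinit admin
# Simulate whether idm_user may access client.idm.example.com through sshd; --rules limits the test to the login rule
ipa hbactest --user=idm_user --host=client.idm.example.com --service=sshd --rules=login
# The output reports whether access is granted and which rules matched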
Chapter 2. Installing Red Hat Enterprise Linux Virtual Machines | Chapter 2. Installing Red Hat Enterprise Linux Virtual Machines Installing a Red Hat Enterprise Linux virtual machine involves the following key steps: Create a virtual machine. You must add a virtual disk for storage, and a network interface to connect the virtual machine to the network. Start the virtual machine and install an operating system. See your operating system's documentation for instructions. Red Hat Enterprise Linux 6: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/Installation_Guide/index.html Red Hat Enterprise Linux 7: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/Installation_Guide/index.html Red Hat Enterprise Linux Atomic Host 7: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/installation_and_configuration_guide Red Hat Enterprise Linux 8: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/performing_a_standard_rhel_installation/index Enable the required repositories for your operating system. Install guest agents and drivers for additional virtual machine functionality. 2.1. Creating a Virtual Machine Create a new virtual machine and configure the required settings. Procedure Click Compute Virtual Machines . Click New to open the New Virtual Machine window. Select an Operating System from the drop-down list. Enter a Name for the virtual machine. Add storage to the virtual machine. Attach or Create a virtual disk under Instance Images . Click Attach and select an existing virtual disk. Click Create and enter a Size(GB) and Alias for a new virtual disk. You can accept the default settings for all other fields, or change them if required. See Section A.4, "Explanation of Settings in the New Virtual Disk and Edit Virtual Disk Windows" for more details on the fields for all disk types. Connect the virtual machine to the network. Add a network interface by selecting a vNIC profile from the nic1 drop-down list at the bottom of the General tab. Specify the virtual machine's Memory Size on the System tab. Choose the First Device that the virtual machine will boot from on the Boot Options tab. You can accept the default settings for all other fields, or change them if required. For more details on all fields in the New Virtual Machine window, see Section A.1, "Explanation of Settings in the New Virtual Machine and Edit Virtual Machine Windows" . Click OK . The new virtual machine is created and displays in the list of virtual machines with a status of Down . Before you can use this virtual machine, you must install an operating system and register with the Content Delivery Network. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/Installing_Red_Hat_Enterprise_Linux_Virtual_Machines |
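The repository and guest-agent steps above are run inside the guest after the operating system is installed. A minimal sketch for a Red Hat Enterprise Linux 7 guest follows; the repository IDs and the agent package name are assumptions that depend on the guest's RHEL version and subscription, so check your operating system's documentation for the exact values:
# Register the guest and attach a subscription
subscription-manager register --auto-attach
# Enable the base and common repositories (IDs assumed for a RHEL 7 guest)
subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-rh-common-rpms
# Install and start the guest agent
yum install -y ovirt-guest-agent-common
systemctl enable --now ovirt-guest-agent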
Deploying OpenShift Data Foundation using Amazon Web Services | Deploying OpenShift Data Foundation using Amazon Web Services Red Hat OpenShift Data Foundation 4.17 Instructions for deploying OpenShift Data Foundation using Amazon Web Services for cloud storage Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on Amazon Web Services. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) AWS clusters in connected or disconnected environments along with out-of-the-box support for proxy environments. Note Only internal OpenShift Data Foundation clusters are supported on AWS. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, start with the requirements in Preparing to deploy OpenShift Data Foundation chapter and then follow the deployment process for your environment based on your requirement: Deploy using dynamic storage devices Deploy standalone Multicloud Object Gateway component Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources. Before you begin the deployment of Red Hat OpenShift Data Foundation, follow these steps: Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Token authentication using KMS . When the Kubernetes authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Kubernetes authentication using KMS . Ensure that you are using signed certificates on your Vault servers. Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) Thales CipherTrust Manager, you must first enable the Key Management Interoperability Protocol (KMIP) and use signed certificates on your server. Create a KMIP client if one does not exist. From the user interface, select KMIP -> Client Profile -> Add Profile . Add the CipherTrust username to the Common Name field during profile creation. 
Create a token by navigating to KMIP -> Registration Token -> New Registration Token . Copy the token for the next step. To register the client, navigate to KMIP -> Registered Clients -> Add Client . Specify the Name . Paste the Registration Token from the previous step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings -> Interfaces -> Add Interface . Select KMIP Key Management Interoperability Protocol and click Next . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both metadata and material when the key is deleted. It is disabled by default. Select the certificate authority (CA) to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Optional: If StorageClass encryption is to be enabled during deployment, create a key to act as the Key Encryption Key (KEK): Navigate to Keys -> Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with minimum configuration when the standard deployment resource requirement is not met. See Resource requirements section in the Planning guide . Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. Chapter 2. Deploy OpenShift Data Foundation using dynamic storage devices You can deploy OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by Amazon Web Services (AWS) EBS (type, gp2-csi or gp3-csi ) that provides you with the option to create internal cluster resources. This results in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway . Note Only internal OpenShift Data Foundation clusters are supported on AWS. See Planning your deployment for more information about deployment requirements. Also, ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator .
Create the OpenShift Data Foundation Cluster . 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.3. 
Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the previous step to set up the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.3.1. Enabling key rotation when using KMS Security common practices require periodic encryption key rotation. You can enable key rotation when using KMS using this procedure. To enable key rotation, add the annotation keyrotation.csiaddons.openshift.io/schedule: <value> to either Namespace , StorageClass , or PersistentVolumeClaims (in order of precedence). <value> can be either @hourly , @daily , @weekly , @monthly , or @yearly . If <value> is empty, the default is @weekly . The below examples use @weekly . Important Key rotation is only supported for RBD backed volumes. Annotating Namespace Annotating StorageClass Annotating PersistentVolumeClaims 2.4. Creating an OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator . Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the Storage Class . As of OpenShift Data Foundation version 4.12, you can choose gp2-csi or gp3-csi as the storage class. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides a high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure.
Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click Next . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times of raw storage). In the Select Nodes section, select at least three available nodes. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones. If the nodes selected do not match the OpenShift Data Foundation cluster requirements of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Click Next . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save .
Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . To enable in-transit encryption, select In-transit encryption . Select a Network . Click Next . In the Data Protection page, if you are configuring Regional-DR solution for OpenShift Data Foundation then select the Prepare cluster for disaster recovery (Regional-DR only) checkbox, else click Next . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources . Verify that Status of StorageCluster is Ready and has a green tick mark next to it. To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying OpenShift Data Foundation deployment . Additional resources To enable Overprovision Control alerts, refer to Alerts in Monitoring guide. 2.5. Verifying OpenShift Data Foundation deployment To verify that OpenShift Data Foundation is deployed correctly: Verify the state of the pods . Verify that the OpenShift Data Foundation cluster is healthy . Verify that the Multicloud Object Gateway is healthy . Verify that the OpenShift Data Foundation specific storage classes exist . 2.5.1. Verifying the state of the pods Procedure Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects.
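The same pod check can also be done from a terminal with oc. This is a minimal sketch, not part of the original procedure, and it assumes the default openshift-storage namespace:
# List all pods in the openshift-storage namespace together with their status
oc get pods -n openshift-storage
# Show only pods that are not yet Running or Completed (Succeeded)
oc get pods -n openshift-storage --field-selector=status.phase!=Running,status.phase!=Succeeded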
For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) ux-backend-server-* (1 pod on any storage node) ocs-client-operator-* (1 pod on any storage node) ocs-client-operator-console-* (1 pod on any storage node) ocs-provider-server-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 2.5.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.5.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and you are unable to recover it, this can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 2.5.4.
Verifying that the specific storage classes exist Procedure Click Storage -> Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io Chapter 3. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and you are unable to recover it, this can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 3.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install .
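If you also want to confirm the installation from a terminal, a minimal sketch follows; these commands are assumptions and not part of the original steps:
# The ClusterServiceVersion for OpenShift Data Foundation should report the phase Succeeded
oc get csv -n openshift-storage
# The operator pods should be Running
oc get pods -n openshift-storage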
Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.2. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option. Click Next . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the next step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click Next .
In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) Chapter 4. Creating an AWS-STS-backed backingstore Amazon Web Services Security Token Service (AWS STS) is an AWS feature and it is a way to authenticate using short-lived credentials. Creating an AWS-STS-backed backingstore involves the following: Creating an AWS role using a script, which helps to get the temporary security credentials for the role session Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster Creating backingstore in AWS STS OpenShift cluster 4.1. Creating an AWS role using a script You need to create a role and pass the role Amazon resource name (ARN) while installing the OpenShift Data Foundation operator. Prerequisites Configure Red Hat OpenShift Container Platform cluster with AWS STS. For more information, see Configuring an AWS cluster to use short-term credentials . Procedure Create an AWS role using a script that matches OpenID Connect (OIDC) configuration for Multicloud Object Gateway (MCG) on OpenShift Data Foundation. The following example shows the details that are required to create the role: where 123456789123 Is the AWS account ID mybucket Is the bucket name (using public bucket configuration) us-east-2 Is the AWS region openshift-storage Is the namespace name Sample script 4.1.1. Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster Prerequisites Configure Red Hat OpenShift Container Platform cluster with AWS STS. For more information, see Configuring an AWS cluster to use short-term credentials . Create an AWS role using a script that matches OpenID Connect (OIDC) configuration. For more information, see Creating an AWS role using a script . Procedure Install OpenShift Data Foundation Operator from the Operator Hub. During the installation add the role ARN in the ARN Details field. Make sure that the Update approval field is set to Manual . 4.1.2. Creating a new AWS STS backingstore Prerequisites Configure Red Hat OpenShift Container Platform cluster with AWS STS. 
For more information, see Configuring an AWS cluster to use short-term credentials . Create an AWS role using a script that matches OpenID Connect (OIDC) configuration. For more information, see Creating an AWS role using a script . Install OpenShift Data Foundation Operator. For more information, see Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster . Procedure Install Multicloud Object Gateway (MCG). It is installed with the default backingstore by using the short-lived credentials. After the MCG system is ready, you can create more backingstores of the type aws-sts-s3 using the following MCG command line interface command: where backingstore-name Name of the backingstore aws-sts-role-arn The AWS STS role ARN which will assume role region The AWS bucket region target-bucket The target bucket name on the cloud Chapter 5. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage -> Data Foundation -> Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close and return to the previous view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. Chapter 6. Uninstalling OpenShift Data Foundation 6.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation . | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json",
"oc -n openshift-storage create serviceaccount <serviceaccount_name>",
"oc -n openshift-storage create serviceaccount odf-vault-auth",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF",
"SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)",
"OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")",
"oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid",
"vault auth enable kubernetes",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"oc get namespace default NAME STATUS AGE default Active 5d2h",
"oc annotate namespace default \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" namespace/default annotated",
"oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h",
"oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" storageclass.storage.k8s.io/rbd-sc annotated",
"oc get pvc data-pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO default 20h",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @weekly 3s",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=*/1 * * * *\" --overwrite=true persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 */1 * * * * 3s",
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::123456789123:oidc-provider/mybucket-oidc.s3.us-east-2.amazonaws.com\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"mybucket-oidc.s3.us-east-2.amazonaws.com:sub\": [ \"system:serviceaccount:openshift-storage:noobaa\", \"system:serviceaccount:openshift-storage:noobaa-core\", \"system:serviceaccount:openshift-storage:noobaa-endpoint\" ] } } } ] }",
"#!/bin/bash set -x This is a sample script to help you deploy MCG on AWS STS cluster. This script shows how to create role-policy and then create the role in AWS. For more information see: https://docs.openshift.com/rosa/authentication/assuming-an-aws-iam-role-for-a-service-account.html WARNING: This is a sample script. You need to adjust the variables based on your requirement. Variables : user variables - REPLACE these variables with your values: ROLE_NAME=\"<role-name>\" # role name that you pick in your AWS account NAMESPACE=\"<namespace>\" # namespace name where MCG is running. For OpenShift Data Foundation, it is openshift-storage. MCG variables SERVICE_ACCOUNT_NAME_1=\"noobaa\" # The service account name of deployment operator SERVICE_ACCOUNT_NAME_2=\"noobaa-endpoint\" # The service account name of deployment endpoint SERVICE_ACCOUNT_NAME_3=\"noobaa-core\" # The service account name of statefulset core AWS variables Make sure these values are not empty (AWS_ACCOUNT_ID, OIDC_PROVIDER) AWS_ACCOUNT_ID is your AWS account number AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query \"Account\" --output text) If you want to create the role before using the cluster, replace this field too. The OIDC provider is in the structure: 1) <OIDC-bucket>.s3.<aws-region>.amazonaws.com. for OIDC bucket configurations are in an S3 public bucket 2) `<characters>.cloudfront.net` for OIDC bucket configurations in an S3 private bucket with a public CloudFront distribution URL OIDC_PROVIDER=USD(oc get authentication cluster -ojson | jq -r .spec.serviceAccountIssuer | sed -e \"s/^https:\\/\\///\") the permission (S3 full access) POLICY_ARN_STRINGS=\"arn:aws:iam::aws:policy/AmazonS3FullAccess\" Creating the role (with AWS command line interface) read -r -d '' TRUST_RELATIONSHIP <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_PROVIDER}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": [ \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME_1}\", \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME_2}\", \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME_3}\" ] } } } ] } EOF echo \"USD{TRUST_RELATIONSHIP}\" > trust.json aws iam create-role --role-name \"USDROLE_NAME\" --assume-role-policy-document file://trust.json --description \"role for demo\" while IFS= read -r POLICY_ARN; do echo -n \"Attaching USDPOLICY_ARN ... \" aws iam attach-role-policy --role-name \"USDROLE_NAME\" --policy-arn \"USD{POLICY_ARN}\" echo \"ok.\" done <<< \"USDPOLICY_ARN_STRINGS\"",
"noobaa backingstore create aws-sts-s3 <backingstore-name> --aws-sts-arn=<aws-sts-role-arn> --region=<region> --target-bucket=<target-bucket>"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html-single/deploying_openshift_data_foundation_using_amazon_web_services/index |
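After creating an aws-sts-s3 backingstore with the noobaa backingstore create command shown above, you can check that it becomes Ready. This is a minimal sketch, not part of the original procedure, and it assumes the backingstore lives in the openshift-storage namespace:
# Check the backingstore with the MCG command line interface
noobaa backingstore status <backingstore-name> -n openshift-storage
# Or inspect the BackingStore custom resource directly
oc get backingstore -n openshift-storage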
8.17. avahi | 8.17. avahi 8.17.1. RHBA-2014:1535 - avahi bug fix update Updated avahi packages that fix several bugs are now available for Red Hat Enterprise Linux 6. Avahi is an implementation of the DNS Service Discovery and Multicast DNS specifications for Zero Configuration Networking. It facilitates service discovery on a local network. Avahi and Avahi-aware applications allow you to plug your computer into a network and, with no configuration, view other people to chat with, view printers to print to, and find shared files on other computers. Bug Fixes BZ# 768708 , BZ# 885849 Previously, when the ARCOUNT field in the DNS Response Header contained a non-zero value, avahi-daemon performed a check and logged errors about invalid DNS packets being received. Note that a non-zero value of ARCOUNT is an indication of additional data sections in the DNS packet, but avahi-daemon does not interpret them. As a consequence, avahi-daemon was not sufficiently interoperable with other mDNS/DNS-SD implementations, and the automatic service discovery thus did not provide the user with the expected results on some platforms. Additionally, avahi-daemon logged inaccurate information which cluttered log files. The redundant check has been removed, and the described situation no longer occurs. BZ# 1074028 Previously, various options such as maximum count of cached resource records or numerous options related to handling of connected clients to the avahi-daemon could not be configured. As a consequence, in large networks, the avahi-daemon could reach the upper bound of some internal limits. In addition, the avahi-daemon was exhibiting erroneous behavior, such as logging error messages and failing to discover some services in large networks. To fix this bug, support for configuring various internal limits has been introduced with the following newly added options: cache-entries-max, clients-max, objects-per-client-max, entries-per-entry-group-max. For details about these options, see the avahi-daemon.conf(5) manual page. Users of avahi are advised to upgrade to these updated packages, which fix these bugs. After installing the update, avahi-daemon will be restarted automatically. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/avahi |
Chapter 10. Renaming an IdM server | Chapter 10. Renaming an IdM server You cannot change the host name of an existing Identity Management (IdM) server. However, you can replace the server with a replica of a different name. Procedure Install a new replica that will replace the existing server, ensuring the replica has the required host name and IP address. For details, see Installing an IdM replica . Important If the server you are uninstalling is the certificate revocation list (CRL) publisher server, make another server the CRL publisher server before proceeding. For details on how this is done in the context of a migration procedure, see the following sections: Stopping CRL generation on a RHEL 8 IdM CA server Starting CRL generation on the new RHEL 9 IdM CA server Stop the existing IdM server instance. Uninstall the existing server as described in Uninstalling an IdM server . | [
"ipactl stop"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_identity_management/renaming-an-idm-server_installing-identity-management |
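On servers that run the CA role, you can check and move CRL generation from the command line before removing the old server. This is a minimal sketch with assumed host roles; follow the linked procedures for the complete, supported steps:
# On the server being replaced: confirm whether it is the current CRL publisher
ipa-crlgen-manage status
# On the server being replaced: stop CRL generation
ipa-crlgen-manage disable
# On the replica that takes over (it must have the CA role): start CRL generation
ipa-crlgen-manage enable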
Chapter 1. Red Hat Quay features | Chapter 1. Red Hat Quay features Red Hat Quay is regularly released with new features and software updates. The following features are available for Red Hat Quay deployments, however the list is not exhaustive: High availability Geo-replication Repository mirroring Docker v2, schema 2 (multi-arch) support Continuous integration Security scanning with Clair Custom log rotation Zero downtime garbage collection 24/7 support Users should check the Red Hat Quay Release Notes for the latest feature information. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/deploy_red_hat_quay_-_high_availability/poc-overview |
Chapter 14. Provisioning cloud instances in Amazon EC2 | Chapter 14. Provisioning cloud instances in Amazon EC2 Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides public cloud compute resources. Using Satellite, you can interact with Amazon EC2's public API to create cloud instances and control their power management states. Use the procedures in this chapter to add a connection to an Amazon EC2 account and provision a cloud instance. 14.1. Prerequisites for Amazon EC2 provisioning The requirements for Amazon EC2 provisioning include: A Capsule Server managing a network in your EC2 environment. Use a Virtual Private Cloud (VPC) to ensure a secure network between the hosts and Capsule Server. An Amazon Machine Image (AMI) for image-based provisioning. You can use synchronized content repositories for Red Hat Enterprise Linux. For more information, see Syncing Repositories in Managing content . Provide an activation key for host registration. For more information, see Creating An Activation Key in Managing content . 14.2. Installing Amazon EC2 plugin Install the Amazon EC2 plugin to attach an EC2 compute resource provider to Satellite. This allows you to manage and deploy hosts to EC2. Procedure Install the EC2 compute resource provider on your Satellite Server: Optional: In the Satellite web UI, navigate to Administer > About and select the compute resources tab to verify the installation of the Amazon EC2 plugin. 14.3. Adding an Amazon EC2 connection to the Satellite Server Use this procedure to add the Amazon EC2 connection in Satellite Server's compute resources. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites An AWS EC2 user performing this procedure needs the AmazonEC2FullAccess permissions. You can attach these permissions from AWS. Time settings and Amazon Web Services Amazon Web Services uses time settings as part of the authentication process. Ensure that Satellite Server's time is correctly synchronized. Ensure that an NTP service, such as ntpd or chronyd , is running properly on Satellite Server. Failure to provide the correct time to Amazon Web Services can lead to authentication failures. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources and in the Compute Resources window, click Create Compute Resource . In the Name field, enter a name to identify the Amazon EC2 compute resource. From the Provider list, select EC2 . In the Description field, enter information that helps distinguish the resource for future use. Optional: From the HTTP proxy list, select an HTTP proxy to connect to external API services. You must add HTTP proxies to Satellite before you can select a proxy from this list. For more information, see Section 14.4, "Using an HTTP proxy with compute resources" . In the Access Key and Secret Key fields, enter the access keys for your Amazon EC2 account. For more information, see Managing Access Keys for your AWS Account on the Amazon documentation website. Optional: Click Load Regions to populate the Regions list. From the Region list, select the Amazon EC2 region or data center to use. Click the Locations tab and ensure that the location you want to use is selected, or add a different location. Click the Organizations tab and ensure that the organization you want to use is selected, or add a different organization. Click Submit to save the Amazon EC2 connection. 
Select the new compute resource and then click the SSH keys tab, and click Download to save a copy of the SSH keys to use for SSH authentication. Until BZ1793138 is resolved, you can download a copy of the SSH keys only immediately after creating the Amazon EC2 compute resource. If you require SSH keys at a later stage, follow the procedure in Section 14.9, "Connecting to an Amazon EC2 instance using SSH" . CLI procedure Create the connection with the hammer compute-resource create command. Use --user and --password options to add the access key and secret key respectively. 14.4. Using an HTTP proxy with compute resources In some cases, the EC2 compute resource that you use might require a specific HTTP proxy to communicate with Satellite. In Satellite, you can create an HTTP proxy and then assign the HTTP proxy to your EC2 compute resource. However, if you configure an HTTP proxy for Satellite in Administer > Settings , and then add another HTTP proxy for your compute resource, the HTTP proxy that you define in Administer > Settings takes precedence. Procedure In the Satellite web UI, navigate to Infrastructure > HTTP Proxies , and select New HTTP Proxy . In the Name field, enter a name for the HTTP proxy. In the URL field, enter the URL for the HTTP proxy, including the port number. Optional: Enter a username and password to authenticate to the HTTP proxy, if your HTTP proxy requires authentication. Click Test Connection to ensure that you can connect to the HTTP proxy from Satellite. Click the Locations tab and add a location. Click the Organization tab and add an organization. Click Submit . 14.5. Creating an image for Amazon EC2 You can create images for Amazon EC2 from within Satellite. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Select your Amazon EC2 provider. Click Create Image . In the Name field, enter a meaningful and unique name for your EC2 image. From the Operating System list, select an operating system to associate with the image. From the Architecture list, select an architecture to associate with the image. In the Username field, enter the username needed to SSH into the machine. In the Image ID field, enter the image ID provided by Amazon or an operating system vendor. Optional: Select the User Data check box to enable support for user data input. Optional: Set an Iam Role for Fog to use when creating this image. Click Submit to save your changes to Satellite. 14.6. Adding Amazon EC2 images to Satellite Server Amazon EC2 uses image-based provisioning to create hosts. You must add image details to your Satellite Server. This includes access details and image location. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources and select an Amazon EC2 connection. Click the Images tab, and then click Create Image . In the Name field, enter a name to identify the image for future use. From the Operating System list, select the operating system that corresponds with the image you want to add. From the Architecture list, select the operating system's architecture. In the Username field, enter the SSH user name for image access. This is normally the root user. In the Password field, enter the SSH password for image access. In the Image ID field, enter the Amazon Machine Image (AMI) ID for the image. This is usually in the following format: ami-xxxxxxxx . 
Optional: Select the User Data checkbox if the images support user data input, such as cloud-init data. If you enable user data, the Finish scripts are automatically disabled. This also applies in reverse: if you enable the Finish scripts, this disables user data. Optional: In the IAM role field, enter the Amazon security role used for creating the image. Click Submit to save the image details. CLI procedure Create the image with the hammer compute-resource image create command. Use the --uuid field to store the full path of the image location on the Amazon EC2 server. 14.7. Adding Amazon EC2 details to a compute profile You can add hardware settings for instances on Amazon EC2 to a compute profile. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Profiles and click the name of your profile, then click an EC2 connection. From the Flavor list, select the hardware profile on EC2 to use for the host. From the Image list, select the image to use for image-based provisioning. From the Availability zone list, select the target cluster to use within the chosen EC2 region. From the Subnet list, add the subnet for the EC2 instance. If you have a VPC for provisioning new hosts, use its subnet. From the Security Groups list, select the cloud-based access rules for ports and IP addresses to apply to the host. From the Managed IP list, select either a Public IP or a Private IP. Click Submit to save the compute profile. CLI procedure Set Amazon EC2 details to a compute profile: 14.8. Creating image-based hosts on Amazon EC2 The Amazon EC2 provisioning process creates hosts from existing images on the Amazon EC2 server. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Create Host . In the Name field, enter a name for the host. Optional: Click the Organization tab and change the organization context to match your requirement. Optional: Click the Location tab and change the location context to match your requirement. From the Host Group list, select a host group that you want to assign your host to. That host group will populate the form. From the Deploy on list, select the EC2 connection. From the Compute Profile list, select a profile to use to automatically populate virtual machine-based settings. Click the Interfaces tab, and on the interface of the host, click Edit . Verify that the fields are populated with values. Note in particular: Satellite automatically assigns an IP address for the new host. Ensure that the MAC address field is blank. EC2 assigns a MAC address to the host during provisioning. The Name from the Host tab becomes the DNS name . Ensure that Satellite automatically selects the Managed , Primary , and Provision options for the first interface on the host. If not, select them. Click OK to save. To add another interface, click Add Interface . You can select only one interface for Provision and Primary . Click the Operating System tab and confirm that all fields are populated with values. Click the Virtual Machine tab and confirm that all fields are populated with values. Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key. Click Submit to save your changes. This new host entry triggers the Amazon EC2 server to create the instance, using the pre-existing image as a basis for the new volume. 
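While the Amazon EC2 server builds the instance, you can optionally watch its state from the AWS side with the AWS CLI mentioned in Section 14.12. The following is only a sketch: it assumes the AWS CLI is already configured with credentials for the same account and that Satellite applies the host name you entered in the Name field as the instance's Name tag, so adjust the tag value for your environment.
aws ec2 describe-instances --filters "Name=tag:Name,Values=My_Host_Name" --query "Reservations[].Instances[].State.Name"
Repeat the query until the reported state changes from pending to running.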
CLI procedure Create the host with the hammer host create command and include --provision-method image to use image-based provisioning. For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command. 14.9. Connecting to an Amazon EC2 instance using SSH You can connect remotely to an Amazon EC2 instance from Satellite Server using SSH. However, to connect to any Amazon Web Services EC2 instance that you provision through Red Hat Satellite, you must first access the private key that is associated with the compute resource in the Foreman database, and use this key for authentication. Procedure To locate the compute resource list, on your Satellite Server base system, enter the following command, and note the ID of the compute resource that you want to use: Switch user to the postgres user: Initiate the postgres shell: Connect to the Foreman database as the user postgres : Select the secret from key_pairs where compute_resource_id = 3 : Copy the key from after -----BEGIN RSA PRIVATE KEY----- until -----END RSA PRIVATE KEY----- . Create a .pem file and paste your key into the file: Ensure that you restrict access to the .pem file: To connect to the Amazon EC2 instance, enter the following command: 14.10. Configuring a finish template for an Amazon Web Service EC2 environment You can use Red Hat Satellite finish templates during the provisioning of Red Hat Enterprise Linux instances in an Amazon EC2 environment. If you want to use a Finish template with SSH, Satellite must reside within the EC2 environment and in the correct security group. Satellite currently performs SSH finish provisioning directly, not using Capsule Server. If Satellite Server does not reside within EC2, the EC2 virtual machine reports an internal IP rather than the necessary external IP with which it can be reached. Procedure In the Satellite web UI, navigate to Hosts > Templates > Provisioning Templates . In the Provisioning Templates page, enter Kickstart default finish into the search field and click Search . On the Kickstart default finish template, select Clone . In the Name field, enter a unique name for the template. In the template, prefix each command that requires root privileges with sudo , except for subscription-manager register and yum commands, or add the following line to run the entire template as the sudo user: Click the Association tab, and associate the template with a Red Hat Enterprise Linux operating system that you want to use. Click the Locations tab, and add the location where the host resides. Click the Organizations tab, and add the organization that the host belongs to. Make any additional customizations or changes that you require, then click Submit to save your template. In the Satellite web UI, navigate to Hosts > Operating systems and select the operating system that you want for your host. Click the Templates tab, and from the Finish Template list, select your finish template. In the Satellite web UI, navigate to Hosts > Create Host . In the Name field, enter a name for the host. Optional: Click the Organization tab and change the organization context to match your requirement. Optional: Click the Location tab and change the location context to match your requirement. From the Host Group list, select a host group that you want to assign your host to. That host group will populate the form. Click the Parameters tab and navigate to Host parameters .
In Host parameters , click Add Parameter two times to add two new parameter fields. Add the following parameters: In the Name field, enter activation_keys . In the corresponding Value field, enter your activation key. In the Name field, enter remote_execution_ssh_user . In the corresponding Value field, enter ec2-user . Click Submit to save the changes. 14.11. Deleting a virtual machine on Amazon EC2 You can delete virtual machines running on Amazon EC2 from within Satellite. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Select your Amazon EC2 provider. On the Virtual Machines tab, click Delete from the Actions menu. This deletes the virtual machine from the Amazon EC2 compute resource while retaining any associated hosts within Satellite. If you want to delete an orphaned host, navigate to Hosts > All Hosts and delete the host manually. 14.12. More information about Amazon Web Services and Satellite For information about how to locate Red Hat Gold Images on Amazon Web Services EC2, see How to Locate Red Hat Cloud Access Gold Images on AWS EC2 . For information about how to install and use the Amazon Web Service Client on Linux, see Install the AWS Command Line Interface on Linux in the Amazon Web Services documentation. For information about importing and exporting virtual machines in Amazon Web Services, see VM Import/Export in the Amazon Web Services documentation. | [
"satellite-installer --enable-foreman-compute-ec2",
"hammer compute-resource create --description \"Amazon EC2 Public Cloud` --locations \" My_Location \" --name \" My_EC2_Compute_Resource \" --organizations \" My_Organization \" --password \" My_Secret_Key \" --provider \"EC2\" --region \" My_Region \" --user \" My_User_Name \"",
"hammer compute-resource image create --architecture \" My_Architecture \" --compute-resource \" My_EC2_Compute_Resource \" --name \" My_Amazon_EC2_Image \" --operatingsystem \" My_Operating_System \" --user-data true --username root --uuid \"ami- My_AMI_ID \"",
"hammer compute-profile values create --compute-resource \" My_Laptop \" --compute-profile \" My_Compute_Profile \" --compute-attributes \"flavor_id=1,availability_zone= My_Zone ,subnet_id=1,security_group_ids=1,managed_ip=public_ip\"",
"hammer host create --compute-attributes=\"flavor_id=m1.small,image_id=TestImage,availability_zones=us-east-1a,security_group_ids=Default,managed_ip=Public\" --compute-resource \" My_EC2_Compute_Resource \" --enabled true --hostgroup \" My_Host_Group \" --image \" My_Amazon_EC2_Image \" --interface \"managed=true,primary=true,provision=true,subnet_id=EC2\" --location \" My_Location \" --managed true --name \"My_Host_Name_\" --organization \" My_Organization \" --provision-method image",
"hammer compute-resource list",
"su - postgres",
"psql",
"postgres=# \\c foreman",
"select secret from key_pairs where compute_resource_id = 3; secret",
"vim Keyname .pem",
"chmod 600 Keyname .pem",
"ssh -i Keyname .pem ec2-user@ example.aws.com",
"sudo -s << EOS _Template_ _Body_ EOS"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/provisioning_hosts/Provisioning_Cloud_Instances_in_Amazon_EC2_ec2-provisioning |
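As a compact alternative to the interactive psql session shown in Section 14.9, the same secret can be pulled in one step and written straight to the key file. This is only a sketch: the compute resource ID ( 3 ) and the Keyname .pem file name are the same example values used above and must be adjusted for your environment, and you may still need to trim anything outside the BEGIN/END lines from the resulting file.
su - postgres -c "psql -d foreman -t -A -c \"select secret from key_pairs where compute_resource_id = 3;\"" > Keyname.pem
chmod 600 Keyname.pem
ssh -i Keyname.pem ec2-user@example.aws.com
The -t and -A psql options suppress the column header and row-count footer so that only the key material lands in the file.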
4.2. Configuration File Blacklist | 4.2. Configuration File Blacklist The blacklist section of the multipath configuration file specifies the devices that will not be used when the system configures multipath devices. Devices that are blacklisted will not be grouped into a multipath device. In older releases of Red Hat Enterprise Linux, multipath always tried to create a multipath device for every path that was not explicitly blacklisted. As of Red Hat Enterprise Linux 6, however, if the find_multipaths configuration parameter is set to yes , then multipath will create a device only if one of three conditions is met: There are at least two paths that are not blacklisted with the same WWID. The user manually forces the creation of the device by specifying a device with the multipath command. A path has the same WWID as a multipath device that was previously created (even if that multipath device does not currently exist). Whenever a multipath device is created, multipath remembers the WWID of the device so that it will automatically create the device again as soon as it sees a path with that WWID. This allows you to have multipath automatically choose the correct paths to make into multipath devices, without having to edit the multipath blacklist. If you have previously created a multipath device without using the find_multipaths parameter and then you later set the parameter to yes , you may need to remove the WWIDs of any device you do not want created as a multipath device from the /etc/multipath/wwids file. The following shows a sample /etc/multipath/wwids file. The WWIDs are enclosed by slashes (/): With the find_multipaths parameter set to yes , you need to blacklist only the devices with multiple paths that you do not want to be multipathed. Because of this, it will generally not be necessary to blacklist devices. If you do need to blacklist devices, you can do so according to the following criteria: By WWID, as described in Section 4.2.1, "Blacklisting by WWID" By device name, as described in Section 4.2.2, "Blacklisting By Device Name" By device type, as described in Section 4.2.3, "Blacklisting By Device Type" By udev property, as described in Section 4.2.4, "Blacklisting By udev Property (Red Hat Enterprise Linux 7.5 and Later)" By device protocol, as described in Section 4.2.5, "Blacklisting By Device Protocol (Red Hat Enterprise Linux 7.6 and Later)" By default, a variety of device types are blacklisted, even after you comment out the initial blacklist section of the configuration file. For information, see Section 4.2.2, "Blacklisting By Device Name" . 4.2.1. Blacklisting by WWID You can specify individual devices to blacklist by their World-Wide IDentification with a wwid entry in the blacklist section of the configuration file. The following example shows the lines in the configuration file that would blacklist a device with a WWID of 26353900f02796769. 4.2.2. Blacklisting By Device Name You can blacklist device types by device name so that they will not be grouped into a multipath device by specifying a devnode entry in the blacklist section of the configuration file. The following example shows the lines in the configuration file that would blacklist all SCSI devices, since it blacklists all sd* devices. You can use a devnode entry in the blacklist section of the configuration file to specify individual devices to blacklist rather than all devices of a specific type.
This is not recommended, however, since unless it is statically mapped by udev rules, there is no guarantee that a specific device will have the same name on reboot. For example, a device name could change from /dev/sda to /dev/sdb on reboot. By default, the following devnode entries are compiled in the default blacklist; the devices that these entries blacklist do not generally support DM Multipath. To enable multipathing on any of these devices, you would need to specify them in the blacklist_exceptions section of the configuration file, as described in Section 4.2.6, "Blacklist Exceptions" . 4.2.3. Blacklisting By Device Type You can specify specific device types in the blacklist section of the configuration file with a device section. The following example blacklists all IBM DS4200 and HP devices. 4.2.4. Blacklisting By udev Property (Red Hat Enterprise Linux 7.5 and Later) The blacklist and blacklist_exceptions sections of the multipath.conf configuration file support the property parameter. This parameter allows users to blacklist certain types of devices. The property parameter takes a regular expression string that is matched against the udev environment variable name for the device. The following example blacklists all devices with the udev property ID_ATA . 4.2.5. Blacklisting By Device Protocol (Red Hat Enterprise Linux 7.6 and Later) You can specify the protocol for a device to be excluded from multipathing in the blacklist section of the configuration file with a protocol section. The protocol strings that multipath recognizes are scsi:fcp, scsi:spi, scsi:ssa, scsi:sbp, scsi:srp, scsi:iscsi, scsi:sas, scsi:adt, scsi:ata, scsi:unspec, ccw, cciss, nvme, and undef. The protocol that a path is using can be viewed by running the command multipathd show paths format "%d %P" . The following example blacklists all devices with an undefined protocol or an unknown SCSI transport type. 4.2.6. Blacklist Exceptions You can use the blacklist_exceptions section of the configuration file to enable multipathing on devices that have been blacklisted by default. For example, if you have a large number of devices and want to multipath only one of them (with the WWID of 3600d0230000000000e13955cc3757803), instead of individually blacklisting each of the devices except the one you want, you could instead blacklist all of them, and then allow only the one you want by adding the following lines to the /etc/multipath.conf file. When specifying devices in the blacklist_exceptions section of the configuration file, you must specify the exceptions in the same way they were specified in the blacklist. For example, a WWID exception will not apply to devices specified by a devnode blacklist entry, even if the blacklisted device is associated with that WWID. Similarly, devnode exceptions apply only to devnode entries, and device exceptions apply only to device entries. The property parameter works differently than the other blacklist_exception parameters. If the parameter is set, the device must have a udev variable that matches. Otherwise, the device is blacklisted. This parameter allows users to blacklist SCSI devices that multipath should ignore, such as USB sticks and local hard drives. To allow only SCSI devices that could reasonably be multipathed, set this parameter to (SCSI_IDENT_|ID_WWN) as in the following example. | [
"Multipath wwids, Version : 1.0 NOTE: This file is automatically maintained by multipath and multipathd. You should not need to edit this file in normal circumstances. # Valid WWIDs: /3600d0230000000000e13955cc3757802/ /3600d0230000000000e13955cc3757801/ /3600d0230000000000e13955cc3757800/ /3600d02300069c9ce09d41c31f29d4c00/ /SWINSYS SF2372 0E13955CC3757802/ /3600d0230000000000e13955cc3757803/",
"blacklist { wwid 26353900f02796769 }",
"blacklist { devnode \"^sd[a-z]\" }",
"blacklist { devnode \"^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*\" devnode \"^(td|ha)d[a-z]\" }",
"blacklist { device { vendor \"IBM\" product \"3S42\" #DS4200 Product 10 } device { vendor \"HP\" product \"*\" } }",
"blacklist { property \"ID_ATA\" }",
"blacklist { protocol \"scsi:unspec\" protocol \"undef\" }",
"blacklist { wwid \"*\" } blacklist_exceptions { wwid \"3600d0230000000000e13955cc3757803\" }",
"blacklist_exceptions { property \"(SCSI_IDENT_|ID_WWN)\" }"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/dm_multipath/config_file_blacklist |
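If you need to prune the /etc/multipath/wwids file discussed above, the multipath utility can edit it for you rather than requiring manual changes. The commands below are a sketch; the device name /dev/sdb is only an example.
# remove the WWID of one specific device from /etc/multipath/wwids
multipath -w /dev/sdb
# reset /etc/multipath/wwids so that it contains only the WWIDs of the current multipath devices
multipath -W
# show verbose path evaluation, including whether a path is rejected by the blacklist
multipath -v3 /dev/sdb | grep -i blacklist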
16.2. Configuring Clustered Services | 16.2. Configuring Clustered Services The IdM server is not cluster aware . However, it is possible to configure a clustered service to be part of IdM by synchronizing Kerberos keys across all of the participating hosts and configuring services running on the hosts to respond to whatever names the clients use. Enroll all of the hosts in the cluster into the IdM domain. Create any service principals and generate the required keytabs. Collect any keytabs that have been set up for services on the host, including the host keytab at /etc/krb5.keytab . Use the ktutil command to produce a single keytab file that contains the contents of all of the keytab files. For each file, use the rkt command to read the keys from that file. Use the wkt command to write all of the keys which have been read to a new keytab file. Replace the keytab files on each host with the newly-created combined keytab file. At this point, each host in this cluster can now impersonate any other host. Some services require additional configuration to accommodate cluster members which do not reset host names when taking over a failed service. For sshd , set GSSAPIStrictAcceptorCheck no in /etc/ssh/sshd_config . For mod_auth_kerb , set KrbServiceName Any in /etc/httpd/conf.d/auth_kerb.conf . Note For SSL servers, the subject name or a subject alternative name for the server's certificate must appear correct when a client connects to the clustered host. If possible, share the private key among all of the hosts. If each cluster member contains a subject alternative name which includes the names of all the other cluster members, that satisfies any client connection requirements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/ipa-cluster |
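To make the keytab-merging step in the clustered-services procedure above concrete, a ktutil session that combines the host keytab with one service keytab might look like the following. The file names are examples only; read every keytab that exists on the cluster hosts and distribute the resulting file as described above.
ktutil
ktutil:  rkt /etc/krb5.keytab
ktutil:  rkt /etc/httpd/conf/service.keytab
ktutil:  wkt /root/combined.keytab
ktutil:  quit
After copying combined.keytab into place on each host (for example, over /etc/krb5.keytab), you can verify the merged entries with klist -kt /etc/krb5.keytab.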
Chapter 3. Supported Standards and Protocols | Chapter 3. Supported Standards and Protocols Red Hat Certificate System is based on many public and standard protocols and RFCs, to ensure the best possible performance and interoperability. The major standards and protocols used or supported by Certificate System 10 are outlined in this chapter, to help administrators plan their client services effectively. 3.1. TLS, ECC, and RSA The Transport Layer Security (TLS) protocol is a universally accepted standard for authenticated and encrypted communication between clients and servers. Both client and server authentication occur over TLS. TLS uses a combination of public-key and symmetric-key encryption. Symmetric-key encryption is much faster than public-key encryption, but public-key encryption provides better authentication techniques. A TLS session always begins with an exchange of messages called a handshake , the initial communication between the server and client. The handshake allows the server to authenticate itself to the client using public-key techniques, optionally allows the client to authenticate to the server, then allows the client and the server to cooperate in the creation of symmetric keys used for rapid encryption, decryption, and integrity verification during the session that follows. TLS supports a variety of different cryptographic algorithms, or ciphers , for operations such as authenticating the server and client, transmitting certificates, and establishing session keys. Clients and servers may support different cipher suites, or sets of ciphers. Among other functions, the handshake determines how the server and client negotiate which cipher suite is used to authenticate each other, to transmit certificates, and to establish session keys. Key-exchange algorithms like RSA and Elliptic Curve Diffie-Hellman (ECDH) govern the way the server and client determine the symmetric keys to use during a TLS session. TLS supports ECC (Elliptic Curve Cryptography) cipher suites, as well as RSA. The Certificate System supports both RSA and ECC public-key cryptographic systems natively. In more recent practice, key-exchange algorithms are being superseded by key-agreement protocols where each of the two or more parties can influence the outcome when establishing a common key for secure communication. Key agreement is preferable to key exchange because it allows for Perfect Forward Secrecy (PFS) to be implemented. When PFS is used, random public keys (also called temporary cipher parameters or ephemeral keys ) are generated for each session by a non-deterministic algorithm for the purposes of key agreement. As a result, there is no single secret value which could lead to the compromise of multiple messages, protecting past and future communication alike. Note Longer RSA keys are required to provide security as computing capabilities increase. The recommended RSA key-length is 2048 bits. Though many servers continue to use 1024-bit keys, servers should migrate to at least 2048 bits. For 64-bit machines, consider using stronger keys. All CAs should use at least 2048-bit keys, and stronger keys (such as 3072 or 4096 bits) if possible. 3.1.1. Supported Cipher Suites Cipher and hashing algorithms are in constant flux with regard to various vulnerabilities and security strength. As a general rule, Red Hat Certificate System follows the NIST guideline and supports TLS 1.1 and TLS 1.2 cipher suites pertaining to the server keys. 3.1.1.1.
Recommended TLS Cipher Suites The Transport Layer Security (TLS) protocol is a universally accepted standard for authenticated and encrypted communication between clients and servers. Red Hat Certificate System supports TLS 1.1 and 1.2. Red Hat Certificate System supports the following cipher suites when the server is acting either as a server or as a client: ECC TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 RSA TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA TLS_DHE_RSA_WITH_AES_128_CBC_SHA TLS_DHE_RSA_WITH_AES_256_CBC_SHA TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/supported-standard |
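To confirm that a running Certificate System instance negotiates one of the suites listed above, you can probe it with openssl s_client. This is only a sketch: the host name is a placeholder and 8443 is simply a common HTTPS port for Certificate System subsystems, so substitute the port your instance actually uses.
openssl s_client -connect pki.example.com:8443 -tls1_2 -cipher ECDHE-RSA-AES256-GCM-SHA384 < /dev/null
A successful handshake reports the negotiated protocol and cipher (the OpenSSL name ECDHE-RSA-AES256-GCM-SHA384 corresponds to TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384); a rejected cipher ends with a handshake failure alert.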
Chapter 4. Viewing application composition by using the Topology view | Chapter 4. Viewing application composition by using the Topology view The Topology view in the Developer perspective of the web console provides a visual representation of all the applications within a project, their build status, and the components and services associated with them. 4.1. Prerequisites To view your applications in the Topology view and interact with them, ensure that: You have logged in to the web console . You have the appropriate roles and permissions in a project to create applications and other workloads in OpenShift Container Platform. You have created and deployed an application on OpenShift Container Platform using the Developer perspective . You are in the Developer perspective . 4.2. Viewing the topology of your application You can navigate to the Topology view using the left navigation panel in the Developer perspective. After you deploy an application, you are directed automatically to the Graph view where you can see the status of the application pods, quickly access the application on a public URL, access the source code to modify it, and see the status of your last build. You can zoom in and out to see more details for a particular application. The Topology view provides you the option to monitor your applications using the List view. Use the List view icon ( ) to see a list of all your applications and use the Graph view icon ( ) to switch back to the graph view. You can customize the views as required using the following: Use the Find by name field to find the required components. Search results may appear outside of the visible area; click Fit to Screen from the lower-left toolbar to resize the Topology view to show all components. Use the Display Options drop-down list to configure the Topology view of the various application groupings. The options are available depending on the types of components deployed in the project: Expand group Virtual Machines: Toggle to show or hide the virtual machines. Application Groupings: Clear to condense the application groups into cards with an overview of an application group and alerts associated with it. Helm Releases: Clear to condense the components deployed as Helm Release into cards with an overview of a given release. Knative Services: Clear to condense the Knative Service components into cards with an overview of a given component. Operator Groupings: Clear to condense the components deployed with an Operator into cards with an overview of the given group. Show elements based on Pod Count or Labels Pod Count: Select to show the number of pods of a component in the component icon. Labels: Toggle to show or hide the component labels. The Topology view also provides you the Export application option to download your application in the ZIP file format. You can then import the downloaded application to another project or cluster. For more details, see Exporting an application to another project or cluster in the Additional resources section. 4.3. Interacting with applications and components In the Topology view in the Developer perspective of the web console, the Graph view provides the following options to interact with applications and components: Click Open URL ( ) to see your application exposed by the route on a public URL. Click Edit Source code to access your source code and modify it. Note This feature is available only when you create applications using the From Git , From Catalog , and the From Dockerfile options. 
Hover your cursor over the lower left icon on the pod to see the name of the latest build and its status. The status of the application build is indicated as New ( ), Pending ( ), Running ( ), Completed ( ), Failed ( ), and Canceled ( ). The status or phase of the pod is indicated by different colors and tooltips as: Running ( ): The pod is bound to a node and all of the containers are created. At least one container is still running or is in the process of starting or restarting. Not Ready ( ): The pods which are running multiple containers, not all containers are ready. Warning ( ): Containers in pods are being terminated, however termination did not succeed. Some containers may be other states. Failed ( ): All containers in the pod terminated but least one container has terminated in failure. That is, the container either exited with non-zero status or was terminated by the system. Pending ( ): The pod is accepted by the Kubernetes cluster, but one or more of the containers has not been set up and made ready to run. This includes time a pod spends waiting to be scheduled as well as the time spent downloading container images over the network. Succeeded ( ): All containers in the pod terminated successfully and will not be restarted. Terminating ( ): When a pod is being deleted, it is shown as Terminating by some kubectl commands. Terminating status is not one of the pod phases. A pod is granted a graceful termination period, which defaults to 30 seconds. Unknown ( ): The state of the pod could not be obtained. This phase typically occurs due to an error in communicating with the node where the pod should be running. After you create an application and an image is deployed, the status is shown as Pending . After the application is built, it is displayed as Running . Figure 4.1. Application topology The application resource name is appended with indicators for the different types of resource objects as follows: CJ : CronJob D : Deployment DC : DeploymentConfig DS : DaemonSet J : Job P : Pod SS : StatefulSet (Knative): A serverless application Note Serverless applications take some time to load and display on the Graph view . When you deploy a serverless application, it first creates a service resource and then a revision. After that, it is deployed and displayed on the Graph view . If it is the only workload, you might be redirected to the Add page. After the revision is deployed, the serverless application is displayed on the Graph view . 4.4. Scaling application pods and checking builds and routes The Topology view provides the details of the deployed components in the Overview panel. You can use the Overview and Details tabs to scale the application pods, check build status, services, and routes as follows: Click on the component node to see the Overview panel to the right. Use the Details tab to: Scale your pods using the up and down arrows to increase or decrease the number of instances of the application manually. For serverless applications, the pods are automatically scaled down to zero when idle and scaled up depending on the channel traffic. Check the Labels , Annotations , and Status of the application. Click the Resources tab to: See the list of all the pods, view their status, access logs, and click on the pod to see the pod details. See the builds, their status, access logs, and start a new build if needed. See the services and routes used by the component. 
For serverless applications, the Resources tab provides information on the revision, routes, and the configurations used for that component. 4.5. Adding components to an existing project You can add components to a project. Procedure Navigate to the +Add view. Click Add to Project ( ) to left navigation pane or press Ctrl + Space Search for the component and click the Start / Create / Install button or click Enter to add the component to the project and see it in the topology Graph view . Figure 4.2. Adding component via quick search Alternatively, you can also use the available options in the context menu, such as Import from Git , Container Image , Database , From Catalog , Operator Backed , Helm Charts , Samples , or Upload JAR file , by right-clicking in the topology Graph view to add a component to your project. Figure 4.3. Context menu to add services 4.6. Grouping multiple components within an application You can use the +Add view to add multiple components or services to your project and use the topology Graph view to group applications and resources within an application group. Prerequisites You have created and deployed minimum two or more components on OpenShift Container Platform using the Developer perspective. Procedure To add a service to the existing application group, press Shift + drag it to the existing application group. Dragging a component and adding it to an application group adds the required labels to the component. Figure 4.4. Application grouping Alternatively, you can also add the component to an application as follows: Click the service pod to see the Overview panel to the right. Click the Actions drop-down menu and select Edit Application Grouping . In the Edit Application Grouping dialog box, click the Application drop-down list, and select an appropriate application group. Click Save to add the service to the application group. You can remove a component from an application group by selecting the component and using Shift + drag to drag it out of the application group. 4.7. Adding services to your application To add a service to your application use the +Add actions using the context menu in the topology Graph view . Note In addition to the context menu, you can add services by using the sidebar or hovering and dragging the dangling arrow from the application group. Procedure Right-click an application group in the topology Graph view to display the context menu. Figure 4.5. Add resource context menu Use Add to Application to select a method for adding a service to the application group, such as From Git , Container Image , From Dockerfile , From Devfile , Upload JAR file , Event Source , Channel , or Broker . Complete the form for the method you choose and click Create . For example, to add a service based on the source code in your Git repository, choose the From Git method, fill in the Import from Git form, and click Create . 4.8. Removing services from your application In the topology Graph view remove a service from your application using the context menu. Procedure Right-click on a service in an application group in the topology Graph view to display the context menu. Select Delete Deployment to delete the service. Figure 4.6. Deleting deployment option 4.9. Labels and annotations used for the Topology view The Topology view uses the following labels and annotations: Icon displayed in the node Icons in the node are defined by looking for matching icons using the app.openshift.io/runtime label, followed by the app.kubernetes.io/name label. 
This matching is done using a predefined set of icons. Link to the source code editor or the source The app.openshift.io/vcs-uri annotation is used to create links to the source code editor. Node Connector The app.openshift.io/connects-to annotation is used to connect the nodes. App grouping The app.kubernetes.io/part-of=<appname> label is used to group the applications, services, and components. For detailed information on the labels and annotations OpenShift Container Platform applications must use, see Guidelines for labels and annotations for OpenShift applications . 4.10. Additional resources See Importing a codebase from Git to create an application for more information on creating an application from Git. See Connecting an application to a service using the Developer perspective . See Exporting applications | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/building_applications/odc-viewing-application-composition-using-topology-view |
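The labels and annotations listed above can also be applied from the command line instead of the web console. The following is a sketch; the deployment name frontend, the application name shop, the runtime value nodejs, and the connects-to target database are all example values, and the exact value format accepted by app.openshift.io/connects-to can vary between versions.
oc label deployment/frontend app.kubernetes.io/part-of=shop
oc label deployment/frontend app.openshift.io/runtime=nodejs
oc annotate deployment/frontend app.openshift.io/connects-to=database
After the labels are applied, the frontend node is grouped under the shop application in the Topology Graph view and shows the selected runtime icon.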
Chapter 1. Red Hat OpenShift support for Windows Containers overview | Chapter 1. Red Hat OpenShift support for Windows Containers overview You can add Windows nodes either by creating a compute machine set or by specifying existing Bring-Your-Own-Host (BYOH) Window instances through a configuration map . Note Compute machine sets are not supported for bare metal or provider agnostic clusters. For workloads including both Linux and Windows, OpenShift Container Platform allows you to deploy Windows workloads running on Windows Server containers while also providing traditional Linux workloads hosted on Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL). For more information, see getting started with Windows container workloads . You need the WMCO to run Windows workloads in your cluster. The WMCO orchestrates the process of deploying and managing Windows workloads on a cluster. For more information, see how to enable Windows container workloads . You can create a Windows MachineSet object to create infrastructure Windows machine sets and related machines so that you can move supported Windows workloads to the new Windows machines. You can create a Windows MachineSet object on multiple platforms. You can schedule Windows workloads to Windows compute nodes. You can perform Windows Machine Config Operator upgrades to ensure that your Windows nodes have the latest updates. You can remove a Windows node by deleting a specific machine. You can use Bring-Your-Own-Host (BYOH) Windows instances to repurpose Windows Server VMs and bring them to OpenShift Container Platform. BYOH Windows instances benefit users who are looking to mitigate major disruptions in the event that a Windows server goes offline. You can use BYOH Windows instances as nodes on OpenShift Container Platform 4.8 and later versions. You can disable Windows container workloads by performing the following: Uninstalling the Windows Machine Config Operator Deleting the Windows Machine Config Operator namespace | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/windows_container_support_for_openshift/windows-container-overview |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/configuring_amq_broker/making-open-source-more-inclusive |
Chapter 4. Configuring Red Hat OpenStack Platform director for Service Telemetry Framework | Chapter 4. Configuring Red Hat OpenStack Platform director for Service Telemetry Framework To collect metrics, events, or both, and to send them to the Service Telemetry Framework (STF) storage domain, you must configure the Red Hat OpenStack Platform (RHOSP) overcloud to enable data collection and transport. STF can support both single and multiple clouds. The default configuration in RHOSP and STF set up for a single cloud installation. For a single RHOSP overcloud deployment with default configuration, see Section 4.1, "Deploying Red Hat OpenStack Platform overcloud for Service Telemetry Framework using director" . To plan your RHOSP installation and configuration STF for multiple clouds, see Section 4.3, "Configuring multiple clouds" . As part of an RHOSP overcloud deployment, you might need to configure additional features in your environment: To disable the data collector services, see Section 4.2, "Disabling Red Hat OpenStack Platform services used with Service Telemetry Framework" . 4.1. Deploying Red Hat OpenStack Platform overcloud for Service Telemetry Framework using director As part of the Red Hat OpenStack Platform (RHOSP) overcloud deployment using director, you must configure the data collectors and the data transport to Service Telemetry Framework (STF). Procedure Section 4.1.1, "Getting CA certificate from Service Telemetry Framework for overcloud configuration" Retrieving the AMQ Interconnect password Retrieving the AMQ Interconnect route address Creating the base configuration for STF Configuring the STF connection for the overcloud Deploying the overcloud Validating client-side installation Additional resources For more information about deploying an OpenStack cloud using director, see Installing and managing Red Hat OpenStack Platform with director . To collect data through AMQ Interconnect, see the amqp1 plug-in . 4.1.1. Getting CA certificate from Service Telemetry Framework for overcloud configuration To connect your Red Hat OpenStack Platform (RHOSP) overcloud to Service Telemetry Framework (STF), retrieve the CA certificate of AMQ Interconnect that runs within STF and use the certificate in RHOSP configuration. Procedure View a list of available certificates in STF: USD oc get secrets Retrieve and note the content of the default-interconnect-selfsigned Secret: USD oc get secret/default-interconnect-selfsigned -o jsonpath='{.data.ca\.crt}' | base64 -d 4.1.2. Retrieving the AMQ Interconnect password When you configure the Red Hat OpenStack Platform (RHOSP) overcloud for Service Telemetry Framework (STF), you must provide the AMQ Interconnect password in the STF connection file. You can disable basic authentication on the AMQ Interconnect connection by setting the value of the transports.qdr.auth parameter of the ServiceTelemetry spec to none . The transports.qdr.auth parameter is absent in versions of STF before 1.5.3, so the default behavior is that basic authentication is disabled. In a new install of STF 1.5.3 or later, the default value of transports.qdr.auth is basic , but if you upgraded to STF 1.5.3, the default value of transports.qdr.auth is none . Procedure Log in to your Red Hat OpenShift Container Platform environment where STF is hosted. Change to the service-telemetry project: USD oc project service-telemetry Retrieve the AMQ Interconnect password: USD oc get secret default-interconnect-users -o json | jq -r .data.guest | base64 -d 4.1.3. 
Retrieving the AMQ Interconnect route address When you configure the Red Hat OpenStack Platform (RHOSP) overcloud for Service Telemetry Framework (STF), you must provide the AMQ Interconnect route address in the STF connection file. Procedure Log in to your Red Hat OpenShift Container Platform environment where STF is hosted. Change to the service-telemetry project: USD oc project service-telemetry Retrieve the AMQ Interconnect route address: USD oc get routes -ogo-template='{{ range .items }}{{printf "%s\n" .spec.host }}{{ end }}' | grep "\-5671" default-interconnect-5671-service-telemetry.apps.infra.watch 4.1.4. Creating the base configuration for STF To configure the base parameters to provide a compatible data collection and transport for Service Telemetry Framework (STF), you must create a file that defines the default data collection values. Procedure Log in to the undercloud host as the stack user. Create a configuration file called enable-stf.yaml in the /home/stack directory. Important Setting PipelinePublishers to an empty list results in no metric data passing to RHOSP telemetry components, such as Gnocchi or Panko. If you need to send data to additional pipelines, the Ceilometer polling interval of 30 seconds, that you specify in ExtraConfig , might overwhelm the RHOSP telemetry components. You must increase the interval to a larger value, such as 300 , which results in less telemetry resolution in STF. enable-stf.yaml parameter_defaults: # only send to STF, not other publishers PipelinePublishers: [] # manage the polling and pipeline configuration files for Ceilometer agents ManagePolling: true ManagePipeline: true ManageEventPipeline: false # enable Ceilometer metrics CeilometerQdrPublishMetrics: true # enable collection of API status CollectdEnableSensubility: true CollectdSensubilityTransport: amqp1 # enable collection of containerized service metrics CollectdEnableLibpodstats: true # set collectd overrides for higher telemetry resolution and extra plugins # to load CollectdConnectionType: amqp1 CollectdAmqpInterval: 30 CollectdDefaultPollingInterval: 30 # to collect information about the virtual memory subsystem of the kernel # CollectdExtraPlugins: # - vmem # set standard prefixes for where metrics are published to QDR MetricsQdrAddresses: - prefix: 'collectd' distribution: multicast - prefix: 'anycast/ceilometer' distribution: multicast ExtraConfig: ceilometer::agent::polling::polling_interval: 30 ceilometer::agent::polling::polling_meters: - cpu - memory.usage # to avoid filling the memory buffers if disconnected from the message bus # note: this may need an adjustment if there are many metrics to be sent. 
collectd::plugin::amqp1::send_queue_limit: 5000 # to receive extra information about virtual memory, you must enable vmem plugin in CollectdExtraPlugins # collectd::plugin::vmem::verbose: true # provide name and uuid in addition to hostname for better correlation # to ceilometer data collectd::plugin::virt::hostname_format: "name uuid hostname" # to capture all extra_stats metrics, comment out below config collectd::plugin::virt::extra_stats: cpu_util vcpu disk # provide the human-friendly name of the virtual instance collectd::plugin::virt::plugin_instance_format: metadata # set memcached collectd plugin to report its metrics by hostname # rather than host IP, ensuring metrics in the dashboard remain uniform collectd::plugin::memcached::instances: local: host: "%{hiera('fqdn_canonical')}" port: 11211 # report root filesystem storage metrics collectd::plugin::df::ignoreselected: false 4.1.5. Configuring the STF connection for the overcloud To configure the Service Telemetry Framework (STF) connection, you must create a file that contains the connection configuration of the AMQ Interconnect for the overcloud to the STF deployment. Enable the collection of metrics and storage of the metrics in STF and deploy the overcloud. The default configuration is for a single cloud instance with the default message bus topics. For configuration of multiple cloud deployments, see Section 4.3, "Configuring multiple clouds" . Prerequisites Retrieve the CA certificate from the AMQ Interconnect deployed by STF. For more information, see Section 4.1.1, "Getting CA certificate from Service Telemetry Framework for overcloud configuration" . Retrieve the AMQ Interconnect password. For more information, see Section 4.1.2, "Retrieving the AMQ Interconnect password" . Retrieve the AMQ Interconnect route address. For more information, see Section 4.1.3, "Retrieving the AMQ Interconnect route address" . Procedure Log in to the undercloud host as the stack user. Create a configuration file called stf-connectors.yaml in the /home/stack directory. In the stf-connectors.yaml file, configure the MetricsQdrConnectors address to connect the AMQ Interconnect on the overcloud to the STF deployment. You configure the topic addresses for Sensubility, Ceilometer, and collectd in this file to match the defaults in STF. For more information about customizing topics and cloud configuration, see Section 4.3, "Configuring multiple clouds" . stf-connectors.yaml resource_registry: OS::TripleO::Services::Collectd: /usr/share/openstack-tripleo-heat-templates/deployment/metrics/collectd-container-puppet.yaml parameter_defaults: ExtraConfig: qdr::router_id: "%{::hostname}.cloud1" MetricsQdrConnectors: - host: default-interconnect-5671-service-telemetry.apps.infra.watch port: 443 role: edge verifyHostname: false sslProfile: sslProfile saslUsername: guest@default-interconnect saslPassword: <password_from_stf> MetricsQdrSSLProfiles: - name: sslProfile caCertFileContent: | -----BEGIN CERTIFICATE----- <snip> -----END CERTIFICATE----- CeilometerQdrMetricsConfig: driver: amqp topic: cloud1-metering CollectdAmqpInstances: cloud1-telemetry: format: JSON presettle: false CollectdSensubilityResultsChannel: sensubility/cloud1-telemetry The qdr::router_id configuration is to override the default value which uses the fully-qualified domain name (FQDN) of the host. In some cases the FQDN can result in a router ID length of greater than 61 characters which results in failed QDR connections. For deployments with shorter FQDN values this is not necessary. 
The resource_registry configuration directly loads the collectd service because you do not include the collectd-write-qdr.yaml environment file for multiple cloud deployments. Replace the host sub-parameter of MetricsQdrConnectors with the value that you retrieved in Section 4.1.3, "Retrieving the AMQ Interconnect route address" . Replace the <password_from_stf> portion of the saslPassword sub-parameter of MetricsQdrConnectors with the value you retrieved in Section 4.1.2, "Retrieving the AMQ Interconnect password" . Replace the caCertFileContent parameter with the contents retrieved in Section 4.1.1, "Getting CA certificate from Service Telemetry Framework for overcloud configuration" . Set topic value of CeilometerQdrMetricsConfig.topic to define the topic for Ceilometer metrics. The value is a unique topic identifier for the cloud such as cloud1-metering . Set CollectdAmqpInstances sub-parameter to define the topic for collectd metrics. The section name is a unique topic identifier for the cloud such as cloud1-telemetry . Set CollectdSensubilityResultsChannel to define the topic for collectd-sensubility events. The value is a unique topic identifier for the cloud such as sensubility/cloud1-telemetry . Note When you define the topics for collectd and Ceilometer, the value you provide is transposed into the full topic that the Smart Gateway client uses to listen for messages. Ceilometer topic values are transposed into the topic address anycast/ceilometer/<TOPIC>.sample and collectd topic values are transposed into the topic address collectd/<TOPIC> . The value for sensubility is the full topic path and has no transposition from topic value to topic address. For an example of a cloud configuration in the ServiceTelemetry object referring to the full topic address, see the section called "The clouds parameter" . 4.1.6. Deploying the overcloud Deploy or update the overcloud with the required environment files so that data is collected and transmitted to Service Telemetry Framework (STF). Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: USD source ~/stackrc Add your data collection and AMQ Interconnect environment files to the stack with your other environment files and deploy the overcloud: (undercloud)USD openstack overcloud deploy --templates \ -e [your environment files] \ -e /usr/share/openstack-tripleo-heat-templates/environments/metrics/ceilometer-write-qdr.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/metrics/qdr-edge-only.yaml \ -e /home/stack/enable-stf.yaml \ -e /home/stack/stf-connectors.yaml Include the ceilometer-write-qdr.yaml file to ensure that Ceilometer telemetry is sent to STF. Include the qdr-edge-only.yaml file to ensure that the message bus is enabled and connected to STF message bus routers. Include the enable-stf.yaml environment file to ensure that the defaults are configured correctly. Include the stf-connectors.yaml environment file to define the connection to STF. 4.1.7. Validating client-side installation To validate data collection from the Service Telemetry Framework (STF) storage domain, query the data sources for delivered data. To validate individual nodes in the Red Hat OpenStack Platform (RHOSP) deployment, use SSH to connect to the console. Tip Some telemetry data is available only when RHOSP has active workloads. Procedure Log in to an overcloud node, for example, controller-0. 
Ensure that the metrics_qdr and collection agent containers are running on the node: USD sudo podman container inspect --format '{{.State.Status}}' metrics_qdr collectd ceilometer_agent_notification ceilometer_agent_central running running running running Note Use this command on compute nodes: USD sudo podman container inspect --format '{{.State.Status}}' metrics_qdr collectd ceilometer_agent_compute Return the internal network address on which AMQ Interconnect is running, for example, 172.17.1.44 listening on port 5666 : USD sudo podman exec -it metrics_qdr cat /etc/qpid-dispatch/qdrouterd.conf listener { host: 172.17.1.44 port: 5666 authenticatePeer: no saslMechanisms: ANONYMOUS } Return a list of connections to the local AMQ Interconnect: USD sudo podman exec -it metrics_qdr qdstat --bus=172.17.1.44:5666 --connections Connections id host container role dir security authentication tenant ============================================================================================================================================================================================================================================================================================ 1 default-interconnect-5671-service-telemetry.apps.infra.watch:443 default-interconnect-7458fd4d69-bgzfb edge out TLSv1.2(DHE-RSA-AES256-GCM-SHA384) anonymous-user 12 172.17.1.44:60290 openstack.org/om/container/controller-0/ceilometer-agent-notification/25/5c02cee550f143ec9ea030db5cccba14 normal in no-security no-auth 16 172.17.1.44:36408 metrics normal in no-security anonymous-user 899 172.17.1.44:39500 10a2e99d-1b8a-4329-b48c-4335e5f75c84 normal in no-security no-auth There are four connections: Outbound connection to STF Inbound connection from ceilometer Inbound connection from collectd Inbound connection from our qdstat client The outbound STF connection is provided to the MetricsQdrConnectors host parameter and is the route for the STF storage domain. The other hosts are internal network addresses of the client connections to this AMQ Interconnect. To ensure that messages are delivered, list the links, and view the _edge address in the deliv column for delivery of messages: USD sudo podman exec -it metrics_qdr qdstat --bus=172.17.1.44:5666 --links Router Links type dir conn id id peer class addr phs cap pri undel unsett deliv presett psdrop acc rej rel mod delay rate =========================================================================================================================================================== endpoint out 1 5 local _edge 250 0 0 0 2979926 0 0 0 0 2979926 0 0 0 endpoint in 1 6 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint in 1 7 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint out 1 8 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint in 1 9 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint out 1 10 250 0 0 0 911 911 0 0 0 0 0 911 0 endpoint in 1 11 250 0 0 0 0 911 0 0 0 0 0 0 0 endpoint out 12 32 local temp.lSY6Mcicol4J2Kp 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint in 16 41 250 0 0 0 2979924 0 0 0 0 2979924 0 0 0 endpoint in 912 1834 mobile USDmanagement 0 250 0 0 0 1 0 0 1 0 0 0 0 0 endpoint out 912 1835 local temp.9Ok2resI9tmt+CT 250 0 0 0 0 0 0 0 0 0 0 0 0 To list the addresses from RHOSP nodes to STF, connect to Red Hat OpenShift Container Platform to retrieve the AMQ Interconnect pod name and list the connections. 
List the available AMQ Interconnect pods: USD oc get pods -l application=default-interconnect NAME READY STATUS RESTARTS AGE default-interconnect-7458fd4d69-bgzfb 1/1 Running 0 6d21h Connect to the pod and list the known connections. In this example, there are three edge connections from the RHOSP nodes with connection id 22, 23, and 24: USD oc exec -it deploy/default-interconnect -- qdstat --connections 2020-04-21 18:25:47.243852 UTC default-interconnect-7458fd4d69-bgzfb Connections id host container role dir security authentication tenant last dlv uptime =============================================================================================================================================================================================== 5 10.129.0.110:48498 bridge-3f5 edge in no-security anonymous-user 000:00:00:02 000:17:36:29 6 10.129.0.111:43254 rcv[default-cloud1-ceil-meter-smartgateway-58f885c76d-xmxwn] edge in no-security anonymous-user 000:00:00:02 000:17:36:20 7 10.130.0.109:50518 rcv[default-cloud1-coll-event-smartgateway-58fbbd4485-rl9bd] normal in no-security anonymous-user - 000:17:36:11 8 10.130.0.110:33802 rcv[default-cloud1-ceil-event-smartgateway-6cfb65478c-g5q82] normal in no-security anonymous-user 000:01:26:18 000:17:36:05 22 10.128.0.1:51948 Router.ceph-0.redhat.local edge in TLSv1/SSLv3(DHE-RSA-AES256-GCM-SHA384) anonymous-user 000:00:00:03 000:22:08:43 23 10.128.0.1:51950 Router.compute-0.redhat.local edge in TLSv1/SSLv3(DHE-RSA-AES256-GCM-SHA384) anonymous-user 000:00:00:03 000:22:08:43 24 10.128.0.1:52082 Router.controller-0.redhat.local edge in TLSv1/SSLv3(DHE-RSA-AES256-GCM-SHA384) anonymous-user 000:00:00:00 000:22:08:34 27 127.0.0.1:42202 c2f541c1-4c97-4b37-a189-a396c08fb079 normal in no-security no-auth 000:00:00:00 000:00:00:00 To view the number of messages delivered by the network, use each address with the oc exec command: USD oc exec -it deploy/default-interconnect -- qdstat --address 2020-04-21 18:20:10.293258 UTC default-interconnect-7458fd4d69-bgzfb Router Addresses class addr phs distrib pri local remote in out thru fallback ========================================================================================================================== mobile anycast/ceilometer/event.sample 0 balanced - 1 0 970 970 0 0 mobile anycast/ceilometer/metering.sample 0 balanced - 1 0 2,344,833 2,344,833 0 0 mobile collectd/notify 0 multicast - 1 0 70 70 0 0 mobile collectd/telemetry 0 multicast - 1 0 216,128,890 216,128,890 0 0 4.2. Disabling Red Hat OpenStack Platform services used with Service Telemetry Framework Disable the services used when deploying Red Hat OpenStack Platform (RHOSP) and connecting it to Service Telemetry Framework (STF). There is no removal of logs or generated configuration files as part of the disablement of the services. Procedure Log in to the undercloud host as the stack user. 
Source the stackrc undercloud credentials file: USD source ~/stackrc Create the disable-stf.yaml environment file: USD cat > ~/disable-stf.yaml <<EOF --- resource_registry: OS::TripleO::Services::CeilometerAgentCentral: OS::Heat::None OS::TripleO::Services::CeilometerAgentNotification: OS::Heat::None OS::TripleO::Services::CeilometerAgentIpmi: OS::Heat::None OS::TripleO::Services::ComputeCeilometerAgent: OS::Heat::None OS::TripleO::Services::Redis: OS::Heat::None OS::TripleO::Services::Collectd: OS::Heat::None OS::TripleO::Services::MetricsQdr: OS::Heat::None EOF Remove the following files from your RHOSP director deployment: ceilometer-write-qdr.yaml qdr-edge-only.yaml enable-stf.yaml stf-connectors.yaml Update the RHOSP overcloud. Ensure that you use the disable-stf.yaml file early in the list of environment files. By adding disable-stf.yaml early in the list, other environment files can override the configuration that would disable the service: (undercloud)USD openstack overcloud deploy --templates \ -e /home/stack/disable-stf.yaml \ -e [your environment files] 4.3. Configuring multiple clouds You can configure multiple Red Hat OpenStack Platform (RHOSP) clouds to target a single instance of Service Telemetry Framework (STF). When you configure multiple clouds, every cloud must send metrics and events on their own unique message bus topic. In the STF deployment, Smart Gateway instances listen on these topics to save information to the common data store. Data that is stored by the Smart Gateway in the data storage domain is filtered by using the metadata that each of Smart Gateways creates. Warning Ensure that you deploy each cloud with a unique cloud domain configuration. For more information about configuring the domain for your cloud deployment, see Section 4.3.4, "Setting a unique cloud domain" . Figure 4.1. Two RHOSP clouds connect to STF To configure the RHOSP overcloud for a multiple cloud scenario, complete the following tasks: Plan the AMQP address prefixes that you want to use for each cloud. For more information, see Section 4.3.1, "Planning AMQP address prefixes" . Deploy metrics and events consumer Smart Gateways for each cloud to listen on the corresponding address prefixes. For more information, see Section 4.3.2, "Deploying Smart Gateways" . Configure each cloud with a unique domain name. For more information, see Section 4.3.4, "Setting a unique cloud domain" . Create the base configuration for STF. For more information, see Section 4.1.4, "Creating the base configuration for STF" . Configure each cloud to send its metrics and events to STF on the correct address. For more information, see Section 4.3.5, "Creating the Red Hat OpenStack Platform environment file for multiple clouds" . 4.3.1. Planning AMQP address prefixes By default, Red Hat OpenStack Platform (RHOSP) nodes retrieve data through two data collectors; collectd and Ceilometer. The collectd-sensubility plugin requires a unique address. These components send telemetry data or notifications to the respective AMQP addresses, for example, collectd/telemetry . STF Smart Gateways listen on those AMQP addresses for data. To support multiple clouds and to identify which cloud generated the monitoring data, configure each cloud to send data to a unique address. Add a cloud identifier prefix to the second part of the address. 
The following list shows some example addresses and identifiers: collectd/cloud1-telemetry collectd/cloud1-notify sensubility/cloud1-telemetry anycast/ceilometer/cloud1-metering.sample anycast/ceilometer/cloud1-event.sample collectd/cloud2-telemetry collectd/cloud2-notify sensubility/cloud2-telemetry anycast/ceilometer/cloud2-metering.sample anycast/ceilometer/cloud2-event.sample collectd/us-east-1-telemetry collectd/us-west-3-telemetry 4.3.2. Deploying Smart Gateways You must deploy a Smart Gateway for each of the data collection types for each cloud; one for collectd metrics, one for collectd events, one for Ceilometer metrics, one for Ceilometer events, and one for collectd-sensubility metrics. Configure each of the Smart Gateways to listen on the AMQP address that you define for the corresponding cloud. To define Smart Gateways, configure the clouds parameter in the ServiceTelemetry manifest. When you deploy STF for the first time, Smart Gateway manifests are created that define the initial Smart Gateways for a single cloud. When you deploy Smart Gateways for multiple cloud support, you deploy multiple Smart Gateways for each of the data collection types that handle the metrics and the events data for each cloud. The initial Smart Gateways are defined in cloud1 with the following subscription addresses: collector type default subscription address collectd metrics collectd/telemetry collectd events collectd/notify collectd-sensubility metrics sensubility/telemetry Ceilometer metrics anycast/ceilometer/metering.sample Ceilometer events anycast/ceilometer/event.sample Prerequisites You have determined your cloud naming scheme. For more information about determining your naming scheme, see Section 4.3.1, "Planning AMQP address prefixes" . You have created your list of clouds objects. For more information about creating the content for the clouds parameter, see the section called "The clouds parameter" . Procedure Log in to Red Hat OpenShift Container Platform. Change to the service-telemetry namespace: USD oc project service-telemetry Edit the default ServiceTelemetry object and add a clouds parameter with your configuration: Warning Long cloud names might exceed the maximum pod name of 63 characters. Ensure that the combination of the ServiceTelemetry name default and the clouds.name does not exceed 19 characters. Cloud names cannot contain any special characters, such as - . Limit cloud names to alphanumeric (a-z, 0-9). Topic addresses have no character limitation and can be different from the clouds.name value. USD oc edit stf default apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: ... spec: ... clouds: - name: cloud1 events: collectors: - collectorType: collectd subscriptionAddress: collectd/cloud1-notify - collectorType: ceilometer subscriptionAddress: anycast/ceilometer/cloud1-event.sample metrics: collectors: - collectorType: collectd subscriptionAddress: collectd/cloud1-telemetry - collectorType: sensubility subscriptionAddress: sensubility/cloud1-telemetry - collectorType: ceilometer subscriptionAddress: anycast/ceilometer/cloud1-metering.sample - name: cloud2 events: ... Save the ServiceTelemetry object. Verify that each Smart Gateway is running. 
This can take several minutes depending on the number of Smart Gateways: USD oc get po -l app=smart-gateway NAME READY STATUS RESTARTS AGE default-cloud1-ceil-event-smartgateway-6cfb65478c-g5q82 2/2 Running 0 13h default-cloud1-ceil-meter-smartgateway-58f885c76d-xmxwn 2/2 Running 0 13h default-cloud1-coll-event-smartgateway-58fbbd4485-rl9bd 2/2 Running 0 13h default-cloud1-coll-meter-smartgateway-7c6fc495c4-jn728 2/2 Running 0 13h default-cloud1-sens-meter-smartgateway-8h4tc445a2-mm683 2/2 Running 0 13h 4.3.3. Deleting the default Smart Gateways After you configure Service Telemetry Framework (STF) for multiple clouds, you can delete the default Smart Gateways if they are no longer in use. The Service Telemetry Operator can remove SmartGateway objects that were created but are no longer listed in the ServiceTelemetry clouds list of objects. To enable the removal of SmartGateway objects that are not defined by the clouds parameter, you must set the cloudsRemoveOnMissing parameter to true in the ServiceTelemetry manifest. Tip If you do not want to deploy any Smart Gateways, define an empty clouds list by using the clouds: [] parameter. Warning The cloudsRemoveOnMissing parameter is disabled by default. If you enable the cloudsRemoveOnMissing parameter, you remove any manually-created SmartGateway objects in the current namespace without any possibility to restore. Procedure Define your clouds parameter with the list of cloud objects that you want the Service Telemetry Operator to manage. For more information, see the section called "The clouds parameter" . Edit the ServiceTelemetry object and add the cloudsRemoveOnMissing parameter: apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: ... spec: ... cloudsRemoveOnMissing: true clouds: ... Save the modifications. Verify that the Operator deleted the Smart Gateways. This can take several minutes while the Operators reconcile the changes: USD oc get smartgateways 4.3.4. Setting a unique cloud domain To ensure that telemetry from different Red Hat OpenStack Platform (RHOSP) clouds to Service Telemetry Framework (STF) can be uniquely identified and do not conflict, configure the CloudDomain parameter. Warning Ensure that you do not change host or domain names in an existing deployment. Host and domain name configuration is supported in new cloud deployments only. Procedure Create a new environment file, for example, hostnames.yaml . Set the CloudDomain parameter in the environment file, as shown in the following example: hostnames.yaml parameter_defaults: CloudDomain: newyork-west-04 CephStorageHostnameFormat: 'ceph-%index%' ObjectStorageHostnameFormat: 'swift-%index%' ComputeHostnameFormat: 'compute-%index%' Add the new environment file to your deployment. Additional resources Section 4.3.5, "Creating the Red Hat OpenStack Platform environment file for multiple clouds" Core Overcloud Parameters in the Overcloud Parameters guide 4.3.5. Creating the Red Hat OpenStack Platform environment file for multiple clouds To label traffic according to the cloud of origin, you must create a configuration with cloud-specific instance names. Create an stf-connectors.yaml file and adjust the values of CeilometerQdrMetricsConfig and CollectdAmqpInstances to match the AMQP address prefix scheme. Note If you enabled container health and API status monitoring, you must also modify the CollectdSensubilityResultsChannel parameter. For more information, see Section 6.9, "Red Hat OpenStack Platform API status and containerized services health" . 
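For orientation before the procedure, the topic-related subset of such a file for a hypothetical second cloud named cloud2 would follow the same pattern; only the cloud identifier inside each topic changes. The file name and the convention of keeping one connector file per cloud deployment are assumptions for illustration only, and the complete cloud1 file, including the connector and SSL profile settings, is shown in the procedure that follows.
# Illustrative sketch only: topic values for a second cloud, matching the
# cloud2 subscription addresses planned in Section 4.3.1.
cat > ~/stf-connectors-cloud2.yaml <<EOF
parameter_defaults:
  CeilometerQdrMetricsConfig:
    driver: amqp
    topic: cloud2-metering
  CollectdAmqpInstances:
    cloud2-telemetry:
      format: JSON
      presettle: false
  CollectdSensubilityResultsChannel: sensubility/cloud2-telemetry
EOF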
Prerequisites You have retrieved the CA certificate from the AMQ Interconnect deployed by STF. For more information, see Section 4.1.1, "Getting CA certificate from Service Telemetry Framework for overcloud configuration" . You have created your list of clouds objects. For more information about creating the content for the clouds parameter, see the clouds configuration parameter . You have retrieved the AMQ Interconnect route address. For more information, see Section 4.1.3, "Retrieving the AMQ Interconnect route address" . You have created the base configuration for STF. For more information, see Section 4.1.4, "Creating the base configuration for STF" . You have created a unique domain name environment file. For more information, see Section 4.3.4, "Setting a unique cloud domain" . Procedure Log in to the undercloud host as the stack user. Create a configuration file called stf-connectors.yaml in the /home/stack directory. In the stf-connectors.yaml file, configure the MetricsQdrConnectors address to connect to the AMQ Interconnect on the overcloud deployment. Configure the CeilometerQdrMetricsConfig , CollectdAmqpInstances , and CollectdSensubilityResultsChannel topic values to match the AMQP address that you want for this cloud deployment. stf-connectors.yaml resource_registry: OS::TripleO::Services::Collectd: /usr/share/openstack-tripleo-heat-templates/deployment/metrics/collectd-container-puppet.yaml parameter_defaults: ExtraConfig: qdr::router_id: %{::hostname}.cloud1 MetricsQdrConnectors: - host: default-interconnect-5671-service-telemetry.apps.infra.watch port: 443 role: edge verifyHostname: false sslProfile: sslProfile MetricsQdrSSLProfiles: - name: sslProfile caCertFileContent: | -----BEGIN CERTIFICATE----- <snip> -----END CERTIFICATE----- CeilometerQdrMetricsConfig: driver: amqp topic: cloud1-metering CollectdAmqpInstances: cloud1-telemetry: format: JSON presettle: false CollectdSensubilityResultsChannel: sensubility/cloud1-telemetry The qdr::router_id configuration is to override the default value which uses the fully-qualified domain name (FQDN) of the host. In some cases the FQDN can result in a router ID length of greater than 61 characters which results in failed QDR connections. For deployments with shorter FQDN values this is not necessary. The resource_registry configuration directly loads the collectd service because you do not include the collectd-write-qdr.yaml environment file for multiple cloud deployments. Replace the host parameter with the value that you retrieved in Section 4.1.3, "Retrieving the AMQ Interconnect route address" . Replace the caCertFileContent parameter with the contents retrieved in Section 4.1.1, "Getting CA certificate from Service Telemetry Framework for overcloud configuration" . Replace the host sub-parameter of MetricsQdrConnectors with the value that you retrieved in Section 4.1.3, "Retrieving the AMQ Interconnect route address" . Set topic value of CeilometerQdrMetricsConfig.topic to define the topic for Ceilometer metrics. The value is a unique topic identifier for the cloud such as cloud1-metering . Set CollectdAmqpInstances sub-parameter to define the topic for collectd metrics. The section name is a unique topic identifier for the cloud such as cloud1-telemetry . Set CollectdSensubilityResultsChannel to define the topic for collectd-sensubility events. The value is a unique topic identifier for the cloud such as sensubility/cloud1-telemetry . 
Note When you define the topics for collectd and Ceilometer, the value you provide is transposed into the full topic that the Smart Gateway client uses to listen for messages. Ceilometer topic values are transposed into the topic address anycast/ceilometer/<TOPIC>.sample and collectd topic values are transposed into the topic address collectd/<TOPIC> . The value for sensubility is the full topic path and has no transposition from topic value to topic address. For an example of a cloud configuration in the ServiceTelemetry object referring to the full topic address, see the section called "The clouds parameter" . Ensure that the naming convention in the stf-connectors.yaml file aligns with the spec.bridge.amqpUrl field in the Smart Gateway configuration. For example, configure the CeilometerQdrMetricsConfig.topic field to a value of cloud1-metering . Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: USD source stackrc Include the stf-connectors.yaml file and unique domain name environment file hostnames.yaml in the openstack overcloud deployment command, with any other environment files relevant to your environment: Warning If you use the collectd-write-qdr.yaml file with a custom CollectdAmqpInstances parameter, data publishes to the custom and default topics. In a multiple cloud environment, the configuration of the resource_registry parameter in the stf-connectors.yaml file loads the collectd service. (undercloud)USD openstack overcloud deploy --templates \ -e [your environment files] \ -e /usr/share/openstack-tripleo-heat-templates/environments/metrics/ceilometer-write-qdr.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/metrics/qdr-edge-only.yaml \ -e /home/stack/hostnames.yaml \ -e /home/stack/enable-stf.yaml \ -e /home/stack/stf-connectors.yaml Deploy the Red Hat OpenStack Platform overcloud. Additional resources For information about how to validate the deployment, see Section 4.1.7, "Validating client-side installation" . 4.3.6. Querying metrics data from multiple clouds Data stored in Prometheus has a service label according to the Smart Gateway it was scraped from. You can use this label to query data from a specific cloud. To query data from a specific cloud, use a Prometheus promql query that matches the associated service label; for example: collectd_uptime{service="default-cloud1-coll-meter"} . | [
"oc get secrets",
"oc get secret/default-interconnect-selfsigned -o jsonpath='{.data.ca\\.crt}' | base64 -d",
"oc project service-telemetry",
"oc get secret default-interconnect-users -o json | jq -r .data.guest | base64 -d",
"oc project service-telemetry",
"oc get routes -ogo-template='{{ range .items }}{{printf \"%s\\n\" .spec.host }}{{ end }}' | grep \"\\-5671\" default-interconnect-5671-service-telemetry.apps.infra.watch",
"parameter_defaults: # only send to STF, not other publishers PipelinePublishers: [] # manage the polling and pipeline configuration files for Ceilometer agents ManagePolling: true ManagePipeline: true ManageEventPipeline: false # enable Ceilometer metrics CeilometerQdrPublishMetrics: true # enable collection of API status CollectdEnableSensubility: true CollectdSensubilityTransport: amqp1 # enable collection of containerized service metrics CollectdEnableLibpodstats: true # set collectd overrides for higher telemetry resolution and extra plugins # to load CollectdConnectionType: amqp1 CollectdAmqpInterval: 30 CollectdDefaultPollingInterval: 30 # to collect information about the virtual memory subsystem of the kernel # CollectdExtraPlugins: # - vmem # set standard prefixes for where metrics are published to QDR MetricsQdrAddresses: - prefix: 'collectd' distribution: multicast - prefix: 'anycast/ceilometer' distribution: multicast ExtraConfig: ceilometer::agent::polling::polling_interval: 30 ceilometer::agent::polling::polling_meters: - cpu - memory.usage # to avoid filling the memory buffers if disconnected from the message bus # note: this may need an adjustment if there are many metrics to be sent. collectd::plugin::amqp1::send_queue_limit: 5000 # to receive extra information about virtual memory, you must enable vmem plugin in CollectdExtraPlugins # collectd::plugin::vmem::verbose: true # provide name and uuid in addition to hostname for better correlation # to ceilometer data collectd::plugin::virt::hostname_format: \"name uuid hostname\" # to capture all extra_stats metrics, comment out below config collectd::plugin::virt::extra_stats: cpu_util vcpu disk # provide the human-friendly name of the virtual instance collectd::plugin::virt::plugin_instance_format: metadata # set memcached collectd plugin to report its metrics by hostname # rather than host IP, ensuring metrics in the dashboard remain uniform collectd::plugin::memcached::instances: local: host: \"%{hiera('fqdn_canonical')}\" port: 11211 # report root filesystem storage metrics collectd::plugin::df::ignoreselected: false",
"resource_registry: OS::TripleO::Services::Collectd: /usr/share/openstack-tripleo-heat-templates/deployment/metrics/collectd-container-puppet.yaml parameter_defaults: ExtraConfig: qdr::router_id: \"%{::hostname}.cloud1\" MetricsQdrConnectors: - host: default-interconnect-5671-service-telemetry.apps.infra.watch port: 443 role: edge verifyHostname: false sslProfile: sslProfile saslUsername: guest@default-interconnect saslPassword: <password_from_stf> MetricsQdrSSLProfiles: - name: sslProfile caCertFileContent: | -----BEGIN CERTIFICATE----- <snip> -----END CERTIFICATE----- CeilometerQdrMetricsConfig: driver: amqp topic: cloud1-metering CollectdAmqpInstances: cloud1-telemetry: format: JSON presettle: false CollectdSensubilityResultsChannel: sensubility/cloud1-telemetry",
"source ~/stackrc",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /usr/share/openstack-tripleo-heat-templates/environments/metrics/ceilometer-write-qdr.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/metrics/qdr-edge-only.yaml -e /home/stack/enable-stf.yaml -e /home/stack/stf-connectors.yaml",
"sudo podman container inspect --format '{{.State.Status}}' metrics_qdr collectd ceilometer_agent_notification ceilometer_agent_central running running running running",
"sudo podman container inspect --format '{{.State.Status}}' metrics_qdr collectd ceilometer_agent_compute",
"sudo podman exec -it metrics_qdr cat /etc/qpid-dispatch/qdrouterd.conf listener { host: 172.17.1.44 port: 5666 authenticatePeer: no saslMechanisms: ANONYMOUS }",
"sudo podman exec -it metrics_qdr qdstat --bus=172.17.1.44:5666 --connections Connections id host container role dir security authentication tenant ============================================================================================================================================================================================================================================================================================ 1 default-interconnect-5671-service-telemetry.apps.infra.watch:443 default-interconnect-7458fd4d69-bgzfb edge out TLSv1.2(DHE-RSA-AES256-GCM-SHA384) anonymous-user 12 172.17.1.44:60290 openstack.org/om/container/controller-0/ceilometer-agent-notification/25/5c02cee550f143ec9ea030db5cccba14 normal in no-security no-auth 16 172.17.1.44:36408 metrics normal in no-security anonymous-user 899 172.17.1.44:39500 10a2e99d-1b8a-4329-b48c-4335e5f75c84 normal in no-security no-auth",
"sudo podman exec -it metrics_qdr qdstat --bus=172.17.1.44:5666 --links Router Links type dir conn id id peer class addr phs cap pri undel unsett deliv presett psdrop acc rej rel mod delay rate =========================================================================================================================================================== endpoint out 1 5 local _edge 250 0 0 0 2979926 0 0 0 0 2979926 0 0 0 endpoint in 1 6 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint in 1 7 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint out 1 8 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint in 1 9 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint out 1 10 250 0 0 0 911 911 0 0 0 0 0 911 0 endpoint in 1 11 250 0 0 0 0 911 0 0 0 0 0 0 0 endpoint out 12 32 local temp.lSY6Mcicol4J2Kp 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint in 16 41 250 0 0 0 2979924 0 0 0 0 2979924 0 0 0 endpoint in 912 1834 mobile USDmanagement 0 250 0 0 0 1 0 0 1 0 0 0 0 0 endpoint out 912 1835 local temp.9Ok2resI9tmt+CT 250 0 0 0 0 0 0 0 0 0 0 0 0",
"oc get pods -l application=default-interconnect NAME READY STATUS RESTARTS AGE default-interconnect-7458fd4d69-bgzfb 1/1 Running 0 6d21h",
"oc exec -it deploy/default-interconnect -- qdstat --connections 2020-04-21 18:25:47.243852 UTC default-interconnect-7458fd4d69-bgzfb Connections id host container role dir security authentication tenant last dlv uptime =============================================================================================================================================================================================== 5 10.129.0.110:48498 bridge-3f5 edge in no-security anonymous-user 000:00:00:02 000:17:36:29 6 10.129.0.111:43254 rcv[default-cloud1-ceil-meter-smartgateway-58f885c76d-xmxwn] edge in no-security anonymous-user 000:00:00:02 000:17:36:20 7 10.130.0.109:50518 rcv[default-cloud1-coll-event-smartgateway-58fbbd4485-rl9bd] normal in no-security anonymous-user - 000:17:36:11 8 10.130.0.110:33802 rcv[default-cloud1-ceil-event-smartgateway-6cfb65478c-g5q82] normal in no-security anonymous-user 000:01:26:18 000:17:36:05 22 10.128.0.1:51948 Router.ceph-0.redhat.local edge in TLSv1/SSLv3(DHE-RSA-AES256-GCM-SHA384) anonymous-user 000:00:00:03 000:22:08:43 23 10.128.0.1:51950 Router.compute-0.redhat.local edge in TLSv1/SSLv3(DHE-RSA-AES256-GCM-SHA384) anonymous-user 000:00:00:03 000:22:08:43 24 10.128.0.1:52082 Router.controller-0.redhat.local edge in TLSv1/SSLv3(DHE-RSA-AES256-GCM-SHA384) anonymous-user 000:00:00:00 000:22:08:34 27 127.0.0.1:42202 c2f541c1-4c97-4b37-a189-a396c08fb079 normal in no-security no-auth 000:00:00:00 000:00:00:00",
"oc exec -it deploy/default-interconnect -- qdstat --address 2020-04-21 18:20:10.293258 UTC default-interconnect-7458fd4d69-bgzfb Router Addresses class addr phs distrib pri local remote in out thru fallback ========================================================================================================================== mobile anycast/ceilometer/event.sample 0 balanced - 1 0 970 970 0 0 mobile anycast/ceilometer/metering.sample 0 balanced - 1 0 2,344,833 2,344,833 0 0 mobile collectd/notify 0 multicast - 1 0 70 70 0 0 mobile collectd/telemetry 0 multicast - 1 0 216,128,890 216,128,890 0 0",
"source ~/stackrc",
"cat > ~/disable-stf.yaml <<EOF --- resource_registry: OS::TripleO::Services::CeilometerAgentCentral: OS::Heat::None OS::TripleO::Services::CeilometerAgentNotification: OS::Heat::None OS::TripleO::Services::CeilometerAgentIpmi: OS::Heat::None OS::TripleO::Services::ComputeCeilometerAgent: OS::Heat::None OS::TripleO::Services::Redis: OS::Heat::None OS::TripleO::Services::Collectd: OS::Heat::None OS::TripleO::Services::MetricsQdr: OS::Heat::None EOF",
"(undercloud)USD openstack overcloud deploy --templates -e /home/stack/disable-stf.yaml -e [your environment files]",
"oc project service-telemetry",
"oc edit stf default",
"apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: spec: clouds: - name: cloud1 events: collectors: - collectorType: collectd subscriptionAddress: collectd/cloud1-notify - collectorType: ceilometer subscriptionAddress: anycast/ceilometer/cloud1-event.sample metrics: collectors: - collectorType: collectd subscriptionAddress: collectd/cloud1-telemetry - collectorType: sensubility subscriptionAddress: sensubility/cloud1-telemetry - collectorType: ceilometer subscriptionAddress: anycast/ceilometer/cloud1-metering.sample - name: cloud2 events:",
"oc get po -l app=smart-gateway NAME READY STATUS RESTARTS AGE default-cloud1-ceil-event-smartgateway-6cfb65478c-g5q82 2/2 Running 0 13h default-cloud1-ceil-meter-smartgateway-58f885c76d-xmxwn 2/2 Running 0 13h default-cloud1-coll-event-smartgateway-58fbbd4485-rl9bd 2/2 Running 0 13h default-cloud1-coll-meter-smartgateway-7c6fc495c4-jn728 2/2 Running 0 13h default-cloud1-sens-meter-smartgateway-8h4tc445a2-mm683 2/2 Running 0 13h",
"apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: spec: cloudsRemoveOnMissing: true clouds:",
"oc get smartgateways",
"parameter_defaults: CloudDomain: newyork-west-04 CephStorageHostnameFormat: 'ceph-%index%' ObjectStorageHostnameFormat: 'swift-%index%' ComputeHostnameFormat: 'compute-%index%'",
"resource_registry: OS::TripleO::Services::Collectd: /usr/share/openstack-tripleo-heat-templates/deployment/metrics/collectd-container-puppet.yaml parameter_defaults: ExtraConfig: qdr::router_id: %{::hostname}.cloud1 MetricsQdrConnectors: - host: default-interconnect-5671-service-telemetry.apps.infra.watch port: 443 role: edge verifyHostname: false sslProfile: sslProfile MetricsQdrSSLProfiles: - name: sslProfile caCertFileContent: | -----BEGIN CERTIFICATE----- <snip> -----END CERTIFICATE----- CeilometerQdrMetricsConfig: driver: amqp topic: cloud1-metering CollectdAmqpInstances: cloud1-telemetry: format: JSON presettle: false CollectdSensubilityResultsChannel: sensubility/cloud1-telemetry",
"source stackrc",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /usr/share/openstack-tripleo-heat-templates/environments/metrics/ceilometer-write-qdr.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/metrics/qdr-edge-only.yaml -e /home/stack/hostnames.yaml -e /home/stack/enable-stf.yaml -e /home/stack/stf-connectors.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/service_telemetry_framework_1.5/assembly-completing-the-stf-configuration_assembly |
Chapter 7. Configuring the GRUB boot loader by using RHEL system roles | Chapter 7. Configuring the GRUB boot loader by using RHEL system roles By using the bootloader RHEL system role, you can automate the configuration and management tasks related to the GRUB boot loader. This role currently supports configuring the GRUB boot loader, which runs on the following CPU architectures: AMD and Intel 64-bit architectures (x86-64) The 64-bit ARM architecture (ARMv8.0) IBM Power Systems, Little Endian (POWER9) 7.1. Updating the existing boot loader entries by using the bootloader RHEL system role You can use the bootloader RHEL system role to update the existing entries in the GRUB boot menu in an automated fashion. This way you can efficiently pass specific kernel command-line parameters that can optimize the performance or behavior of your systems. For example, if you leverage systems, where detailed boot messages from the kernel and init system are not necessary, use bootloader to apply the quiet parameter to your existing boot loader entries on your managed nodes to achieve a cleaner, less cluttered, and more user-friendly booting experience. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. You identified the kernel that corresponds to the boot loader entry you want to update. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configuration and management of GRUB boot loader hosts: managed-node-01.example.com tasks: - name: Update existing boot loader entries ansible.builtin.include_role: name: rhel-system-roles.bootloader vars: bootloader_settings: - kernel: path: /boot/vmlinuz-5.14.0-362.24.1.el9_3.aarch64 options: - name: quiet state: present bootloader_reboot_ok: true The settings specified in the example playbook include the following: kernel Specifies the kernel connected with the boot loader entry that you want to update. options Specifies the kernel command-line parameters to update for your chosen boot loader entry (kernel). bootloader_reboot_ok: true The role detects that a reboot is required for the changes to take effect and performs a restart of the managed node. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.bootloader/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Check that your specified boot loader entry has updated kernel command-line parameters: Additional resources /usr/share/ansible/roles/rhel-system-roles.bootloader/README.md file /usr/share/doc/rhel-system-roles/bootloader/ directory Working With Playbooks Using Variables Roles Configuring kernel command-line parameters 7.2. Securing the boot menu with password by using the bootloader RHEL system role You can use the bootloader RHEL system role to set a password to the GRUB boot menu in an automated fashion. This way you can efficiently prevent unauthorized users from modifying boot parameters, and to have better control over the system boot. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. 
The account you use to connect to the managed nodes has sudo permissions on them. Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: pwd: <password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configuration and management of GRUB boot loader hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Set the bootloader password ansible.builtin.include_role: name: rhel-system-roles.bootloader vars: bootloader_password: "{{ pwd }}" bootloader_reboot_ok: true The settings specified in the example playbook include the following: bootloader_password: "{{ pwd }}" The variable ensures protection of boot parameters with a password. bootloader_reboot_ok: true The role detects that a reboot is required for the changes to take effect and performs a restart of the managed node. Important Changing the boot loader password is not an idempotent transaction. This means that if you apply the same Ansible playbook again, the result will not be the same, and the state of the managed node will change. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.bootloader/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification On your managed node during the GRUB boot menu screen, press the e key for edit. You are prompted for a username and a password: Enter username: root The boot loader username is always root and you do not need to specify it in your Ansible playbook. Enter password: <password> The boot loader password corresponds to the pwd variable that you defined in the vault.yml file. You can view or edit configuration of the particular boot loader entry: Additional resources /usr/share/ansible/roles/rhel-system-roles.bootloader/README.md file /usr/share/doc/rhel-system-roles/bootloader/ directory 7.3. Setting a timeout for the boot loader menu by using the bootloader RHEL system role You can use the bootloader RHEL system role to configure a timeout for the GRUB boot loader menu in an automated way. You can update a period of time to intervene and select a non-default boot entry for various purposes. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configuration and management of the GRUB boot loader hosts: managed-node-01.example.com tasks: - name: Update the boot loader timeout ansible.builtin.include_role: name: rhel-system-roles.bootloader vars: bootloader_timeout: 10 The settings specified in the example playbook include the following: bootloader_timeout: 10 Input an integer to control for how long the GRUB boot loader menu is displayed before booting the default entry. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.bootloader/README.md file on the control node. 
Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Remotely restart your managed node: On the managed node, observe the GRUB boot menu screen. The highlighted entry will be executed automatically in 10s For how long this boot menu is displayed before GRUB automatically uses the default entry. Alternative: you can remotely query for the "timeout" settings in the /boot/grub2/grub.cfg file of your managed node: Additional resources /usr/share/ansible/roles/rhel-system-roles.bootloader/README.md file /usr/share/doc/rhel-system-roles/bootloader/ directory 7.4. Collecting the boot loader configuration information by using the bootloader RHEL system role You can use the bootloader RHEL system role to gather information about the GRUB boot loader entries in an automated fashion. You can use this information to verify the correct configuration of system boot parameters, such as kernel and initial RAM disk image paths. As a result, you can for example: Prevent boot failures. Revert to a known good state when troubleshooting. Be sure that security-related kernel command-line parameters are correctly configured. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configuration and management of GRUB boot loader hosts: managed-node-01.example.com tasks: - name: Gather information about the boot loader configuration ansible.builtin.include_role: name: rhel-system-roles.bootloader vars: bootloader_gather_facts: true - name: Display the collected boot loader configuration information debug: var: bootloader_facts For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.bootloader/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification After you run the preceding playbook on the control node, you will see a similar command-line output as in the following example: The command-line output shows the following notable configuration information about the boot entry: args Command-line parameters passed to the kernel by the GRUB2 boot loader during the boot process. They configure various settings and behaviors of the kernel, initramfs, and other boot-time components. id Unique identifier assigned to each boot entry in a boot loader menu. It consists of machine ID and the kernel version. root The root filesystem for the kernel to mount and use as the primary filesystem during the boot. Additional resources /usr/share/ansible/roles/rhel-system-roles.bootloader/README.md file /usr/share/doc/rhel-system-roles/bootloader/ directory Understanding boot entries | [
"--- - name: Configuration and management of GRUB boot loader hosts: managed-node-01.example.com tasks: - name: Update existing boot loader entries ansible.builtin.include_role: name: rhel-system-roles.bootloader vars: bootloader_settings: - kernel: path: /boot/vmlinuz-5.14.0-362.24.1.el9_3.aarch64 options: - name: quiet state: present bootloader_reboot_ok: true",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m ansible.builtin.command -a 'grubby --info=ALL' managed-node-01.example.com | CHANGED | rc=0 >> index=1 kernel=\"/boot/vmlinuz-5.14.0-362.24.1.el9_3.aarch64\" args=\"ro crashkernel=1G-4G:256M,4G-64G:320M,64G-:576M rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap USDtuned_params quiet \" root=\"/dev/mapper/rhel-root\" initrd=\"/boot/initramfs-5.14.0-362.24.1.el9_3.aarch64.img USDtuned_initrd\" title=\"Red Hat Enterprise Linux (5.14.0-362.24.1.el9_3.aarch64) 9.4 (Plow)\" id=\"2c9ec787230141a9b087f774955795ab-5.14.0-362.24.1.el9_3.aarch64\"",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"pwd: <password>",
"--- - name: Configuration and management of GRUB boot loader hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Set the bootloader password ansible.builtin.include_role: name: rhel-system-roles.bootloader vars: bootloader_password: \"{{ pwd }}\" bootloader_reboot_ok: true",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"--- - name: Configuration and management of the GRUB boot loader hosts: managed-node-01.example.com tasks: - name: Update the boot loader timeout ansible.builtin.include_role: name: rhel-system-roles.bootloader vars: bootloader_timeout: 10",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m ansible.builtin.reboot managed-node-01.example.com | CHANGED => { \"changed\": true, \"elapsed\": 21, \"rebooted\": true }",
"ansible managed-node-01.example.com -m ansible.builtin.command -a \"grep 'timeout' /boot/grub2/grub.cfg\" managed-node-01.example.com | CHANGED | rc=0 >> if [ xUSDfeature_timeout_style = xy ] ; then set timeout_style=menu set timeout=10 Fallback normal timeout code in case the timeout_style feature is set timeout=10 if [ xUSDfeature_timeout_style = xy ] ; then set timeout_style=menu set timeout=10 set orig_timeout_style=USD{timeout_style} set orig_timeout=USD{timeout} # timeout_style=menu + timeout=0 avoids the countdown code keypress check set timeout_style=menu set timeout=10 set timeout_style=hidden set timeout=10 if [ xUSDfeature_timeout_style = xy ]; then if [ \"USD{menu_show_once_timeout}\" ]; then set timeout_style=menu set timeout=10 unset menu_show_once_timeout save_env menu_show_once_timeout",
"--- - name: Configuration and management of GRUB boot loader hosts: managed-node-01.example.com tasks: - name: Gather information about the boot loader configuration ansible.builtin.include_role: name: rhel-system-roles.bootloader vars: bootloader_gather_facts: true - name: Display the collected boot loader configuration information debug: var: bootloader_facts",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"\"bootloader_facts\": [ { \"args\": \"ro crashkernel=1G-4G:256M,4G-64G:320M,64G-:576M rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap USDtuned_params quiet\", \"default\": true, \"id\": \"2c9ec787230141a9b087f774955795ab-5.14.0-362.24.1.el9_3.aarch64\", \"index\": \"1\", \"initrd\": \"/boot/initramfs-5.14.0-362.24.1.el9_3.aarch64.img USDtuned_initrd\", \"kernel\": \"/boot/vmlinuz-5.14.0-362.24.1.el9_3.aarch64\", \"root\": \"/dev/mapper/rhel-root\", \"title\": \"Red Hat Enterprise Linux (5.14.0-362.24.1.el9_3.aarch64) 9.4 (Plow)\" } ]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/automating_system_administration_by_using_rhel_system_roles/configuring-the-grub-2-boot-loader-by-using-rhel-system-roles_automating-system-administration-by-using-rhel-system-roles |
Chapter 4. Create a product | Chapter 4. Create a product The product listing provides marketing and technical information, showcasing your product's features and advantages to potential customers. It lays the foundation for adding all necessary components to your product for certification. Prerequisites Verify the functionality of your product on the target Red Hat platform, in addition to the specific certification testing requirements. If running your product on the targeted Red Hat platform results in a substandard experience then you must resolve the issues before certification. Procedure Red Hat recommends completing all optional fields in the listing tabs for a comprehensive product listing. More information helps mutual customers make informed choices. Red Hat encourages collaboration with your product manager, marketing representative, or other product experts when entering information for your product listing. Fields marked with an asterisk (*) are mandatory. Procedure Log in to the Red Hat Partner Connect Portal . Go to the Certified technology portal tab and click Visit the portal . On the header bar, click Product management . From the Listing and certification tab click Manage products . From the My Products page, click Create Product . A Create New Product dialog opens. Enter the Product name . From the What kind of product would you like to certify? drop-down, select the required product category and click Create product . For example, select OpenStack Infrastructure for creating an OpenStack platform based product listing. A new page with your Product name opens. It comprises the following tabs: Section 4.1, "Overview" Section 4.2, "Product Information" Section 4.3, "Components" Section 4.4, "Support" Along with the following tabs, the page header provides the Product Score details. Product Score evaluates your product information and displays a score. It can be: Fair Good Excellent Best Click How do I improve my score? to improve your product score. After providing the product listing details, click Save before moving to the section. 4.1. Overview This tab consists of a series of tasks that you must complete to publish your product: Section 4.1.1, "Complete product listing details" Section 4.1.2, "Complete company profile information" Section 4.1.3, "Add at least one product component" Section 4.1.4, "Certify components for your listing" 4.1.1. Complete product listing details To complete your product listing details, click Start . The Product Information tab opens. Enter all the essential product details and click Save . 4.1.2. Complete company profile information To complete your company profile information, click Start . After entering all the details, click Submit . To modify the existing details, click Review . The Account Details page opens. Review and modify the Company profile information and click Submit . 4.1.3. Add at least one product component Click Start . You are redirected to the Components tab. To add a new or existing product component, click Add component . For adding a new component, In the Component Name text box, enter the component name. For What kind of standalone component are you creating? select VNF for OpenStack for certifying a Virtual Network Function (VNF) packaged as a virtual machine on Red Hat OpenStack Platform. Click Create new component . For the Red Hat OpenStack Version , version 17 is enabled by default. For adding an existing component, from the Add Component dialog, select Existing Component . 
From the Available components list, search and select the components that you wish to certify and click the forward arrow. The selected components are added to the Chosen components list. Click Attach existing component . 4.1.4. Certify components for your listing To certify the components for your listing, click Start . If you have existing product components, you can view the list of Attached Components and their details: Name Certification Security Type Created Click more options to archive or remove the components Select the components for certification. After completing all the above tasks you will see a green tick mark corresponding to all the options. The Overview tab also provides the following information: Product contacts - Provides Product marketing and Technical contact information. Click Add contacts to product to provide the contact information Click Edit to update the information. Components in product - Provides the list of the components attached to the product along with their last updated information. Click Add components to product to add new or existing components to your product. Click Edit components to update the existing component information. After publishing the product listing, you can view your Product Readiness Score and Ways to raise your score on the Overview tab. 4.2. Product Information Through this tab you can provide all the essential information about your product. The product details are published along with your product on the Red Hat Ecosystem catalog. General tab: Provide basic details of the product, including product name and description. Enter the Product Name . Optional: Upload the Product Logo according to the defined guidelines. Enter a Brief description and a Long description . Click Save . Features & Benefits tab: Provide important features of your product. Optional: Enter the Title and Description . Optional: To add additional features for your product, click + Add new feature . Click Save . Quick start & Config tab: Add links to any quick start guide or configuration document to help customers deploy and start using your product. Optional: Enter Quick start & configuration instructions . Click Save . Select Hide default instructions check box, if you don't want to display them. Linked resources tab: Add links to supporting documentation to help our customers use your product. The information is mapped to and is displayed in the Documentation section on the product's catalog page. Note It is mandatory to add a minimum of three resources. Red Hat encourages you to add more resources, if available. Select the Type drop-down menu, and enter the Title and Description of the resource. Enter the Resource URL . Optional: To add additional resources for your product, click + Add new Resource . Click Save . FAQs tab: Add frequently asked questions and answers of the product's purpose, operation, installation, or other attribute details. You can include common customer queries about your product and services. Enter Question and Answer . Optional: To add additional FAQs for your product, click + Add new FAQ . Click Save . Support tab: This tab lets you provide contact information of your Support team. Enter the Support description , Support web site , Support phone number , and Support email address . Click Save . Contacts tab: Provide contact information of your marketing and technical team. Enter the Marketing contact email address and Technical contact email address . Optional: To add additional contacts, click + Add another . Click Save . 
Legal tab: Provide the product related license and policy information. Enter the License Agreement URL for the product and Privacy Policy URL . Click Save . SEO tab: Use this tab to improve the discoverability of your product for our mutual customers, enhancing visibility both within the Red Hat Ecosystem Catalog search and on internet search engines. Providing a higher number of search aliases (key and value pairs) will increase the discoverability of your product. Select the Product Category . Enter the Key and Value to set up Search aliases. Click Save . Optional: To add additional key-value pair, click + Add new key-value pair . Note Add at least one Search alias for your product. Red Hat encourages you to add more aliases, if available. 4.3. Components Use this tab to add components to your product listing. Through this tab you can also view a list of attached components linked to your Product Listing. Alternatively, to attach a component to the Product Listing, you can complete the Add at least one product component option available on the Overview tab of a product listing. To add a new or existing product component, click Add component . For adding a new component, in the Component Name text box, enter the component name. For What kind of OpenStack component are you creating? select VNF for OpenStack for certifying a Virtual Network Function (VNF) packaged as a virtual machine on Red Hat OpenStack Platform. Click Create new component . For the Red Hat OpenStack Version , version 17 is enabled by default. For adding an existing component, from the Add Component dialog, select Existing Component . From the Available components list, search and select the components that you wish to certify and click the forward arrow. The selected components are added to the Chosen components list. Click Attach existing component . Note You can add the same component to multiple products listings. All attached components must be published before the product listing can be published. After attaching components, you can view the list of Attached Components and their details: Name Certification Security Type Created Click more options to archive or remove the attached components Alternatively, to search for specific components, type the component's name in the Search by component Name text box. 4.4. Support The Red Hat Partner Acceleration Desk (PAD) is a Products and Technologies level partner help desk service that allows the current and prospective partners a central location to ask non-technical questions pertaining to Red Hat offerings, partner programs, product certification, engagement process, and so on. You can also contact the Red Hat Partner Acceleration Desk for any technical questions you may have regarding the Certification. Technical help requests will be redirected to the Certification Operations team. Through the Partner Subscriptions program, Red Hat offers free, not-for-resale software subscriptions that you can use to validate your product on the target Red Hat platform. To request access to the program, follow the instructions on the Partner Subscriptions site. To request support, click Open a support case. See PAD - How to open & manage PAD cases , to open a PAD ticket. To view the list of existing support cases, click View support cases . 4.5. Removing a product After creating a product listing if you wish to remove it, go to the Overview tab and click Delete . A published product must first be unpublished before it can be deleted. 
Red Hat retains information related to deleted products even after you delete the product. | null | https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_application_and_vnf_workflow_guide/proc_create-a-product-for-openstack-infrastructure_rhosp-vnf-wf-pre-certification-tests |
Chapter 12. Network Observability CLI | Chapter 12. Network Observability CLI 12.1. Installing the Network Observability CLI The Network Observability CLI ( oc netobserv ) is deployed separately from the Network Observability Operator. The CLI is available as an OpenShift CLI ( oc ) plugin. It provides a lightweight way to quickly debug and troubleshoot with network observability. 12.1.1. About the Network Observability CLI You can quickly debug and troubleshoot networking issues by using the Network Observability CLI ( oc netobserv ). The Network Observability CLI is a flow and packet visualization tool that relies on eBPF agents to stream collected data to an ephemeral collector pod. It requires no persistent storage during the capture. After the run, the output is transferred to your local machine. This enables quick, live insight into packets and flow data without installing the Network Observability Operator. Important CLI capture is meant to run only for short durations, such as 8-10 minutes. If it runs for too long, it can be difficult to delete the running process. 12.1.2. Installing the Network Observability CLI Installing the Network Observability CLI ( oc netobserv ) is a separate procedure from the Network Observability Operator installation. This means that, even if you have the Operator installed from OperatorHub, you need to install the CLI separately. Note You can optionally use Krew to install the netobserv CLI plugin. For more information, see "Installing a CLI plugin with Krew". Prerequisites You must install the OpenShift CLI ( oc ). You must have a macOS or Linux operating system. Procedure Download the oc netobserv file that corresponds with your architecture. For example, for the amd64 archive: USD curl -LO https://mirror.openshift.com/pub/cgw/netobserv/latest/oc-netobserv-amd64 Make the file executable: USD chmod +x ./oc-netobserv-amd64 Move the extracted netobserv-cli binary to a directory that is on your PATH , such as /usr/local/bin/ : USD sudo mv ./oc-netobserv-amd64 /usr/local/bin/oc-netobserv Verification Verify that oc netobserv is available: USD oc netobserv version Example output Netobserv CLI version <version> Additional resources Installing and using CLI plugins Installing a CLI plugin with Krew 12.2. Using the Network Observability CLI You can visualize and filter the flows and packets data directly in the terminal to see specific usage, such as identifying who is using a specific port. The Network Observability CLI collects flows as JSON and database files or packets as a PCAP file, which you can use with third-party tools. 12.2.1. Capturing flows You can capture flows and filter on any resource or zone in the data to solve use cases, such as displaying Round-Trip Time (RTT) between two zones. Table visualization in the CLI provides viewing and flow search capabilities. Prerequisites Install the OpenShift CLI ( oc ). Install the Network Observability CLI ( oc netobserv ) plugin. Procedure Capture flows with filters enabled by running the following command: USD oc netobserv flows --enable_filter=true --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051 Add filters to the live table filter prompt in the terminal to further refine the incoming flows. For example: live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once Use the PageUp and PageDown keys to toggle between None , Resource , Zone , Host , Owner and all of the above . To stop capturing, press Ctrl + C . 
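For instance, to look at the RTT-between-zones use case mentioned at the start of this section, the same kind of capture can be started with the Round-Trip Time feature enabled. This is a sketch that combines options documented in the CLI reference; the zone filter itself is applied afterwards at the live table prompt.
# Sketch: flows capture with RTT tracking, limited to TCP port 49051;
# filter on SrcK8S_Zone at the live table prompt to compare zones.
oc netobserv flows --enable_rtt --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051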
The data that was captured is written to two separate files in an ./output directory located in the same path used to install the CLI. View the captured data in the ./output/flow/<capture_date_time>.json JSON file, which contains JSON arrays of the captured data. Example JSON file { "AgentIP": "10.0.1.76", "Bytes": 561, "DnsErrno": 0, "Dscp": 20, "DstAddr": "f904:ece9:ba63:6ac7:8018:1e5:7130:0", "DstMac": "0A:58:0A:80:00:37", "DstPort": 9999, "Duplicate": false, "Etype": 2048, "Flags": 16, "FlowDirection": 0, "IfDirection": 0, "Interface": "ens5", "K8S_FlowLayer": "infra", "Packets": 1, "Proto": 6, "SrcAddr": "3e06:6c10:6440:2:a80:37:b756:270f", "SrcMac": "0A:58:0A:80:00:01", "SrcPort": 46934, "TimeFlowEndMs": 1709741962111, "TimeFlowRttNs": 121000, "TimeFlowStartMs": 1709741962111, "TimeReceived": 1709741964 } You can use SQLite to inspect the ./output/flow/<capture_date_time>.db database file. For example: Open the file by running the following command: USD sqlite3 ./output/flow/<capture_date_time>.db Query the data by running a SQLite SELECT statement, for example: sqlite> SELECT DnsLatencyMs, DnsFlagsResponseCode, DnsId, DstAddr, DstPort, Interface, Proto, SrcAddr, SrcPort, Bytes, Packets FROM flow WHERE DnsLatencyMs >10 LIMIT 10; Example output 12|NoError|58747|10.128.0.63|57856||17|172.30.0.10|53|284|1 11|NoError|20486|10.128.0.52|56575||17|169.254.169.254|53|225|1 11|NoError|59544|10.128.0.103|51089||17|172.30.0.10|53|307|1 13|NoError|32519|10.128.0.52|55241||17|169.254.169.254|53|254|1 12|NoError|32519|10.0.0.3|55241||17|169.254.169.254|53|254|1 15|NoError|57673|10.128.0.19|59051||17|172.30.0.10|53|313|1 13|NoError|35652|10.0.0.3|46532||17|169.254.169.254|53|183|1 32|NoError|37326|10.0.0.3|52718||17|169.254.169.254|53|169|1 14|NoError|14530|10.0.0.3|58203||17|169.254.169.254|53|246|1 15|NoError|40548|10.0.0.3|45933||17|169.254.169.254|53|174|1 12.2.2. Capturing packets You can capture packets using the Network Observability CLI. Prerequisites Install the OpenShift CLI ( oc ). Install the Network Observability CLI ( oc netobserv ) plugin. Procedure Run the packet capture with filters enabled: USD oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051 Add filters to the live table filter prompt in the terminal to refine the incoming packets. An example filter is as follows: live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once Use the PageUp and PageDown keys to toggle between None , Resource , Zone , Host , Owner and all of the above . To stop capturing, press Ctrl + C . View the captured data, which is written to a single file in an ./output/pcap directory located in the same path that was used to install the CLI: The ./output/pcap/<capture_date_time>.pcap file can be opened with Wireshark. 12.2.3. Capturing metrics You can generate on-demand dashboards in Prometheus by using a service monitor for Network Observability. Prerequisites Install the OpenShift CLI ( oc ). Install the Network Observability CLI ( oc netobserv ) plugin. Procedure Capture metrics with filters enabled by running the following command: Example output USD oc netobserv metrics --enable_filter=true --cidr=0.0.0.0/0 --protocol=TCP --port=49051 Open the link provided in the terminal to view the NetObserv / On-Demand dashboard: Example URL https://console-openshift-console.apps.rosa...openshiftapps.com/monitoring/dashboards/netobserv-cli Note Features that are not enabled present as empty graphs. 12.2.4. 
Cleaning the Network Observability CLI You can manually clean the CLI workload by running oc netobserv cleanup . This command removes all the CLI components from your cluster. When you end a capture, this command is run automatically by the client. You might be required to manually run it if you experience connectivity issues. Procedure Run the following command: USD oc netobserv cleanup Additional resources Network Observability CLI reference 12.3. Network Observability CLI (oc netobserv) reference The Network Observability CLI ( oc netobserv ) has most features and filtering options that are available for the Network Observability Operator. You can pass command line arguments to enable features or filtering options. 12.3.1. Network Observability CLI usage You can use the Network Observability CLI ( oc netobserv ) to pass command line arguments to capture flows data, packets data, and metrics for further analysis and enable features supported by the Network Observability Operator. 12.3.1.1. Syntax The basic syntax for oc netobserv commands: oc netobserv syntax USD oc netobserv [<command>] [<feature_option>] [<command_options>] 1 1 1 Feature options can only be used with the oc netobserv flows command. They cannot be used with the oc netobserv packets command. 12.3.1.2. Basic commands Table 12.1. Basic commands Command Description flows Capture flows information. For subcommands, see the "Flows capture options" table. packets Capture packets data. For subcommands, see the "Packets capture options" table. metrics Capture metrics data. For subcommands, see the "Metrics capture options" table. follow Follow collector logs when running in background. stop Stop collection by removing agent daemonset. copy Copy collector generated files locally. cleanup Remove the Network Observability CLI components. version Print the software version. help Show help. 12.3.1.3. Flows capture options Flows capture has mandatory commands as well as additional options, such as enabling extra features about packet drops, DNS latencies, Round-trip time, and filtering. 
oc netobserv flows syntax USD oc netobserv flows [<feature_option>] [<command_options>] Option Description Default --enable_all enable all eBPF features false --enable_dns enable DNS tracking false --enable_network_events enable network events monitoring false --enable_pkt_translation enable packet translation false --enable_pkt_drop enable packet drop false --enable_rtt enable RTT tracking false --enable_udn_mapping enable User Defined Network mapping false --get-subnets get subnets information false --background run in background false --copy copy the output files locally prompt --log-level components logs info --max-time maximum capture time 5m --max-bytes maximum capture bytes 50000000 = 50MB --action filter action Accept --cidr filter CIDR 0.0.0.0/0 --direction filter direction - --dport filter destination port - --dport_range filter destination port range - --dports filter on either of two destination ports - --drops filter flows with only dropped packets false --icmp_code filter ICMP code - --icmp_type filter ICMP type - --node-selector capture on specific nodes - --peer_ip filter peer IP - --peer_cidr filter peer CIDR - --port_range filter port range - --port filter port - --ports filter on either of two ports - --protocol filter protocol - --regexes filter flows using regular expression - --sport_range filter source port range - --sport filter source port - --sports filter on either of two source ports - --tcp_flags filter TCP flags - --interfaces interfaces to monitor - Example running flows capture on TCP protocol and port 49051 with PacketDrop and RTT features enabled: USD oc netobserv flows --enable_pkt_drop --enable_rtt --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051 12.3.1.4. Packets capture options You can filter packet capture data in the same way as flows capture by using the filters. Certain features, such as packet drops, DNS, RTT, and network events, are only available for flows and metrics capture. oc netobserv packets syntax USD oc netobserv packets [<option>] Option Description Default --background run in background false --copy copy the output files locally prompt --log-level components logs info --max-time maximum capture time 5m --max-bytes maximum capture bytes 50000000 = 50MB --action filter action Accept --cidr filter CIDR 0.0.0.0/0 --direction filter direction - --dport filter destination port - --dport_range filter destination port range - --dports filter on either of two destination ports - --drops filter flows with only dropped packets false --icmp_code filter ICMP code - --icmp_type filter ICMP type - --node-selector capture on specific nodes - --peer_ip filter peer IP - --peer_cidr filter peer CIDR - --port_range filter port range - --port filter port - --ports filter on either of two ports - --protocol filter protocol - --regexes filter flows using regular expression - --sport_range filter source port range - --sport filter source port - --sports filter on either of two source ports - --tcp_flags filter TCP flags - Example running packets capture on TCP protocol and port 49051: USD oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051 12.3.1.5. Metrics capture options You can enable features and use filters on metrics capture, the same as flows capture. The generated graphs fill accordingly in the dashboard.
oc netobserv metrics syntax USD oc netobserv metrics [<option>] Option Description Default --enable_all enable all eBPF features false --enable_dns enable DNS tracking false --enable_network_events enable network events monitoring false --enable_pkt_translation enable packet translation false --enable_pkt_drop enable packet drop false --enable_rtt enable RTT tracking false --enable_udn_mapping enable User Defined Network mapping false --get-subnets get subnets information false --action filter action Accept --cidr filter CIDR 0.0.0.0/0 --direction filter direction - --dport filter destination port - --dport_range filter destination port range - --dports filter on either of two destination ports - --drops filter flows with only dropped packets false --icmp_code filter ICMP code - --icmp_type filter ICMP type - --node-selector capture on specific nodes - --peer_ip filter peer IP - --peer_cidr filter peer CIDR - --port_range filter port range - --port filter port - --ports filter on either of two ports - --protocol filter protocol - --regexes filter flows using regular expression - --sport_range filter source port range - --sport filter source port - --sports filter on either of two source ports - --tcp_flags filter TCP flags - --interfaces interfaces to monitor - Example running metrics capture for TCP drops USD oc netobserv metrics --enable_pkt_drop --protocol=TCP | [
"curl -LO https://mirror.openshift.com/pub/cgw/netobserv/latest/oc-netobserv-amd64",
"chmod +x ./oc-netobserv-amd64",
"sudo mv ./oc-netobserv-amd64 /usr/local/bin/oc-netobserv",
"oc netobserv version",
"Netobserv CLI version <version>",
"oc netobserv flows --enable_filter=true --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051",
"live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once",
"{ \"AgentIP\": \"10.0.1.76\", \"Bytes\": 561, \"DnsErrno\": 0, \"Dscp\": 20, \"DstAddr\": \"f904:ece9:ba63:6ac7:8018:1e5:7130:0\", \"DstMac\": \"0A:58:0A:80:00:37\", \"DstPort\": 9999, \"Duplicate\": false, \"Etype\": 2048, \"Flags\": 16, \"FlowDirection\": 0, \"IfDirection\": 0, \"Interface\": \"ens5\", \"K8S_FlowLayer\": \"infra\", \"Packets\": 1, \"Proto\": 6, \"SrcAddr\": \"3e06:6c10:6440:2:a80:37:b756:270f\", \"SrcMac\": \"0A:58:0A:80:00:01\", \"SrcPort\": 46934, \"TimeFlowEndMs\": 1709741962111, \"TimeFlowRttNs\": 121000, \"TimeFlowStartMs\": 1709741962111, \"TimeReceived\": 1709741964 }",
"sqlite3 ./output/flow/<capture_date_time>.db",
"sqlite> SELECT DnsLatencyMs, DnsFlagsResponseCode, DnsId, DstAddr, DstPort, Interface, Proto, SrcAddr, SrcPort, Bytes, Packets FROM flow WHERE DnsLatencyMs >10 LIMIT 10;",
"12|NoError|58747|10.128.0.63|57856||17|172.30.0.10|53|284|1 11|NoError|20486|10.128.0.52|56575||17|169.254.169.254|53|225|1 11|NoError|59544|10.128.0.103|51089||17|172.30.0.10|53|307|1 13|NoError|32519|10.128.0.52|55241||17|169.254.169.254|53|254|1 12|NoError|32519|10.0.0.3|55241||17|169.254.169.254|53|254|1 15|NoError|57673|10.128.0.19|59051||17|172.30.0.10|53|313|1 13|NoError|35652|10.0.0.3|46532||17|169.254.169.254|53|183|1 32|NoError|37326|10.0.0.3|52718||17|169.254.169.254|53|169|1 14|NoError|14530|10.0.0.3|58203||17|169.254.169.254|53|246|1 15|NoError|40548|10.0.0.3|45933||17|169.254.169.254|53|174|1",
"oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051",
"live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once",
"oc netobserv metrics --enable_filter=true --cidr=0.0.0.0/0 --protocol=TCP --port=49051",
"https://console-openshift-console.apps.rosa...openshiftapps.com/monitoring/dashboards/netobserv-cli",
"oc netobserv cleanup",
"oc netobserv [<command>] [<feature_option>] [<command_options>] 1",
"oc netobserv flows [<feature_option>] [<command_options>]",
"oc netobserv flows --enable_pkt_drop --enable_rtt --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051",
"oc netobserv packets [<option>]",
"oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051",
"oc netobserv metrics [<option>]",
"oc netobserv metrics --enable_pkt_drop --protocol=TCP"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/network_observability/network-observability-cli-1 |
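The basic commands and capture options listed in this reference can also be combined into a detached capture workflow. The following is a minimal sketch that uses only subcommands and options documented above ( --background , follow , stop , copy , and cleanup ); the filter values match the earlier examples and are illustrative rather than required settings.

# Start a flows capture in the background with RTT tracking, filtering on TCP port 49051 (illustrative filter values)
oc netobserv flows --background --enable_rtt --protocol=TCP --port=49051

# Follow the collector logs while the capture runs in the background
oc netobserv follow

# Stop the collection by removing the eBPF agent daemonset
oc netobserv stop

# Copy the collector-generated files to the local ./output directory
oc netobserv copy

# Remove the remaining Network Observability CLI components from the cluster
oc netobserv cleanup

After the copy step, the flow files can be inspected locally in the same way as the output of a foreground capture.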
Appendix B. Metadata Server daemon configuration Reference | Appendix B. Metadata Server daemon configuration Reference Refer to this list of commands that can be used for the Metadata Server (MDS) daemon configuration. mon_force_standby_active Description If set to true , monitors force MDS in standby replay mode to be active. Set under the [mon] or [global] section in the Ceph configuration file. Type Boolean Default true max_mds Description The number of active MDS daemons during cluster creation. Set under the [mon] or [global] section in the Ceph configuration file. Type 32-bit Integer Default 1 mds_cache_memory_limit Description The memory limit the MDS enforces for its cache. Red Hat recommends using this parameter instead of the mds cache size parameter. Type 64-bit Integer Unsigned Default 4294967296 mds_cache_reservation Description The cache reservation, memory or inodes, for the MDS cache to maintain. The value is a percentage of the maximum cache configured. Once the MDS begins dipping into its reservation, it recalls client state until its cache size shrinks to restore the reservation. Type Float Default 0.05 mds_cache_size Description The number of inodes to cache. A value of 0 indicates an unlimited number. Red Hat recommends to use the mds_cache_memory_limit to limit the amount of memory the MDS cache uses. Type 32-bit Integer Default 0 mds_cache_mid Description The insertion point for new items in the cache LRU, from the top. Type Float Default 0.7 mds_dir_commit_ratio Description The fraction of directory that contains erroneous information before Ceph commits using a full update instead of partial update. Type Float Default 0.5 mds_dir_max_commit_size Description The maximum size of a directory update in MB before Ceph breaks the directory into smaller transactions. Type 32-bit Integer Default 90 mds_decay_halflife Description The half-life of the MDS cache temperature. Type Float Default 5 mds_beacon_interval Description The frequency, in seconds, of beacon messages sent to the monitor. Type Float Default 4 mds_beacon_grace Description The interval without beacons before Ceph declares a MDS laggy and possibly replaces it. Type Float Default 15 mds_blacklist_interval Description The blacklist duration for failed MDS daemons in the OSD map. Type Float Default 24.0*60.0 mds_session_timeout Description The interval, in seconds, of client inactivity before Ceph times out capabilities and leases. Type Float Default 60 mds_session_autoclose Description The interval, in seconds, before Ceph closes a laggy client's session. Type Float Default 300 mds_reconnect_timeout Description The interval, in seconds, to wait for clients to reconnect during a MDS restart. Type Float Default 45 mds_tick_interval Description How frequently the MDS performs internal periodic tasks. Type Float Default 5 mds_dirstat_min_interval Description The minimum interval, in seconds, to try to avoid propagating recursive statistics up the tree. Type Float Default 1 mds_scatter_nudge_interval Description How quickly changes in directory statistics propagate up. Type Float Default 5 mds_client_prealloc_inos Description The number of inode numbers to preallocate per client session. Type 32-bit Integer Default 1000 mds_early_reply Description Determines whether the MDS allows clients to see request results before they commit to the journal. Type Boolean Default true mds_use_tmap Description Use trivialmap for directory updates. 
Type Boolean Default true mds_default_dir_hash Description The function to use for hashing files across directory fragments. Type 32-bit Integer Default 2 ,that is, rjenkins mds_log Description Set to true if the MDS should journal metadata updates. Disable for benchmarking only. Type Boolean Default true mds_log_skip_corrupt_events Description Determines whether the MDS tries to skip corrupt journal events during journal replay. Type Boolean Default false mds_log_max_events Description The maximum events in the journal before Ceph initiates trimming. Set to -1 to disable limits. Type 32-bit Integer Default -1 mds_log_max_segments Description The maximum number of segments or objects in the journal before Ceph initiates trimming. Set to -1 to disable limits. Type 32-bit Integer Default 30 mds_log_max_expiring Description The maximum number of segments to expire in parallels. Type 32-bit Integer Default 20 mds_log_eopen_size Description The maximum number of inodes in an EOpen event. Type 32-bit Integer Default 100 mds_bal_sample_interval Description Determines how frequently to sample directory temperature when making fragmentation decisions. Type Float Default 3 mds_bal_replicate_threshold Description The maximum temperature before Ceph attempts to replicate metadata to other nodes. Type Float Default 8000 mds_bal_unreplicate_threshold Description The minimum temperature before Ceph stops replicating metadata to other nodes. Type Float Default 0 mds_bal_frag Description Determines whether or not the MDS fragments directories. Type Boolean Default false mds_bal_split_size Description The maximum directory size before the MDS splits a directory fragment into smaller bits. The root directory has a default fragment size limit of 10000. Type 32-bit Integer Default 10000 mds_bal_split_rd Description The maximum directory read temperature before Ceph splits a directory fragment. Type Float Default 25000 mds_bal_split_wr Description The maximum directory write temperature before Ceph splits a directory fragment. Type Float Default 10000 mds_bal_split_bits Description The number of bits by which to split a directory fragment. Type 32-bit Integer Default 3 mds_bal_merge_size Description The minimum directory size before Ceph tries to merge adjacent directory fragments. Type 32-bit Integer Default 50 mds_bal_merge_rd Description The minimum read temperature before Ceph merges adjacent directory fragments. Type Float Default 1000 mds_bal_merge_wr Description The minimum write temperature before Ceph merges adjacent directory fragments. Type Float Default 1000 mds_bal_interval Description The frequency, in seconds, of workload exchanges between MDS nodes. Type 32-bit Integer Default 10 mds_bal_fragment_interval Description The frequency, in seconds, of adjusting directory fragmentation. Type 32-bit Integer Default 5 mds_bal_idle_threshold Description The minimum temperature before Ceph migrates a subtree back to its parent. Type Float Default 0 mds_bal_max Description The number of iterations to run balancer before Ceph stops. For testing purposes only. Type 32-bit Integer Default -1 mds_bal_max_until Description The number of seconds to run balancer before Ceph stops. For testing purposes only. Type 32-bit Integer Default -1 mds_bal_mode Description The method for calculating MDS load: 1 = Hybrid. 2 = Request rate and latency. 3 = CPU load. Type 32-bit Integer Default 0 mds_bal_min_rebalance Description The minimum subtree temperature before Ceph migrates. 
Type Float Default 0.1 mds_bal_min_start Description The minimum subtree temperature before Ceph searches a subtree. Type Float Default 0.2 mds_bal_need_min Description The minimum fraction of target subtree size to accept. Type Float Default 0.8 mds_bal_need_max Description The maximum fraction of target subtree size to accept. Type Float Default 1.2 mds_bal_midchunk Description Ceph migrates any subtree that is larger than this fraction of the target subtree size. Type Float Default 0.3 mds_bal_minchunk Description Ceph ignores any subtree that is smaller than this fraction of the target subtree size. Type Float Default 0.001 mds_bal_target_removal_min Description The minimum number of balancer iterations before Ceph removes an old MDS target from the MDS map. Type 32-bit Integer Default 5 mds_bal_target_removal_max Description The maximum number of balancer iterations before Ceph removes an old MDS target from the MDS map. Type 32-bit Integer Default 10 mds_replay_interval Description The journal poll interval when in standby-replay mode for a hot standby . Type Float Default 1 mds_shutdown_check Description The interval for polling the cache during MDS shutdown. Type 32-bit Integer Default 0 mds_thrash_exports Description Ceph randomly exports subtrees between nodes. For testing purposes only. Type 32-bit Integer Default 0 mds_thrash_fragments Description Ceph randomly fragments or merges directories. Type 32-bit Integer Default 0 mds_dump_cache_on_map Description Ceph dumps the MDS cache contents to a file on each MDS map. Type Boolean Default false mds_dump_cache_after_rejoin Description Ceph dumps MDS cache contents to a file after rejoining the cache during recovery. Type Boolean Default false mds_verify_scatter Description Ceph asserts that various scatter/gather invariants are true . For developer use only. Type Boolean Default false mds_debug_scatterstat Description Ceph asserts that various recursive statistics invariants are true . For developer use only. Type Boolean Default false mds_debug_frag Description Ceph verifies directory fragmentation invariants when convenient. For developer use only. Type Boolean Default false mds_debug_auth_pins Description The debug authentication pin invariants. For developer use only. Type Boolean Default false mds_debug_subtrees Description Debugging subtree invariants. For developer use only. Type Boolean Default false mds_kill_mdstable_at Description Ceph injects a MDS failure in a MDS Table code. For developer use only. Type 32-bit Integer Default 0 mds_kill_export_at Description Ceph injects a MDS failure in the subtree export code. For developer use only. Type 32-bit Integer Default 0 mds_kill_import_at Description Ceph injects a MDS failure in the subtree import code. For developer use only. Type 32-bit Integer Default 0 mds_kill_link_at Description Ceph injects a MDS failure in a hard link code. For developer use only. Type 32-bit Integer Default 0 mds_kill_rename_at Description Ceph injects a MDS failure in the rename code. For developer use only. Type 32-bit Integer Default 0 mds_wipe_sessions Description Ceph deletes all client sessions on startup. For testing purposes only. Type Boolean Default 0 mds_wipe_ino_prealloc Description Ceph deletes inode preallocation metadata on startup. For testing purposes only. Type Boolean Default 0 mds_skip_ino Description The number of inode numbers to skip on startup. For testing purposes only. 
Type 32-bit Integer Default 0 mds_standby_for_name Description The MDS daemon is a standby for another MDS daemon of the name specified in this setting. Type String Default N/A mds_standby_for_rank Description An instance of the MDS daemon is a standby for another MDS daemon instance of this rank. Type 32-bit Integer Default -1 mds_standby_replay Description Determines whether the MDS daemon polls and replays the log of an active MDS when used as a hot standby . Type Boolean Default false | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/file_system_guide/metadata-server-daemon-configuration-reference_fs |
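Many of the parameters in this reference are set in the Ceph configuration file, under the section named in each description. The following fragment is a minimal sketch that reuses the default values listed above; the section placement and values are illustrative only, so verify them against your own deployment before applying changes.

[global]
    mon_force_standby_active = true      # monitors force MDS in standby replay mode to be active
    max_mds = 1                          # number of active MDS daemons during cluster creation

[mds]
    mds_cache_memory_limit = 4294967296  # memory limit the MDS enforces for its cache
    mds_cache_reservation = 0.05         # cache reservation maintained for the MDS cache
    mds_beacon_interval = 4              # frequency, in seconds, of beacon messages sent to the monitor
    mds_beacon_grace = 15                # interval without beacons before the MDS is declared laggy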
5.3. Configuring Cluster Properties | 5.3. Configuring Cluster Properties In addition to configuring cluster parameters in the preceding section ( Section 5.2, "Starting the Cluster Configuration Tool " ), you can configure the following cluster properties: Cluster Alias (optional), a Config Version (optional), and Fence Daemon Properties . To configure cluster properties, follow these steps: At the left frame, click Cluster . At the bottom of the right frame (labeled Properties ), click the Edit Cluster Properties button. Clicking that button causes a Cluster Properties dialog box to be displayed. The Cluster Properties dialog box presents text boxes for Cluster Alias , and Config Version , and two Fence Daemon Properties parameters (DLM clusters only): Post-Join Delay and Post-Fail Delay . (Optional) At the Cluster Alias text box, specify a cluster alias for the cluster. The default cluster alias is set to the true cluster name provided when the cluster is set up (refer to Section 5.2, "Starting the Cluster Configuration Tool " ). The cluster alias should be descriptive enough to distinguish it from other clusters and systems on your network (for example, nfs_cluster or httpd_cluster ). The cluster alias cannot exceed 15 characters. (Optional) The Config Version value is set to 1 by default and is automatically incremented each time you save your cluster configuration. However, if you need to set it to another value, you can specify it at the Config Version text box. Specify the Fence Daemon Properties parameters (DLM clusters only): Post-Join Delay and Post-Fail Delay . The Post-Join Delay parameter is the number of seconds the fence daemon ( fenced ) waits before fencing a node after the node joins the fence domain. The Post-Join Delay default value is 3 . A typical setting for Post-Join Delay is between 20 and 30 seconds, but can vary according to cluster and network performance. The Post-Fail Delay parameter is the number of seconds the fence daemon ( fenced ) waits before fencing a node (a member of the fence domain) after the node has failed.The Post-Fail Delay default value is 0 . Its value may be varied to suit cluster and network performance. Note For more information about Post-Join Delay and Post-Fail Delay , refer to the fenced (8) man page. Save cluster configuration changes by selecting File => Save . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-naming-cluster-ca |
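The properties set in this dialog box are saved to the cluster configuration file when you select File => Save . The fragment below is only a sketch of how these values might appear in /etc/cluster/cluster.conf ; the element and attribute names shown here are assumptions for illustration, so compare them against the file that the Cluster Configuration Tool actually generates.

<cluster alias="nfs_cluster" config_version="2" name="nfs_cluster">
    <!-- Post-Join Delay and Post-Fail Delay from the Fence Daemon Properties section (DLM clusters only) -->
    <fence_daemon post_join_delay="20" post_fail_delay="0"/>
    <!-- cluster nodes, fence devices, and managed resources follow -->
</cluster>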
E.2. Setting Up Encrypted Communication between the Manager and an LDAP Server | E.2. Setting Up Encrypted Communication between the Manager and an LDAP Server To set up encrypted communication between the Red Hat Virtualization Manager and an LDAP server, obtain the root CA certificate of the LDAP server, copy the root CA certificate to the Manager, and create a PEM-encoded CA certificate. The keystore type can be any Java-supported type. The following procedure uses the Java KeyStore (JKS) format. Note For more information on creating a PEM-encoded CA certificate and importing certificates, see the X.509 CERTIFICATE TRUST STORE section of the README file at /usr/share/doc/ovirt-engine-extension-aaa-ldap-< version >. Note The ovirt-engine-extension-aaa-ldap is deprecated. For new installations, use Red Hat Single Sign On. For more information, see Installing and Configuring Red Hat Single Sign-On in the Administration Guide . Procedure On the Red Hat Virtualization Manager, copy the root CA certificate of the LDAP server to the /tmp directory and import the root CA certificate using keytool to create a PEM-encoded CA certificate. The following command imports the root CA certificate at /tmp/myrootca.pem and creates a PEM-encoded CA certificate myrootca.jks under /etc/ovirt-engine/aaa/ . Note down the certificate's location and password. If you are using the interactive setup tool, this is all the information you need. If you are configuring the LDAP server manually, follow the rest of the procedure to update the configuration files. USD keytool -importcert -noprompt -trustcacerts -alias myrootca -file /tmp/myrootca.pem -keystore /etc/ovirt-engine/aaa/myrootca.jks -storepass password Update the /etc/ovirt-engine/aaa/profile1.properties file with the certificate information: Note USD{local:_basedir} is the directory where the LDAP property configuration file resides and points to the /etc/ovirt-engine/aaa directory. If you created the PEM-encoded CA certificate in a different directory, replace USD{local:_basedir} with the full path to the certificate. To use startTLS (recommended): # Create keystore, import certificate chain and uncomment pool.default.ssl.startTLS = true pool.default.ssl.truststore.file = USD{local:_basedir}/ myrootca.jks pool.default.ssl.truststore.password = password To use SSL: # Create keystore, import certificate chain and uncomment pool.default.serverset.single.port = 636 pool.default.ssl.enable = true pool.default.ssl.truststore.file = USD{local:_basedir}/ myrootca.jks pool.default.ssl.truststore.password = password To continue configuring an external LDAP provider, see Configuring an External LDAP Provider . To continue configuring LDAP and Kerberos for Single Sign-on, see Configuring LDAP and Kerberos for Single Sign-on . | [
"keytool -importcert -noprompt -trustcacerts -alias myrootca -file /tmp/myrootca.pem -keystore /etc/ovirt-engine/aaa/myrootca.jks -storepass password",
"Create keystore, import certificate chain and uncomment pool.default.ssl.startTLS = true pool.default.ssl.truststore.file = USD{local:_basedir}/ myrootca.jks pool.default.ssl.truststore.password = password",
"Create keystore, import certificate chain and uncomment pool.default.serverset.single.port = 636 pool.default.ssl.enable = true pool.default.ssl.truststore.file = USD{local:_basedir}/ myrootca.jks pool.default.ssl.truststore.password = password"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/setting_up_ssl_or_tls_connections_between_the_manager_and_an_ldap_server |
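After importing the certificate, you can optionally confirm that the keystore contains the expected entry and that the LDAP server presents a certificate chain that the root CA validates. A minimal check, reusing the keystore path and password from the step above; the LDAP server hostname is a placeholder for your own server.

# List the imported root CA entry in the keystore
keytool -list -keystore /etc/ovirt-engine/aaa/myrootca.jks -storepass password -alias myrootca

# Verify the TLS handshake against the LDAP server using the same root CA (replace the hostname with your LDAP server)
openssl s_client -connect ldap.example.com:636 -CAfile /tmp/myrootca.pem < /dev/null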
Chapter 1. OpenShift Container Platform 4.12 release notes Red Hat OpenShift Container Platform provides developers and IT organizations with a hybrid cloud application platform for deploying both new and existing applications on secure, scalable resources with minimal configuration and management overhead. OpenShift Container Platform supports a wide selection of programming languages and frameworks, such as Java, JavaScript, Python, Ruby, and PHP. Built on Red Hat Enterprise Linux (RHEL) and Kubernetes, OpenShift Container Platform provides a more secure and scalable multitenant operating system for today's enterprise-class applications, while delivering integrated application runtimes and libraries. OpenShift Container Platform enables organizations to meet security, privacy, compliance, and governance requirements. 1.1. About this release OpenShift Container Platform ( RHSA-2022:7399 ) is now available. This release uses Kubernetes 1.25 with CRI-O runtime. New features, changes, and known issues that pertain to OpenShift Container Platform 4.12 are included in this topic. OpenShift Container Platform 4.12 clusters are available at https://console.redhat.com/openshift . With the Red Hat OpenShift Cluster Manager application for OpenShift Container Platform, you can deploy OpenShift clusters to either on-premises or cloud environments. OpenShift Container Platform 4.12 is supported on Red Hat Enterprise Linux (RHEL) 8.6 and a later version of RHEL 8 that is released before End of Life of OpenShift Container Platform 4.12. You must use RHCOS machines for the control plane, and you can use either RHCOS or RHEL for compute machines. Starting with OpenShift Container Platform 4.12, Red Hat adds an additional six months of Extended Update Support (EUS) to even-numbered releases, extending the support lifecycle from 18 months to two years. For more information, see the Red Hat OpenShift Container Platform Life Cycle Policy . OpenShift Container Platform 4.8 is an Extended Update Support (EUS) release. More information on Red Hat OpenShift EUS is available in OpenShift Life Cycle and OpenShift EUS Overview . Maintenance support for version 4.8 ends in January 2023, and the release then moves to the extended life phase. For more information, see the Red Hat OpenShift Container Platform Life Cycle Policy . 1.2. OpenShift Container Platform layered and dependent component support and compatibility The scope of support for layered and dependent components of OpenShift Container Platform changes independently of the OpenShift Container Platform version. To determine the current support status and compatibility for an add-on, refer to its release notes. For more information, see the Red Hat OpenShift Container Platform Life Cycle Policy . 1.3. New features and enhancements This release adds improvements related to the following components and concepts. 1.3.1. Red Hat Enterprise Linux CoreOS (RHCOS) 1.3.1.1. Default consoles for new clusters are now determined by the installation platform Red Hat Enterprise Linux CoreOS (RHCOS) nodes installed from an OpenShift Container Platform 4.12 boot image now use a platform-specific default console. The default consoles on cloud platforms correspond to the specific system consoles expected by that cloud provider. VMware and OpenStack images now use a primary graphical console and a secondary serial console. Bare metal installations now use only the graphical console by default, and do not enable a serial console.
Installations performed with coreos-installer can override existing defaults and enable the serial console. Existing nodes are not affected. New nodes on existing clusters are not likely to be affected because they are typically installed from the boot image that was originally used to install the cluster. For information about how to enable the serial console, see the following documentation: Default console configuration . Modifying a live install ISO image to enable the serial console . Modifying a live install PXE environment to enable the serial console . 1.3.1.2. IBM Secure Execution on IBM Z and LinuxONE (Technology Preview) OpenShift Container Platform now supports configuring Red Hat Enterprise Linux CoreOS (RHCOS) nodes for IBM Secure Execution on IBM Z and LinuxONE (s390x architecture) as a Technology Preview feature. IBM Secure Execution is a hardware enhancement that protects memory boundaries for KVM guests. IBM Secure Execution provides the highest level of isolation and security for cluster workloads, and you can enable it by using an IBM Secure Execution-ready QCOW2 boot image. To use IBM Secure Execution, you must have host keys for your host machine(s) and they must be specified in your Ignition configuration file. IBM Secure Execution automatically encrypts your boot volumes using LUKS encryption. For more information, see Installing RHCOS using IBM Secure Execution . 1.3.1.3. RHCOS now uses RHEL 8.6 RHCOS now uses Red Hat Enterprise Linux (RHEL) 8.6 packages in OpenShift Container Platform 4.12. This enables you to have the latest fixes, features, and enhancements, as well as the latest hardware support and driver updates. OpenShift Container Platform 4.10 is an Extended Update Support (EUS) release that will continue to use RHEL 8.4 EUS packages for the entirety of its lifecycle. 1.3.2. Installation and upgrade 1.3.2.1. Assisted Installer SaaS provides platform integration support for Nutanix Assisted Installer SaaS on console.redhat.com supports installation of OpenShift Container Platform on the Nutanix platform with Machine API integration using either the Assisted Installer user interface or the REST API. Integration enables Nutanix Prism users to manage their infrastructure from a single interface, and enables auto-scaling. There are a few additional installation steps to enable Nutanix integration with Assisted Installer SaaS. See the Assisted Installer documentation for details. 1.3.2.2. Specify the load balancer type in AWS during installation Beginning with OpenShift Container Platform 4.12, you can specify either Network Load Balancer (NLB) or Classic as a persistent load balancer type in AWS during installation. Afterwards, if an Ingress Controller is deleted, the load balancer type persists with the lbType configured during installation. For more information, see Installing a cluster on AWS with network customizations . 1.3.2.3. Extend worker nodes to the edge of AWS when installing into an existing Virtual Private Cloud (VPC) with Local Zone subnets. With this update you can install OpenShift Container Platform to an existing VPC with installer-provisioned infrastructure, extending the worker nodes to Local Zones subnets. The installation program will provision worker nodes on the edge of the AWS network that are specifically designated for user applications by using NoSchedule taints. Applications deployed on the Local Zones locations deliver low latency for end users. For more information, see Installing a cluster using AWS Local Zones . 1.3.2.4. 
Google Cloud Platform Marketplace offering OpenShift Container Platform is now available on the GCP Marketplace. Installing an OpenShift Container Platform cluster with a GCP Marketplace image lets you create self-managed cluster deployments that are billed on a pay-per-use basis (hourly, per core) through GCP, while still being supported directly by Red Hat. For more information about installing using installer-provisioned infrastructure, see Using a GCP Marketplace image . For more information about installing using user-provisioned infrastructure, see Creating additional worker machines in GCP . 1.3.2.5. Troubleshooting bootstrap failures during installation on GCP and Azure The installer now gathers serial console logs from the bootstrap and control plane hosts on GCP and Azure. This log data is added to the standard bootstrap log bundle. For more information, see Troubleshooting installation issues . 1.3.2.6. IBM Cloud VPC general availability IBM Cloud VPC is now generally available in OpenShift Container Platform 4.12. For more information about installing a cluster, see Preparing to install on IBM Cloud VPC . 1.3.2.7. Required administrator acknowledgment when upgrading from OpenShift Container Platform 4.11 to 4.12 OpenShift Container Platform 4.12 uses Kubernetes 1.25, which removed several deprecated APIs . A cluster administrator must provide a manual acknowledgment before the cluster can be upgraded from OpenShift Container Platform 4.11 to 4.12. This is to help prevent issues after upgrading to OpenShift Container Platform 4.12, where APIs that have been removed are still in use by workloads, tools, or other components running on or interacting with the cluster. Administrators must evaluate their cluster for any APIs in use that will be removed and migrate the affected components to use the appropriate new API version. After this is done, the administrator can provide the administrator acknowledgment. All OpenShift Container Platform 4.11 clusters require this administrator acknowledgment before they can be upgraded to OpenShift Container Platform 4.12. For more information, see Preparing to update to OpenShift Container Platform 4.12 . 1.3.2.8. Enabling a feature set when installing a cluster Beginning with OpenShift Container Platform 4.12, you can enable a feature set as part of the installation process. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see Enabling OpenShift Container Platform features using feature gates . 1.3.2.9. OpenShift Container Platform on ARM OpenShift Container Platform 4.12 is now supported on ARM architecture-based Azure installer-provisioned infrastructure. AWS Graviton 3 processors are now available for cluster deployments and are also supported on OpenShift Container Platform 4.11. For more information about instance availability and installation documentation, see Supported installation methods for different platforms . 1.3.2.10. Mirroring file-based catalog Operator images in OCI format with the oc-mirror CLI plugin (Technology Preview) Using the oc-mirror CLI plugin to mirror file-based catalog Operator images in OCI format instead of Docker v2 format is now available as a Technology Preview . For more information, see Mirroring file-based catalog Operator images in OCI format . 1.3.2.11.
Installing an OpenShift Container Platform cluster on GCP into a shared VPC (Technology Preview) In OpenShift Container Platform 4.12, you can install a cluster on GCP into a shared VPC as a Technology Preview . In this installation method, the cluster is configured to use a VPC from a different GCP project. A shared VPC enables an organization to connect resources from multiple projects to a common VPC network. You can communicate within the organization securely and efficiently by using internal IP addresses from that network. For more information, see Installing a cluster on GCP into a shared VPC . 1.3.2.12. Consistent IP address for Ironic API in bare-metal installations without a provisioning network With this update, in bare-metal installations without a provisioning network, the Ironic API service is accessible through a proxy server. This proxy server provides a consistent IP address for the Ironic API service. If the Metal3 pod that contains metal3-ironic relocates to another pod, the consistent proxy address ensures constant communication with the Ironic API service. 1.3.2.13. Installing OpenShift Container Platform on GCP using service account authentication In OpenShift Container Platform 4.12, you can install a cluster on GCP using a virtual machine with a service account attached to it. This allows you to perform an installation without needing to use a service account JSON file. For more information, see Creating a GCP service account . 1.3.2.14. propagateUserTags parameter for AWS resources provisioned by the OpenShift Container Platform cluster In OpenShift Container Platform 4.12, the propagateUserTags parameter is a flag that directs in-cluster Operators to include the specified user tags in the tags of the AWS resources that the Operators create. For more information, see Optional configuration parameters . 1.3.2.15. Ironic container images use RHEL 9 base image In earlier versions of OpenShift Container Platform, Ironic container images used Red Hat Enterprise Linux (RHEL) 8 as the base image. From OpenShift Container Platform 4.12, Ironic container images use RHEL 9 as the base image. The RHEL 9 base image adds support for CentOS Stream 9, Python 3.8, and Python 3.9 in Ironic components. For more information about the Ironic provisioning service, see Deploying installer-provisioned clusters on bare metal . 1.3.2.16. Cloud provider configuration updates for clusters that run on RHOSP In OpenShift Container Platform 4.12, clusters that run on Red Hat OpenStack Platform (RHOSP) are switched from the legacy OpenStack cloud provider to the external Cloud Controller Manager (CCM). This change follows the move in Kubernetes from in-tree, legacy cloud providers to external cloud providers that are implemented by using the Cloud Controller Manager . For more information, see The OpenStack Cloud Controller Manager . 1.3.2.17. Support for workloads on RHOSP distributed compute nodes In OpenShift Container Platform 4.12, cluster deployments to Red Hat OpenStack Platform (RHOSP) clouds that have distributed compute node (DCN) architecture were validated. A reference architecture for these deployments is forthcoming. For a brief overview of this type of deployment, see the blog post Deploying Your Cluster at the Edge With OpenStack . 1.3.2.18. OpenShift Container Platform on AWS Outposts (Technology Preview) OpenShift Container Platform 4.12 is now supported on the AWS Outposts platform as a Technology Preview . 
With AWS Outposts you can deploy edge-based worker nodes, while using AWS Regions for the control plane nodes. For more information, see Installing a cluster on AWS with remote workers on AWS Outposts . 1.3.2.19. Agent-based installation supports two input modes The Agent-based installation supports two input modes: install-config.yaml file agent-config.yaml file Optional Zero Touch Provisioning (ZTP) manifests With the preferred mode, you can configure the install-config.yaml file and specify Agent-based specific settings in the agent-config.yaml file. For more information, see About the Agent-based OpenShift Container Platform Installer . 1.3.2.20. Agent-based installation supports installing OpenShift Container Platform clusters in FIPS compliant mode Agent-based OpenShift Container Platform Installer supports OpenShift Container Platform clusters in Federal Information Processing Standards (FIPS) compliant mode. You must set the value of the fips field to True in the install-config.yaml file. For more information, see About FIPS compliance . 1.3.2.21. Deploy an Agent-based OpenShift Container Platform cluster in a disconnected environment You can perform an Agent-based installation in a disconnected environment. To create an image that is used in a disconnected environment, the imageContentSources section in the install-config.yaml file must contain the mirror information or registries.conf file if you are using ZTP manifests. The actual configuration settings to use in these files are supplied by either the oc adm release mirror or oc mirror command. For more information, see Understanding disconnected installation mirroring . 1.3.2.22. Explanation of field to build and push graph-data When creating the image set configuration, you can add the graph: true field to build and push the graph-data image to the mirror registry. The graph-data image is required to create OpenShift Update Service (OSUS). The graph: true field also generates the UpdateService custom resource manifest. The oc command-line interface (CLI) can use the UpdateService custom resource manifest to create OSUS. 1.3.2.23. Agent-based installation supports single and dual stack networking You can create the agent ISO image with the following IP address configurations: IPv4 IPv6 IPv4 and IPv6 in parallel (dual-stack) Note IPv6 is supported only on bare metal platforms. For more information, see Dual and single IP stack clusters . 1.3.2.24. Agent deployed OpenShift Container Platform cluster can be used as a hub cluster You can install the multicluster engine for Kubernetes Operator and deploy a hub cluster with the Agent-based OpenShift Container Platform Installer. For more information, see Preparing an Agent-based installed cluster for the multicluster engine for Kubernetes Operator . 1.3.2.25. Agent-based installation performs installation validations The Agent-based OpenShift Container Platform Installer performs validations on: Installation image generation: The user-provided manifests are checked for validity and compatibility. Installation: The installation service checks the hardware available for installation and emits validation events that can be retrieved with the openshift-install agent wait-for subcommands. For more information, see Installation validations . 1.3.2.26. 
Configure static networking in an Agent-based installation With the Agent-based OpenShift Container Platform Installer, you can configure static IP addresses for IPv4, IPv6, or dual-stack (both IPv4 and IPv6) for all the hosts prior to creating the agent ISO image. You can add the static addresses to the hosts section of the agent-config.yaml file or in the NMStateConfig.yaml file if you are using the ZTP manifests. Note that the configuration of the addresses must follow the syntax rules for NMState as described in NMState state examples . Note IPv6 is supported only on bare metal platforms. For more information, see About networking . 1.3.2.27. CLI-based automated deployment in an Agent-based installation With the Agent-based OpenShift Container Platform Installer, you can define your installation configurations, generate an ISO for all the nodes, and then have an unattended installation by booting the target systems with the generated ISO. For more information, see Installing an OpenShift Container Platform cluster with the Agent-based OpenShift Container Platform Installer . 1.3.2.28. Agent-based installation supports host-specific configuration at installation time You can configure the hostname, network configuration in NMState format, root device hints, and role in an Agent-based installation. For more information, see About root device hints . 1.3.2.29. Agent-based installation supports DHCP With the Agent-based OpenShift Container Platform Installer, you can deploy to environments where you rely on DHCP to configure networking for all the nodes, as long as you know the IP that at least one of the systems will receive. This IP is required so that all nodes use it as a meeting point. For more information, see DHCP . 1.3.2.30. Installing a cluster on Nutanix with limited internet access You can now install a cluster on Nutanix when the environment has limited access to the internet, as in the case of a disconnected or restricted network cluster. With this type of installation, you create a registry that mirrors the contents of the OpenShift Container Platform image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network. For more information, see About disconnected installation mirroring and Installing a cluster on Nutanix in a restricted network . 1.3.3. Post-installation configuration 1.3.3.1. CSI driver installation on vSphere clusters To install a CSI driver on a cluster running on vSphere, the following requirements must be met: Virtual machines of hardware version 15 or later VMware vSphere version 7.0 Update 2 or later, which includes version 8.0. vCenter 7.0 Update 2 or later, which includes version 8.0. No third-party CSI driver already installed in the cluster If a third-party CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. Components with versions earlier than those above are still supported, but are deprecated. These versions are still fully supported, but version 4.12 of OpenShift Container Platform requires vSphere virtual hardware version 15 or later. For more information, see Deprecated and removed features . Failing to meet the above requirements prevents OpenShift Container Platform from upgrading to OpenShift Container Platform 4.13 or later. 1.3.3.2.
Cluster Capabilities The following new cluster capabilities have been added: Console Insights Storage CSISnapshot A new predefined set of cluster capabilities, v4.12 , has been added. This includes all capabilities from v4.11 , and the new capabilities added with the current release. For more information, see Enabling cluster capabilities . 1.3.3.3. OpenShift Container Platform with multi-architecture compute machines (Technology Preview) OpenShift Container Platform 4.12 with multi-architecture compute machines now supports manifest listed images on image streams. For more information about manifest list images, see Configuring multi-architecture compute machines on an OpenShift Container Platform cluster . On a cluster with multi-architecture compute machines, you can now override the node affinity in the Operator's Subscription object to schedule pods on nodes with architectures that the Operator supports. For more information, see Using node affinity to control where an Operator is installed . 1.3.4. Web console 1.3.4.1. Administrator Perspective With this release, there are several updates to the Administrator perspective of the web console. The OpenShift Container Platform web console displays a ConsoleNotification if the cluster is upgrading. Once the upgrade is done, the notification is removed. A restart rollout option for the Deployment resource and a retry rollouts option for the DeploymentConfig resource are available on the Action and Kebab menus. 1.3.4.1.1. Multi-architecture compute machines on the OpenShift Container Platform web console The console-operator now scans all nodes, builds a set of all architecture types that cluster nodes run on, and passes it to the console-config.yaml . The console-operator can be installed on nodes with architectures of the values amd64 , arm64 , ppc64le , or s390x . For more information about multi-architecture compute machines, see Configuring a multi-architecture compute machine on an OpenShift cluster . 1.3.4.1.2. Dynamic plugin generally available This feature was previously introduced as a Technology Preview in OpenShift Container Platform 4.10 and is now generally available in OpenShift Container Platform 4.12. With the dynamic plugin, you can build high-quality and unique user experiences natively in the web console. You can: Add custom pages. Add perspectives beyond administrator and developer. Add navigation items. Add tabs and actions to resource pages. Extend existing pages. For more information, see Overview of dynamic-plugins . 1.3.4.2. Developer Perspective With this release, there are several updates to the Developer perspective of the web console. You can perform the following actions: Export your application in the ZIP file format to another project or cluster by using the Export application option on the +Add page. Create a Kafka event sink to receive events from a particular source and send them to a Kafka topic. Set the default resource preference in the User Preferences Applications page. In addition, you can select another resource type to be the default. Optionally, set another resource type from the Add page by clicking Import from Git Advanced options Resource type and selecting the resource from the drop-down list. Make the status.HostIP node IP address for pods visible in the Details tab of the Pods page. See the resource quota alert label on the Topology and Add pages whenever any resource reaches the quota. The alert label link takes you to the ResourceQuotas list page.
If the alert label link is for a single resource quota, it takes you to the ResourceQuota details page. For deployments, an alert is displayed in the topology node side panel if any errors are associated with resource quotas. Also, a yellow border is displayed around the deployment nodes when the resource quota is exceeded. Customize the following UI items using the form or YAML view: Perspectives visible to users Quick starts visible to users Cluster roles accessible to a project Actions visible on the +Add page Item types in the Developer Catalog See the common updates to the Pipeline details and PipelineRun details page visualization by performing the following actions: Use the mouse wheel to change the zoom factor. Hover over the tasks to see the task details. Use the standard icons to zoom in, zoom out, fit to screen, and reset the view. PipelineRun details page only: At specific zoom factors, the background color of the tasks changes to indicate the error or warning status. You can hover over the tasks badge to see the total number of tasks and the completed tasks. 1.3.4.2.1. Helm page improvements In OpenShift Container Platform 4.12, you can do the following from the Helm page: Create Helm releases and repositories using the Create button. Create, update, or delete a cluster-scoped or a namespace-scoped Helm chart repository. View the list of the existing Helm chart repositories with their scope in the Repositories page. View the newly created Helm release in the Helm Releases page. 1.3.4.2.2. Negative matchers in Alertmanager With this update, Alertmanager now supports a Negative matcher option. Using Negative matcher , you can update the Label value to a Not Equals matcher. The negative matcher checkbox changes = (value equals) into != (value does not equal) and changes =~ (value matches regular expression) into !~ (value does not match regular expression). Also, the Use RegEx checkbox label is renamed to RegEx . 1.3.5. OpenShift CLI (oc) 1.3.5.1. Managing plugins for the OpenShift CLI with Krew (Technology Preview) Using Krew to install and manage plugins for the OpenShift CLI ( oc ) is now available as a Technology Preview . For more information, see Managing CLI plugins with Krew . 1.3.6. IBM Z and LinuxONE With this release, IBM Z and LinuxONE are now compatible with OpenShift Container Platform 4.12. The installation can be performed with z/VM or RHEL KVM. For installation instructions, see the following documentation: Installing a cluster with z/VM on IBM Z and LinuxONE Installing a cluster with z/VM on IBM Z and LinuxONE in a restricted network Installing a cluster with RHEL KVM on IBM Z and LinuxONE Installing a cluster with RHEL KVM on IBM Z and LinuxONE in a restricted network Notable enhancements The following new features are supported on IBM Z and LinuxONE with OpenShift Container Platform 4.12: Cron jobs Descheduler FIPS cryptography IPv6 PodDisruptionBudget Scheduler profiles Stream Control Transmission Protocol (SCTP) IBM Secure Execution (Technology Preview) OpenShift Container Platform now supports configuring Red Hat Enterprise Linux CoreOS (RHCOS) nodes for IBM Secure Execution on IBM Z and LinuxONE (s390x architecture) as a Technology Preview feature. 
For installation instructions, see the following documentation: Installing RHCOS using IBM Secure Execution Supported features The following features are also supported on IBM Z and LinuxONE: Currently, the following Operators are supported: Cluster Logging Operator Compliance Operator File Integrity Operator Local Storage Operator NFD Operator NMState Operator OpenShift Elasticsearch Operator Service Binding Operator Vertical Pod Autoscaler Operator The following Multus CNI plugins are supported: Bridge Host-device IPAM IPVLAN Alternate authentication providers Automatic Device Discovery with Local Storage Operator CSI Volumes Cloning Expansion Snapshot Encrypting data stored in etcd Helm Horizontal pod autoscaling Monitoring for user-defined projects Multipathing Operator API OC CLI plugins Persistent storage using iSCSI Persistent storage using local volumes (Local Storage Operator) Persistent storage using hostPath Persistent storage using Fibre Channel Persistent storage using Raw Block OVN-Kubernetes, including IPsec encryption Support for multiple network interfaces Three-node cluster support z/VM Emulated FBA devices on SCSI disks 4K FCP block device These features are available only for OpenShift Container Platform on IBM Z and LinuxONE for 4.12: HyperPAV enabled on IBM Z and LinuxONE for the virtual machines for FICON attached ECKD storage Restrictions The following restrictions impact OpenShift Container Platform on IBM Z and LinuxONE: Automatic repair of damaged machines with machine health checking Red Hat OpenShift Local Controlling overcommit and managing container density on nodes NVMe OpenShift Metering OpenShift Virtualization Precision Time Protocol (PTP) hardware Tang mode disk encryption during OpenShift Container Platform deployment Compute nodes must run Red Hat Enterprise Linux CoreOS (RHCOS) Persistent shared storage must be provisioned by using either Red Hat OpenShift Data Foundation or other supported storage protocols Persistent non-shared storage must be provisioned using local storage, like iSCSI, FC, or using LSO with DASD, FCP, or EDEV/FBA 1.3.7. IBM Power With this release, IBM Power is now compatible with OpenShift Container Platform 4.12. 
For installation instructions, see the following documentation: Installing a cluster on IBM Power Installing a cluster on IBM Power in a restricted network Notable enhancements The following new features are supported on IBM Power with OpenShift Container Platform 4.12: Cloud controller manager for IBM Cloud Cron jobs Descheduler FIPS cryptography PodDisruptionBudget Scheduler profiles Stream Control Transmission Protocol (SCTP) Topology Manager Supported features The following features are also supported on IBM Power: Currently, the following Operators are supported: Cluster Logging Operator Compliance Operator File Integrity Operator Local Storage Operator NFD Operator NMState Operator OpenShift Elasticsearch Operator SR-IOV Network Operator Service Binding Operator Vertical Pod Autoscaler Operator The following Multus CNI plugins are supported: Bridge Host-device IPAM IPVLAN Alternate authentication providers CSI Volumes Cloning Expansion Snapshot Encrypting data stored in etcd Helm Horizontal pod autoscaling IPv6 Monitoring for user-defined projects Multipathing Multus SR-IOV Operator API OC CLI plugins OVN-Kubernetes, including IPsec encryption Persistent storage using iSCSI Persistent storage using local volumes (Local Storage Operator) Persistent storage using hostPath Persistent storage using Fibre Channel Persistent storage using Raw Block Support for multiple network interfaces Support for Power10 Three-node cluster support 4K Disk Support Restrictions The following restrictions impact OpenShift Container Platform on IBM Power: Automatic repair of damaged machines with machine health checking Red Hat OpenShift Local Controlling overcommit and managing container density on nodes OpenShift Metering OpenShift Virtualization Precision Time Protocol (PTP) hardware Tang mode disk encryption during OpenShift Container Platform deployment Compute nodes must run Red Hat Enterprise Linux CoreOS (RHCOS) Persistent storage must be of the Filesystem type that uses local volumes, Red Hat OpenShift Data Foundation, Network File System (NFS), or Container Storage Interface (CSI) 1.3.8. Images A new import value, importMode , has been added to the importPolicy parameter of image streams. The following fields are available for this value: Legacy : Legacy is the default value for importMode . When active, the manifest list is discarded, and a single sub-manifest is imported. The platform is chosen in the following order of priority: Tag annotations Control plane architecture Linux/AMD64 The first manifest in the list PreserveOriginal : When active, the original manifest is preserved. For manifest lists, the manifest list and all of its sub-manifests are imported. 1.3.9. Security and compliance 1.3.9.1. Security Profiles Operator The Security Profiles Operator (SPO) is now available for OpenShift Container Platform 4.12 and later. The SPO provides a way to define secure computing ( seccomp ) profiles and SELinux profiles as custom resources, synchronizing profiles to every node in a given namespace. For more information, see Security Profiles Operator Overview . 1.3.10. Networking 1.3.10.1. Support for dual-stack addressing for the API VIP and Ingress VIP Assisted Installer supports installation of OpenShift Container Platform 4.12 and later versions with dual stack networking for the API VIP and Ingress VIP on bare metal only. This support introduces two new configuration settings: api_vips and ingress_vips , which can take a list of IP addresses. 
The legacy settings, api_vip and ingress_vip , must also be set in OpenShift Container Platform 4.12; however, since they only take one IP address, you must set the IPv4 address when configuring dual stack networking for the API VIP and Ingress VIP with the legacy api_vip and ingress_vip configuration settings. The API VIP address and the Ingress VIP address must be of the primary IP address family when using dual-stack networking. Currently, Red Hat does not support dual-stack VIPs or dual-stack networking with IPv6 as the primary IP address family. However, Red Hat does support dual-stack networking with IPv4 as the primary IP address family. Therefore, you must place the IPv4 entries before the IPv6 entries. See the Assisted Installer documentation for details. 1.3.10.2. Red Hat OpenShift Networking Red Hat OpenShift Networking is an ecosystem of features, plugins, and advanced networking capabilities that extend Kubernetes networking beyond the Kubernetes CNI plugin with the advanced networking-related features that your cluster needs to manage its network traffic for one or multiple hybrid clusters. This ecosystem of networking capabilities integrates ingress, egress, load balancing, high-performance throughput, security, and inter- and intra-cluster traffic management, and provides role-based observability tooling to reduce its natural complexities. For more information, see About networking . 1.3.10.3. OVN-Kubernetes is now the default networking plugin When installing a new cluster, the OVN-Kubernetes network plugin is the default networking plugin. For all prior versions of OpenShift Container Platform, OpenShift SDN remains the default networking plugin. The OVN-Kubernetes network plugin includes a wider array of features than OpenShift SDN, including: Support for all existing OpenShift SDN features Support for IPv6 networks Support for Configuring IPsec encryption Complete support for the NetworkPolicy API Support for audit logging of network policy events Support for network flow tracking in NetFlow, sFlow, and IPFIX formats Support for hybrid networks for Windows containers Support for hardware offloading to compatible NICs There are also enormous scale, performance, and stability improvements in OpenShift Container Platform 4.12 compared to prior versions. If you are using the OpenShift SDN network plugin, note that: Existing and future deployments using OpenShift SDN continue to be supported. OpenShift SDN remains the default on OpenShift Container Platform versions earlier than 4.12. As of OpenShift Container Platform 4.12, OpenShift SDN is a supported installation-time option. OpenShift SDN remains feature frozen. For more information about OVN-Kubernetes, including a feature comparison matrix with OpenShift SDN, see About the OVN-Kubernetes network plugin . For information on migrating to OVN-Kubernetes from OpenShift SDN, see Migrating from the OpenShift SDN network plugin . 1.3.10.4. Ingress Node Firewall Operator This update introduces a new stateless Ingress Node Firewall Operator. You can now configure firewall rules at the node level. For more information, see Ingress Node Firewall Operator . 1.3.10.5.
Enhancements to networking metrics The following metrics are now available for the OVN-Kubernetes network plugin: ovn_controller_southbound_database_connected ovnkube_master_libovsdb_monitors ovnkube_master_network_programming_duration_seconds ovnkube_master_network_programming_ovn_duration_seconds ovnkube_master_egress_routing_via_host ovs_vswitchd_interface_resets_total ovs_vswitchd_interface_rx_dropped_total ovs_vswitchd_interface_tx_dropped_total ovs_vswitchd_interface_rx_errors_total ovs_vswitchd_interface_tx_errors_total ovs_vswitchd_interface_collisions_total The following metric has been removed: ovnkube_master_skipped_nbctl_daemon_total 1.3.10.6. Multi-zone Installer Provisioned Infrastructure VMware vSphere installation (Technology Preview) Beginning with OpenShift Container Platform 4.12, the ability to configure multiple vCenter datacenters and multiple vCenter clusters in a single vCenter installation using installer-provisioned infrastructure is now available as a Technology Preview feature. Using vCenter tags, you can use this feature to associate vCenter datacenters and compute clusters with openshift-regions and openshift-zones. These associations define failure domains to enable application workloads to be associated with specific locations and failure domains. 1.3.10.7. Kubernetes NMState in VMware vSphere now supported Beginning with OpenShift Container Platform 4.12, you can configure the networking settings such as DNS servers or search domains, VLANs, bridges, and interface bonding using the Kubernetes NMState Operator on your VMware vSphere instance. For more information, see About the Kubernetes NMState Operator . 1.3.10.8. Kubernetes NMState in OpenStack now supported Beginning with OpenShift Container Platform 4.12, you can configure the networking settings such as DNS servers or search domains, VLANs, bridges, and interface bonding using the Kubernetes NMState Operator on your OpenStack instance. For more information, see About the Kubernetes NMState Operator . 1.3.10.9. External DNS Operator In OpenShift Container Platform 4.12, the External DNS Operator modifies the format of the ExternalDNS wildcard TXT records on AzureDNS. The External DNS Operator replaces the asterisk with any in ExternalDNS wildcard TXT records. You must avoid the ExternalDNS wildcard A and CNAME records having any leftmost subdomain because this might cause a conflict. The upstream version of ExternalDNS for OpenShift Container Platform 4.12 is v0.13.1. 1.3.10.10. The ingressClassName field is required for each ingress object Beginning with OpenShift Container Platform 4.12, you must specify a class name in the ingressClassName field for each ingress object to ensure proper routing and functionality. 1.3.10.11. Capturing metrics and telemetry associated with the use of routes and shards In OpenShift Container Platform 4.12, the Cluster Ingress Operator exports a new metric named route_metrics_controller_routes_per_shard . The shard_name label of the metric specifies the name of the shards. This metric gives the total number of routes that are admitted by each shard. The following metrics are sent through telemetry. Table 1.1. 
Metrics sent through telemetry Name Recording rule expression Description cluster:route_metrics_controller_routes_per_shard:min min(route_metrics_controller_routes_per_shard) Tracks the minimum number of routes admitted by any of the shards cluster:route_metrics_controller_routes_per_shard:max max(route_metrics_controller_routes_per_shard) Tracks the maximum number of routes admitted by any of the shards cluster:route_metrics_controller_routes_per_shard:avg avg(route_metrics_controller_routes_per_shard) Tracks the average value of the route_metrics_controller_routes_per_shard metric cluster:route_metrics_controller_routes_per_shard:median quantile(0.5, route_metrics_controller_routes_per_shard) Tracks the median value of the route_metrics_controller_routes_per_shard metric cluster:openshift_route_info:tls_termination:sum sum (openshift_route_info) by (tls_termination) Tracks the number of routes for each tls_termination value. The possible values for tls_termination are edge , passthrough and reencrypt 1.3.10.12. AWS Load Balancer Operator In OpenShift Container Platform 4.12, the AWS Load Balancer controller now implements the Kubernetes Ingress specification for multiple matches. If multiple paths within an Ingress match a request, the longest matching path takes the precedence. If two paths still match, paths with an exact path type take precedence over a prefix path type. The AWS Load Balancer Operator sets the EnableIPTargetType feature gate to false . The AWS Load Balancer controller disables the support for services and ingress resources for target-type ip . The upstream version of aws-load-balancer-controller for an OpenShift Container Platform 4.12 is v2.4.4. 1.3.10.13. Ingress Controller Autoscaling (Technology Preview) You can now use the OpenShift Container Platform Custom Metrics Autoscaler Operator to dynamically scale the default Ingress Controller based on metrics in your deployed cluster, such as the number of worker nodes available. The Custom Metrics Autoscaler is available as a Technology Preview feature. For more information, see Autoscaling an Ingress Controller . 1.3.10.14. HAProxy maxConnections default is now 50,000 In OpenShift Container Platform 4.12, the default value for the maxConnections setting is now 50000. Previously starting with OpenShift Container Platform 4.11, the default value for the maxConnections setting was 20000. For more information, see Ingress Controller configuration parameters . 1.3.10.15. Configuration of an Ingress Controller for manual DNS management You can now configure an Ingress Controller to stop automatic DNS management and start manual DNS management. Set the dnsManagementPolicy parameter to specify automatic or manual DNS management. For more information, see Configuring an Ingress Controller to manually manage DNS . 1.3.10.16. Supported hardware for SR-IOV (Single Root I/O Virtualization) OpenShift Container Platform 4.12 adds support for the following SR-IOV devices: Intel X710 Base T MT2892 Family [ConnectX‐6 Dx] MT2894 Family [ConnectX-6 Lx] MT42822 BlueField‐2 in ConnectX‐6 NIC mode Silicom STS Family For more information, see Supported devices . 1.3.10.17. Supported hardware for OvS (Open vSwitch) Hardware Offload OpenShift Container Platform 4.12 adds OvS Hardware Offload support for the following devices: MT2892 Family [ConnectX-6 Dx] MT2894 Family [ConnectX-6 Lx] MT42822 BlueField‐2 in ConnectX‐6 NIC mode For more information, see Supported devices . 1.3.10.18. 
Multi-network-policy supported for SR-IOV (Technology Preview) OpenShift Container Platform 4.12 adds support for configuring multi-network policy for SR-IOV devices. You can now configure multi-network for SR-IOV additional networks. Configuring SR-IOV additional networks is a Technology Preview feature and is only supported with kernel network interface cards (NICs). For more information, see Configuring multi-network policy . 1.3.10.19. Switch between AWS load balancer types without deleting the Ingress Controller You can update the Ingress Controller to switch between an AWS Classic Load Balancer (CLB) and an AWS Network Load Balancer (NLB) without deleting the Ingress Controller. For more information, see Configuring ingress cluster traffic on AWS . 1.3.10.20. IPv6 unsolicited neighbor advertisements and IPv4 gratuitous address resolution protocol now default on the SR-IOV CNI plugin Pods created with the Single Root I/O Virtualization (SR-IOV) CNI plugin, where the IP address management CNI plugin has assigned IPs, now send IPv6 unsolicited neighbor advertisements and/or IPv4 gratuitous address resolution protocol by default onto the network. This enhancement notifies hosts of the new pod's MAC address for a particular IP to refresh ARP/NDP caches with the correct information. For more information, see Supported devices . 1.3.10.21. Support for CoreDNS cache tuning You can now configure the time-to-live (TTL) duration of both successful and unsuccessful DNS queries cached by CoreDNS. For more information, see Tuning the CoreDNS cache . 1.3.10.22. OVN-Kubernetes supports configuration of internal subnet Previously, the subnet that OVN-Kubernetes uses internally was 100.64.0.0/16 for IPv4 and fd98::/48 for IPv6 and could not be modified. To support instances when these subnets overlap with existing subnets in your infrastructure, you can now change these internal subnets to avoid any overlap. For more information, see Cluster Network Operator configuration object 1.3.10.23. Egress IP support on Red Hat OpenStack Platform (RHOSP) RHOSP, paired with OpenShift Container Platform, now supports automatic attachment and detachment of Egress IP addresses. The traffic from one or more pods in any number of namespaces has a consistent source IP address for services outside of the cluster. This support applies to OpenShift SDN and OVN-Kubernetes as default network providers. 1.3.10.24. OpenShift SDN to OVN-Kubernetes feature migration support If you plan to migrate from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin, your configurations for the following capabilities are automatically converted to work with OVN-Kubernetes: Egress IP addresses Egress firewalls Multicast For more information about how the migration to OVN-Kubernetes works, see Migrating from the OpenShift SDN cluster network provider . 1.3.10.25. Egress firewall audit logging For the OVN-Kubernetes network plugin, egress firewalls support audit logging using the same mechanism that network policy audit logging uses. For more information, see Logging for egress firewall and network policy rules . 1.3.10.26. Advertise MetalLB from a given address pool from a subset of nodes With this update, in BGP mode, you can use the node selector to advertise the MetalLB service from a subset of nodes, using a specific pool of IP addresses. This feature was introduced as a Technology Preview feature in OpenShift Container Platform 4.11 and is now generally available in OpenShift Container Platform 4.12 for BGP mode only. 
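A minimal sketch of what this can look like, assuming MetalLB is installed in the metallb-system namespace and an IPAddressPool named example-pool already exists; the name and hostname label value are placeholders, and the field names should be verified against the linked MetalLB documentation:

apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: advertise-from-selected-nodes
  namespace: metallb-system
spec:
  ipAddressPools:
    - example-pool
  # Only nodes matching this selector advertise the pool (BGP mode)
  nodeSelectors:
    - matchLabels:
        kubernetes.io/hostname: worker-0.example.com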
L2 mode remains a Technology Preview feature. For more information, see Advertising an IP address pool from a subset of nodes . 1.3.10.27. Additional deployment specifications for MetalLB This update provides additional deployment specifications for MetalLB. When you use a custom resource to deploy MetalLB, you can use these additional deployment specifications to manage how MetalLB speaker and controller pods deploy and run in your cluster. For example, you can use MetalLB deployment specifications to manage where MetalLB pods are deployed, define CPU limits for MetalLB pods, and assign runtime classes to MetalLB pods. For more information about deployment specifications for MetalLB, see Deployment specifications for MetalLB . 1.3.10.28. Node IP selection improvements Previously, the nodeip-configuration service on a cluster host selected the IP address from the interface that the default route used. If multiple routes were present, the service would select the route with the lowest metric value. As a result, network traffic could be distributed from the incorrect interface. With OpenShift Container Platform 4.12, a new interface has been added to the nodeip-configuration service, which allows users to create a hint file. The hint file contains a variable, NODEIP_HINT , that overrides the default IP selection logic and selects a specific node IP address from the subnet specified by the NODEIP_HINT variable. Using the NODEIP_HINT variable allows users to specify which IP address is used, ensuring that network traffic is distributed from the correct interface. For more information, see Optional: Overriding the default node IP selection logic . 1.3.10.29. CoreDNS update to version 1.10.0 In OpenShift Container Platform 4.12, CoreDNS uses version 1.10.0, which includes the following changes: CoreDNS does not expand the query UDP buffer size if it was previously set to a smaller value. CoreDNS now always prefixes each log line in Kubernetes client logs with the associated log level. CoreDNS now reloads more quickly at an approximate speed of 20ms. 1.3.10.30. Support for a configurable reload interval in HAProxy With this update, a cluster administrator can configure the reload interval to force HAProxy to reload its configuration less frequently in response to route and endpoint updates. The default minimum HAProxy reload interval is 5 seconds. For more information, see Configuring HAProxy reload interval . 1.3.10.31. Network Observability Operator updates The Network Observability Operator releases updates independently from the OpenShift Container Platform minor version release stream. Updates are available through a single, rolling stream which is supported on all currently supported versions of OpenShift Container Platform 4. Information regarding new features, enhancements, and bug fixes for the Network Observability Operator can be found in the Network Observability release notes . 1.3.10.32. IPv6 for secondary network interfaces on RHOSP IPv6 for secondary network interfaces is now supported in clusters that run on RHOSP. For more information, see Enabling IPv6 connectivity to pods on RHOSP . 1.3.10.33. UDP support for load balancers on RHOSP Resulting from the switch to an external OpenStack cloud provider, UDP is now supported for LoadBalancer services for clusters that run on that platform. 1.3.10.34. Deploy the SR-IOV Operator for hosted control planes (Technology Preview) If you configured and deployed your hosting service cluster, you can now deploy the SR-IOV Operator for a hosted cluster.
For more information, see Deploying the SR-IOV Operator for hosted control planes . 1.3.10.35. Support for IPv6 virtual IP (VIP) addresses for the Ingress VIP and API VIP services on bare metal With this update, in installer-provisioned infrastructure clusters, the ingressVIP and apiVIP configuration settings in the install-config.yaml file are deprecated. Instead, use the ingressVIPs and apiVIPs configuration settings. These settings support dual-stack networking for applications on bare metal that require IPv4 and IPv6 access to the cluster by using the Ingress VIP and API VIP services. The ingressVIPs and apiVIPs configuration settings use a list format to specify an IPv4 address, an IPv6 address, or both IP address formats. The order of the list indicates the primary and secondary VIP address for each service. The primary IP address must be from the IPv4 network when using dual stack networking. 1.3.10.36. Support for switching the Bluefield-2 network device from data processing unit (DPU) mode to network interface controller (NIC) mode (Technology Preview) With this update, you can switch the BlueField-2 network device from data processing unit (DPU) mode to network interface controller (NIC) mode. For more information, see Switching Bluefield-2 from DPU to NIC . 1.3.10.37. Support for enabling hybrid networking after cluster installation Previously, for clusters that use the OVN-Kubernetes network plugin , during cluster installation you could enable hybrid networking so that your cluster supported Windows nodes. Now you can enable hybrid networking after installation. For more information, see Configuring hybrid networking . 1.3.10.38. Support for allocateLoadBalancerNodePorts in the Network API service object The ServiceSpec component in the Network API under the Service object describes the attributes that a user creates on a service. The allocateLoadBalancerNodePorts attribute within the ServiceSpec component is now supported as of the OpenShift Container Platform 4.12.28 release. The allocateLoadBalancerNodePorts attribute defines whether the NodePorts will be automatically allocated for services of the LoadBalancer type. For more information, see Network API ServiceSpec object . 1.3.11. Storage 1.3.11.1. Persistent storage using the GCP Filestore Driver Operator (Technology Preview) OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Google Cloud Platform (GCP) Filestore. The GCP Filestore CSI Driver Operator that manages this driver is in Technology Preview. For more information, see GCP Filestore CSI Driver Operator . 1.3.11.2. Automatic CSI migration for AWS Elastic Block Storage auto migration is generally available Starting with OpenShift Container Platform 4.8, automatic migration for in-tree volume plugins to their equivalent Container Storage Interface (CSI) drivers became available as a Technology Preview feature. Support for Amazon Web Services (AWS) Elastic Block Storage (EBS) was provided in this feature in OpenShift Container Platform 4.8, and OpenShift Container Platform 4.12 now supports automatic migration for AWS EBS as generally available. CSI migration for AWS EBS is now enabled by default and requires no action by an administrator. This feature automatically translates in-tree objects to their counterpart CSI representations and should be completely transparent to users. Translated objects are not stored on disk, and user data is not migrated.
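For example, after the migration you might mark the CSI storage class as the cluster default; a minimal sketch, assuming the gp3-csi class name that the AWS EBS CSI Driver Operator creates (the class name and parameters on your cluster may differ, and the same annotation should be set to "false" on the old in-tree class so only one default exists):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-csi                 # assumed CSI class name; check your cluster
  annotations:
    # Standard Kubernetes annotation that marks the default storage class
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer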
Storage classes that reference the in-tree storage plugin continue to work, but it is recommended that you switch the default storage class to the CSI storage class, as in the sketch above. For more information, see CSI Automatic Migration . 1.3.11.3. Automatic CSI migration for GCP PD auto migration is generally available Starting with OpenShift Container Platform 4.8, automatic migration for in-tree volume plugins to their equivalent Container Storage Interface (CSI) drivers became available as a Technology Preview feature. Support for Google Compute Engine Persistent Disk (GCP PD) was provided in this feature in OpenShift Container Platform 4.9, and OpenShift Container Platform 4.12 now supports automatic migration for GCP PD as generally available. CSI migration for GCP PD is now enabled by default and requires no action by an administrator. This feature automatically translates in-tree objects to their counterpart CSI representations and should be completely transparent to users. Translated objects are not stored on disk, and user data is not migrated. Storage classes that reference the in-tree storage plugin continue to work, but it is recommended that you switch the default storage class to the CSI storage class. For more information, see CSI Automatic Migration . 1.3.11.4. Updating from OpenShift Container Platform 4.12 to 4.13 and later with vSphere in-tree PVs Updates from OpenShift Container Platform 4.12 to 4.13 and from 4.13 to 4.14 are blocked if all of the following conditions are true: CSI migration is not already enabled OpenShift Container Platform is not running on vSphere 7.0u3L+ or 8.0u2+ vSphere in-tree persistent volumes (PVs) are present For more information, see CSI Automatic Migration . 1.3.11.5. Storage capacity tracking for pod scheduling is generally available This new feature exposes the currently available storage capacity using CSIStorageCapacity objects, and enhances scheduling of pods that use Container Storage Interface (CSI) volumes with late binding. Currently, the only OpenShift Container Platform storage type that supports this feature is OpenShift Data Foundation. 1.3.11.6. VMware vSphere CSI topology is generally available OpenShift Container Platform provides the ability to deploy OpenShift Container Platform for vSphere on different zones and regions, which allows you to deploy over multiple compute clusters, thus helping to avoid a single point of failure. For more information, see vSphere CSI topology . 1.3.11.7. Local ephemeral storage resource management is generally available The local ephemeral storage resource management feature is now generally available. With this feature, you can manage local ephemeral storage by specifying requests and limits. For more information, see Ephemeral storage management . 1.3.11.8. Volume populators (Technology Preview) Volume populators use datasource to enable creating pre-populated volumes. Volume population is currently enabled, and supported as a Technology Preview feature. However, OpenShift Container Platform does not ship with any volume populators. For more information, see Volume populators . 1.3.11.9. VMware vSphere CSI Driver Operator requirements For OpenShift Container Platform 4.12, the VMware vSphere Container Storage Interface (CSI) Driver Operator requires the following minimum components installed: VMware vSphere version 7.0 Update 2 or later, which includes version 8.0. vCenter 7.0 Update 2 or later, which includes version 8.0.
Virtual machines of hardware version 15 or later No third-party CSI driver already installed in the cluster If a third-party CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. The presence of a third-party CSI driver prevents OpenShift Container Platform from upgrading to OpenShift Container Platform 4.13 or later. For more information, see VMware vSphere CSI Driver Operator requirements . 1.3.11.10. Azure File supporting NFS is generally available OpenShift Container Platform 4.12 supports Azure File Container Storage Interface (CSI) Driver Operator with Network File System (NFS) as generally available. For more information, see NFS support . 1.3.12. Operator lifecycle 1.3.12.1. Platform Operators (Technology Preview) Starting in OpenShift Container Platform 4.12, Operator Lifecycle Manager (OLM) introduces the platform Operator type as a Technology Preview feature. The platform Operator mechanism relies on resources from the RukPak component, also introduced in OpenShift Container Platform 4.12, to source and manage content. A platform Operator is an OLM-based Operator that can be installed during or after an OpenShift Container Platform cluster's Day 0 operations and participates in the cluster's lifecycle. As a cluster administrator, you can use platform Operators to further customize your OpenShift Container Platform installation to meet your requirements and use cases. For more information about platform Operators, see Managing platform Operators . For more information about RukPak and its resources, see Operator Framework packaging format . 1.3.12.2. Controlling where an Operator is installed By default, when you install an Operator, OpenShift Container Platform randomly installs the Operator pod to one of your worker nodes. In OpenShift Container Platform 4.12, you can control where an Operator pod is installed by adding affinity constraints to the Operator's Subscription object. For more information, see Controlling where an Operator is installed . 1.3.12.3. Pod security admission synchronization for user-created openshift-* namespaces In OpenShift Container Platform 4.12, pod security admission synchronization is enabled by default if an Operator is installed in user-created namespaces that have an openshift- prefix. Synchronization is enabled after a cluster service version (CSV) is created in the namespace. The synchronized label inherits the permissions of the service accounts in the namespace. For more information, see Security context constraint synchronization with pod security standards . 1.3.13. Operator development 1.3.13.1. Configuring the security context of a catalog pod You can configure the security context of a catalog pod by using the --security-context-config flag on the run bundle and bundle-upgrade subcommands. The flag enables seccomp profiles to comply with pod security admission. The flag accepts the values of restricted and legacy . If you do not specify a value, the seccomp profile defaults to restricted . If your catalog pod cannot run with restricted permissions, set the flag to legacy , as shown in the following example:

$ operator-sdk run bundle \
    --security-context-config=legacy

1.3.13.2. Validating bundle manifests for APIs removed from Kubernetes 1.25 You can now check bundle manifests for deprecated APIs removed from Kubernetes 1.25 by using the Operator Framework suite of tests with the bundle validate subcommand.
For example:

$ operator-sdk bundle validate .<bundle_dir_or_image> \
    --select-optional suite=operatorframework \
    --optional-values=k8s-version=1.25

If your Operator requests permission to use any of the APIs removed from Kubernetes 1.25, the command displays a warning message. If any of the API versions removed from Kubernetes 1.25 are included in your Operator's cluster service version (CSV), the command displays an error message. See Beta APIs removed from Kubernetes 1.25 and the Operator SDK CLI reference for more information. 1.3.14. Machine API 1.3.14.1. Control plane machine sets OpenShift Container Platform 4.12 introduces control plane machine sets. Control plane machine sets provide management capabilities for control plane machines that are similar to what compute machine sets provide for compute machines. For more information, see Managing control plane machines . 1.3.14.2. Specifying cluster autoscaler log level verbosity OpenShift Container Platform now supports setting the log level verbosity of the cluster autoscaler by setting the logVerbosity parameter in the ClusterAutoscaler custom resource. For more information, see the ClusterAutoscaler resource definition . 1.3.14.3. Enabling Azure boot diagnostics OpenShift Container Platform now supports enabling boot diagnostics on Azure machines that your machine set creates. For more information, see "Enabling Azure boot diagnostics" for compute machines or control plane machines . 1.3.15. Machine Config Operator 1.3.15.1. RHCOS image layering Red Hat Enterprise Linux CoreOS (RHCOS) image layering allows you to add new images on top of the base RHCOS image. This layering does not modify the base RHCOS image. Instead, it creates a custom layered image that includes all RHCOS functionality and adds additional functionality to specific nodes in the cluster. Currently, RHCOS image layering allows you to work with Customer Experience and Engagement (CEE) to obtain and apply Hotfix packages on top of your RHCOS image, based on the Red Hat Hotfix policy . It is planned for future releases that you can use RHCOS image layering to incorporate third-party software packages such as Libreswan or numactl. For more information, see RHCOS image layering . 1.3.16. Nodes 1.3.16.1. Updating the interface-specific safe list (Technology Preview) OpenShift Container Platform now supports updating the default interface-specific safe sysctls . You can add or remove sysctls from the predefined list. When you add sysctls , they can be set across all nodes. Updating the interface-specific safe sysctls list is a Technology Preview feature only. For more information, see Updating the interface-specific safe sysctls list . 1.3.16.2. Cron job time zones (Technology Preview) Setting a time zone for a cron job schedule is now offered as a Technology Preview . If a time zone is not specified, the Kubernetes controller manager interprets the schedule relative to its local time zone. For more information, see Creating cron jobs . 1.3.16.3. Linux Control Group version 2 promoted to Technology Preview OpenShift Container Platform support for Linux Control Group version 2 (cgroup v2) has been promoted to Technology Preview. cgroup v2 is the next version of the kernel control groups . cgroups v2 offers multiple improvements, including a unified hierarchy, safer sub-tree delegation, new features such as Pressure Stall Information , and enhanced resource management and isolation. For more information, see Enabling Linux Control Group version 2 (cgroup v2) . 1.3.16.4.
crun container runtime (Technology Preview) OpenShift Container Platform now supports the crun container runtime in Technology Preview. You can switch between the crun container runtime and the default container runtime as needed by using a ContainerRuntimeConfig custom resource (CR). For more information, see About the container engine and container runtime . 1.3.16.5. Self Node Remediation Operator enhancements OpenShift Container Platform now supports control plane fencing by the Self Node Remediation Operator. In the event of node failure, you can follow remediation strategies on both worker nodes and control plane nodes. For more information, see the Workload Availability for Red Hat OpenShift documentation. 1.3.16.6. Node Health Check Operator enhancements OpenShift Container Platform now supports control plane fencing on the Node Health Check Operator. In the event of node failure, you can follow remediation strategies on both worker nodes and control plane nodes. For more information, see the Workload Availability for Red Hat OpenShift documentation. The Node Health Check Operator now also includes a web console plugin for managing Node Health Checks. For more information, see the Workload Availability for Red Hat OpenShift documentation. For installing or updating to the latest version of the Node Health Check Operator, use the stable subscription channel. For more information, see the Workload Availability for Red Hat OpenShift documentation. 1.3.17. Monitoring The monitoring stack for this release includes the following new and modified features. 1.3.17.1. Updates to monitoring stack components and dependencies This release includes the following version updates for monitoring stack components and dependencies: kube-state-metrics to 2.6.0 node-exporter to 1.4.0 prom-label-proxy to 0.5.0 Prometheus to 2.39.1 prometheus-adapter to 0.10.0 prometheus-operator to 0.60.1 Thanos to 0.28.1 1.3.17.2. Changes to alerting rules Note Red Hat does not guarantee backward compatibility for recording rules or alerting rules. New Added the TelemeterClientFailures alert, which triggers when a cluster tries and fails to submit Telemetry data at a certain rate over a period of time. The alert fires when the rate of failed requests reaches 20% of the total rate of requests within a 15-minute window. Changed The KubeAggregatedAPIDown alert now waits 900 seconds rather than 300 seconds before sending a notification. The NodeClockNotSynchronising and NodeClockSkewDetected alerts now only evaluate metrics from the node-exporter job. The NodeRAIDDegraded and NodeRAIDDiskFailure alerts now include a device label filter to match only the value returned by mmcblk.p.|nvme.|sd.|vd.|xvd.|dm-.|dasd.+ . The PrometheusHighQueryLoad and ThanosQueryOverload alerts now also trigger when a high querying load exists on the query layer. 1.3.17.3. New option to specify pod topology spread constraints for monitoring components You can now use pod topology spread constraints to control how Prometheus, Thanos Ruler, and Alertmanager pods are spread across a network topology when OpenShift Container Platform pods are deployed in multiple availability zones. 1.3.17.4. New option to improve data consistency for Prometheus Adapter You can now configure an optional kubelet service monitor for Prometheus Adapter (PA) that improves data consistency across multiple autoscaling requests. 
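A hedged sketch of how this might be enabled through the cluster monitoring ConfigMap; the dedicatedServiceMonitors option name is assumed from the cluster monitoring configuration reference and should be verified for your release:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    # Enables the optional kubelet service monitor for Prometheus Adapter
    k8sPrometheusAdapter:
      dedicatedServiceMonitors:
        enabled: true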
Enabling this service monitor eliminates the possibility that two queries sent at the same time to PA might yield different results because the underlying PromQL queries executed by PA might be on different Prometheus servers. 1.3.17.5. Update to Alertmanager configuration for additional secret keys With this release, if you configure an Alertmanager secret to hold additional keys and if the Alertmanager configuration references these keys as files (such as templates, TLS certificates, or tokens), your configuration settings must point to these keys by using an absolute path rather than a relative path. These keys are available under the /etc/alertmanager/config directory. In earlier releases of OpenShift Container Platform, you could use relative paths in your configuration to point to these keys because the Alertmanager configuration file was located in the same directory as the keys. Important If you are upgrading to OpenShift Container Platform 4.12 and have specified relative paths for additional Alertmanager secret keys that are referenced as files, you must change these relative paths to absolute paths in your Alertmanager configuration. Otherwise, alert receivers that use the files will fail to deliver notifications. 1.3.18. New Network Observability Operator As an administrator, you can now install the Network Observability Operator to observe the network traffic for OpenShift Container Platform cluster in the console. You can view and monitor the network traffic data in different graphical representations. The Network Observability Operator uses eBPF technology to create the network flows. The network flows are enriched with OpenShift Container Platform information, and stored in Loki. You can use the network traffic information for detailed troubleshooting and analysis. For more information, see Network Observability . 1.3.19. Scalability and performance 1.3.19.1. Disabling realtime using workload hints removes Receive Packet Steering from the cluster At the cluster level by default, a systemd service sets a Receive Packet Steering (RPS) mask for virtual network interfaces. The RPS mask routes interrupt requests from virtual network interfaces according to the list of reserved CPUs defined in the performance profile. At the container level, a CRI-O hook script also sets an RPS mask for all virtual network devices. With this update, if you set spec.workloadHints.realTime in the performance profile to False , the system also disables both the systemd service and the CRI-O hook script which set the RPS mask. The system disables these RPS functions because RPS is typically relevant to use cases requiring low-latency, realtime workloads only. To retain RPS functions even when you set spec.workloadHints.realTime to False , see the RPS Settings section of the Red Hat Knowledgebase solution Performance addons operator advanced configuration . For more information about configuring workload hints, see Understanding workload hints . 1.3.19.2. Tuned profile The tuned profile now defines the fs.aio-max-nr sysctl value by default, improving asynchronous I/O performance for default node profiles. 1.3.19.3. Support for new kernel features and options The low latency tuning has been updated to use the latest kernel features and options. The fix for 2117780 introduced a new per-CPU kthread , ktimers . This thread must be pinned to the proper CPU cores. With this update, there is no functional change; the isolation of the workload is the same. For more information, see 2102450 . 1.3.19.4. 
Power-saving configurations In OpenShift Container Platform 4.12, by enabling C-states and OS-controlled P-states, you can use different power-saving configurations for critical and non-critical workloads. You can apply the configurations through the new perPodPowerManagement workload hint, and the cpu-c-states.crio.io and cpu-freq-governor.crio.io CRI-O annotations. For more information about the feature, see Power-saving configurations . 1.3.19.5. Expanding Single-node OpenShift clusters with worker nodes using GitOps ZTP (Technology Preview) In OpenShift Container Platform 4.11, a feature allowing you to manually add worker nodes to single-node OpenShift clusters was introduced. This feature is now also available in GitOps ZTP. For more information, see Adding worker nodes to single-node OpenShift clusters with GitOps ZTP . 1.3.19.6. Factory-precaching-cli tool to reduce OpenShift Container Platform and Operator deployment times (Technology Preview) In OpenShift Container Platform 4.12, you can use the factory-precaching-cli tool to pre-cache OpenShift Container Platform and Operator images on a server at the factory, and then you can ship the pre-cached server to the site for deployment. For more information about the factory-precaching-cli tool, see Pre-caching images for single-node OpenShift deployments . 1.3.19.7. Zero touch provisioning (ZTP) integration of the factory-precaching-cli tool (Technology Preview) In OpenShift Container Platform 4.12, you can use the factory-precaching-cli tool in the GitOps ZTP workflow. For more information, see Pre-caching images for single-node OpenShift deployments . 1.3.19.8. Node tuning in a hosted cluster (Technology Preview) You can now configure OS-level tuning for nodes in a hosted cluster by using the Node Tuning Operator. To configure node tuning, you can create config maps in the management cluster that contain Tuned objects, and reference those config maps in your node pools. The tuning configuration that is defined in the Tuned objects is applied to the nodes in the node pool. For more information, see Configuring node tuning in a hosted cluster . 1.3.19.9. Kernel module management Operator The kernel module management (KMM) Operator replaces the Special Resource Operator (SRO). KMM includes the following features for connected environments only: Hub and spoke support for edge deployments Pre-flight checks for upgrade support Secure boot kernel module signing Must gather logs to assist with troubleshooting Binary firmware deployment 1.3.19.10. Hub and spoke cluster support (Technology Preview) For hub and spoke deployments in an environment that can access the internet, you can use the kernel module management (KMM) Operator deployed in the hub cluster to manage the deployment of the required kernel modules to one or more managed clusters. 1.3.19.11. Topology Aware Lifecycle Manager (TALM) Topology Aware Lifecycle Manager (TALM) now provides more detailed status information and messages, and redesigned conditions. You can use the ClusterLabelSelector field for greater flexibility in selecting clusters for update. You can use timeout settings to determine what happens if an update fails for a cluster, for example, skipping the failing cluster and continuing to upgrade other clusters, or stopping policy remediation for all clusters. For more information, see Topology Aware Lifecycle Manager for cluster updates . 1.3.19.12.
Mount namespace encapsulation (Technology Preview) Encapsulation is the process of moving all Kubernetes-specific mount points to an alternative namespace to reduce the visibility and performance impact of a large number of mount points in the default namespace. Previously, mount namespace encapsulation was deployed transparently in OpenShift Container Platform specifically for Distributed Units (DUs) installed using GitOps ZTP. In OpenShift Container Platform v4.12, this functionality is now available as a configurable option. A standard host operating system uses systemd to constantly scan all mount namespaces: both the standard Linux mounts and the numerous mounts that Kubernetes uses to operate. The current implementation of Kubelet and CRI-O both use the top-level namespace for all container and Kubelet mount points. Encapsulating these container-specific mount points in a private namespace reduces systemd overhead and enhances CPU performance. Encapsulation can also improve security by storing Kubernetes-specific mount points in a location safe from inspection by unprivileged users. For more information, see Optimizing CPU usage with mount namespace encapsulation . 1.3.19.13. Changing the workload partitioning CPU set in single-node OpenShift clusters that are deployed with GitOps ZTP You can configure the workload partitioning CPU set in single-node OpenShift clusters that you deploy with GitOps ZTP. To do this, you specify cluster management CPU resources with the cpuset field of the SiteConfig custom resource (CR) and the reserved field of the group PolicyGenTemplate CR. The value that you set for cpuset should match the value set in the cluster PerformanceProfile CR .spec.cpu.reserved field for workload partitioning. For more information, see Workload partitioning . 1.3.19.14. RHACM hub template functions now available for use with GitOps ZTP Hub template functions are now available for use with GitOps ZTP using Red Hat Advanced Cluster Management (RHACM) and Topology Aware Lifecycle Manager (TALM). Hub-side cluster templates reduce the need to create separate policies for many clusters with similar configurations but with different values. For more information, see Using hub templates in PolicyGenTemplate CRs . 1.3.19.15. ArgoCD managed cluster limits RHACM uses SiteConfig CRs to generate the Day 1 managed cluster installation CRs for ArgoCD. Each ArgoCD application can manage a maximum of 300 SiteConfig CRs. For more information, see Configuring the hub cluster with ArgoCD . 1.3.19.16. GitOps ZTP support for configuring policy compliance evaluation timeouts in PolicyGenTemplate CRs In GitOps ZTP v4.11+, a default policy compliance evaluation timeout value is available for use in PolicyGenTemplate custom resources (CRs). This value specifies how long the related ConfigurationPolicy CR can be in a state of policy compliance or non-compliance before RHACM re-evaluates the applied cluster policies. Optionally, you can now override the default evaluation intervals for all policies in PolicyGenTemplate CRs. For more information, see Configuring policy compliance evaluation timeouts for PolicyGenTemplate CRs . 1.3.19.17. Specifying the platform type for managed clusters The Assisted Installer currently supports the following OpenShift Container Platform platforms: BareMetal VSphere None Single-node OpenShift does not support VSphere . 1.3.19.18.
Configuring the hub cluster to use unauthenticated registries This release supports the use of unauthenticated registries when configuring the hub cluster. Registries that do not require authentication are listed under spec.unauthenticatedRegistries in the AgentServiceConfig resource. Any registry on this list is not required to have an entry in the pull secret used for the spoke cluster installation. assisted-service validates the pull secret by making sure it contains the authentication information for every image registry used for installation. For more information, see Configuring the hub cluster to use unauthenticated registries . 1.3.19.19. Ironic agent mirroring in disconnected GitOps ZTP installations For disconnected installations using GitOps ZTP, if you are deploying OpenShift Container Platform version 4.11 or earlier to a spoke cluster with converged flow enabled, you must mirror the default Ironic agent image to the local image repository. The default Ironic agent images are the following: AMD64 Ironic agent image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3f1d4d3cd5fbcf1b9249dd71d01be4b901d337fdc5f8f66569eb71df4d9d446 AArch64 Ironic agent image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb0edf19fffc17f542a7efae76939b1e9757dc75782d4727fb0aa77ed5809b43 For more information about mirroring images, see Mirroring the OpenShift Container Platform image repository . 1.3.19.20. Configuring kernel arguments for the Discovery ISO by using GitOps ZTP OpenShift Container Platform now supports specifying kernel arguments for the Discovery ISO in GitOps ZTP deployments. In both manual and automated GitOps ZTP deployments, the Discovery ISO is part of the OpenShift Container Platform installation process on managed bare-metal hosts. You can now edit the InfraEnv resource to specify kernel arguments for the Discovery ISO. This is useful for cluster installations with specific environmental requirements. For example, you can define the rd.net.timeout.carrier kernel argument to help configure the cluster for static networking. For more information about how to specify kernel arguments, see Configuring kernel arguments for the Discovery ISO by using GitOps ZTP and Configuring kernel arguments for the Discovery ISO for manual installations by using GitOps ZTP . 1.3.19.21. Deploy heterogeneous spoke clusters from a hub cluster With this update, you can create OpenShift Container Platform mixed-architecture clusters, also known as heterogeneous clusters, that feature hosts with both AMD64 and AArch64 CPU architectures. You can deploy a heterogeneous spoke cluster from a hub cluster managed by Red Hat Advanced Cluster Management (RHACM). To create a heterogeneous spoke cluster, add an AArch64 worker node to a deployed AMD64 cluster. To add an AArch64 worker node to a deployed AMD64 cluster, you can specify the AArch64 architecture, the multi-architecture release image, and the operating system required for the node by using an InfraEnv custom resource (CR). You can then provision the AArch64 worker node to the AMD64 cluster by using the Assisted Installer API and the InfraEnv CR. 1.3.19.22. HTTP transport replaces AMQP for PTP and bare-metal events (Technology Preview) HTTP is now the default transport in the PTP and bare-metal events infrastructure. AMQ Interconnect is end of life (EOL) from 30 June 2024. For more information, see About the PTP fast event notifications framework . 1.3.20. Insights Operator 1.3.20.1. 
Insights alerts In OpenShift Container Platform 4.12, active Insights recommendations are now presented to the user as alerts. You can view and configure these alerts with Alertmanager. 1.3.20.2. Insights Operator data collection enhancements In OpenShift Container Platform 4.12, the Insights Operator now collects the following metrics: console_helm_uninstalls_total console_helm_upgrades_total 1.3.21. Authentication and authorization 1.3.21.1. Application credentials on RHOSP You can now specify application credentials in the clouds.yaml files of clusters that run on Red Hat OpenStack Platform (RHOSP). Application credentials are an alternative to embedding user account details in configuration files. As an example, see the following section of a clouds.yaml file that includes user account details:

clouds:
  openstack:
    auth:
      auth_url: https://127.0.0.1:13000
      password: thepassword
      project_domain_name: Default
      project_name: theprojectname
      user_domain_name: Default
      username: theusername
    region_name: regionOne

Compare that section to one that uses application credentials:

clouds:
  openstack:
    auth:
      auth_url: https://127.0.0.1:13000
      application_credential_id: '5dc185489adc4b0f854532e1af81ffe0'
      application_credential_secret: 'PDCTKans2bPBbaEqBLiT_IajG8e5J_nJB4kvQHjaAy6ufhod0Zl0NkNoBzjn_bWSYzk587ieIGSlT11c4pVehA'
    auth_type: "v3applicationcredential"
    region_name: regionOne

To use application credentials with your cluster as a RHOSP administrator, create the credentials. Then, use them in a clouds.yaml file when you install a cluster. Alternatively, you can create the clouds.yaml file and rotate it into an existing cluster. 1.3.22. Hosted control planes (Technology Preview) 1.3.22.1. HyperShift API beta release now available The default version for the hypershift.openshift.io API, which is the API for hosted control planes on OpenShift Container Platform, is now v1beta1. Currently, for an existing cluster, the move from alpha to beta is not supported. 1.3.22.2. Versioning for hosted control planes With each major, minor, or patch version release of OpenShift Container Platform, the HyperShift Operator is released. The HyperShift command-line interface (CLI) is released as part of each HyperShift Operator release. The HostedCluster and NodePool API resources are available in the beta version of the API and follow a similar policy to OpenShift Container Platform and Kubernetes . 1.3.22.3. Backing up and restoring etcd on a hosted cluster If you use hosted control planes on OpenShift Container Platform, you can back up and restore etcd by taking a snapshot of etcd and uploading it to a location where you can retrieve it later, such as an S3 bucket. Later, if needed, you can restore the snapshot. For more information, see Backing up and restoring etcd on a hosted cluster . 1.3.22.4. Disaster recovery for a hosted cluster within an AWS region In a situation where you need disaster recovery for a hosted cluster, you can recover the hosted cluster to the same region within AWS. For more information, see Disaster recovery for a hosted cluster within an AWS region . 1.3.23. Red Hat Virtualization (RHV) This release provides several updates to Red Hat Virtualization (RHV). With this release: The oVirt CSI driver logging was revised with new error messages to improve the clarity and readability of the logs. The cluster API provider automatically updates oVirt and Red Hat Virtualization (RHV) credentials when they are changed in OpenShift Container Platform. 1.4.
Notable technical changes OpenShift Container Platform 4.12 introduces the following notable technical changes. AWS Security Token Service regional endpoints The Cloud Credential Operator utility ( ccoctl ) now creates secrets that use regional endpoints for the AWS Security Token Service (AWS STS) . This approach aligns with AWS recommended best practices. cert-manager Operator general availability cert-manager Operator is generally available in OpenShift Container Platform 4.12. Credentials requests directory parameter for deleting GCP resources with the Cloud Credential Operator utility With this release, when you delete GCP resources with the Cloud Credential Operator utility , you must specify the directory containing the files for the component CredentialsRequest objects. Future restricted enforcement for pod security admission Currently, pod security violations are shown as warnings and logged in the audit logs, but do not cause the pod to be rejected. Global restricted enforcement for pod security admission is currently planned for the next minor release of OpenShift Container Platform. When this restricted enforcement is enabled, pods with pod security violations will be rejected. To prepare for this upcoming change, ensure that your workloads match the pod security admission profile that applies to them. Workloads that are not configured according to the enforced security standards defined globally or at the namespace level will be rejected. The restricted-v2 SCC admits workloads according to the Restricted Kubernetes definition. If you are receiving pod security violations, see the following resources: See Identifying pod security violations for information about how to find which workloads are causing pod security violations. See Security context constraint synchronization with pod security standards to understand when pod security admission label synchronization is performed. Pod security admission labels are not synchronized in certain situations, such as the following situations: The workload is running in a system-created namespace that is prefixed with openshift- . The workload is running on a pod that was created directly without a pod controller. If necessary, you can set a custom admission profile on the namespace or pod by setting the pod-security.kubernetes.io/enforce label. Catalog sources and restricted pod security admission enforcement Catalog sources built using the SQLite-based catalog format and a version of the opm CLI tool released before OpenShift Container Platform 4.11 cannot run under restricted pod security enforcement. In OpenShift Container Platform 4.12, namespaces do not have restricted pod security enforcement by default and the default catalog source security mode is set to legacy . If you do not want to run your SQLite-based catalog source pods under restricted pod security enforcement, you do not need to update your catalog source in OpenShift Container Platform 4.12. However, to ensure your catalog sources run in future OpenShift Container Platform releases, you must update your catalog sources to run under restricted pod security enforcement. As a catalog author, you can enable compatibility with restricted pod security enforcement by completing either of the following actions: Migrate your catalog to the file-based catalog format. Update your catalog image with a version of the opm CLI tool released with OpenShift Container Platform 4.11 or later.
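For catalogs that cannot yet be updated, the catalog source can instead be pinned to the legacy security mode explicitly; a minimal hedged sketch of a CatalogSource follows, where the name and image reference are placeholders and the grpcPodConfig.securityContextConfig field is assumed from the OLM API in this release:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-sqlite-catalog                          # placeholder name
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: registry.example.com/my-catalog:latest    # placeholder image
  displayName: Example SQLite catalog
  grpcPodConfig:
    # legacy keeps the pre-restricted pod security behavior for this catalog pod
    securityContextConfig: legacy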
If you do not want to update your SQLite database catalog image or migrate your catalog to the file-based catalog format, you can configure your catalog to run with elevated permissions. For more information, see Catalog sources and pod security admission . Operator SDK 1.25.4 OpenShift Container Platform 4.12 supports Operator SDK 1.25.4. See Installing the Operator SDK CLI to install or update to this latest version. Note Operator SDK 1.25.4 supports Kubernetes 1.25. For more information, see Beta APIs removed from Kubernetes 1.25 and Validating bundle manifests for APIs removed from Kubernetes 1.25 . If you have Operator projects that were previously created or maintained with Operator SDK 1.22.2, update your projects to keep compatibility with Operator SDK 1.25.4. Updating Go-based Operator projects Updating Ansible-based Operator projects Updating Helm-based Operator projects Updating Hybrid Helm-based Operator projects Updating Java-based Operator projects LVM Operator is now called Logical Volume Manager Storage The LVM Operator that was previously delivered with Red Hat OpenShift Data Foundation requires installation through the OpenShift Data Foundation. In OpenShift Container Platform v4.12, the LVM Operator has been renamed Logical Volume Manager Storage . Now, you install it as a standalone Operator from the OpenShift Operator catalog. Logical Volume Manager Storage provides dynamic provisioning of block storage on a single, limited resources single-node OpenShift cluster. End of support for RHOSP 16.1 OpenShift Container Platform no longer supports RHOSP 16.1 as a deployment target. See OpenShift Container Platform on Red Hat OpenStack Platform Support Matrix for complete details. 1.5. Deprecated and removed features Some features available in releases have been deprecated or removed. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality deprecated and removed within OpenShift Container Platform 4.12, refer to the table below. Additional details for more functionality that has been deprecated and removed are listed after the table. In the following tables, features are marked with the following statuses: General Availability Deprecated Removed Operator deprecated and removed features Table 1.2. Operator deprecated and removed tracker Feature 4.10 4.11 4.12 SQLite database format for Operator catalogs Deprecated Deprecated Deprecated Images deprecated and removed features Table 1.3. Images deprecated and removed tracker Feature 4.10 4.11 4.12 ImageChangesInProgress condition for Cluster Samples Operator Deprecated Deprecated Deprecated MigrationInProgress condition for Cluster Samples Operator Deprecated Deprecated Deprecated Removal of Jenkins images from install payload General Availability Removed Removed Monitoring deprecated and removed features Table 1.4. Monitoring deprecated and removed tracker Feature 4.10 4.11 4.12 Grafana component in monitoring stack Deprecated Removed Removed Access to Prometheus and Grafana UIs in monitoring stack Deprecated Removed Removed Installation deprecated and removed features Table 1.5. 
Installation deprecated and removed tracker
Feature 4.10 4.11 4.12
vSphere 6.x or earlier Deprecated Removed Removed
vSphere 7.0 Update 1 or earlier General Availability Deprecated Deprecated
VMware ESXi 6.x or earlier Deprecated Removed Removed
VMware ESXi 7.0 Update 1 or earlier General Availability Deprecated Deprecated
CoreDNS wildcard queries for the cluster.local domain General Availability General Availability Deprecated
ingressVIP and apiVIP settings in the install-config.yaml file for installer-provisioned infrastructure clusters General Availability General Availability Deprecated
Updating clusters deprecated and removed features
Table 1.6. Updating clusters deprecated and removed tracker
Feature 4.10 4.11 4.12
Virtual hardware version 13 Deprecated Removed Removed
Storage deprecated and removed features
Table 1.7. Storage deprecated and removed tracker
Feature 4.10 4.11 4.12
Snapshot.storage.k8s.io/v1beta1 API endpoint Deprecated Removed Removed
Persistent storage using FlexVolume Deprecated Deprecated Deprecated
Authentication and authorization deprecated and removed features
Table 1.8. Authentication and authorization deprecated and removed tracker
Feature 4.10 4.11 4.12
Automatic generation of service account token secrets General Availability Removed Removed
Specialized hardware and driver enablement deprecated and removed features
Table 1.9. Specialized hardware and driver enablement deprecated and removed tracker
Feature 4.10 4.11 4.12
Special Resource Operator (SRO) Technology Preview Technology Preview Removed
Multi-architecture deprecated and removed features
Table 1.10. Multi-architecture deprecated and removed tracker
Feature 4.10 4.11 4.12
IBM POWER8 all models ( ppc64le ) General Availability General Availability Deprecated
IBM POWER9 AC922 ( ppc64le ) General Availability General Availability Deprecated
IBM POWER9 IC922 ( ppc64le ) General Availability General Availability Deprecated
IBM POWER9 LC922 ( ppc64le ) General Availability General Availability Deprecated
IBM z13 all models ( s390x ) General Availability General Availability Deprecated
IBM LinuxONE Emperor ( s390x ) General Availability General Availability Deprecated
IBM LinuxONE Rockhopper ( s390x ) General Availability General Availability Deprecated
AMD64 (x86_64) v1 CPU General Availability General Availability Deprecated
Networking deprecated and removed features
Table 1.11. Networking deprecated and removed tracker
Feature 4.10 4.11 4.12
Kuryr on RHOSP General Availability General Availability Deprecated
Web console deprecated and removed features
Table 1.12. Web console deprecated and removed tracker
Feature 4.10 4.11 4.12
Multicluster console (Technology Preview) Removed Removed Removed
1.5.1. Deprecated features
1.5.1.1. Red Hat Virtualization (RHV) as a host platform for OpenShift Container Platform will be deprecated
Red Hat Virtualization (RHV) will be deprecated in an upcoming release of OpenShift Container Platform. Support for OpenShift Container Platform on RHV will be removed from a future OpenShift Container Platform release, currently planned as OpenShift Container Platform 4.14.
1.5.1.2. Wildcard DNS queries for the cluster.local domain are deprecated
CoreDNS will stop supporting wildcard DNS queries for names under the cluster.local domain. These queries will resolve in OpenShift Container Platform 4.12 as they do in earlier versions, but support will be removed from a future OpenShift Container Platform release.
1.5.1.3.
Specific hardware models on ppc64le , s390x , and x86_64 v1 CPU architectures are deprecated In OpenShift Container Platform 4.12, support for RHCOS functionality is deprecated for: IBM POWER8 all models (ppc64le) IBM POWER9 AC922 (ppc64le) IBM POWER9 IC922 (ppc64le) IBM POWER9 LC922 (ppc64le) IBM z13 all models (s390x) LinuxONE Emperor (s390x) LinuxONE Rockhopper (s390x) AMD64 (x86_64) v1 CPU While these hardware models remain fully supported in OpenShift Container Platform 4.12, Red Hat recommends that you use later hardware models. 1.5.1.4. Kuryr support for clusters that run on RHOSP In OpenShift Container Platform 4.12, support for Kuryr on clusters that run on RHOSP is deprecated. Support will be removed no earlier than OpenShift Container Platform 4.14. 1.5.2. Removed features 1.5.2.1. Beta APIs removed from Kubernetes 1.25 Kubernetes 1.25 removed the following deprecated APIs, so you must migrate manifests and API clients to use the appropriate API version. For more information about migrating removed APIs, see the Kubernetes documentation . Table 1.13. APIs removed from Kubernetes 1.25 Resource Removed API Migrate to Notable changes CronJob batch/v1beta1 batch/v1 No EndpointSlice discovery.k8s.io/v1beta1 discovery.k8s.io/v1 Yes Event events.k8s.io/v1beta1 events.k8s.io/v1 Yes HorizontalPodAutoscaler autoscaling/v2beta1 autoscaling/v2 No PodDisruptionBudget policy/v1beta1 policy/v1 Yes PodSecurityPolicy policy/v1beta1 Pod Security Admission [1] Yes RuntimeClass node.k8s.io/v1beta1 node.k8s.io/v1 No For more information about pod security admission in OpenShift Container Platform, see Understanding and managing pod security admission . 1.5.2.2. Empty file and stdout support for the oc registry login command The --registry-config and --to option options for the oc registry login command now stop accepting empty files. These options continue to work with files that do not exist. The ability to write output to - (stdout) is also removed. 1.5.2.3. RHEL 7 support for the OpenShift CLI (oc) has been removed Support for using Red Hat Enterprise Linux (RHEL) 7 with the OpenShift CLI ( oc ) has been removed. If you use the OpenShift CLI ( oc ) with RHEL, you must use RHEL 8 or later. 1.5.2.4. OpenShift CLI (oc) commands have been removed The following OpenShift CLI ( oc ) commands were removed with this release: oc adm migrate etcd-ttl oc adm migrate image-references oc adm migrate legacy-hpa oc adm migrate storage 1.5.2.5. Grafana component removed from monitoring stack The Grafana component is no longer a part of the OpenShift Container Platform 4.12 monitoring stack. As an alternative, go to Observe Dashboards in the OpenShift Container Platform web console to view monitoring dashboards. 1.5.2.6. Prometheus and Grafana user interface access removed from monitoring stack Access to the third-party Prometheus and Grafana user interfaces have been removed from the OpenShift Container Platform 4.12 monitoring stack. As an alternative, click Observe in the OpenShift Container Platform web console to view alerting, metrics, dashboards, and metrics targets for monitoring components. 1.5.2.7. Support for virtual hardware version 13 is removed In OpenShift Container Platform 4.11, support for virtual hardware version 13 is removed. Support for virtual hardware version 13 was deprecated in OpenShift Container Platform 4.9. Red Hat recommends that you use virtual hardware version 15 or later. 1.5.2.8. 
Support for snapshot v1beta1 API endpoint is removed
In OpenShift Container Platform 4.11, support for the snapshot.storage.k8s.io/v1beta1 API endpoint is removed. Support for the snapshot.storage.k8s.io/v1beta1 API endpoint was deprecated in OpenShift Container Platform 4.7. Red Hat recommends that you use snapshot.storage.k8s.io/v1 . All objects created as v1beta1 are available through the v1 endpoint.
1.5.2.9. Support for manually deploying a custom scheduler has been removed
Support for deploying custom schedulers manually has been removed with this release. Use the Secondary Scheduler Operator for Red Hat OpenShift instead to deploy a custom secondary scheduler in OpenShift Container Platform.
1.5.2.10. Support for deploying single-node OpenShift with OpenShiftSDN has been removed
Support for deploying single-node OpenShift clusters with OpenShiftSDN has been removed with this release. OVN-Kubernetes is the default networking solution for single-node OpenShift deployments.
1.5.2.11. Removal of Jenkins images from install payload
OpenShift Container Platform 4.11 moves the "OpenShift Jenkins" and "OpenShift Agent Base" images to the ocp-tools-4 repository at registry.redhat.io so that Red Hat can produce and update the images outside the OpenShift Container Platform lifecycle. Previously, these images were in the OpenShift Container Platform install payload and the openshift4 repository at registry.redhat.io . For more information, see OpenShift Jenkins .
OpenShift Container Platform 4.11 removes the "OpenShift Jenkins Maven" and "NodeJS Agent" images from its payload. Previously, OpenShift Container Platform 4.10 deprecated these images. Red Hat no longer produces these images, and they are not available from the ocp-tools-4 repository at registry.redhat.io . However, upgrading to OpenShift Container Platform 4.11 does not remove the "OpenShift Jenkins Maven" and "NodeJS Agent" images from 4.10 and earlier releases. Red Hat provides bug fixes and support for these images through the end of the 4.10 release lifecycle, in accordance with the OpenShift Container Platform lifecycle policy . For more information, see OpenShift Jenkins .
1.5.3. Future Kubernetes API removals
The next minor release of OpenShift Container Platform is expected to use Kubernetes 1.26. Currently, Kubernetes 1.26 is scheduled to remove several deprecated APIs. See the Deprecated API Migration Guide in the upstream Kubernetes documentation for the list of planned Kubernetes API removals. See Navigating Kubernetes API deprecations and removals for information about how to check your cluster for Kubernetes APIs that are planned for removal.
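As an illustration of the kind of migration involved, a CronJob manifest that targeted the batch/v1beta1 API removed in Kubernetes 1.25 typically needs only its apiVersion updated to batch/v1, as noted in the removals table above. The following hypothetical minimal manifest shows the migrated form; the name, schedule, and image are placeholders:
# Hypothetical minimal CronJob using the batch/v1 API (previously batch/v1beta1).
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cleanup
spec:
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: cleanup
            image: registry.access.redhat.com/ubi8/ubi-minimal
            command: ["sh", "-c", "echo cleanup run"]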
1.6. Bug fixes
API Server and Authentication
Previously, the Cluster Authentication Operator state was set to progressing = false after receiving a workloadIsBeingUpdatedTooLong error. At the same time, degraded = false was kept for the time of the inertia defined. Consequently, the shortened progressing period and the extended degraded period could cause progressing = false and degraded = false to be set prematurely. This caused inconsistent OpenShift CI tests because a healthy state was assumed, which was incorrect. This issue has been fixed by removing the progressing = false setting after the workloadIsBeingUpdatedTooLong error is returned. Now, because there is no progressing = false state, OpenShift CI tests are more consistent. ( BZ#2111842 )
Bare Metal Hardware Provisioning
In recent versions of server firmware, the time between server operations has increased. This causes timeouts during installer-provisioned infrastructure installations when the OpenShift Container Platform installation program waits for a response from the Baseboard Management Controller (BMC). The new python3-sushy release increases the number of server-side attempts to contact the BMC. This update accounts for the extended waiting time and avoids timeouts during installation. ( OCPBUGS-4097 )
Before this update, the Ironic provisioning service did not support Baseboard Management Controllers (BMC) that use weak eTags combined with strict eTag validation. By design, if the BMC provides a weak eTag, Ironic returns two eTags: the original eTag and the original eTag converted to the strong format for compatibility with BMCs that do not support weak eTags. Although Ironic can send two eTags, BMCs using strict eTag validation reject such requests due to the presence of the second eTag. As a result, on some older server hardware, bare-metal provisioning failed with the following error: HTTP 412 Precondition Failed . In OpenShift Container Platform 4.12 and later, this behavior changes and Ironic no longer attempts to send two eTags in cases where a weak eTag is provided. Instead, if a Redfish request dependent on an eTag fails with an eTag validation error, Ironic retries the request with known workarounds. This minimizes the risk of bare-metal provisioning failures on machines with strict eTag validation. ( OCPBUGS-3479 )
Before this update, when a Redfish system features a Settings URI, the Ironic provisioning service always attempts to use this URI to make changes to boot-related BIOS settings. However, bare-metal provisioning fails if the Baseboard Management Controller (BMC) features a Settings URI but does not support changing a particular BIOS setting by using this Settings URI. In OpenShift Container Platform 4.12 and later, if a system features a Settings URI, Ironic verifies that it can change a particular BIOS setting by using the Settings URI before proceeding. Otherwise, Ironic implements the change by using the System URI. This additional logic ensures that Ironic can apply boot-related BIOS setting changes and bare-metal provisioning can succeed. ( OCPBUGS-2052 )
Builds
By default, Buildah prints steps to the log file, including the contents of environment variables, which might include build input secrets. Although you can use the --quiet build argument to suppress printing of those environment variables, this argument is not available if you use the source-to-image (S2I) build strategy. The current release fixes this issue. To suppress printing of environment variables, set the BUILDAH_QUIET environment variable in your build configuration:
sourceStrategy:
  ...
  env:
    - name: "BUILDAH_QUIET"
      value: "true"
( BZ#2099991 )
Cloud Compute
Previously, instances were not set to respect the GCP infrastructure default option for automated restarts. As a result, instances could be created without using the infrastructure default for automatic restarts. This sometimes meant that instances were terminated in GCP but their associated machines were still listed in the Running state because they did not automatically restart. With this release, the code for passing the automatic restart option has been improved to better detect and pass on the default option selection from users. Instances now use the infrastructure default properly and are automatically restarted when the user requests the default functionality.
( OCPBUGS-4504 ) The v1beta1 version of the PodDisruptionBudget object is now deprecated in Kubernetes. With this release, internal references to v1beta1 are replaced with v1 . This change is internal to the cluster autoscaler and does not require user action beyond the advice in the Preparing to upgrade to OpenShift Container Platform 4.12 Red Hat Knowledgebase Article. ( OCPBUGS-1484 ) Previously, the GCP machine controller reconciled the state of machines every 10 hours. Other providers set this value to 10 minutes so that changes that happen outside of the Machine API system are detected within a short period. The longer reconciliation period for GCP could cause unexpected issues such as missing certificate signing requests (CSR) approvals due to an external IP address being added but not detected for an extended period. With this release, the GCP machine controller is updated to reconcile every 10 minutes to be consistent with other platforms and so that external changes are picked up sooner. ( OCPBUGS-4499 ) Previously, due to a deployment misconfiguration for the Cluster Machine Approver Operator, enabling the TechPreviewNoUpgrade feature set caused errors and sporadic Operator degradation. Because clusters with the TechPreviewNoUpgrade feature set enabled use two instances of the Cluster Machine Approver Operator and both deployments used the same set of ports, there was a conflict that lead to errors for single-node topology. With this release, the Cluster Machine Approver Operator deployment is updated to use a different set of ports for different deployments. ( OCPBUGS-2621 ) Previously, the scale from zero functionality in Azure relied on a statically compiled list of instance types mapping the name of the instance type to the number of CPUs and the amount of memory allocated to the instance type. This list grew stale over time. With this release, information about instance type sizes is dynamically gathered from the Azure API directly to prevent the list from becoming stale. ( OCPBUGS-2558 ) Previously, Machine API termination handler pods did not start on spot instances. As a result, pods that were running on tainted spot instances did not receive a termination signal if the instance was terminated. This could result in loss of data in workload applications. With this release, the Machine API termination handler deployment is modified to tolerate the taints and pods running on spot instances with taints now receive termination signals. ( OCPBUGS-1274 ) Previously, error messages for Azure clusters did not explain that it is not possible to create new machines with public IP addresses for a disconnected install that uses only the internal publish strategy. With this release, the error message is updated for improved clarity. ( OCPBUGS-519 ) Previously, the Cloud Controller Manager Operator did not check the cloud-config configuration file for AWS clusters. As a result, it was not possible to pass additional settings to the AWS cloud controller manager component by using the configuration file. With this release, the Cloud Controller Manager Operator checks the infrastructure resource and parses references to the cloud-config configuration file so that users can configure additional settings. ( BZ#2104373 ) Previously, when Azure added new instance types and enabled accelerated networking support on instance types that previously did not have it, the list of Azure instances in the machine controller became outdated. 
As a result, the machine controller could not create machines with instance types that did not previously support accelerated networking, even if they support this feature on Azure. With this release, the required instance type information is retrieved from the Azure API before the machine is created to keep it up to date, so the machine controller is able to create machines with new and updated instance types. This fix also applies to any instance types that are added in the future. ( BZ#2108647 )
Previously, the cluster autoscaler did not respect the AWS, IBM Cloud, and Alibaba Cloud topology labels for the CSI drivers when using the Cluster API provider. As a result, nodes with the topology label were not processed properly by the autoscaler when attempting to balance nodes during a scale-out event. With this release, the autoscaler's custom processors are updated so that it respects this label. The autoscaler can now balance similar node groups that are labeled by the AWS, IBM Cloud, or Alibaba CSI labels. ( BZ#2001027 )
Previously, Power VS cloud providers were not capable of fetching the machine IP address from a DHCP server. Changing the IP address did not update the node, which caused some inconsistencies, such as pending certificate signing requests. With this release, the Power VS cloud provider is updated to fetch the machine IP address from the DHCP server so that the IP addresses for the nodes are consistent with the machine IP address. ( BZ#2111474 )
Previously, machines created in early versions of OpenShift Container Platform with invalid configurations could not be deleted. With this release, the webhooks that prevent the creation of machines with invalid configurations no longer prevent the deletion of existing invalid machines. Users can now successfully remove these machines from their cluster by manually removing the finalizers on these machines. ( BZ#2101736 )
Previously, short DHCP lease times, caused by NetworkManager not being run as a daemon or in continuous mode, caused machines to become stuck during initial provisioning and never become nodes in the cluster. With this release, extra checks are added so that if a machine becomes stuck in this state it is deleted and recreated automatically. Machines that are affected by this network condition can become nodes after a reboot from the Machine API controller. ( BZ#2115090 )
Previously, when creating a new Machine resource using a machine profile that does not exist in IBM Cloud, the machines became stuck in the Provisioning phase. With this release, validation is added to the IBM Cloud Machine API provider to ensure that a machine profile exists, and machines with an invalid machine profile are rejected by the Machine API. ( BZ#2062579 )
Previously, the Machine API provider for AWS did not verify that the security group defined in the machine specification exists. Instead of returning an error in this case, it used a default security group, which should not be used for OpenShift Container Platform machines, and successfully created a machine without informing the user that the default group was used. With this release, the Machine API returns an error when users set either incorrect or empty security group names in the machine specification. ( BZ#2060068 )
Previously, the Machine API provider Azure did not treat user-provided values for instance types as case sensitive. This led to false-positive errors when instance types were correct but did not match the case.
With this release, instance types are converted to the lowercase characters so that users get correct results without false-positive errors for mismatched case. ( BZ#2085390 ) Previously, there was no check for nil values in the annotations of a machine object before attempting to access the object. This situation was rare, but caused the machine controller to panic when reconciling the machine. With this release, nil values are checked and the machine controller is able to reconcile machines without annotations. ( BZ#2106733 ) Previously, the cluster autoscaler metrics for cluster CPU and memory usage would never reach, or exceed, the limits set by the ClusterAutoscaler resource. As a result, no alerts were fired when the cluster autoscaler could not scale due to resource limitations. With this release, a new metric called cluster_autoscaler_skipped_scale_events_count is added to the cluster autoscaler to more accurately detect when resource limits are reached or exceeded. Alerts will now fire when the cluster autoscaler is unable to scale the cluster up because it has reached the cluster resource limits. ( BZ#1997396 ) Previously, when the Machine API provider failed to fetch the machine IP address, it would not set the internal DNS name and the machine certificate signing requests were not automatically approved. With this release, the Power VS machine provider is updated to set the server name as the internal DNS name even when it fails to fetch the IP address. ( BZ#2111467 ) Previously, the Machine API vSphere machine controller set the PowerOn flag when cloning a VM. This created a PowerOn task that the machine controller was not aware of. If that PowerOn task failed, machines were stuck in the Provisioned phase but never powered on. With this release, the cloning sequence is altered to avoid the issue. Additionally, the machine controller now retries powering on the VM in case of failure and reports failures properly. ( BZ#2087981 , OCPBUGS-954 ) With this release, AWS security groups are tagged immediately instead of after creation. This means that fewer requests are sent to AWS and the required user privileges are lowered. ( BZ#2098054 , OCPBUGS-3094 ) Previously, a bug in the RHOSP legacy cloud provider resulted in a crash if certain RHOSP operations were attempted after authentication had failed. For example, shutting down a server causes the Kubernetes controller manager to fetch server information from RHOSP, which triggered this bug. As a result, if initial cloud authentication failed or was configured incorrectly, shutting down a server caused the Kubernetes controller manager to crash. With this release, the RHOSP legacy cloud provider is updated to not attempt any RHOSP API calls if it has not previously authenticated successfully. Now, shutting down a server with invalid cloud credentials no longer causes Kubernetes controller manager to crash. ( BZ#2102383 ) Developer Console Previously, the openshift-config namespace was hard coded for the HelmChartRepository custom resource, which was the same namespace for the ProjectHelmChartRepository custom resource. This prevented users from adding private ProjectHelmChartRepository custom resources in their desired namespace. Consequently, users were unable to access secrets and configmaps in the openshift-config namespace. This update fixes the ProjectHelmChartRepository custom resource definition with a namespace field that can read the secret and configmaps from a namespace of choice by a user with the correct permissions. 
Additionally, the user can add secrets and configmaps to the accessible namespace, and they can add private Helm chart repositories in the namespace used to create the resources. ( BZ#2071792 )
Image Registry
Previously, the image trigger controller did not have permissions to change objects. Consequently, image trigger annotations did not work on some resources. This update creates a cluster role binding that provides the controller the required permissions to update objects according to annotations. ( BZ#2055620 )
Previously, the Image Registry Operator did not have a progressing condition for the node-ca daemon set and used generation from an incorrect object. Consequently, the node-ca daemon set could be marked as degraded while the Operator was still running. This update adds the progressing condition, which indicates that the installation is not complete. As a result, the Image Registry Operator successfully installs the node-ca daemon set, and the installer waits until it is fully deployed. ( BZ#2093440 )
Installer
Previously, the number of supported user-defined tags was 8, and reserved OpenShift Container Platform tags were 2 for AWS resources. With this release, the number of supported user-defined tags is now 25 and reserved OpenShift Container Platform tags are 25 for AWS resources. You can now add up to 25 user tags during installation. ( CFE#592 )
Previously, installing a cluster on Amazon Web Services started and then failed when the IAM administrative user was not assigned the s3:GetBucketPolicy permission. This update adds this policy to the checklist that the installation program uses to ensure that all of the required permissions are assigned. As a result, the installation program now stops the installation with a warning that the IAM administrative user is missing the s3:GetBucketPolicy permission. ( BZ#2109388 )
Previously, installing a cluster on Microsoft Azure failed when the Azure DCasv5-series or DCadsv5-series of confidential VMs were specified as control plane nodes. With this update, the installation program now stops the installation with an error, which states that confidential VMs are not yet supported. ( BZ#2055247 )
Previously, gathering bootstrap logs was not possible until the control plane machines were running. With this update, gathering bootstrap logs now only requires that the bootstrap machine be available. ( BZ#2105341 )
Previously, if a cluster failed to install on Google Cloud Platform because the service account had insufficient permissions, the resulting error message did not mention this as the cause of the failure. This update improves the error message, which now instructs users to check the permissions that are assigned to the service account. ( BZ#2103236 )
Previously, when an installation on Google Cloud provider (GCP) failed because an invalid GCP region was specified, the resulting error message did not mention this as the cause of the failure. This update improves the error message, which now states the region is not valid. ( BZ#2102324 )
Previously, cluster installations using Hive could fail if Hive used an older version of the install-config.yaml file. This update allows the installation program to accept older versions of the install-config.yaml file provided by Hive. ( BZ#2098299 )
Previously, the installation program would incorrectly allow the apiVIP and ingressVIP parameters to use the same IPv6 address if they represented the address differently, such as listing the address in an abbreviated format.
In this update, the installer correctly validates these two parameters regardless of their formatting, requiring separate IP addresses for each parameter. ( BZ#2103144 ) Previously, uninstalling a cluster using the installation program failed to delete all resources in clusters installed on GCP if the cluster name was more than 22 characters long. In this update, uninstalling a cluster using the installation program correctly locates and deletes all GCP cluster resources in cases of long cluster names. ( BZ#2076646 ) Previously, when installing a cluster on Red Hat OpenStack Platform (RHOSP) with multiple networks defined in the machineNetwork parameter, the installation program only created security group rules for the first network. With this update, the installation program creates security group rules for all networks defined in the machineNetwork so that users no longer need to manually edit security group rules after installation. ( BZ#2095323 ) Previously, users could manually set the API and Ingress virtual IP addresses to values that conflicted with the allocation pool of the DHCP server when installing a cluster on OpenStack. This could cause the DHCP server to assign one of the VIP addresses to a new machine, which would fail to start. In this update, the installation program validates the user-provided VIP addresses to ensure that they do not conflict with any DHCP pools. ( BZ#1944365 ) Previously, when installing a cluster on vSphere using a datacenter that is embedded inside a folder, the installation program could not locate the datacenter object, causing the installation to fail. In this update, the installation program can traverse the directory that contains the datacenter object, allowing the installation to succeed. ( BZ#2097691 ) Previously, when installing a cluster on Azure using arm64 architecture with installer-provisioned infrastructure, the image definition resource for hyperVGeneration V1 incorrectly had an architecture value of x64 . With this update, the image definition resource for hyperVGeneration V1 has the correct architecture value of Arm64 . ( OCPBUGS-3639 ) Previously, when installing a cluster on VMware vSphere, the installation could fail if the user specified a user-defined folder in the failureDomain section of the install-config.yaml file. With this update, the installation program correctly validates user-defined folders in the failureDomain section of the install-config.yaml file. ( OCPBUGS-3343 ) Previously, when destroying a partially deployed cluster after an installation failed on VMware vSphere, some virtual machine folders were not destroyed. This error could occur in clusters configured with multiple vSphere datacenters or multiple vSphere clusters. With this update, all installer-provisioned infrastructure is correctly deleted when destroying a partially deployed cluster after an installation failure. ( OCPBUGS-1489 ) Previously, when installing a cluster on VMware vSphere, the installation failed if the user specified the platform.vsphere.vcenters parameter but did not specify the platform.vsphere.failureDomains.topology.networks parameter in the install-config.yaml file. With this update, the installation program alerts the user that the platform.vsphere.failureDomains.topology.networks field is required when specifying platform.vsphere.vcenters . 
( OCPBUGS-1698 ) Previously, when installing a cluster on VMware vSphere, the installation failed if the user defined the platform.vsphere.vcenters and platform.vsphere.failureDomains parameters but did not define platform.vsphere.defaultMachinePlatform.zones , or compute.platform.vsphere.zones and controlPlane.platform.vsphere.zones . With this update, the installation program validates that the user has defined the zones parameter in multi-region or multi-zone deployments prior to installation. ( OCPBUGS-1490 ) Kubernetes Controller Manager Previously, the Kubernetes Controller Manager Operator reported degraded on environments without a monitoring stack presence. With this update, the Kubernetes Controller Manager Operator skips checking the monitoring for cues about degradation when the monitoring stack is not present. ( BZ#2118286 ) With this update, Kubernetes Controller Manager alerts ( KubeControllerManagerDown , PodDisruptionBudgetAtLimit , PodDisruptionBudgetLimit , and GarbageCollectorSyncFailed ) have links to Github runbooks. The runbooks help users to understand debug these alerts. ( BZ#2001409 ) Kubernetes Scheduler Previously, the secondary scheduler deployment was not deleted after a secondary scheduler custom resource was deleted. Consequently, the Secondary Schedule Operator and Operand were not fully uninstalled. With this update, the correct owner reference is set in the secondary scheduler custom resource so that it points to the secondary scheduler deployment. As a result, secondary scheduler deployments are deleted when the secondary scheduler custom resource is deleted. ( BZ#2100923 ) For the OpenShift Container Platform 4.12 release, the descheduler can now publish events to an API group because the release adds additional role-based access controls (RBAC) rules to the descheduler's profile.( OCPBUGS-2330 ) Machine Config Operator Previously, the Machine Config Operator (MCO) ControllerConfig resource, which contains important certificates, was only synced if the Operator's daemon sync succeeded. By design, unready nodes during a daemon sync prevent that daemon sync from succeeding, so unready nodes were indirectly preventing the ControllerConfig resource, and therefore those certificates, from syncing. This resulted in eventual cluster degradation when there were unready nodes due to inability to rotate the certificates contained in the ControllerConfig resource. With this release, the sync of the ControllerConfig resource is no longer dependent on the daemon sync succeeding, so the ControllerConfig resource now continues to sync if the daemon sync fails. This means that unready nodes no longer prevent the ControllerConfig resource from syncing, so certificates continue to be updated even when there are unready nodes. ( BZ#2034883 ) Management Console Previously, the Operator details page attempted to display multiple error messages, but the error message component can only display a single error message at a time. As a result, relevant error messages were not displayed. With this update, the Operator details page displays only the first error message so the user sees a relevant error. ( OCPBUGS-3927 ) Previously, the product name for Azure Red Hat OpenShift was incorrect in Customer Case Management (CCM). As a result, the console had to use the same incorrect product name to correctly populate the fields in CCM. Once the product name in CCM was updated, the console needed to be updated as well. 
With this update, the same, correct product name as CCM is correctly populated with the correct Azure product name when following the link from the console. ( OCPBUGS-869 ) Previously, when a plugin page resulted in an error, the error did not reset when navigating away from the error page, and the error persisted after navigating to a page that was not the cause of the error. With this update, the error state is reset to its default when a user navigates to a new page, and the error no longer persists after navigating to a new page. ( BZ#2117738 , OCPBUGS-523 ) Previously, the View it here link in the Operator details pane for installed Operators was incorrectly built when All Namespaces was selected. As a result, the link attempted to navigate to the Operator details page for a cluster service version (CSV) in All Projects , which is an invalid route. With this update, the View it here link to use the namespace where the CSV is installed now builds correctly and the link works as expected. ( OCPBUGS-184 ) Previously, line numbers with more than five digits resulted in a cosmetic issue where the line number overlaid the vertical divider between the line number and the line contents making it harder to read. With this update, the amount of space available for line numbers was increased to account for longer line numbers, and the line number no longer overlays the vertical divider. ( OCPBUGS-183 ) Previously, in the administrator perspective of the web console, the link to Learn more about the OpenShift local update services on the Default update server pop-up window in the Cluster Settings page produced a 404 error. With this update, the link works as expected. ( BZ#2098234 ) Previously, the MatchExpression component did not account for array-type values. As a result, only single values could be entered through forms using this component. With this update, the MatchExpression component accepts comma-separated values as an array. ( BZ#207690 ) Previously, there were redundant checks for the model resulting in tab reloading which occasionally resulted in a flickering of the tab contents where they rerendered. With this update, the redundant model check was removed, and the model is only checked once. As a result, the tab contents do not flicker and no longer rerender. ( BZ#2037329 ) Previously, when selecting the edit label from the action list on the OpenShift Dedicated node page, no response was elicited and a web hook error was returned. This issue has been fixed so that the error message is only returned when editing fails. ( BZ#2102098 ) Previously, if issues were pending, clicking on the Insights link would crash the page. As a workaround, you can wait for the variable to become initialized before clicking the Insights link. As a result, the Insights page will open as expected. ( BZ#2052662 ) Previously, when the MachineConfigPool resource was paused, the option to unpause said Resume rollouts . The wording has been updated so that it now says Resume updates . ( BZ#2094240 ) Previously, the wrong calculating method was used when counting master and worker nodes. With this update, the correct worker nodes are calculated when nodes have both the master and worker role. ( BZ#1951901 ) Previously, conflicting react-router routes for ImageManifestVuln resulted in attempts to render a details page for ImageManifestVuln with a ~new name. 
Now, the container security plugin has been updated to remove conflicting routes and to ensure dynamic lists and details page extensions are used on the Operator details page. As a result, the console renders the correct create, list, and details pages for ImageManifestVuln . ( BZ#2080260 ) Previously, incomplete YAML was not synced was occasionally displayed to users. With this update, synced YAML always displays. ( BZ#2084453 ) Previously, when installing an Operator that required a custom resource (CR) to be created for use, the Create resource button could fail to install the CR because it was pointing to the incorrect namespace. With this update, the Create resource button works as expected. ( BZ#2094502 ) Previously, the Cluster update modal was not displaying errors properly. As a result, the Cluster update modal did not display or explain errors when they occurred. With this update, the Cluster update modal correctly display errors. ( BZ#2096350 ) Monitoring Before this update, cluster administrators could not distinguish between a pod being not ready because of a scheduling issue and a pod being not ready because it could not be started by the kubelet. In both cases, the KubePodNotReady alert would fire. With this update, the KubePodNotScheduled alert now fires when a pod is not ready because of a scheduling issue, and the KubePodNotReady alert fires when a pod is not ready because it could not be started by the kubelet. ( OCPBUGS-4431 ) Before this update, node_exporter would report metrics about virtual network interfaces such as tun interfaces, br interfaces, and ovn-k8s-mp interfaces. With this update, metrics for these virtual interfaces are no longer collected, which decreases monitoring resource consumption. ( OCPBUGS-1321 ) Before this update, Alertmanager pod startup might time out because of slow DNS resolution, and the Alertmanager pods would not start. With this release, the timeout value has been increased to seven minutes, which prevents pod startup from timing out. ( BZ#2083226 ) Before this update, if Prometheus Operator failed to run or schedule Prometheus pods, the system provided no underlying reason for the failure. With this update, if Prometheus pods are not run or scheduled, the Cluster Monitoring Operator updates the clusterOperator monitoring status with a reason for the failure, which can be used to troubleshoot the underlying issue. ( BZ#2043518 ) Before this update, if you created an alert silence from the Developer perspective in the OpenShift Container Platform web console, external labels were included that did not match the alert. Therefore, the alert would not be silenced. With this update, external labels are now excluded when you create a silence in the Developer perspective so that newly created silences function as expected. ( BZ#2084504 ) Previously, if you enabled an instance of Alertmanager dedicated to user-defined projects, a misconfiguration could occur in certain circumstances, and you would not be informed that the user-defined project Alertmanager config map settings did not load for either the main instance of Alertmanager or the instance dedicated to user-defined projects. With this release, if this misconfiguration occurs, the Cluster Monitoring Operator now displays a message that informs you of the issue and provides resolution steps. 
( BZ#2099939 ) Before this update, if the Cluster Monitoring Operator (CMO) failed to update Prometheus, the CMO did not verify whether a deployment was running and would report that cluster monitoring was unavailable even if one of the Prometheus pods was still running. With this update, the CMO now checks for running Prometheus pods in this situation and reports that cluster monitoring is unavailable only if no Prometheus pods are running. ( BZ#2039411 ) Before this update, if you configured OpsGenie as an alert receiver, a warning would appear in the log that api_key and api_key_file are mutually exclusive and that api_key takes precedence. This warning appeared even if you had not defined api_key_file . With this update, this warning only appears in the log if you have defined both api_key and api_key_file . ( BZ#2093892 ) Before this update the Telemeter Client (TC) only loaded new pull secrets when it was manually restarted. Therefore, if a pull secret had been changed or updated and the TC had not been restarted, the TC would fail to authenticate with the server. This update addresses the issue so that when the secret is rotated, the deployment is automatically restarted and uses the updated token to authenticate. ( BZ#2114721 ) Networking Previously, routers that were in the terminating state delayed the oc cp command which would delay the oc adm must-gather command until the pod was terminated. With this update, a timeout for each issued oc cp command is set to prevent delaying the must-gather command from running. As a result, terminating pods no longer delay must-gather commands. ( BZ#2103283 ) Previously, an Ingress Controller could not be configured with both the Private endpoint publishing strategy type and PROXY protocol. With this update, users can now configure an Ingress Controller with both the Private endpoint publishing strategy type and PROXY protocol. ( BZ#2104481 ) Previously, the routeSelector parameter cleared the route status of the Ingress Controller prior to the router deployment. Because of this, the route status repopulated incorrectly. To avoid using stale data, route status detection has been updated to no longer rely on the Kubernetes object cache. Additionally, this update includes a fix to check the generation ID on route deployment to determine the route status. As a result, the route status is consistently cleared with a routeSelector update. ( BZ#2101878 ) Previously, a cluster that was upgraded from a version of OpenShift Container Platform earlier than 4.8 could have orphaned Route objects. This was caused by earlier versions of OpenShift Container Platform translating Ingress objects into Route objects irrespective of a given Ingress object's indicated IngressClass . With this update, an alert is sent to the cluster administrator about any orphaned Route objects still present in the cluster after Ingress-to-Route translation. This update also adds another alert that notifies the cluster administrator about any Ingress objects that do not specify an IngressClass . ( BZ#1962502 ) Previously, if a configmap that the router deployment depends on is not created, then the router deployment does not progress. With this update, the cluster Operator reports ingress progressing=true if the default ingress controller deployment is progressing. This results in users debugging issues with the ingress controller by using the command oc get co . 
( BZ#2066560 )
Previously, when an incorrectly created network policy was added to the OVN-Kubernetes cache, it would cause the OVN-Kubernetes leader to enter crashloopbackoff status. With this update, the OVN-Kubernetes leader no longer enters crashloopbackoff status because it skips deleting nil policies. ( BZ#2091238 )
Previously, recreating an EgressIP pod with the same namespace or name within 60 seconds of deleting an older one with the same namespace or name caused the wrong SNAT to be configured. As a result, packets could go out with nodeIP instead of EgressIP SNAT. With this update, traffic leaves the pod with EgressIP instead of nodeIP. ( BZ#2097243 )
Previously, older access control lists (ACLs) with the arp match produced unexpectedly found multiple equivalent ACLs (arp v/s arp||nd) errors due to a change in the ACL match from arp to arp || nd . This prevented network policies from being created properly. With this update, older ACLs with just the arp match have been removed so that only ACLs with the new arp || nd match exist, and network policies can be created correctly with no errors observed on ovnkube-master . Note: This affects customers upgrading to 4.8.14, 4.9.32, 4.10.13 or higher from older versions. ( BZ#2095852 )
With this update, CoreDNS has been updated to version 1.10.0, which is based on Kubernetes 1.25. This keeps both the CoreDNS version and OpenShift Container Platform 4.12, which is also based on Kubernetes 1.25, in alignment with one another. ( OCPBUGS-1731 )
With this update, the OpenShift Container Platform router now uses k8s.io/client-go version 1.25.2, which supports Kubernetes 1.25. This keeps both the openshift-router and OpenShift Container Platform 4.12, which is also based on Kubernetes 1.25, in alignment with one another. ( OCPBUGS-1730 )
With this update, the Ingress Operator now uses k8s.io/client-go version 1.25.2, which supports Kubernetes 1.25. This keeps both the Ingress Operator and OpenShift Container Platform 4.12, which is also based on Kubernetes 1.25, in alignment with one another. ( OCPBUGS-1554 )
Previously, the DNS Operator did not reconcile the openshift-dns namespace. Because OpenShift Container Platform 4.12 requires the openshift-dns namespace to have pod-security labels, this caused the namespace to be missing those labels upon cluster update. Without the pod-security labels, the pods failed to start. With this update, the DNS Operator now reconciles the openshift-dns namespace, and the pod-security labels are now present. As a result, pods start as expected. ( OCPBUGS-1549 )
Previously, the ingresscontroller.spec.tuningOptions.reloadInterval field did not support decimal numerals as valid parameter values because the Ingress Operator internally converts the specified value into milliseconds, which was not a supported time unit. This prevented an Ingress Controller from being deleted. With this update, ingresscontroller.spec.tuningOptions.reloadInterval now supports decimal numerals and users can delete Ingress Controllers with reloadInterval parameter values which were previously unsupported. ( OCPBUGS-236 )
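As a reference for the reloadInterval option described above, the following is a minimal sketch of an IngressController that sets the value on the default controller; the 15s interval is only an illustrative choice, not a recommendation:
# Minimal sketch: set tuningOptions.reloadInterval on the default IngressController.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  tuningOptions:
    reloadInterval: 15s   # illustrative value; choose an interval that suits your traffic pattern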
Previously, the Cluster DNS Operator used GO Kubernetes libraries that were based on Kubernetes 1.24 while OpenShift Container Platform 4.12 is based on Kubernetes 1.25. With this update, the GO Kubernetes API is v1.25.2, which aligns the Cluster DNS Operator with OpenShift Container Platform 4.12 that uses Kubernetes 1.25 APIs. ( OCPBUGS-1558 )
Previously, setting the disableNetworkDiagnostics configuration to true did not persist when the network-operator pod was re-created. With this update, the disableNetworkDiagnostics configuration property of network operator.openshift.io/cluster no longer resets to its default value after network operator restart. ( OCPBUGS-392 )
Previously, ovn-kubernetes did not configure the correct MAC address of bonded interfaces in the br-ex bridge. As a result, a node that uses bonding for the primary Kubernetes interface failed to join the cluster. With this update, ovn-kubernetes configures the correct MAC address of bonded interfaces in the br-ex bridge, and nodes that use bonding for the primary Kubernetes interface successfully join the cluster. ( BZ#2096413 )
Previously, when the Ingress Operator was configured to enable the use of mTLS, the Operator would not check if CRLs needed updating until some other event caused it to reconcile. As a result, CRLs used for mTLS could become out of date. With this update, the Ingress Operator now automatically reconciles when any CRL expires, and CRLs are updated at the time specified by their nextUpdate field. ( BZ#2117524 )
Node
Previously, a symlinks error message was printed out as raw data instead of formatted as an error, making it difficult to understand. This fix formats the error message properly, so that it is easily understood. ( BZ#1977660 )
Previously, kubelet hard eviction thresholds were different from Kubernetes defaults when a performance profile was applied to a node. With this release, the defaults have been updated to match the expected Kubernetes defaults. ( OCPBUGS-4362 )
OpenShift CLI (oc)
The OpenShift Container Platform 4.12 release fixes an issue with entering a debug session on a target node when the target namespace lacks the appropriate security level. This caused the oc CLI to prompt you with a pod security error message. If the existing namespace does not contain the appropriate security levels, OpenShift Container Platform now creates a temporary namespace when you enter oc debug mode on a target node. ( OCPBUGS-852 )
Previously, on macOS arm64 architecture, the oc binary needed to be signed manually. As a result, the oc binary did not work as expected. This update implements a self-signing oc binary. As a result, the oc binary on macOS arm64 architectures works properly. ( BZ#2059125 )
Previously, must-gather was trying to collect resources that were not present on the server. Consequently, must-gather would print error messages. Now, before collecting resources, must-gather checks whether the resource exists. As a result, must-gather no longer prints an error when it fails to collect non-existing resources on the server. ( BZ#2095708 )
The OpenShift Container Platform 4.12 release updates the oc-mirror library, so that the library supports multi-arch platform images. This means that you can choose from a wider selection of architectures, such as arm64 , when mirroring a platform release payload. ( OCPBUGS-617 )
Operator Lifecycle Manager (OLM)
Before the OpenShift Container Platform 4.12 release, the package-server-manager controller would not revert any changes made to a package-server cluster service version (CSV), because of an issue with the on-cluster function. These persistent changes might impact how an Operator starts in a cluster.
For OpenShift Container Platform 4.12, the package-server-manager controller always rebuilds a package-server CSV to its original state, so that no modifications to the CSV persist after a cluster upgrade operation. The on-cluster function no longer controls the state of a package-server CSV. ( OCPBUGS-867 ) Previously, Operator Lifecycle Manager (OLM) would attempt to update namespaces to apply a label, even if the label was present on the namespace. Consequently, the update requests increased the workload in API and etcd services. With this update, OLM compares existing labels against the expected labels on a namespace before issuing an update. As a result, OLM no longer attempts to make unnecessary update requests on namespaces. ( BZ#2105045 ) Previously, Operator Lifecycle Manager (OLM) would prevent minor cluster upgrades that should not be blocked based on a miscalculation of the ClusterVersion custom resources's spec.DesiredVersion field. With this update, OLM no longer prevents cluster upgrades when the upgrade should be supported. ( BZ#2097557 ) Previously, the reconciler would update a resource's annotation without making a copy of the resource. This caused an error that would terminate the reconciler process. With this update, the reconciler no longer stops due the error. ( BZ#2105045 ) The package-server-manifest (PSM) is a controller that ensures that the correct package-server Cluster Service Version (CSV) is installed on a cluster. Previously, changes to the package-server CSV were not being reverted because of a logical error in the reconcile function in which an on-cluster object could influence the expected object. Users could modify the package-server CSV and the changes would not be reverted. Additionally, cluster upgrades would not update the YAML for the package-server CSV. With this update, the expected version of the CSV is now always built from scratch, which removes the ability for an on-cluster object to influence the expected values. As a result, the PSM now reverts any attempts to modify the package-server CSV, and cluster upgrades now deploy the expected package-server CSV. ( OCPBUGS-858 ) Previously, OLM would upgrade an Operator according to the Operator's CRD status. A CRD lists component references in an order defined by the group/version/kind (GVK) identifier. Operators that share the same components might cause the GVK to change the component listings for an Operator, and this can cause the OLM to require more system resources to continuously update the status of a CRD. With this update, the Operator Lifecycle Manager (OLM) now upgrades an Operator according to the Operator's component references. A change to the custom resource definition (CRD) status of an Operator does not impact the OLM Operator upgrade process.( OCPBUGS-3795 ) Operator SDK With this update, you can now set the security context for the registry pod by including the securityContext configuration field in the pod specification. This will apply the security context for all containers in the pod. The securityContext field also defines the pod's privileges. ( BZ#2091864 ) File Integrity Operator Previously, the File Integrity Operator deployed templates using the openshift-file-integrity namespace in the permissions for the Operator. When the Operator attempted to create objects in the namespace, it would fail due to permission issues. 
With this release, the deployment resources used by OLM are updated to use the correct namespace, fixing the permission issues so that users can install and use the operator in non-default namespaces. ( BZ#2104897 ) Previously, underlying dependencies of the File Integrity Operator changed how alerts and notifications were handled, and the Operator didn't send metrics as a result. With this release the Operator ensures that the metrics endpoint is correct and reachable on startup. ( BZ#2115821 ) Previously, alerts issued by the File Integrity Operator did not set a namespace. This made it difficult to understand where the alert was coming from, or what component was responsible for issuing it. With this release, the Operator includes the namespace it was installed into in the alert, making it easier to narrow down what component needs attention. ( BZ#2101393 ) Previously, the File Integrity Operator did not properly handle modifying alerts during an upgrade. As a result, alerts did not include the namespace in which the Operator was installed. With this release, the Operator includes the namespace it was installed into in the alert, making it easier to narrow down what component needs attention. ( BZ#2112394 ) Previously, service account ownership for the File Integrity Operator regressed due to underlying OLM updates, and updates from 0.1.24 to 0.1.29 were broken. With this update, the Operator defaults to upgrading to 0.1.30. ( BZ#2109153 ) Previously, the File Integrity Operator daemon used the ClusterRoles parameter instead of the Roles parameter for a recent permission change. As a result, OLM could not update the Operator. With this release, the Operator daemon reverts to using the Roles parameter and updates from older versions to version 0.1.29 are successful. ( BZ#2108475 ) Compliance Operator Previously, the Compliance Operator used an old version of the Operator SDK, which is a dependency for building Operators. This caused alerts about deprecated Kubernetes functionality used by the Operator SDK. With this release, the Compliance Operator is updated to version 0.1.55, which includes an updated version of the Operator SDK. ( BZ#2098581 ) Previously, applying automatic remediation for the rhcos4-high-master-sysctl-kernel-yama-ptrace-scope and rhcos4-sysctl-kernel-core-pattern rules resulted in subsequent failures of those rules in scan results, even though they were remediated. The issue is fixed in this release. ( BZ#2094382 ) Previously, the Compliance Operator hard coded notifications to the default namespace. As a result, notifications from the Operator would not appear if the Operator was installed in a different namespace. This issue is fixed in this release. ( BZ#2060726 ) Previously, the Compliance Operator failed to fetch API resources when parsing machine configurations without Ignition specifications. This caused the api-check-pods check to crash loop. With this release, the Compliance Operator is updated to gracefully handle machine configuration pools without Ignition specifications. ( BZ#2117268 ) Previously, the Compliance Operator held machine configurations in a stuck state because it could not determine the relationship between machine configurations and kubelet configurations. This was due to incorrect assumptions about machine configuration names. With this release, the Compliance Operator is able to determine if a kubelet configuration is a subset of a machine configuration. 
( BZ#2102511 ) OpenShift API server Previously, adding a member could remove members from a group. As a result, the user lost group privileges. With this release, the dependencies were updated and users no longer lose group privileges. ( OCPBUGS-533 ) Red Hat Enterprise Linux CoreOS (RHCOS) Previously, updating to Podman 4.0 prevented users from using custom images with toolbox containers on RHCOS. This fix updates the toolbox library code to account for the new Podman behavior, so users can now use custom images with toolbox on RHCOS as expected. ( BZ#2048789 ) Previously, the podman exec command did not work well with nested containers. Users encountered this issue when accessing a node using the oc debug command and then running a container with the toolbox command. Because of this, users were unable to reuse toolboxes on RHCOS. This fix updates the toolbox library code to account for this behavior, so users can now reuse toolboxes on RHCOS. ( BZ#1915537 ) With this update, running the toolbox command now checks for updates to the default image before launching the container. This improves security and provides users with the latest bug fixes. ( BZ#2049591 ) Previously, updating to Podman 4.0 prevented users from running the toolbox command on RHCOS. This fix updates the toolbox library code to account for the new Podman behavior, so users can now run toolbox on RHCOS as expected. ( BZ#2093040 ) Previously, custom SELinux policy modules were not properly supported by rpm-ostree , so they were not updated along with the rest of the system upon update. This would surface as failures in unrelated components. Pending SELinux userspace improvements landing in a future OpenShift Container Platform release, this update provides a workaround to RHCOS that rebuilds and reloads the SELinux policy during boot as needed. ( OCPBUGS-595 ) Scalability and performance The tuned profile has been modified to assign the same priority as ksoftirqd and rcuc to the newly introduced per-CPU kthreads ( ktimers ) added in a recent Red Hat Enterprise Linux (RHEL) kernel patch. For more information, see OCPBUGS-3475 , BZ#2117780 , and BZ#2122220 . Previously, restarts of the tuned service caused an improper reset of the irqbalance configuration, leading to IRQs being served again on the isolated CPUs and therefore violating the isolation guarantees. With this fix, the irqbalance service configuration is properly preserved across tuned service restarts (explicit or caused by bugs), therefore preserving the CPU isolation guarantees with respect to IRQ serving. ( OCPBUGS-585 ) Previously, when the tuned daemon was restarted out of order as part of the cluster Node Tuning Operator, the CPU affinity of interrupt handlers was reset and the tuning was compromised. With this fix, the irqbalance plugin in tuned is disabled, and OpenShift Container Platform now relies on the logic and interaction between CRI-O and irqbalance . ( BZ#2105123 ) Previously, a low latency hook script executing for every new veth device took too long when the node was under load. The resultant accumulated delays during pod start events caused the rollout time for kube-apiserver to be slow and sometimes exceed the 5-minute rollout timeout. With this fix, the container start time should be shorter and within the 5-minute threshold. ( BZ#2109965 ) Previously, the oslat control thread was collocated with one of the test threads, which caused latency spikes in the measurements.
With this fix, the oslat runner now reserves one CPU for the control thread, meaning the test uses one less CPU for running the busy threads. ( BZ#2051443 ) Latency measurement tools, namely oslat , cyclictest , and hwlatdetect , now run on completely isolated CPUs without the helper process running in the background that might cause latency spikes, therefore providing more accurate latency measurements. ( OCPBUGS-2618 ) Previously, although the reference PolicyGenTemplate for group-du-sno-ranGen.yaml includes two StorageClass entries, the generated policy included only one. With this update, the generated policy now includes both policies. ( BZ#2049306 ) Storage Previously, checks for generic ephemeral volumes failed. With this update, checks for expandable volumes now include generic ephemeral volumes. ( BZ#2082773 ) Previously, if more than one secret was present for vSphere, the vSphere CSI Operator randomly picked a secret and sometimes caused the Operator to restart. With this update, a warning appears when there is more than one secret on the vCenter CSI Operator. ( BZ#2108473 ) Previously, OpenShift Container Platform detached a volume when a Container Storage Interface (CSI) driver was not able to unmount the volume from a node. Detaching a volume without unmounting is not allowed by CSI specifications and drivers could enter an undocumented state. With this update, CSI volumes are detached before unmounting only on unhealthy nodes, preventing the undocumented state. ( BZ#2049306 ) Previously, there were missing annotations on the Manila CSI Driver Operator's VolumeSnapshotClass. Consequently, the Manila CSI snapshotter could not locate secrets, and could not create snapshots with the default VolumeSnapshotClass. This update fixes the issue so that secret names and namespaces are included in the default VolumeSnapshotClass. As a result, users can now create snapshots in the Manila CSI Driver Operator using the default VolumeSnapshotClass. ( BZ#2057637 ) Users can now opt into using the experimental VHD feature on Azure File. To opt in, users must specify the fstype parameter in a storage class and enable it with --enable-vhd=true . If fstype is used and the feature is not set to true , the volumes will fail to provision. To opt out of using the VHD feature, remove the fstype parameter from your storage class. ( BZ#2080449 ) Web console (Developer perspective) Previously, users could not deselect a Git secret in add and edit forms. As a result, the resources had to be recreated. This fix resolves the issue by adding the option to choose No Secret in the select secret option list. As a result, users can easily select, deselect, or detach any attached secrets. ( BZ#2089221 ) In OpenShift Container Platform 4.9, when there is minimal or no data in the Developer Perspective , most of the monitoring charts or graphs (CPU consumption, memory usage, and bandwidth) show a range of -1 to 1. However, none of these values can ever go below zero. This will be resolved in a future release.
( BZ#1904106 ) Before this update, users could not silence alerts in the Developer perspective in the OpenShift Container Platform web console when a user-defined Alertmanager service was deployed because the web console would forward the request to the platform Alertmanager service in the openshift-monitoring namespace. With this update, when you view the Developer perspective in the web console and try to silence an alert, the request is forwarded to the correct Alertmanager service. ( OCPBUGS-1789 ) Previously, there was a known issue in the Add Helm Chart Repositories form to extend the Developer Catalog of a project. The Quick Start guide showed that you can add the ProjectHelmChartRepository CR in the required namespace, but it did not mention that you need kubeadmin permission to do so. This issue is resolved, and the Quick Start now describes the correct steps to create the ProjectHelmChartRepository CR. ( BZ#2057306 ) 1.7. Technology Preview features Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features: Technology Preview Features Support Scope In the following tables, features are marked with the following statuses: Technology Preview General Availability Not Available Deprecated Networking Technology Preview features Table 1.14. Networking Technology Preview tracker Feature 4.10 4.11 4.12 PTP single NIC hardware configured as boundary clock Technology Preview General Availability General Availability PTP dual NIC hardware configured as boundary clock Not Available Technology Preview Technology Preview PTP events with boundary clock Technology Preview General Availability General Availability HTTP transport replaces AMQP for PTP and bare-metal events Not Available Not Available General Availability Pod-level bonding for secondary networks General Availability General Availability General Availability External DNS Operator Technology Preview General Availability General Availability AWS Load Balancer Operator Not Available Technology Preview General Availability Ingress Node Firewall Operator Not Available Not Available Technology Preview Advertise using BGP mode the MetalLB service from a subset of nodes, using a specific pool of IP addresses Not Available Technology Preview General Availability Advertise using L2 mode the MetalLB service from a subset of nodes, using a specific pool of IP addresses Not Available Technology Preview Technology Preview Multi-network policies for SR-IOV networks Not Available Not Available Technology Preview Updating the interface-specific safe sysctls list Not Available Not Available Technology Preview MT2892 Family [ConnectX-6 Dx] SR-IOV support Not Available Not Available Technology Preview MT2894 Family [ConnectX-6 Lx] SR-IOV support Not Available Not Available Technology Preview MT42822 BlueField-2 in ConnectX-6 NIC mode SR-IOV support Not Available Not Available Technology Preview Silicom STS Family SR-IOV support Not Available Not Available Technology Preview MT2892 Family [ConnectX-6 Dx] OvS Hardware Offload support Not Available Not Available Technology Preview MT2894 Family [ConnectX-6 Lx] OvS Hardware Offload support Not Available Not Available Technology Preview MT42822 BlueField-2 in ConnectX-6 NIC mode OvS Hardware Offload support Not Available Not Available Technology Preview Switching Bluefield-2 from DPU to NIC Not Available Not Available Technology Preview
Storage Technology Preview features Table 1.15. Storage Technology Preview tracker Feature 4.10 4.11 4.12 Shared Resources CSI Driver and Build CSI Volumes in OpenShift Builds Technology Preview Technology Preview Technology Preview CSI volume expansion Technology Preview General Availability General Availability CSI Azure File Driver Operator Technology Preview General Availability General Availability CSI Google Filestore Driver Operator Not Available Not Available Technology Preview CSI automatic migration (Azure file, VMware vSphere) Technology Preview Technology Preview Technology Preview CSI automatic migration (Azure Disk, OpenStack Cinder) Technology Preview General Availability General Availability CSI automatic migration (AWS EBS, GCP disk) Technology Preview Technology Preview General Availability CSI inline ephemeral volumes Technology Preview Technology Preview Technology Preview CSI generic ephemeral volumes Not Available General Availability General Availability Shared Resource CSI Driver Technology Preview Technology Preview Technology Preview CSI Google Filestore Driver Operator Not Available Not Available Technology Preview Automatic device discovery and provisioning with Local Storage Operator Technology Preview Technology Preview Technology Preview NFS support for Azure File CSI Operator Driver Not Available Not Available Generally Available Installation Technology Preview features Table 1.16. Installation Technology Preview tracker Feature 4.10 4.11 4.12 Adding kernel modules to nodes with kvc Technology Preview Technology Preview Technology Preview IBM Cloud VPC clusters Technology Preview Technology Preview General Availability Selectable Cluster Inventory Technology Preview Technology Preview Technology Preview Multi-architecture compute machines Not Available Technology Preview Technology Preview Disconnected mirroring with the oc-mirror CLI plugin Technology Preview General Availability General Availability Mount shared entitlements in BuildConfigs in RHEL Technology Preview Technology Preview Technology Preview Agent-based OpenShift Container Platform Installer Not Available Not Available General Availability AWS Outposts platform Not Available Not Available Technology Preview Installing a cluster on Alibaba Cloud using installer-provisioned infrastructure Technology Preview Technology Preview Technology Preview Node Technology Preview features Table 1.17. Nodes Technology Preview tracker Feature 4.10 4.11 4.12 Non-preempting priority classes Technology Preview General Availability General Availability Node Health Check Operator Technology Preview General Availability General Availability Linux Control Group version 2 (cgroup v2) Not Available Not Available Technology Preview crun container runtime Not Available Not Available Technology Preview Multi-Architecture Technology Preview features Table 1.18. Multi-Architecture Technology Preview tracker Feature 4.10 4.11 4.12 kdump on x86_64 architecture Technology Preview General Availability General Availability kdump on arm64 architecture Not Available Technology Preview Technology Preview kdump on s390x architecture Technology Preview Technology Preview Technology Preview kdump on ppc64le architecture Technology Preview Technology Preview Technology Preview IBM Secure Execution on IBM Z and LinuxONE Not Available Not Available Technology Preview Serverless Technology Preview features Table 1.19. 
Serverless Technology Preview tracker Feature 4.10 4.11 4.12 Serverless functions Technology Preview Technology Preview Technology Preview Specialized hardware and driver enablement Technology Preview features Table 1.20. Specialized hardware and driver enablement Technology Preview tracker Feature 4.10 4.11 4.12 Driver Toolkit Technology Preview Technology Preview General Availability Special Resource Operator (SRO) Technology Preview Technology Preview Not Available Hub and spoke cluster support Not Available Not Available Technology Preview Web console Technology Preview features Table 1.21. Web console Technology Preview tracker Feature 4.10 4.11 4.12 Dynamic Plugins Technology Preview Technology Preview General Availability Scalability and performance Technology Preview features Table 1.22. Scalability and performance Technology Preview tracker Feature 4.10 4.11 4.12 Hyperthreading-aware CPU manager policy Technology Preview Technology Preview Technology Preview Node Observability Operator Not Available Technology Preview Technology Preview factory-precaching-cli tool Not Available Not Available Technology Preview Adding worker nodes to Single-node OpenShift clusters with GitOps ZTP Not Available Not Available Technology Preview Topology Aware Lifecycle Manager (TALM) Technology Preview Technology Preview General Availability Mount namespace encapsulation Not Available Not Available Technology Preview NUMA-aware scheduling with NUMA Resources Operator Technology Preview Technology Preview General Availability Operator Technology Preview features Table 1.23. Operator Technology Preview tracker Feature 4.10 4.11 4.12 Hybrid Helm Operator Technology Preview Technology Preview Technology Preview Java-based Operator Not Available Technology Preview Technology Preview Node Observability Operator Not Available Not Available Technology Preview Network Observability Operator Supported Supported General Availability Platform Operators Not Available Not Available Technology Preview RukPak Not Available Not Available Technology Preview cert-manager Operator Technology Preview Technology Preview General Availability Monitoring Technology Preview features Table 1.24. Monitoring Technology Preview tracker Feature 4.10 4.11 4.12 Alert routing for user-defined projects monitoring Technology Preview General Availability General Availability Alerting rules based on platform monitoring metrics Not Available Technology Preview Technology Preview Red Hat OpenStack Platform (RHOSP) Technology Preview features Table 1.25. RHOSP Technology Preview tracker Feature 4.10 4.11 4.12 Support for RHOSP DCN Technology Preview Technology Preview Technology Preview Support for external cloud providers for clusters on RHOSP Technology Preview Technology Preview General Availability OVS hardware offloading for clusters on RHOSP Technology Preview General Availability General Availability Architecture Technology Preview features Table 1.26. Architecture Technology Preview tracker Feature 4.10 4.11 4.12 Hosted control planes for OpenShift Container Platform on bare metal Not Available Not Available Technology Preview Hosted control planes for OpenShift Container Platform on Amazon Web Services (AWS) Not Available Technology Preview Technology Preview Machine management Technology Preview features Table 1.27. 
Machine management Technology Preview tracker Feature 4.10 4.11 4.12 Managing machines with the Cluster API for Amazon Web Services Not Available Technology Preview Technology Preview Managing machines with the Cluster API for Google Cloud Platform Not Available Technology Preview Technology Preview Cron job time zones Not Available Not Available Technology Preview Cloud controller manager for Alibaba Cloud Technology Preview Technology Preview Technology Preview Cloud controller manager for Amazon Web Services Technology Preview Technology Preview Technology Preview Cloud controller manager for Google Cloud Platform Technology Preview Technology Preview Technology Preview Cloud controller manager for IBM Cloud Technology Preview Technology Preview General Availability Cloud controller manager for Microsoft Azure Technology Preview Technology Preview Technology Preview Cloud controller manager for Red Hat OpenStack Platform (RHOSP) Technology Preview Technology Preview General Availability Cloud controller manager for VMware vSphere Technology Preview Technology Preview Technology Preview Custom Metrics Autoscaler Operator Not Available Technology Preview Technology Preview Authentication and authorization Technology Preview features Table 1.28. Authentication and authorization Technology Preview tracker Feature 4.10 4.11 4.12 Pod security admission restricted enforcement Not Available Not Available Technology Preview 1.8. Known issues In OpenShift Container Platform 4.1, anonymous users could access discovery endpoints. Later releases revoked this access to reduce the possible attack surface for security exploits because some discovery endpoints are forwarded to aggregated API servers. However, unauthenticated access is preserved in upgraded clusters so that existing use cases are not broken. If you are a cluster administrator for a cluster that has been upgraded from OpenShift Container Platform 4.1 to 4.12, you can either revoke or continue to allow unauthenticated access. Unless there is a specific need for unauthenticated access, you should revoke it. If you do continue to allow unauthenticated access, be aware of the increased risks. Warning If you have applications that rely on unauthenticated access, they might receive HTTP 403 errors if you revoke unauthenticated access. Use the following script to revoke unauthenticated access to discovery endpoints: ## Snippet to remove unauthenticated group from all the cluster role bindings USD for clusterrolebinding in cluster-status-binding discovery system:basic-user system:discovery system:openshift:discovery ; do ### Find the index of unauthenticated group in list of subjects index=USD(oc get clusterrolebinding USD{clusterrolebinding} -o json | jq 'select(.subjects!=null) | .subjects | map(.name=="system:unauthenticated") | index(true)'); ### Remove the element at index from subjects array oc patch clusterrolebinding USD{clusterrolebinding} --type=json --patch "[{'op': 'remove','path': '/subjects/USDindex'}]"; done This script removes unauthenticated subjects from the following cluster role bindings: cluster-status-binding discovery system:basic-user system:discovery system:openshift:discovery ( BZ#1821771 ) Intermittently, an IBM Cloud VPC cluster might fail to install because some worker machines do not start. Rather, these worker machines remain in the Provisioned phase. There is a workaround for this issue. From the host where you performed the initial installation, delete the failed machines and run the installation program again. 
Verify that the status of the internal application load balancer (ALB) for the master API server is active . Identify the cluster's infrastructure ID by running the following command: USD oc get infrastructure/cluster -ojson | jq -r '.status.infrastructureName' Log into the IBM Cloud account for your cluster and target the correct region for your cluster. Verify that the internal ALB status is active by running the following command: USD ibmcloud is lb <cluster_ID>-kubernetes-api-private --output json | jq -r '.provisioning_status' Identify the machines that are in the Provisioned phase by running the following command: USD oc get machine -n openshift-machine-api Example output NAME PHASE TYPE REGION ZONE AGE example-public-1-x4gpn-master-0 Running bx2-4x16 us-east us-east-1 23h example-public-1-x4gpn-master-1 Running bx2-4x16 us-east us-east-2 23h example-public-1-x4gpn-master-2 Running bx2-4x16 us-east us-east-3 23h example-public-1-x4gpn-worker-1-xqzzm Running bx2-4x16 us-east us-east-1 22h example-public-1-x4gpn-worker-2-vg9w6 Provisioned bx2-4x16 us-east us-east-2 22h example-public-1-x4gpn-worker-3-2f7zd Provisioned bx2-4x16 us-east us-east-3 22h Delete each failed machine by running the following command: USD oc delete machine <name_of_machine> -n openshift-machine-api Wait for the deleted worker machines to be replaced, which can take up to 10 minutes. Verify that the new worker machines are in the Running phase by running the following command: USD oc get machine -n openshift-machine-api Example output NAME PHASE TYPE REGION ZONE AGE example-public-1-x4gpn-master-0 Running bx2-4x16 us-east us-east-1 23h example-public-1-x4gpn-master-1 Running bx2-4x16 us-east us-east-2 23h example-public-1-x4gpn-master-2 Running bx2-4x16 us-east us-east-3 23h example-public-1-x4gpn-worker-1-xqzzm Running bx2-4x16 us-east us-east-1 23h example-public-1-x4gpn-worker-2-mnlsz Running bx2-4x16 us-east us-east-2 8m2s example-public-1-x4gpn-worker-3-7nz4q Running bx2-4x16 us-east us-east-3 7m24s Complete the installation by running the following command. Running the installation program again ensures that the cluster's kubeconfig is initialized properly: USD ./openshift-install wait-for install-complete ( OCPBUGS#1327 ) The oc annotate command does not work for LDAP group names that contain an equal sign ( = ), because the command uses the equal sign as a delimiter between the annotation name and value. As a workaround, use oc patch or oc edit to add the annotation. ( BZ#1917280 ) Due to the inclusion of old images in some image indexes, running oc adm catalog mirror and oc image mirror might result in the following error: error: unable to retrieve source image . As a temporary workaround, you can use the --skip-missing option to bypass the error and continue downloading the image index. For more information, see Service Mesh Operator mirroring failed . When using the egress IP address feature in OpenShift Container Platform on RHOSP, you can assign a floating IP address to a reservation port to have a predictable SNAT address for egress traffic. The floating IP address association must be created by the same user that installed the OpenShift Container Platform cluster. Otherwise any delete or move operation for the egress IP address hangs indefinitely because of insufficient privileges. When this issue occurs, a user with sufficient privileges must manually unset the floating IP address association to resolve the issue. 
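As a minimal sketch of that manual step, assuming access to the RHOSP command-line client and sufficient privileges, the association can be cleared with the following command, where <floating_ip_address> is a placeholder for the affected address:

USD openstack floating ip unset --port <floating_ip_address>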
( OCPBUGS-4902 ) There is a known issue with Nutanix installation where the installation fails if you use 4096-bit certificates with Prism Central 2022.x. Instead, use 2048-bit certificates. ( KCS ) Deleting the bidirectional forwarding detection (BFD) profile and removing the bfdProfile added to the border gateway protocol (BGP) peer resource does not disable the BFD. Instead, the BGP peer starts using the default BFD profile. To disable BFD from a BGP peer resource, delete the BGP peer configuration and recreate it without a BFD profile. ( BZ#2050824 ) Due to an unresolved metadata API issue, you cannot install clusters that use bare-metal workers on RHOSP 16.1. Clusters on RHOSP 16.2 are not impacted by this issue. ( BZ#2033953 ) The loadBalancerSourceRanges attribute is not supported, and is therefore ignored, in load-balancer type services in clusters that run on RHOSP and use the OVN Octavia provider. There is no workaround for this issue. ( OCPBUGS-2789 ) After a catalog source update, it takes time for OLM to update the subscription status. This can mean that the status of the subscription policy may continue to show as compliant when Topology Aware Lifecycle Manager (TALM) decides whether remediation is needed. As a result the operator specified in the subscription policy does not get upgraded. As a workaround, include a status field in the spec section of the catalog source policy as follows: metadata: name: redhat-operators-disconnected spec: displayName: disconnected-redhat-operators image: registry.example.com:5000/disconnected-redhat-operators/disconnected-redhat-operator-index:v4.11 status: connectionState: lastObservedState: READY This mitigates the delay for OLM to pull the new index image and get the pod ready, reducing the time between completion of catalog source policy remediation and the update of the subscription status. If the issue persists and the subscription policy status update is still late you can apply another ClusterGroupUpdate CR with the same subscription policy, or an identical ClusterGroupUpdate CR with a different name. ( OCPBUGS-2813 ) TALM skips remediating a policy if all selected clusters are compliant when the ClusterGroupUpdate CR is started. The update of operators with a modified catalog source policy and a subscription policy in the same ClusterGroupUpdate CR does not complete. The subscription policy is skipped as it is still compliant until the catalog source change is enforced. As a workaround, add the following change to one CR in the common-subscription policy, for example: metadata.annotations.upgrade: "1" This makes the policy non-compliant prior to the start of the ClusterGroupUpdate CR. ( OCPBUGS-2812 ) On a single-node OpenShift instance, rebooting without draining the node to remove all the running pods can cause issues with workload container recovery. After the reboot, the workload restarts before all the device plugins are ready, resulting in resources not being available or the workload running on the wrong NUMA node. The workaround is to restart the workload pods when all the device plugins have re-registered themselves during the reboot recovery procedure. ( OCPBUGS-2180 ) The default dataset_comparison is currently ieee1588 . The recommended dataset_comparison is G.8275.x . It is planned to be fixed in a future version of OpenShift Container Platform. In the short term, you can manually update the ptp configuration to include the recommended dataset_comparison . ( OCPBUGS-2336 ) The default step_threshold is 0.0. 
The recommended step_threshold is 2.0. It is planned to be fixed in a future version of OpenShift Container Platform. In the short term, you can manually update the ptp configuration to include the recommended step_threshold . ( OCPBUGS-3005 ) The BMCEventSubscription CR fails to create a Redfish subscription for a spoke cluster in an ACM-deployed multi-cluster environment, where the metal3 service is only running on a hub cluster. The workaround is to create the subscription by calling the Redfish API directly, for example, by running the following command: curl -X POST -i --insecure -u "<BMC_username>:<BMC_password>" https://<BMC_IP>/redfish/v1/EventService/Subscriptions \ -H 'Content-Type: application/json' \ --data-raw '{ "Protocol": "Redfish", "Context": "any string is valid", "Destination": "https://hw-event-proxy-openshift-bare-metal-events.apps.example.com/webhook", "EventTypes": ["Alert"] }' You should receive a 201 Created response and a header with Location: /redfish/v1/EventService/Subscriptions/<sub_id> that indicates that the Redfish events subscription is successfully created. ( OCPBUGSM-43707 ) When using the GitOps ZTP pipeline to install a single-node OpenShift cluster in a disconnected environment, there should be two CatalogSource CRs applied in the cluster. One of the CatalogSource CRs gets deleted following multiple node reboots. As a workaround, you can change the default names, such as certified-operators and redhat-operators , of the catalog sources. ( OCPBUGSM-46245 ) If an invalid subscription channel is specified in the subscription policy that is used to perform a cluster upgrade, the Topology Aware Lifecycle Manager indicates a successful upgrade right after the policy is enforced because the Subscription state remains AtLatestKnown . ( OCPBUGSM-43618 ) The SiteConfig disk partition definition fails when applied to multiple nodes in a cluster. When a SiteConfig CR is used to provision a compact cluster, creating a valid diskPartition config on multiple nodes fails with a Kustomize plugin error. ( OCPBUGSM-44403 ) If secure boot is currently disabled and you try to enable it using ZTP, the cluster installation does not start. When secure boot is enabled through ZTP, the boot options are configured before the virtual CD is attached. Therefore, the first boot from the existing hard disk has the secure boot turned on. The cluster installation gets stuck because the system never boots from the CD. ( OCPBUGSM-45085 ) Using Red Hat Advanced Cluster Management (RHACM), spoke cluster deployments on Dell PowerEdge R640 servers are blocked when the virtual media does not disconnect the ISO in the iDRAC console after writing the image to the disk. As a workaround, disconnect the ISO manually through the Virtual Media tab in the iDRAC console. ( OCPBUGSM-45884 ) Low-latency applications that rely on high-resolution timers to wake up their threads might experience higher wake up latencies than expected. Although the expected wake up latency is under 20us, latencies exceeding this can occasionally be seen when running the cyclictest tool for long durations (24 hours or more). Testing has shown that wake up latencies are under 20us for over 99.999999% of the samples. ( RHELPLAN-138733 ) A Chapman Beach NIC from Intel must be installed in a bifurcated PCIe slot to ensure that both ports are visible. A limitation also exists in the current devlink tooling in RHEL 8.6 which prevents the configuration of 2 ports in the bifurcated PCIe slot. 
( RHELPLAN-142458 ) Disabling an SR-IOV VF when a port goes down can cause a 3-4 second delay with Intel NICs. ( RHELPLAN-126931 ) When using Intel NICs, IPV6 traffic stops when an SR-IOV VF is assigned an IPV6 address. ( RHELPLAN-137741 ) When using VLAN strip offloading, the offload flag ( ol_flag ) is not consistently set correctly with the iavf driver. ( RHELPLAN-141240 ) A deadlock can occur if an allocation fails during a configuration change with the ice driver. ( RHELPLAN-130855 ) SR-IOV VFs send GARP packets with the wrong MAC address when using Intel NICs. ( RHELPLAN-140971 ) When using the GitOps ZTP method of managing clusters and deleting a cluster which has not completed installation, the cleanup of the cluster namespace on the hub cluster might hang indefinitely. To complete the namespace deletion, remove the baremetalhost.metal3.io finalizer from two CRs in the cluster namespace: Remove the finalizer from the secret that is pointed to by the BareMetalHost CR .spec.bmc.credentialsName . Remove the finalizer from the BareMetalHost CR. When these finalizers are removed the namespace termination completes within a few seconds. ( OCPBUGS-3029 ) The addition of a new feature in OCP 4.12 that enables UDP GRO also causes all veth devices to have one RX queue per available CPU (previously each veth had one queue). Those queues are dynamically configured by OVN and there is no synchronization between latency tuning and this queue creation. The latency tuning logic monitors the veth NIC creation events and starts configuring the RPS queue cpu masks before all the queues are properly created. This means that some of the RPS queue masks are not configured. Since not all NIC queues are configured properly there is a chance of latency spikes in a real-time application that uses timing-sensitive cpus for communicating with services in other containers. Applications that do not use kernel networking stack are not affected. ( OCPBUGS-4194 ) Platform Operator and RukPak known issues: Deleting a platform Operator results in a cascading deletion of the underlying resources. This cascading deletion logic can only delete resources that are defined in the Operator Lifecycle Manager-based (OLM) Operator's bundle format. In the case that a platform Operator creates resources that are defined outside of that bundle format, then the platform Operator is responsible for handling this cleanup interaction. This behavior can be observed when installing the cert-manager Operator as a platform Operator, and then removing it. The expected behavior is that a namespace is left behind that the cert-manager Operator created. The platform Operators manager does not have any logic that compares the current and desired state of the cluster-scoped BundleDeployment resource it is managing. This leaves the possibility for a user who has sufficient role-based access control (RBAC) to manually modify that underlying BundleDeployment resource and can lead to situations where users can escalate their permissions to the cluster-admin role. By default, you should limit access to this resource to a small number of users that explicitly require access. The only supported client for the BundleDeployment resource during this Technology Preview release is the platform Operators manager component. OLM's Marketplace component is an optional cluster capability that can be disabled. 
This has implications during the Technology Preview release because platform Operators are currently only sourced from the redhat-operators catalog source that is managed by the Marketplace component. As a workaround, a cluster administrator can create this catalog source manually. The RukPak provisioner implementations do not have the ability to inspect the health or state of the resources that they are managing. This has implications for surfacing the generated BundleDeployment resource state to the PlatformOperator resource that owns it. If a registry+v1 bundle contains manifests that can be successfully applied to the cluster, but will fail at runtime, such as a Deployment object referencing a non-existent image, the result is a successful status being reflected in individual PlatformOperator and BundleDeployment resources. Cluster administrators configuring PlatformOperator resources before cluster creation cannot easily determine the desired package name without leveraging an existing cluster or relying on documented examples. There is currently no validation logic that ensures an individually configured PlatformOperator resource will be able to successfully roll out to the cluster. When using the Technology Preview OCI feature with the oc-mirror CLI plugin, the mirrored catalog embeds all of the Operator bundles, instead of filtering only on those specified in the image set configuration file. ( OCPBUGS-5085 ) There is currently a known issue when you run the Agent-based OpenShift Container Platform Installer to generate an ISO image from a directory that was already used for ISO image generation with a different release. An error message is displayed because the release versions do not match. As a workaround, create and use a new directory. ( OCPBUGS#5159 ) The defined capabilities in the install-config.yaml file are not applied in the Agent-based OpenShift Container Platform installation. Currently, there is no workaround. ( OCPBUGS#5129 ) Fully populated load balancers on RHOSP that are created with the OVN driver can contain pools that are stuck in a pending creation status. This issue can cause problems for clusters that are deployed on RHOSP. To resolve the issue, update your RHOSP packages. ( BZ#2042976 ) Bulk load-balancer member updates on RHOSP can return a 500 code in response to PUT requests. This issue can cause problems for clusters that are deployed on RHOSP. To resolve the issue, update your RHOSP packages. ( BZ#2100135 ) Clusters that use external cloud providers can fail to retrieve updated credentials after rotation. The following platforms are affected: Alibaba Cloud IBM Cloud VPC IBM Power OpenShift Virtualization RHOSP As a workaround, restart openshift-cloud-controller-manager pods by running the following command: USD oc delete pods --all -n openshift-cloud-controller-manager ( OCPBUGS-5036 ) There is a known issue when cloud-provider-openstack tries to create health monitors on OVN load balancers by using the API to create fully populated load balancers. These health monitors become stuck in a PENDING_CREATE status. After their deletion, associated load balancers are stuck in a PENDING_UPDATE status. There is no workaround. ( BZ#2143732 ) Due to a known issue, to use stateful IPv6 networks with clusters that run on RHOSP, you must include ip=dhcp,dhcpv6 in the kernel arguments of worker nodes . ( OCPBUGS-2104 ) It is not possible to create a macvlan on the physical function (PF) when a virtual function (VF) already exists. This issue affects the Intel E810 NIC.
( BZ#2120585 ) There is currently a known issue when manually configuring IPv6 addresses and routes on an IPv4 OpenShift Container Platform cluster. When converting to a dual-stack cluster, newly created pods remain in the ContainerCreating status. Currently, there is no workaround. This issue is planned to be addressed in a future OpenShift Container Platform release. ( OCPBUGS-4411 ) When an OVN cluster installed on IBM Public Cloud has more than 60 worker nodes, simultaneously creating 2000 or more services and route objects can cause pods created at the same time to remain in the ContainerCreating status. If this problem occurs, entering the oc describe pod <podname> command shows events with the following warning: FailedCreatePodSandBox... failed to configure pod interface: timed out waiting for OVS port binding (ovn-installed) . There is currently no workaround for this issue. ( OCPBUGS-3470 ) When a control plane machine is replaced on a cluster that uses the OVN-Kubernetes network provider, the pods related to OVN-Kubernetes might not start on the replacement machine. When this occurs, the lack of networking on the new machine prevents etcd from allowing it to replace the old machine. As a result, the cluster is stuck in this state and might become degraded. This behavior can occur when the control plane is replaced manually or by the control plane machine set. There is currently no workaround to resolve this issue if encountered. To avoid this issue, disable the control plane machine set and do not replace control plane machines manually if your cluster uses the OVN-Kubernetes network provider. ( OCPBUGS-5306 ) If a cluster that was deployed through ZTP has policies that do not become compliant, and no ClusterGroupUpdates object is present, you must restart the TALM pods. Restarting TALM creates the proper ClusterGroupUpdates object, which enforces the policy compliance. ( OCPBUGS-4065 ) Currently, a certificate compliance issue, specifically outputted as x509: certificate is not standards compliant , exists when you run the installation program on macOS for the purposes of installing an OpenShift Container Platform cluster on VMware vSphere. This issue relates to a known issue with the golang compiler in that the compiler does not recognize newly supported macOS certificate standards. No workaround exists for this issue. ( OSDOCS-5694 ) Currently, when using a persistent volume (PV) that contains a very large number of files, the pod might not start or can take an excessive amount of time to start. For more information, see this knowledge base article . ( BZ1987112 ) Creating pods with Azure File NFS volumes that are scheduled to the control plane node causes the mount to be denied. ( OCPBUGS-18581 ) To work around this issue: If your control plane nodes are schedulable, and the pods can run on worker nodes, use nodeSelector or Affinity to schedule the pod in worker nodes. When installing an OpenShift Container Platform cluster with static IP addressing and Tang encryption, nodes start without network settings. This condition prevents nodes from accessing the Tang server, causing installation to fail. To address this condition, you must set the network settings for each node as ip installer arguments. For installer-provisioned infrastructure, before installation provide the network settings as ip installer arguments for each node by executing the following steps. Create the manifests. 
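For example, assuming an installation directory of ~/clusterconfigs (the directory name here is an assumption that matches the paths used in the next step), the manifests can typically be generated by running:

USD ./openshift-install --dir ~/clusterconfigs create manifests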
For each node, modify the BareMetalHost custom resource with annotations to include the network settings. For example: USD cd ~/clusterconfigs/openshift USD vim openshift-worker-0.yaml apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: annotations: bmac.agent-install.openshift.io/installer-args: '["--append-karg", "ip=<static_ip>::<gateway>:<netmask>:<hostname_1>:<interface>:none", "--save-partindex", "1", "-n"]' 1 2 3 4 5 inspect.metal3.io: disabled bmac.agent-install.openshift.io/hostname: <fqdn> 6 bmac.agent-install.openshift.io/role: <role> 7 generation: 1 name: openshift-worker-0 namespace: mynamespace spec: automatedCleaningMode: disabled bmc: address: idrac-virtualmedia://<bmc_ip>/redfish/v1/Systems/System.Embedded.1 8 credentialsName: bmc-secret-openshift-worker-0 disableCertificateVerification: true bootMACAddress: 94:6D:AE:AB:EE:E8 bootMode: "UEFI" rootDeviceHints: deviceName: /dev/sda For the ip settings, replace: 1 <static_ip> with the static IP address for the node, for example, 192.168.1.100 2 <gateway> with the IP address of your network's gateway, for example, 192.168.1.1 3 <netmask> with the network mask, for example, 255.255.255.0 4 <hostname_1> with the node's hostname, for example, node1.example.com 5 <interface> with the name of the network interface, for example, eth0 6 <fqdn> with the fully qualified domain name of the node 7 <role> with worker or master to reflect the node's role 8 <bmc_ip> with with the BMC IP address and the protocol and path of the BMC, as needed. Save the file to the clusterconfigs/openshift directory. Create the cluster. When installing with the Assisted Installer, before installation modify each node's installer arguments using the API to append the network settings as ip installer arguments. For example: USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{infra_env_id}/hosts/USD{host_id}/installer-args \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "args": [ "--append-karg", "ip=<static_ip>::<gateway>:<netmask>:<hostname_1>:<interface>:none", 1 2 3 4 5 "--save-partindex", "1", "-n" ] } ' | jq For the network settings, replace: 1 <static_ip> with the static IP address for the node, for example, 192.168.1.100 2 <gateway> with the IP address of your network's gateway, for example, 192.168.1.1 3 <netmask> with the network mask, for example, 255.255.255.0 4 <hostname_1> with the node's hostname, for example, node1.example.com 5 <interface> with the name of the network interface, for example, eth0 . Contact Red Hat Support for additional details and assistance. ( OCPBUGS-23119 ) 1.9. Asynchronous errata updates Security, bug fix, and enhancement updates for OpenShift Container Platform 4.12 are released as asynchronous errata through the Red Hat Network. All OpenShift Container Platform 4.12 errata is available on the Red Hat Customer Portal . See the OpenShift Container Platform Life Cycle for more information about asynchronous errata. Red Hat Customer Portal users can enable errata notifications in the account settings for Red Hat Subscription Management (RHSM). When errata notifications are enabled, users are notified through email whenever new errata relevant to their registered systems are released. Note Red Hat Customer Portal user accounts must have systems registered and consuming OpenShift Container Platform entitlements for OpenShift Container Platform errata notification emails to generate. 
This section will continue to be updated over time to provide notes on enhancements and bug fixes for future asynchronous errata releases of OpenShift Container Platform 4.12. Versioned asynchronous releases, for example with the form OpenShift Container Platform 4.12.z, will be detailed in subsections. In addition, releases in which the errata text cannot fit in the space provided by the advisory will be detailed in subsections that follow. Important For any OpenShift Container Platform release, always review the instructions on updating your cluster properly. 1.9.1. RHSA-2022:7399 - OpenShift Container Platform 4.12.0 image release, bug fix, and security update advisory Issued: 17 January 2023 OpenShift Container Platform release 4.12.0, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2022:7399 advisory. The RPM packages that are included in the update are provided by the RHSA-2022:7398 advisory. Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release: You can view the container images in this release by running the following command: USD oc adm release info 4.12.0 --pullspecs 1.9.1.1. Features 1.9.1.1.1. General availability of pod-level bonding for secondary networks With this update, Using pod-level bonding is now generally available. 1.9.2. RHSA-2023:0449 - OpenShift Container Platform 4.12.1 bug fix and security update Issued: 30 January 2023 OpenShift Container Platform release 4.12.1, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2023:0449 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:0448 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.1 --pullspecs 1.9.2.1. Bug fixes Previously, due to a wrong check in the OpenStack cloud provider, the load balancers were populated with External IP addresses when all of the Octavia load balancers were created. This increased the time for the load balancers to be handled. With this update, load balancers are still created sequentially and External IP addresses are populated one-by-one. ( OCPBUGS-5403 ) Previously, the cluster-image-registry-operator would default to using persistent volume claim (PVC) when it failed to reach Swift. With this update, failure to connect to Red Hat OpenStack Platform (RHOSP) API or other incidental failures cause the cluster-image-registry-operator to retry the probe. During the retry, the default to PVC only occurs if the RHOSP catalog is correctly found, and it does not contain object storage; or alternatively, if RHOSP catalog is there and the current user does not have permission to list containers. ( OCPBUGS-5154 ) 1.9.2.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.3. RHSA-2023:0569 - OpenShift Container Platform 4.12.2 bug fix and security update Issued: 7 February 2023 OpenShift Container Platform release 4.12.2, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2023:0569 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:0568 advisory. 
You can view the container images in this release by running the following command: USD oc adm release info 4.12.2 --pullspecs 1.9.3.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.4. RHSA-2023:0728 - OpenShift Container Platform 4.12.3 bug fix and security update Issued: 16 February 2023 OpenShift Container Platform release 4.12.3, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2023:0728 advisory. The RPM packages that are included in the update are provided by the RHSA-2023:0727 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.3 --pullspecs 1.9.4.1. Bug fixes Previously, when a control plane machine was replaced on a cluster that used the OVN-Kubernetes network provider, the pods related to OVN-Kubernetes sometimes did not start on the replacement machine, and prevented etcd from allowing it to replace the old machine. With this update, pods related to OVN-Kubernetes start on the replacement machine as expected. ( OCPBUGS-6494 ) 1.9.4.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.5. RHSA-2023:0769 - OpenShift Container Platform 4.12.4 bug fix and security update Issued: 20 February 2023 OpenShift Container Platform release 4.12.4, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2023:0769 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:0768 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.4 --pullspecs 1.9.5.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.6. RHSA-2023:0890 - OpenShift Container Platform 4.12.5 bug fix and security update Issued: 28 February 2023 OpenShift Container Platform release 4.12.5, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2023:0890 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:0889 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.5 --pullspecs 1.9.6.1. Bug fixes Previously, in the repositories list, you could see PipelineRuns only when the status was Succeeded or Failed but not when the status was Running . With this fix, when a PipelineRun is triggered, you can see it in the repositories list with the status Running . ( OCPBUGS-6816 ) Previously, when creating a Secret , the Start Pipeline model created an invalid JSON value. As a result, the Secret was unusable and the PipelineRun could fail. With this fix, the Start Pipeline model creates a valid JSON value for the Secret . Now, you can create valid Secrets while starting a Pipeline. ( OCPBUGS-6671 ) Previously, when a BindableKinds resource did not have a status, the web console crashed, fetching and showing the same data in a loop. With this fix, you can set the BindableKinds resource status array to [] , expecting it to exist without a status field. As a result, the web browser or the application does not crash.
( OCPBUGS-4072 ) Previously, the associated webhook <kn-service-name>-github-webhook-secret did not delete when deleting a Knative ( kn ) service from OpenShift Container Platform. With this fix, all the associated webhook secrets are deleted. Now, you can create a Knative ( kn ) service with the same name as the deleted one. ( OCPBUGS-7437 ) 1.9.6.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.7. RHSA-2023:1034 - OpenShift Container Platform 4.12.6 bug fix and security update Issued: 7 March 2023 OpenShift Container Platform release 4.12.6, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:1034 advisory. The RPM packages that are included in the update are provided by the RHSA-2023:1033 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.6 --pullspecs 1.9.7.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.8. RHBA-2023:1163 - OpenShift Container Platform 4.12.7 bug fix update Issued: 13 March 2023 OpenShift Container Platform release 4.12.7 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:1163 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:1162 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.7 --pullspecs 1.9.8.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.9. RHBA-2023:1269 - OpenShift Container Platform 4.12.8 bug fix and security update Issued: 21 March 2023 OpenShift Container Platform release 4.12.8, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:1269 advisory. The RPM packages that are included in the update are provided by the RHSA-2023:1268 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.8 --pullspecs 1.9.9.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.10. RHSA-2023:1409 - OpenShift Container Platform 4.12.9 bug fix and security update Issued: 27 March 2023 OpenShift Container Platform release 4.12.9, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2023:1409 advisory. The RPM packages that are included in the update are provided by the RHSA-2023:1408 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.9 --pullspecs 1.9.10.1. Bug fixes Previously, validation was not preventing users from installing a GCP cluster into a shared VPC if they did not enable the Technology Preview feature gate. Therefore, you could install a cluster into a shared VPC without enabling the Technology Preview feature gate. This release added a feature gate validation to 4.12 so you must enable featureSet: TechPreviewNoUpgrade to install a GCP cluster into a shared VPC. 
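For illustration only, the feature set is enabled with a top-level featureSet field in the install-config.yaml file; the other values in this sketch are placeholders:

apiVersion: v1
baseDomain: example.com
metadata:
  name: example-cluster
featureSet: TechPreviewNoUpgrade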
( OCPBUGS-7469 ) Previously, MTU migration configuration would sometimes be cleaned up before the migration was complete causing the migration to fail. This release ensures that the MTU migration is preserved while migration is in progress so that the migration can complete successfully. ( OCPBUGS-7445 ) 1.9.10.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.11. RHBA-2023:1508 - OpenShift Container Platform 4.12.10 bug fix update Issued: 3 April 2023 OpenShift Container Platform release 4.12.10 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:1508 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:1507 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.10 --pullspecs 1.9.11.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.12. RHSA-2023:1645 - OpenShift Container Platform 4.12.11 bug fix and security update Issued: 11 April 2023 OpenShift Container Platform release 4.12.11, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:1645 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:1644 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.11 --pullspecs 1.9.12.1. Features 1.9.12.1.1. New flag for the oc-mirror plugin: --max-nested-paths With this update, you can now use the --max-nested-paths flag for the oc-mirror plugin to specify the maximum number of nested paths for destination registries that limit nested paths. The default is 2 . 1.9.12.1.2. New flag for the oc-mirror plugin: --skip-pruning With this update, you can now use the --skip-pruning flag for the oc-mirror plugin to disable automatic pruning of images from the target mirror registry. 1.9.12.2. Bug fixes Previously, the openshift-install agent create cluster-manifests command required a non-empty list of imageContentSources in the install-config.yaml file. If no image content sources were supplied, the command generated the error failed to write asset (Mirror Registries Config) to disk: failed to write file: open .: is a directory . With this update, the command works whether or not the imageContentSources section of install-config.yaml file contains anything. ( OCPBUGS-8384 ) Previously, the OpenStack Machine API provider had to be restarted so that new cloud credentials were used in the event of a rotation of the OpenStack clouds.yaml file. Consequently, the ability of a MachineSet to scale to zero was affected. With this update, cloud credentials are no longer cached and the OpenStack Machine API provider reads the corresponding secret on demand. ( OCPBUGS-10603 ) 1.9.12.3. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.13. RHBA-2023:1734 - OpenShift Container Platform 4.12.12 bug fix Issued: 13 April 2023 OpenShift Container Platform release 4.12.12 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:1734 advisory. There are no RPM packages for this update. 
You can view the container images in this release by running the following command: USD oc adm release info 4.12.12 --pullspecs 1.9.13.1. Updating All OpenShift Container Platform 4.12 users are advised that the only defect fixed in this release is limited to install time; therefore, there is no need to update previously installed clusters to this version. 1.9.14. RHBA-2023:1750 - OpenShift Container Platform 4.12.13 bug fix update Issued: 19 April 2023 OpenShift Container Platform release 4.12.13 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:1750 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:1749 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.13 --pullspecs 1.9.14.1. Features 1.9.14.1.1. Pod security admission restricted enforcement (Technology Preview) With this release, pod security admission restricted enforcement is available as a Technology Preview feature by enabling the TechPreviewNoUpgrade feature set. If you enable the TechPreviewNoUpgrade feature set, pods are rejected if they violate pod security standards, instead of only logging a warning. Note Pod security admission restricted enforcement is only activated if you enable the TechPreviewNoUpgrade feature set after your OpenShift Container Platform cluster is installed. It is not activated if you enable the TechPreviewNoUpgrade feature set during cluster installation. For more information, see Understanding feature gates . 1.9.14.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.15. RHBA-2023:1858 - OpenShift Container Platform 4.12.14 bug fix update Issued: 24 April 2023 OpenShift Container Platform release 4.12.14 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:1858 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:1857 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.14 --pullspecs 1.9.15.1. Features 1.9.15.1.1. Cloud provider OpenStack is updated to 1.25 With this release, Cloud Provider Red Hat OpenStack Platform (RHOSP) is updated to 1.25.5. The update includes the addition of an annotation for real load balancer IP addresses and the global source for math/rand packages are seeded in main.go . 1.9.15.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.16. RHBA-2023:2037 - OpenShift Container Platform 4.12.15 bug fix update Issued: 3 May 2023 OpenShift Container Platform release 4.12.15 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:2037 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:2036 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.15 --pullspecs 1.9.16.1. Bug fixes Previously, the Cluster Network Operator (CNO) configuration ignored Kuryr's maximum transmission unit (MTU) settings when using the Red Hat OpenStack Platform (RHOSP) Networking service (neutron) component to create a network for OpenShift services. 
CNO would create a network in Neutron with the wrong MTU property, and this action could cause incompatibility issues among network components. With this update, the CNO does not ignore the Kuryr MTU setting when creating the network for services. You can then use the network to host OpenShift services. ( OCPBUGS-4896 ) 1.9.16.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.17. RHSA-2023:2110 - OpenShift Container Platform 4.12.16 bug fix and security update Issued: 10 May 2023 OpenShift Container Platform release 4.12.16, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2023:2110 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:2109 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.16 --pullspecs 1.9.17.1. Bug fixes Previously, in the Import from Git and Deploy Image flows, the Resource Type section was moved to the Advanced section. As a result, it was difficult to identify the type of resource created. With this fix, the Resource Type section is moved to the General section. ( OCPBUGS-7395 ) 1.9.17.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.18. RHBA-2023:2699 - OpenShift Container Platform 4.12.17 bug fix update Issued: 18 May 2023 OpenShift Container Platform release 4.12.17 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:2699 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:2698 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.17 --pullspecs 1.9.18.1. Bug fixes Previously, you used the edit form for creating ConfigMaps , Secrets , Deployments , and DeploymentConfigs . For BuildConfigs , you used the edit form only for editing. With this fix, you can use the edit form for creating BuildConfigs too. ( OCPBUGS-9336 ) 1.9.18.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.19. RHBA-2023:3208 - OpenShift Container Platform 4.12.18 bug fix update Issued: 23 May 2023 OpenShift Container Platform release 4.12.18 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:3208 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:3207 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.18 --pullspecs 1.9.19.1. Bug fixes Previously, the Samples page in OpenShift Container Platform did not allow you to distinguish between the types of samples listed. With this fix, you can identify the sample type from the badges displayed on the Samples page. ( OCPBUGS-7446 ) Previously, when viewing resource consumption for a specific pod, graphs displaying CPU usage and Memory Usage metrics were stacked even though these metrics are static values, which should be displayed as a static line across the graph. With this update, OpenShift Container Platform correctly displays the values for CPU Usage and Memory Usage in the monitoring dashboard. ( OCPBUGS-5353 ) 1.9.19.2.
Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.20. RHSA-2023:3287 - OpenShift Container Platform 4.12.19 bug fix and security update Issued: 31 May 2023 OpenShift Container Platform release 4.12.19 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2023:3287 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:3286 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.19 --pullspecs 1.9.20.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.21. RHSA-2023:3410 - OpenShift Container Platform 4.12.20 bug fix update Issued: 7 June 2023 OpenShift Container Platform release 4.12.20 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2023:3410 advisory. The RPM packages that are included in the update are provided by the RHSA-2023:3409 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.20 --pullspecs 1.9.21.1. Bug fixes Previously, mirroring from a registry to a disk by using an image set configuration file that specifies several digests of the same image, without tags, caused an error because the oc-mirror plugin added a default tag latest to all the images (digests). With this update, the oc-mirror plugin now uses a truncated digest of the image, which eliminates the error. ( OCPBUGS-13432 ) 1.9.21.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.22. RHBA-2023:3546 - OpenShift Container Platform 4.12.21 bug fix and security update Issued: 14 June 2023 OpenShift Container Platform release 4.12.21, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:3546 advisory. The RPM packages that are included in the update are provided by the RHSA-2023:3545 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.21 --pullspecs 1.9.22.1. Bug fixes Previously, on single-node OpenShift, a node reboot could trigger a race condition that resulted in the admission of application pods requesting devices on the node, even if those devices were unhealthy or unavailable for allocation. This resulted in runtime failures when the application tried to access devices. With this update, the resources requested by the pod are only allocated if the device plugin has registered itself with the kubelet and healthy devices are present on the node to be allocated. If these conditions are not met, the pod can fail at admission with an UnexpectedAdmissionError error, which is expected behavior. If the application pod is part of a deployment, subsequent pods are created after a failure and ultimately run successfully when devices are available for allocation. ( OCPBUGS-14437 ) 1.9.22.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.23. RHSA-2023:3615 - OpenShift Container Platform 4.12.22 bug fix and security update Issued: 26 June 2023 OpenShift Container Platform release 4.12.22, which includes security updates, is now available.
The list of bug fixes that are included in the update is documented in the RHSA-2023:3615 advisory. The RPM packages that are included in the update are provided by the RHSA-2023:3613 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.22 --pullspecs 1.9.23.1. Bug fixes Previously, when client TLS (mTLS) was configured on an Ingress Controller and the certificate authority (CA) in the client CA bundle required more than 1MB of certificate revocation lists (CRLs) to be downloaded, ConfigMap object size limitations prevented the CRL ConfigMap from being updated. As a result of the missing CRLs, connections with valid client certificates may have been rejected with the error unknown ca . With this update, the CRL ConfigMap for each Ingress Controller no longer exists; instead, each router pod directly downloads CRLs, ensuring connections with valid client certificates are no longer rejected. ( OCPBUGS-14454 ) Previously, when client TLS (mTLS) was configured on an Ingress Controller, mismatches between the distributing certificate authority (CA) and the issuing CA caused the incorrect certificate revocation list (CRL) to be downloaded instead of the correct one, causing connections with valid client certificates to be rejected with the error message unknown ca . With this update, downloaded CRLs are now tracked by the CA that distributes them. This ensures that valid client certificates are no longer rejected. ( OCPBUGS-14455 ) 1.9.23.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.24. RHSA-2023:3925 - OpenShift Container Platform 4.12.23 bug fix and security update Issued: 6 July 2023 OpenShift Container Platform release 4.12.23, which includes security updates, is now available. This update includes a Red Hat security bulletin for customers who run OpenShift Container Platform in FIPS mode. For more information, see RHSB-2023:001 . The list of bug fixes that are included in the update is documented in the RHSA-2023:3925 advisory. The RPM packages that are included in the update are provided by the RHSA-2023:3924 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.23 --pullspecs 1.9.24.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.25. RHBA-2023:3977 - OpenShift Container Platform 4.12.24 bug fix and security update Issued: 12 July 2023 OpenShift Container Platform release 4.12.24, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:3977 advisory. The RPM packages that are included in the update are provided by the RHSA-2023:3976 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.24 --pullspecs 1.9.25.1. Features 1.9.25.2. NUMA-aware scheduling with the NUMA Resources Operator is generally available NUMA-aware scheduling with the NUMA Resources Operator was previously introduced as a Technology Preview in OpenShift Container Platform 4.10. It is now generally available in OpenShift Container Platform version 4.12.24 and later.
The NUMA Resources Operator deploys a NUMA-aware secondary scheduler that makes scheduling decisions for workloads based on a complete picture of available NUMA zones in clusters. This enhanced NUMA-aware scheduling ensures that latency-sensitive workloads are processed in a single NUMA zone for maximum efficiency and performance. This update adds the following features: Fine-tuning of API polling for NUMA resource reports. Configuration options at the node group level for the node topology exporter. Note NUMA-aware scheduling with the NUMA Resources Operator is not yet available on single-node OpenShift. For more information, see Scheduling NUMA-aware workloads . 1.9.25.3. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.26. RHBA-2023:4048 - OpenShift Container Platform 4.12.25 bug fix update Issued: 19 July 2023 OpenShift Container Platform release 4.12.25 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:4048 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:4047 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.25 --pullspecs 1.9.26.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.27. RHBA-2023:4221 - OpenShift Container Platform 4.12.26 bug fix update Issued: 26 July 2023 OpenShift Container Platform release 4.12.26 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:4221 advisory. There are no RPM packages for this update. You can view the container images in this release by running the following command: USD oc adm release info 4.12.26 --pullspecs 1.9.27.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.28. RHBA-2023:4319 - OpenShift Container Platform 4.12.27 bug fix update Issued: 2 August 2023 OpenShift Container Platform release 4.12.27 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:4319 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:4322 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.27 --pullspecs 1.9.28.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.29. RHBA-2023:4440 - OpenShift Container Platform 4.12.28 bug fix update Issued: 9 August 2023 OpenShift Container Platform release 4.12.28 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:4440 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:4443 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.28 --pullspecs 1.9.29.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.30. RHBA-2023:4608 - OpenShift Container Platform 4.12.29 bug fix update Issued: 16 August 2023 OpenShift Container Platform release 4.12.29 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:4608 advisory. 
The RPM packages that are included in the update are provided by the RHBA-2023:4611 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.29 --pullspecs 1.9.30.1. Bug fixes Previously, the Ingress Operator did not include an Amazon Web Services (AWS) permission in its cloud credentials request. This impacted the management of domain name system (DNS) records in the Commercial Cloud Services (C2S) us-iso-east-1 and the Secret Commercial Cloud Services (SC2S) us-isob-east-1 AWS Regions. If you installed an OpenShift Container Platform cluster in a C2S or an SC2S AWS Region, the Ingress Operator failed to publish DNS records for the Route 53 service and you received an error message similar to the following example: The DNS provider failed to ensure the record: failed to find hosted zone for record: failed to get tagged resources: AccessDenied: User: [...] is not authorized to perform: route53:ListTagsForResources on resource: [...] With this update, the Ingress Operator's cloud credentials request includes the route53:ListTagsForResources permission, so that the Operator can publish DNS records in the C2S and SC2S AWS Regions for the Route 53 service. ( OCPBUGS-15467 ) 1.9.30.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.31. RHSA-2023:4671 - OpenShift Container Platform 4.12.30 bug fix update Issued: 23 August 2023 OpenShift Container Platform release 4.12.30, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2023:4671 advisory. The RPM packages that are included in the update are provided by the RHSA-2023:4674 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.30 --pullspecs 1.9.31.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.32. RHBA-2023:4756 - OpenShift Container Platform 4.12.31 bug fix update Issued: 31 August 2023 OpenShift Container Platform release 4.12.31 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:4756 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:4759 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.31 --pullspecs 1.9.32.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.33. RHBA-2023:4900 - OpenShift Container Platform 4.12.32 bug fix update Issued: 6 September 2023 OpenShift Container Platform release 4.12.32 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:4900 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:4903 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.32 --pullspecs 1.9.33.1. Bug fix Previously, an issue was observed in OpenShift Container Platform with some pods getting stuck in the terminating state. This affected the reconciliation loop of the allowlist controller, which resulted in unwanted retries that caused the creation of multiple pods. 
With this update, the allowlist controller only inspects pods that belong to the current daemon set. As a result, retries no longer occur when one or more pods are not ready. ( OCPBUGS-16019 ) 1.9.33.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.34. RHBA-2023:5016 - OpenShift Container Platform 4.12.33 bug fix update Issued: 12 September 2023 OpenShift Container Platform release 4.12.33 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:5016 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:5018 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.33 --pullspecs 1.9.34.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.35. RHBA-2023:5151 - OpenShift Container Platform 4.12.34 bug fix update Issued: 20 September 2023 OpenShift Container Platform release 4.12.34 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:5151 advisory. There are no RPM packages for this release. You can view the container images in this release by running the following command: USD oc adm release info 4.12.34 --pullspecs 1.9.35.1. Bug fixes Previously, a non-compliant upstream DNS server that provided a UDP response larger than the bufsize of 512 bytes specified by OpenShift Container Platform caused an overflow error in CoreDNS, in which a response to a DNS query was not given. With this update, users can configure the protocolStrategy field on the dnses.operator.openshift.io custom resource to be "TCP" (see the sketch below). This resolves issues with non-compliant upstream DNS servers. ( OCPBUGS-15251 ) Previously, the OpenShift Container Platform Router directed traffic to a route with a weight of 0 when it had only one back end. With this update, the router no longer sends traffic to routes with a single back end with weight 0 . ( OCPBUGS-18639 ) Previously, the cloud credentials used in the Manila CSI Driver Operator were cached, resulting in authentication issues if these credentials were rotated. With this update, this issue is resolved. ( OCPBUGS-18475 ) 1.9.35.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.36. RHBA-2023:5321 - OpenShift Container Platform 4.12.35 bug fix update Issued: 27 September 2023 OpenShift Container Platform release 4.12.35 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:5321 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:5323 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.35 --pullspecs 1.9.36.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.37. RHSA-2023:5390 - OpenShift Container Platform 4.12.36 bug fix and security update Issued: 4 October 2023 OpenShift Container Platform release 4.12.36, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2023:5390 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:5392 advisory.
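The protocolStrategy setting described in the 4.12.34 fixes above is applied to the default DNS operator custom resource. The following is a minimal sketch that assumes the field is set under spec.upstreamResolvers; verify the exact field location against the DNS Operator API reference for your cluster before applying it:

USD oc patch dnses.operator.openshift.io/default --type=merge -p '{"spec":{"upstreamResolvers":{"protocolStrategy":"TCP"}}}'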
You can view the container images in this release by running the following command: USD oc adm release info 4.12.36 --pullspecs 1.9.37.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.38. RHBA-2023:5450 - OpenShift Container Platform 4.12.37 bug fix update Issued: 11 October 2023 OpenShift Container Platform release 4.12.37 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2023:5450 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:5452 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.37 --pullspecs 1.9.38.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.39. RHSA-2023:5677 - OpenShift Container Platform 4.12.39 bug fix and security update Issued: 18 October 2023 OpenShift Container Platform release 4.12.39, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2023:5677 advisory. The RPM packages that are included in the update are provided by the RHSA-2023:5679 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.39 --pullspecs 1.9.39.1. Bug fixes Previously, CoreDNS would crash if an EndpointSlice port was created without a port number. With this update, validation was added to CoreDNS so it will no longer crash in this situation. ( OCPBUGS-20144 ) Previously, large clusters were slow to attach volumes through cinder-csi-driver . With this update, cinder-csi-driver is updated with slow volume attachment when the number of Cinder volumes in the project exceed 1000. ( OCPBUGS-20124 ) 1.9.39.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.40. RHSA-2023:5896 - OpenShift Container Platform 4.12.40 bug fix and security update Issued: 25 October 2023 OpenShift Container Platform release 4.12.40, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2023:5896 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:5898 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.40 --pullspecs 1.9.40.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.41. RHSA-2023:6126 - OpenShift Container Platform 4.12.41 bug fix and security update Issued: 2 November 2023 OpenShift Container Platform release 4.12.41, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2023:6126 advisory. The RPM packages that are included in the update are provided by the RHSA-2023:6128 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.41 --pullspecs 1.9.41.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.42. 
RHSA-2023:6276 - OpenShift Container Platform 4.12.42 bug fix and security update Issued: 8 November 2023 OpenShift Container Platform release 4.12.42, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2023:6276 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:6278 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.42 --pullspecs 1.9.42.1. Feature 1.9.42.1.1. APIServer.config.openshift.io is now tracked by Insights Operator After running the Insights Operator, a new file is now available in the archive in the path config/apiserver.json with the information about the audit profile for APIServer.config.openshift.io . Access to audit profiles help you to understand what audit policy is common practice, what profiles are most commonly used, what differences there are between industries, and what kind of customization is applied. 1.9.42.2. Bug fixes Previously, the Cluster Version Operator (CVO) did not reconcile SecurityContextConstraints (SCC) resources as expected. The CVO now properly reconciles the volumes field in the SecurityContextConstraints resources towards the state defined in the release image. User modifications to system SCC resources are tolerated. For more information about how SCC resources can impact updating, see Resolving Detected modified SecurityContextConstraints update gate before upgrading to 4.14 . ( OCPBUGS-22198 ) Previously, a large number of ClusterServiceVersion (CSV) resources on startup caused a pod running the Node Tuning Operator (NTO) to restart and loop, which resulted in an error. With this update, the issue is fixed. ( OCPBUGS-21837 ) 1.9.42.3. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.43. RHSA-2023:6842 - OpenShift Container Platform 4.12.43 bug fix and security update Issued: 16 November 2023 OpenShift Container Platform release 4.12.43, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2023:6842 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:6844 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.43 --pullspecs 1.9.43.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.44. RHSA-2023:6894 - OpenShift Container Platform 4.12.44 bug fix and security update Issued: 21 November 2023 OpenShift Container Platform release 4.12.44, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2023:6894 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:6896 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.44 --pullspecs 1.9.44.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.45. RHSA-2023:7608 - OpenShift Container Platform 4.12.45 bug fix and security update Issued: 6 December 2023 OpenShift Container Platform release 4.12.45, which includes security updates, is now available. 
The list of bug fixes that are included in the update is documented in the RHSA-2023:7608 advisory. The RPM packages that are included in the update are provided by the RHSA-2023:7610 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.45 --pullspecs 1.9.45.1. Bug fixes Previously, using the cluster autoscaler with nodes that have CSI storage would cause the cluster autoscaler pods to enter a CrashLoopBackoff status. With this release, you can successfully use the cluster autoscaler with nodes that have CSI storage. ( OCPBUGS-23274 ) Previously, you could not assign an egress IP to the egress node on an Azure private cluster. With this release, egress IP is enabled for Azure private clusters that use outbound rules to achieve outbound connectivity. ( OCPBUGS-22949 ) Previously, there was no suitable virtual media device for Cisco UCS Blade servers. With this release, you can use Redfish virtual media to provision Cisco UCS hardware. ( OCPBUGS-19064 ) 1.9.45.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.46. RHSA-2023:7823 - OpenShift Container Platform 4.12.46 bug fix and security update Issued: 4 January 2024 OpenShift Container Platform release 4.12.46, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2023:7823 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:7825 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.46 --pullspecs 1.9.46.1. Bug fixes Previously, the Image Registry Operator made API calls to the Storage Account List endpoint as part of obtaining access keys every 5 minutes. In projects with several OpenShift Container Platform clusters, this could lead to API rate limits being reached, which could result in several HTTP errors when attempting to create new clusters. With this release, the time between calls is increased from 5 minutes to 20 minutes. ( OCPBUGS-22125 ) 1.9.46.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.47. RHSA-2024:0198 - OpenShift Container Platform 4.12.47 bug fix and security update Issued: 17 January 2024 OpenShift Container Platform release 4.12.47, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:0198 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:0200 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.47 --pullspecs 1.9.47.1. Bug fixes Previously, the spec.storage.deviceClasses.thinPoolConfig.overprovisionRatio value on a Logical Volume Manager Storage (LVMS) cluster custom resource could only be set to a minimum of 2 . With this release, the spec.storage.deviceClasses.thinPoolConfig.overprovisionRatio value can now be set to as low as 1 , which disables overprovisioning (see the sketch below). ( OCPBUGS-24480 ) 1.9.47.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI .
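A minimal sketch of an LVMCluster custom resource that uses the overprovisionRatio setting from the 4.12.47 fix above follows; the resource name, namespace, device class name, and thin pool values are placeholders, and other fields are omitted:

apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
    - name: vg1
      thinPoolConfig:
        name: thin-pool-1
        sizePercent: 90
        overprovisionRatio: 1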
1.9.48. RHSA-2024:0485 - OpenShift Container Platform 4.12.48 bug fix and security update Issued: 31 January 2024 OpenShift Container Platform release 4.12.48, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:0485 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:0489 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.48 --pullspecs 1.9.48.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.49. RHSA-2024:0664 - OpenShift Container Platform 4.12.49 bug fix and security update Issued: 9 February 2024 OpenShift Container Platform release 4.12.49, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:0664 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:0666 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.49 --pullspecs 1.9.49.1. Bug fixes Previously, pods assigned an IP from the pool created by the Whereabouts CNI plugin were getting stuck in the ContainerCreating state after a forced node reboot. With this release, the Whereabouts CNI plugin issue associated with IP allocation after a forced node reboot is resolved. ( OCPBUGS-16008 ) 1.9.49.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.50. RHSA-2024:0833 - OpenShift Container Platform 4.12.50 bug fix and security update Issued: 21 February 2024 OpenShift Container Platform release 4.12.50, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:0833 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:0835 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.50 --pullspecs 1.9.50.1. Bug fixes Previously, CPU limits applied to the Amazon Elastic File System (EFS) Container Storage Interface (CSI) driver container caused performance degradation issues for I/O operations to EFS volumes. Now, the CPU limits for the EFS CSI driver are removed so the performance degradation issue no longer occurs. ( OCPBUGS-29066 ) 1.9.50.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.51. RHSA-2024:1052 - OpenShift Container Platform 4.12.51 bug fix and security update Issued: 6 March 2024 OpenShift Container Platform release 4.12.51, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:1052 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:1054 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.51 --pullspecs 1.9.51.1. Bug fixes Previously, when the most recent and default channels were selectively mirrored, and a new release introduced a new channel, the current default channel became invalid. This caused the automatic assignment of the new default channel to fail.
With this release, you can now define a defaultChannel field in the ImageSetConfig custom resource (CR) that overrides the currentDefault channel. ( OCPBUGS-29232 ) Previously, the compat-openssl10 package was included in the Red Hat Enterprise Linux CoreOS (RHCOS). This package did not meet Common Vulnerabilities and Exposures (CVE) remediation requirements for Federal Risk and Authorization Management Program (FedRAMP). With this release, compat-openssl10 has been removed from the RHCOS. As a result, security scanners will no longer identify potential common vulnerabilities and exposures (CVEs) in this package. Any binary running on the host RHCOS requiring Red Hat Enterprise Linux (RHEL) OpenSSL compatibility must be upgraded to support RHEL8 OpenSSL. ( OCPBUGS-22928 ) 1.9.51.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.52. RHSA-2024:1265 - OpenShift Container Platform 4.12.53 bug fix update Issued: 20 March 2024 OpenShift Container Platform release 4.12.53 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:1265 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:1267 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.53 --pullspecs 1.9.52.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.53. RHSA-2024:1572 - OpenShift Container Platform 4.12.54 bug fix and security update Issued: 3 April 2024 OpenShift Container Platform release 4.12.54, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:1572 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:1574 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.54 --pullspecs 1.9.53.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.54. RHSA-2024:1679 - OpenShift Container Platform 4.12.55 bug fix and security update Issued: 8 April 2024 OpenShift Container Platform release 4.12.55, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:1679 advisory. There are no RPM packages for this update. You can view the container images in this release by running the following command: USD oc adm release info 4.12.55 --pullspecs 1.9.54.1. Bug fixes Previously, the manila-csi-driver-controller-metrics service had empty endpoints due to an incorrect name for the app selector. With this release the app selector name is changed to openstack-manila-csi and the issue is fixed. ( OCPBUGS-30295 ) 1.9.54.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.55. RHSA-2024:1896 - OpenShift Container Platform 4.12.56 bug fix and security update Issued: 25 April 2024 OpenShift Container Platform release 4.12.56, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:1896 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:1899 advisory. 
You can view the container images in this release by running the following command: USD oc adm release info 4.12.56 --pullspecs 1.9.55.1. Bug fixes Previously, two components ( tuned and irqbalanced ) were modifying the irq CPU affinity simultaneously, which caused issues. With this release, the irqbalanced component is the only component that configures the interrupt affinity, and the issues are resolved. ( OCPBUGS-32205 ) 1.9.55.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.56. RHSA-2024:2782 - OpenShift Container Platform 4.12.57 bug fix and security update Issued: 16 May 2024 OpenShift Container Platform release 4.12.57, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:2782 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:2784 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.57 --pullspecs 1.9.56.1. Bug fixes Previously, the clear-irqbalance-banned-cpus.sh script set an empty value for IRQBALANCE_BANNED_CPUS in the /etc/sysconfig/irqbalance pod annotation. As a result, the IRQs were balanced only over the reserved CPUs. With this release, the clear-irqbalance-banned-cpus.sh script sets the banned mask to zeros on startup and the issue has been resolved. ( OCPBUGS-31442 ) Previously, a kernel regression introduced in OpenShift Container Platform versions 4.15.0, 4.14.14, 4.13.36, and 4.12.54 led to potential kernel panics in nodes that mounted CephFS volumes. In this release, the regression is fixed so that the issue no longer occurs. ( OCPBUGS-33253 ) 1.9.56.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.57. RHSA-2024:3349 - OpenShift Container Platform 4.12.58 bug fix and security update Issued: 30 May 2024 OpenShift Container Platform release 4.12.58, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:3349 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:3351 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.58 --pullspecs 1.9.57.1. Bug fixes Previously, some container processes created by using the exec command persisted even when CRI-O stopped the container. Consequently, lingering processes led to tracking issues, causing process leaks and defunct statuses. With this release, CRI-O tracks the exec calls processed for a container and ensures that the processes created as part of the exec calls are terminated when the container is stopped. ( OCPBUGS-33175 ) Previously, an issue with NodePort traffic-forwarding caused Transmission Control Protocol (TCP) traffic to be directed to pods in a terminating state. With this release, the endpoints selection logic fully implements KEP-1669 ProxyTerminatingEndpoints and the issue has been resolved. ( OCPBUGS-33422 ) Previously, the load balancing algorithm did not differentiate between active and inactive services when determining weights, and it employed the random algorithm excessively in environments with many inactive services or environments routing backends with weight 0.
This led to increased memory usage and a higher risk of excessive memory consumption. With this release, changes are made to optimize traffic direction towards active services only and prevent unnecessary use of the random algorithm with higher weights, reducing the potential for excessive memory consumption. ( OCPBUGS-33517 ) Previously, the load-balancing algorithm had flaws that led to increased memory usage and a higher risk of excessive memory consumption. With this release, the service filtering logic for load-balancing is updated and the issue has been resolved. ( OCPBUGS-33778 ) 1.9.57.2. Known issue Sometimes, the Console Operator status can get stuck in a failed state when the Operator is removed. To work around this issue, patch the Console controller to reset all conditions to the default state when the Console Operator is removed. For example, log in to the cluster and run the following command: USD oc patch console.operator.openshift.io/cluster --type='merge' -p='{"status":{"conditions":null}}' ( OCPBUGS-33386 ) 1.9.57.3. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.58. RHSA-2024:3713 - OpenShift Container Platform 4.12.59 bug fix and security update Issued: 12 June 2024 OpenShift Container Platform release 4.12.59, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:3713 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:3715 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.59 --pullspecs 1.9.58.1. Bug fixes Previously, the Ingress Operator would specify spec.template.spec.hostNetwork: true on a router deployment without specifying spec.template.spec.containers[ ].ports[ ].hostPort . This caused the API server to set a default value for each port's hostPort field, which the Ingress Operator would then detect as an external update and attempt to revert. Now, the Ingress Operator no longer incorrectly performs these updates. ( OCPBUGS-34888 ) Previously, the Ingress Operator was leaving the spec.internalTrafficPolicy , spec.ipFamilies , and spec.ipFamilyPolicy fields unspecified for NodePort and ClusterIP type services. The API would then set default values for these fields, which the Ingress Operator would try to revert. With this update, the Ingress Operator specifies an initial value and fixes the error caused by API default values. ( OCPBUGS-34757 ) Previously, if you configured an OpenShift Container Platform cluster with a high number of internal services or user-managed load balancer IP addresses, you experienced a delayed startup time for the OVN-Kubernetes service. This delay occurred when the OVN-Kubernetes service attempted to install iptables rules on a node. With this release, the OVN-Kubernetes service can process a large number of services in a few seconds. Additionally, you can access a new log to view the status of installing iptables rules on a node. ( OCPBUGS-34273 ) Previously, the Ingress Operator did not specify ingressClass.spec.parameters.scope , while the Ingress Class API object specifies type cluster by default. This caused unnecessary updates to all Ingress Classes when the Operator started. With this update, the Ingress Operator specifies ingressClass.spec.parameters.scope of type cluster . ( OCPBUGS-34110 ) 1.9.58.2.
Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.59. RHSA-2024:4006 - OpenShift Container Platform 4.12.60 bug fix and security update Issued: 27 June 2024 OpenShift Container Platform release 4.12.60, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:4006 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:4008 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.60 --pullspecs 1.9.59.1. Bug fixes Previously, for the cluster:capacity_cpu_cores:sum metric, nodes with the infra role but not the master role were not assigned a value of infra for the label_node_role_kubernetes_io label. With this update, nodes with the infra role but not the master role are now correctly labeled as infra for this metric. ( OCPBUGS-35558 ) Previously, when an Ingress Controller was configured with client SSL/TLS, but did not have the clientca-configmap finalizer, the Ingress Operator would try to add the finalizer without checking whether the Ingress Controller was marked for deletion. Consequently, if an Ingress Controller was configured with client SSL/TLS and was subsequently deleted, the Operator would correctly remove the finalizer. It would then repeatedly try and fail to update the IngressController to add the finalizer back, resulting in error messages in the Operator's logs. With this update, the Ingress Operator does not add the clientca-configmap finalizer to an Ingress Controller that is marked for deletion. As a result, the Ingress Operator does not attempt incorrect updates and does not log the associated errors. ( OCPBUGS-35027 ) Previously, timeout values larger than what Golang could parse were not properly validated. Consequently, timeout values larger than what HAProxy could parse caused issues with HAProxy. With this update, if the timeout specifies a value larger than what can be parsed, the value is capped at the maximum value that HAProxy can parse. As a result, issues are no longer caused by HAProxy. ( OCPBUGS-33432 ) 1.9.59.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.60. RHSA-2024:4677 - OpenShift Container Platform 4.12.61 bug fix and security update Issued: 25 July 2024 OpenShift Container Platform release 4.12.61, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:4677 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:4679 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.61 --pullspecs 1.9.60.1. Bug fixes Previously, the mapi_instance_create_failed alert metric did not fire when there was an error for the Accelerated Networking feature on Microsoft Azure clusters. This release adds the missing alert so that clusters with Accelerated Networking enabled generate alerts when required. ( OCPBUGS-5235 ) Previously, the wait-for-ceo command that is used during the bootstrap operation to verify etcd rollout did not report errors for some failure modes. With this release, a code fix ensures that error messages for these failure modes get reported.
( OCPBUGS-35501 ) Previously, the Helm Plugin index view did not display the same number of charts as the Helm CLI if the chart names varied. With this release, the Helm catalog now looks for charts.openshift.io/name and charts.openshift.io/provider so that all versions are grouped together in a single catalog title. ( OCPBUGS-34933 ) 1.9.60.2. Enhancements Previously, the installation program failed to install a cluster on IBM Cloud VPC on the "eu-es" region, although it is supported. With this update, the installation program successfully installs a cluster on IBM Cloud VPC on the "eu-es" region. ( OCPBUGS-22981 ) 1.9.60.3. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.61. RHSA-2024:5200 - OpenShift Container Platform 4.12.63 bug fix and security update Issued: 19 August 2024 OpenShift Container Platform release 4.12.63, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:5200 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:5202 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.63 --pullspecs 1.9.61.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.62. RHSA-2024:5808 - OpenShift Container Platform 4.12.64 bug fix and security update Issued: 29 August 2024 OpenShift Container Platform release 4.12.64, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:5808 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:5810 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.64 --pullspecs 1.9.62.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.63. RHSA-2024:6642 - OpenShift Container Platform 4.12.65 bug fix and security update Issued: 18 September 2024 OpenShift Container Platform release 4.12.65, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:6642 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:6644 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.65 --pullspecs 1.9.63.1. Bug fixes Previously, when you received an OVNKubernetesNorthdInactive alert, you could not view the associated runbook. With this release, the runbook is added so you can reference it to resolve an OVNKubernetesNorthdInactive alert. ( OCPBUGS-38905 ) 1.9.63.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.64. RHSA-2024:6705 - OpenShift Container Platform 4.12.66 bug fix and security update Issued: 19 September 2024 OpenShift Container Platform release 4.12.66, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:6705 advisory. There are no RPM packages for this release. 
You can view the container images in this release by running the following command: USD oc adm release info 4.12.66 --pullspecs 1.9.64.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.65. RHSA-2024:7590 - OpenShift Container Platform 4.12.67 bug fix and security update Issued: 09 October 2024 OpenShift Container Platform release 4.12.67, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:7590 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:7592 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.67 --pullspecs 1.9.65.1. Bug fixes Previously, when the Operator Lifecycle Manager (OLM) evaluated a potential upgrade, the Operator used the dynamic client list for all custom resource (CR) instances in the cluster. Clusters with a large number of CRs could experience timeouts from the apiserver and stranded upgrades. With this release, the issue is resolved. ( OCPBUGS-42161 ) Previously, the proxy service for the web console plugin handled non-200 response codes as error responses. This, in turn, caused browser caching issues. With this release, the proxy service is fixed so that it does not handle these responses as errors. ( OCPBUGS-41600 ) 1.9.65.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.66. RHSA-2024:8692 - OpenShift Container Platform 4.12.68 bug fix and security update Issued: 07 November 2024 OpenShift Container Platform release 4.12.68, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:8692 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:8694 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.68 --pullspecs 1.9.66.1. Bug fixes Previously, an upstream change in Google Cloud Platform (GCP) caused the Cloud Credential Operator (CCO) to degrade. With this release, the CCO is no longer degraded, and the issue is resolved. ( OCPBUGS-43872 ) Previously, when updating to OpenShift Container Platform version 4.14, HAProxy 2.6 enforced strict RFC 7230 compliance and rejected requests with multiple Transfer-Encoding headers. Duplicate Transfer-Encoding headers were configured at the application level, so the requests resulted in 502 Bad Gateway errors and service disruptions. With this release, cluster administrators can use a procedure to proactively detect applications that would send duplicate Transfer-Encoding headers before updating their clusters. This allows administrators to mitigate the issue in advance and prevents service disruption. ( OCPBUGS-43703 ) Previously, a group ID was not added to the /etc/group within the container when the spec.securityContext.runAsGroup attribute was set in the pod resource. With this release, this issue is resolved. ( OCPBUGS-41248 ) Previously, stale data prevented the node of an updated, OVN-enabled cluster from rejoining the cluster and returning to the Ready state. This fix removes problematic stale data from older versions of OpenShift Container Platform and resolves the issue. 
( OCPBUGS-38382 ) Previously, when opening a targeted link in a new tab, an error displayed the following message: Cannot read properties of undefined . This issue was the result of a missing object validation check. With this release, the object validation check is added, and the new tab of the target detail displays correctly. ( OCPBUGS-33519 ) 1.9.66.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.67. RHBA-2024:8996 - OpenShift Container Platform 4.12.69 bug fix Issued: 14 November 2024 OpenShift Container Platform release 4.12.69 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2024:8996 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:8998 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.69 --pullspecs 1.9.67.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.68. RHBA-2024:10533 - OpenShift Container Platform 4.12.70 bug fix and security update Issued: 05 December 2024 OpenShift Container Platform release 4.12.70 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2024:10533 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:10535 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.70 --pullspecs 1.9.68.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.69. RHSA-2025:0014 - OpenShift Container Platform 4.12.71 bug fix and security update Issued: 09 January 2025 OpenShift Container Platform release 4.12.71 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2025:0014 advisory. The RPM packages that are included in the update are provided by the RHBA-2025:0016 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.71 --pullspecs 1.9.69.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.70. RHSA-2025:0832 - OpenShift Container Platform 4.12.72 bug fix and security update Issued: 06 February 2025 OpenShift Container Platform release 4.12.72 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2025:0832 advisory. The RPM packages that are included in the update are provided by the RHSA-2025:0834 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.72 --pullspecs 1.9.70.1. Bug fixes Previously, during node reboots, especially during update operations, the node that interacted with the rebooting machine entered a Ready=Unknown state. This caused the Control Plane Machine Set Operator to enter an UnavailableReplicas condition and an Available=false state, which triggered alerts that demanded urgent action. However, manual intervention was not necessary because the condition was resolved when the node rebooted. With this release, a grace period for node unreadiness is provided. 
If a node enters an unready state, the Control Plane Machine Set Operator does not instantly enter an UnavailableReplicas condition or an Available=false state. ( OCPBUGS-48325 ). 1.9.70.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.71. RHSA-2025:1242 - OpenShift Container Platform 4.12.73 bug fix and security update Issued: 13 February 2025 OpenShift Container Platform release 4.12.73 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2025:1242 advisory. The RPM packages that are included in the update are provided by the RHBA-2025:1244 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.73 --pullspecs 1.9.71.1. Bug fixes Previously, pods could get stuck when generating FailedMount errors. These errors caused nodes to need additional reboots and for the Network File System (NFS) volume mount to stay in a pending state. With this release, a kernel update fixes the issue so that pods no longer get stuck because nodes no longer need to be rebooted to clear the FailedMount errors. ( OCPBUGS-49398 ) 1.9.71.2. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . 1.9.72. RHSA-2025:2441 - OpenShift Container Platform 4.12.74 bug fix and security update Issued: 13 March 2025 OpenShift Container Platform release 4.12.74 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2025:2441 advisory. The RPM packages that are included in the update are provided by the RHSA-2025:2443 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.12.74 --pullspecs 1.9.72.1. Updating To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a cluster using the CLI . | [
"operator-sdk run bundle --security-context-config=legacy",
"operator-sdk bundle validate .<bundle_dir_or_image> --select-optional suite=operatorframework --optional-values=k8s-version=1.25",
"clouds: openstack: auth: auth_url: https://127.0.0.1:13000 password: thepassword project_domain_name: Default project_name: theprojectname user_domain_name: Default username: theusername region_name: regionOne",
"clouds: openstack: auth: auth_url: https://127.0.0.1:13000 application_credential_id: '5dc185489adc4b0f854532e1af81ffe0' application_credential_secret: 'PDCTKans2bPBbaEqBLiT_IajG8e5J_nJB4kvQHjaAy6ufhod0Zl0NkNoBzjn_bWSYzk587ieIGSlT11c4pVehA' auth_type: \"v3applicationcredential\" region_name: regionOne",
"sourceStrategy: env: - name: \"BUILDAH_QUIET\" value: \"true\"",
"## Snippet to remove unauthenticated group from all the cluster role bindings for clusterrolebinding in cluster-status-binding discovery system:basic-user system:discovery system:openshift:discovery ; do ### Find the index of unauthenticated group in list of subjects index=USD(oc get clusterrolebinding USD{clusterrolebinding} -o json | jq 'select(.subjects!=null) | .subjects | map(.name==\"system:unauthenticated\") | index(true)'); ### Remove the element at index from subjects array patch clusterrolebinding USD{clusterrolebinding} --type=json --patch \"[{'op': 'remove','path': '/subjects/USDindex'}]\"; done",
"oc get infrastructure/cluster -ojson | jq -r '.status.infrastructureName'",
"ibmcloud is lb <cluster_ID>-kubernetes-api-private --output json | jq -r '.provisioning_status'",
"oc get machine -n openshift-machine-api",
"NAME PHASE TYPE REGION ZONE AGE example-public-1-x4gpn-master-0 Running bx2-4x16 us-east us-east-1 23h example-public-1-x4gpn-master-1 Running bx2-4x16 us-east us-east-2 23h example-public-1-x4gpn-master-2 Running bx2-4x16 us-east us-east-3 23h example-public-1-x4gpn-worker-1-xqzzm Running bx2-4x16 us-east us-east-1 22h example-public-1-x4gpn-worker-2-vg9w6 Provisioned bx2-4x16 us-east us-east-2 22h example-public-1-x4gpn-worker-3-2f7zd Provisioned bx2-4x16 us-east us-east-3 22h",
"oc delete machine <name_of_machine> -n openshift-machine-api",
"oc get machine -n openshift-machine-api",
"NAME PHASE TYPE REGION ZONE AGE example-public-1-x4gpn-master-0 Running bx2-4x16 us-east us-east-1 23h example-public-1-x4gpn-master-1 Running bx2-4x16 us-east us-east-2 23h example-public-1-x4gpn-master-2 Running bx2-4x16 us-east us-east-3 23h example-public-1-x4gpn-worker-1-xqzzm Running bx2-4x16 us-east us-east-1 23h example-public-1-x4gpn-worker-2-mnlsz Running bx2-4x16 us-east us-east-2 8m2s example-public-1-x4gpn-worker-3-7nz4q Running bx2-4x16 us-east us-east-3 7m24s",
"./openshift-install wait-for install-complete",
"metadata: name: redhat-operators-disconnected spec: displayName: disconnected-redhat-operators image: registry.example.com:5000/disconnected-redhat-operators/disconnected-redhat-operator-index:v4.11 status: connectionState: lastObservedState: READY",
"metadata.annotations.upgrade: \"1\"",
"curl -X POST -i --insecure -u \"<BMC_username>:<BMC_password>\" https://<BMC_IP>/redfish/v1/EventService/Subscriptions -H 'Content-Type: application/json' --data-raw '{ \"Protocol\": \"Redfish\", \"Context\": \"any string is valid\", \"Destination\": \"https://hw-event-proxy-openshift-bare-metal-events.apps.example.com/webhook\", \"EventTypes\": [\"Alert\"] }'",
"oc delete pods --all -n openshift-cloud-controller-manager",
"cd ~/clusterconfigs/openshift vim openshift-worker-0.yaml",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: annotations: bmac.agent-install.openshift.io/installer-args: '[\"--append-karg\", \"ip=<static_ip>::<gateway>:<netmask>:<hostname_1>:<interface>:none\", \"--save-partindex\", \"1\", \"-n\"]' 1 2 3 4 5 inspect.metal3.io: disabled bmac.agent-install.openshift.io/hostname: <fqdn> 6 bmac.agent-install.openshift.io/role: <role> 7 generation: 1 name: openshift-worker-0 namespace: mynamespace spec: automatedCleaningMode: disabled bmc: address: idrac-virtualmedia://<bmc_ip>/redfish/v1/Systems/System.Embedded.1 8 credentialsName: bmc-secret-openshift-worker-0 disableCertificateVerification: true bootMACAddress: 94:6D:AE:AB:EE:E8 bootMode: \"UEFI\" rootDeviceHints: deviceName: /dev/sda",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{infra_env_id}/hosts/USD{host_id}/installer-args -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"args\": [ \"--append-karg\", \"ip=<static_ip>::<gateway>:<netmask>:<hostname_1>:<interface>:none\", 1 2 3 4 5 \"--save-partindex\", \"1\", \"-n\" ] } ' | jq",
"oc adm release info 4.12.0 --pullspecs",
"oc adm release info 4.12.1 --pullspecs",
"oc adm release info 4.12.2 --pullspecs",
"oc adm release info 4.12.3 --pullspecs",
"oc adm release info 4.12.4 --pullspecs",
"oc adm release info 4.12.5 --pullspecs",
"oc adm release info 4.12.6 --pullspecs",
"oc adm release info 4.12.7 --pullspecs",
"oc adm release info 4.12.8 --pullspecs",
"oc adm release info 4.12.9 --pullspecs",
"oc adm release info 4.12.10 --pullspecs",
"oc adm release info 4.12.11 --pullspecs",
"oc adm release info 4.12.12 --pullspecs",
"oc adm release info 4.12.13 --pullspecs",
"oc adm release info 4.12.14 --pullspecs",
"oc adm release info 4.12.15 --pullspecs",
"oc adm release info 4.12.16 --pullspecs",
"oc adm release info 4.12.17 --pullspecs",
"oc adm release info 4.12.18 --pullspecs",
"oc adm release info 4.12.19 --pullspecs",
"oc adm release info 4.12.20 --pullspecs",
"oc adm release info 4.12.21 --pullspecs",
"oc adm release info 4.12.22 --pullspecs",
"oc adm release info 4.12.23 --pullspecs",
"oc adm release info 4.12.24 --pullspecs",
"oc adm release info 4.12.25 --pullspecs",
"oc adm release info 4.12.26 --pullspecs",
"oc adm release info 4.12.27 --pullspecs",
"oc adm release info 4.12.28 --pullspecs",
"oc adm release info 4.12.29 --pullspecs",
"The DNS provider failed to ensure the record: failed to find hosted zone for record: failed to get tagged resources: AccessDenied: User: [...] is not authorized to perform: route53:ListTagsForResources on resource: [...]",
"oc adm release info 4.12.30 --pullspecs",
"oc adm release info 4.12.31 --pullspecs",
"oc adm release info 4.12.32 --pullspecs",
"oc adm release info 4.12.33 --pullspecs",
"oc adm release info 4.12.34 --pullspecs",
"oc adm release info 4.12.35 --pullspecs",
"oc adm release info 4.12.36 --pullspecs",
"oc adm release info 4.12.37 --pullspecs",
"oc adm release info 4.12.39 --pullspecs",
"oc adm release info 4.12.40 --pullspecs",
"oc adm release info 4.12.41 --pullspecs",
"oc adm release info 4.12.42 --pullspecs",
"oc adm release info 4.12.43 --pullspecs",
"oc adm release info 4.12.44 --pullspecs",
"oc adm release info 4.12.45 --pullspecs",
"oc adm release info 4.12.46 --pullspecs",
"oc adm release info 4.12.47 --pullspecs",
"oc adm release info 4.12.48 --pullspecs",
"oc adm release info 4.12.49 --pullspecs",
"oc adm release info 4.12.50 --pullspecs",
"oc adm release info 4.12.51 --pullspecs",
"oc adm release info 4.12.53 --pullspecs",
"oc adm release info 4.12.54 --pullspecs",
"oc adm release info 4.12.55 --pullspecs",
"oc adm release info 4.12.56 --pullspecs",
"oc adm release info 4.12.57 --pullspecs",
"oc adm release info 4.12.58 --pullspecs",
"oc patch console.operator.openshift.io/cluster --type='merge' -p='{\"status\":{\"conditions\":null}}'",
"oc adm release info 4.12.59 --pullspecs",
"oc adm release info 4.12.60 --pullspecs",
"oc adm release info 4.12.61 --pullspecs",
"oc adm release info 4.12.63 --pullspecs",
"oc adm release info 4.12.64 --pullspecs",
"oc adm release info 4.12.65 --pullspecs",
"oc adm release info 4.12.66 --pullspecs",
"oc adm release info 4.12.67 --pullspecs",
"oc adm release info 4.12.68 --pullspecs",
"oc adm release info 4.12.69 --pullspecs",
"oc adm release info 4.12.70 --pullspecs",
"oc adm release info 4.12.71 --pullspecs",
"oc adm release info 4.12.72 --pullspecs",
"oc adm release info 4.12.73 --pullspecs",
"oc adm release info 4.12.74 --pullspecs"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/release_notes/ocp-4-12-release-notes |
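Each "Updating" subsection above defers to the separate "Updating a cluster using the CLI" guide. As a minimal sketch of that flow, with an illustrative target version (always confirm the recommended update path for your cluster first):

oc get clusterversion        # current version and update status
oc adm upgrade               # list the updates the cluster currently sees
oc adm upgrade --to=4.12.74  # move to a specific z-stream release once it is listed as available
oc get clusterversion -w     # watch until the new version reports as Completed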
40.2.2. Setting Events to Monitor | 40.2.2. Setting Events to Monitor Most processors contain counters , which are used by OProfile to monitor specific events. As shown in Table 40.2, "OProfile Processors and Counters" , the number of counters available depends on the processor. Table 40.2. OProfile Processors and Counters Processor cpu_type Number of Counters Pentium Pro i386/ppro 2 Pentium II i386/pii 2 Pentium III i386/piii 2 Pentium 4 (non-hyper-threaded) i386/p4 8 Pentium 4 (hyper-threaded) i386/p4-ht 4 Athlon i386/athlon 4 AMD64 x86-64/hammer 4 Itanium ia64/itanium 4 Itanium 2 ia64/itanium2 4 TIMER_INT timer 1 IBM eServer iSeries and pSeries timer 1 ppc64/power4 8 ppc64/power5 6 ppc64/970 8 IBM eServer S/390 and S/390x timer 1 IBM eServer zSeries timer 1 Use Table 40.2, "OProfile Processors and Counters" to verify that the correct processor type was detected and to determine the number of events that can be monitored simultaneously. timer is used as the processor type if the processor does not have supported performance monitoring hardware. If timer is used, events cannot be set for any processor because the hardware does not have support for hardware performance counters. Instead, the timer interrupt is used for profiling. If timer is not used as the processor type, the events monitored can be changed, and counter 0 for the processor is set to a time-based event by default. If more than one counter exists on the processor, the counters other than counter 0 are not set to an event by default. The default events monitored are shown in Table 40.3, "Default Events" . Table 40.3. Default Events Processor Default Event for Counter Description Pentium Pro, Pentium II, Pentium III, Athlon, AMD64 CPU_CLK_UNHALTED The processor's clock is not halted Pentium 4 (HT and non-HT) GLOBAL_POWER_EVENTS The time during which the processor is not stopped Itanium 2 CPU_CYCLES CPU Cycles TIMER_INT (none) Sample for each timer interrupt ppc64/power4 CYCLES Processor Cycles ppc64/power5 CYCLES Processor Cycles ppc64/970 CYCLES Processor Cycles The number of events that can be monitored at one time is determined by the number of counters for the processor. However, it is not a one-to-one correlation; on some processors, certain events must be mapped to specific counters. To determine the number of counters available, execute the following command: The events available vary depending on the processor type. To determine the events available for profiling, execute the following command as root (the list is specific to the system's processor type): The events for each counter can be configured via the command line or with a graphical interface. For more information on the graphical interface, refer to Section 40.8, "Graphical Interface" . If the counter cannot be set to a specific event, an error message is displayed. To set the event for each configurable counter via the command line, use opcontrol : Replace <event-name> with the exact name of the event from op_help , and replace <sample-rate> with the number of events between samples. 40.2.2.1. Sampling Rate By default, a time-based event set is selected. It creates a sample every 100,000 clock cycles per processor. If the timer interrupt is used, the timer is set to whatever the jiffy rate is and is not user-settable. If the cpu_type is not timer , each event can have a sampling rate set for it. The sampling rate is the number of events between each sample snapshot. 
When setting the event for the counter, a sample rate can also be specified: Replace <sample-rate> with the number of events to wait before sampling again. The smaller the count, the more frequent the samples. For events that do not happen frequently, a lower count may be needed to capture the event instances. Warning Be extremely careful when setting sampling rates. Sampling too frequently can overload the system, causing the system to appear as if it is frozen or causing the system to actually freeze. | [
"cat /dev/oprofile/cpu_type",
"op_help",
"opcontrol --event= <event-name> : <sample-rate>",
"opcontrol --event= <event-name> : <sample-rate>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/configuring_oprofile-setting_events_to_monitor |
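For example, on a processor with hardware performance counters, the sequence below confirms the detected processor type, lists the events it supports, and then sets the default time-based event with an explicit sample rate. The event name and count are illustrative; use the values reported by op_help for your CPU:

cat /dev/oprofile/cpu_type                  # confirm the detected cpu_type, for example i386/piii
op_help                                     # list the events available on this processor
opcontrol --event=CPU_CLK_UNHALTED:200000   # take a sample every 200,000 unhalted clock cycles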
Administration Guide | Administration Guide Red Hat Ceph Storage 7 Administration of Red Hat Ceph Storage Red Hat Ceph Storage Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/administration_guide/index |
probe::vm.kfree | probe::vm.kfree Name probe::vm.kfree - Fires when kfree is requested Synopsis vm.kfree Values name name of the probe point ptr pointer to the kmemory allocated which is returned by kmalloc caller_function name of the caller function. call_site address of the function calling this kmemory function | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-vm-kfree |
16.4. Using the HMC vterm | 16.4. Using the HMC vterm The HMC vterm is the console for any partitioned IBM System p. It is opened by right-clicking on the partition on the HMC, and then selecting Open Terminal Window . Only a single vterm can be connected to the console at one time, and there is no console access for a partitioned system besides the vterm. This is often referred to as a 'virtual console', but it is different from the virtual consoles in Section 16.3, "A Note About Linux Virtual Consoles" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s2-guimode-power-hmc
2.26. RHEA-2011:0617 - new package: perl-Parse-RecDescent | 2.26. RHEA-2011:0617 - new package: perl-Parse-RecDescent A new perl-Parse-RecDescent package is now available for Red Hat Enterprise Linux 6. The Parse::RecDescent module provides a mechanism for Perl scripts to generate top-down recursive-descent text parsers from grammar specifications similar to yacc. This enhancement update adds the perl-Parse-RecDescent package to Red Hat Enterprise Linux 6. (BZ# 643547 ) All users who require the Parse::RecDescent Perl module should install this new package. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/perl-parse-recdescent_new |
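On a Red Hat Enterprise Linux 6 system registered to the appropriate repositories, the package can be installed and checked from the shell as follows (assuming the default yum configuration):

yum install perl-Parse-RecDescent      # install the new package
perl -MParse::RecDescent -e 1          # exits silently if the module loads correctly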
Chapter 10. Worker nodes for single-node OpenShift clusters | Chapter 10. Worker nodes for single-node OpenShift clusters 10.1. Adding worker nodes to single-node OpenShift clusters Single-node OpenShift clusters reduce the host prerequisites for deployment to a single host. This is useful for deployments in constrained environments or at the network edge. However, sometimes you need to add additional capacity to your cluster, for example, in telecommunications and network edge scenarios. In these scenarios, you can add worker nodes to the single-node cluster. There are several ways that you can add worker nodes to a single-node cluster. You can add worker nodes to a cluster manually, using Red Hat OpenShift Cluster Manager , or by using the Assisted Installer REST API directly. Important Adding worker nodes does not expand the cluster control plane, and it does not provide high availability to your cluster. For single-node OpenShift clusters, high availability is handled by failing over to another site. It is not recommended to add a large number of worker nodes to a single-node cluster. Note Unlike multi-node clusters, by default all ingress traffic is routed to the single control-plane node, even after adding additional worker nodes. 10.1.1. Requirements for installing single-node OpenShift worker nodes To install a single-node OpenShift worker node, you must address the following requirements: Administration host: You must have a computer to prepare the ISO and to monitor the installation. Production-grade server: Installing single-node OpenShift worker nodes requires a server with sufficient resources to run OpenShift Container Platform services and a production workload. Table 10.1. Minimum resource requirements Profile vCPU Memory Storage Minimum 2 vCPU cores 8GB of RAM 100GB Note One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs The server must have a Baseboard Management Controller (BMC) when booting with virtual media. Networking: The worker node server must have access to the internet or access to a local registry if it is not connected to a routable network. The worker node server must have a DHCP reservation or a static IP address and be able to access the single-node OpenShift cluster Kubernetes API, ingress route, and cluster node domain names. You must configure the DNS to resolve the IP address to each of the following fully qualified domain names (FQDN) for the single-node OpenShift cluster: Table 10.2. Required DNS records Usage FQDN Description Kubernetes API api.<cluster_name>.<base_domain> Add a DNS A/AAAA or CNAME record. This record must be resolvable by clients external to the cluster. Internal API api-int.<cluster_name>.<base_domain> Add a DNS A/AAAA or CNAME record when creating the ISO manually. This record must be resolvable by nodes within the cluster. Ingress route *.apps.<cluster_name>.<base_domain> Add a wildcard DNS A/AAAA or CNAME record that targets the node. This record must be resolvable by clients external to the cluster. Without persistent IP addresses, communications between the apiserver and etcd might fail. 
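Before booting the new worker, it can be useful to confirm from the worker's network that the required records resolve to the expected addresses. A quick check with dig might look like the following, where the cluster name and base domain are placeholders:

dig +short api.<cluster_name>.<base_domain>         # Kubernetes API record
dig +short api-int.<cluster_name>.<base_domain>     # internal API record (needed when creating the ISO manually)
dig +short test.apps.<cluster_name>.<base_domain>   # any name under the wildcard ingress record

Each query should return the IP address that serves the single-node OpenShift cluster.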
Additional resources Minimum resource requirements for cluster installation Recommended practices for scaling the cluster User-provisioned DNS requirements Creating a bootable ISO image on a USB drive Booting from an ISO image served over HTTP using the Redfish API Deleting nodes from a cluster 10.1.2. Adding worker nodes using the Assisted Installer and OpenShift Cluster Manager You can add worker nodes to single-node OpenShift clusters that were created on Red Hat OpenShift Cluster Manager using the Assisted Installer . Important Adding worker nodes to single-node OpenShift clusters is only supported for clusters running OpenShift Container Platform version 4.11 and up. Prerequisites Have access to a single-node OpenShift cluster installed using Assisted Installer . Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Ensure that all the required DNS records exist for the cluster that you are adding the worker node to. Procedure Log in to OpenShift Cluster Manager and click the single-node cluster that you want to add a worker node to. Click Add hosts , and download the discovery ISO for the new worker node, adding an SSH public key and configuring cluster-wide proxy settings as required. Boot the target host using the discovery ISO, and wait for the host to be discovered in the console. After the host is discovered, start the installation. As the installation proceeds, it generates pending certificate signing requests (CSRs) for the worker node. When prompted, approve the pending CSRs to complete the installation. When the worker node is successfully installed, it is listed as a worker node in the cluster web console. Important New worker nodes will be encrypted using the same method as the original cluster. Additional resources User-provisioned DNS requirements Approving the certificate signing requests for your machines 10.1.3. Adding worker nodes using the Assisted Installer API You can add worker nodes to single-node OpenShift clusters using the Assisted Installer REST API. Before you add worker nodes, you must log in to OpenShift Cluster Manager and authenticate against the API. 10.1.3.1. Authenticating against the Assisted Installer REST API Before you can use the Assisted Installer REST API, you must authenticate against the API using a JSON web token (JWT) that you generate. Prerequisites Log in to OpenShift Cluster Manager as a user with cluster creation privileges. Install jq . Procedure Log in to OpenShift Cluster Manager and copy your API token. Set the USDOFFLINE_TOKEN variable using the copied API token by running the following command: USD export OFFLINE_TOKEN=<copied_api_token> Set the USDJWT_TOKEN variable using the previously set USDOFFLINE_TOKEN variable: USD export JWT_TOKEN=USD( curl \ --silent \ --header "Accept: application/json" \ --header "Content-Type: application/x-www-form-urlencoded" \ --data-urlencode "grant_type=refresh_token" \ --data-urlencode "client_id=cloud-services" \ --data-urlencode "refresh_token=USD{OFFLINE_TOKEN}" \ "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token" \ | jq --raw-output ".access_token" ) Note The JWT token is valid for 15 minutes only.
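Because the generated JWT expires after 15 minutes, it can be convenient to wrap the refresh step in a small shell function and rerun it whenever a later API call starts returning authentication errors. This is only a sketch that reuses the token exchange shown above and the OFFLINE_TOKEN variable set earlier:

refresh_jwt() {
  # Exchange the offline token for a fresh 15-minute JWT
  JWT_TOKEN=$(curl --silent \
    --header "Accept: application/json" \
    --header "Content-Type: application/x-www-form-urlencoded" \
    --data-urlencode "grant_type=refresh_token" \
    --data-urlencode "client_id=cloud-services" \
    --data-urlencode "refresh_token=${OFFLINE_TOKEN}" \
    "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token" \
    | jq --raw-output ".access_token")
  export JWT_TOKEN
}
refresh_jwt   # call again whenever the token has expired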
Verification Optional: Check that you can access the API by running the following command: USD curl -s https://api.openshift.com/api/assisted-install/v2/component-versions -H "Authorization: Bearer USD{JWT_TOKEN}" | jq Example output { "release_tag": "v2.5.1", "versions": { "assisted-installer": "registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:v1.0.0-175", "assisted-installer-controller": "registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:v1.0.0-223", "assisted-installer-service": "quay.io/app-sre/assisted-service:ac87f93", "discovery-agent": "registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:v1.0.0-156" } } 10.1.3.2. Adding worker nodes using the Assisted Installer REST API You can add worker nodes to clusters using the Assisted Installer REST API. Prerequisites Install the OpenShift Cluster Manager CLI ( ocm ). Log in to OpenShift Cluster Manager as a user with cluster creation privileges. Install jq . Ensure that all the required DNS records exist for the cluster that you are adding the worker node to. Procedure Authenticate against the Assisted Installer REST API and generate a JSON web token (JWT) for your session. The generated JWT token is valid for 15 minutes only. Set the USDAPI_URL variable by running the following command: USD export API_URL=<api_url> 1 1 Replace <api_url> with the Assisted Installer API URL, for example, https://api.openshift.com Import the single-node OpenShift cluster by running the following commands: Set the USDOPENSHIFT_CLUSTER_ID variable. Log in to the cluster and run the following command: USD export OPENSHIFT_CLUSTER_ID=USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}') Set the USDCLUSTER_REQUEST variable that is used to import the cluster: USD export CLUSTER_REQUEST=USD(jq --null-input --arg openshift_cluster_id "USDOPENSHIFT_CLUSTER_ID" '{ "api_vip_dnsname": "<api_vip>", 1 "openshift_cluster_id": USDopenshift_cluster_id, "name": "<openshift_cluster_name>" 2 }') 1 Replace <api_vip> with the hostname for the cluster's API server. This can be the DNS domain for the API server or the IP address of the single node which the worker node can reach. For example, api.compute-1.example.com . 2 Replace <openshift_cluster_name> with the plain text name for the cluster. The cluster name should match the cluster name that was set during the Day 1 cluster installation. Import the cluster and set the USDCLUSTER_ID variable. Run the following command: USD CLUSTER_ID=USD(curl "USDAPI_URL/api/assisted-install/v2/clusters/import" -H "Authorization: Bearer USD{JWT_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' \ -d "USDCLUSTER_REQUEST" | tee /dev/stderr | jq -r '.id') Generate the InfraEnv resource for the cluster and set the USDINFRA_ENV_ID variable by running the following commands: Download the pull secret file from Red Hat OpenShift Cluster Manager at console.redhat.com . Set the USDINFRA_ENV_REQUEST variable: export INFRA_ENV_REQUEST=USD(jq --null-input \ --slurpfile pull_secret <path_to_pull_secret_file> \ 1 --arg ssh_pub_key "USD(cat <path_to_ssh_pub_key>)" \ 2 --arg cluster_id "USDCLUSTER_ID" '{ "name": "<infraenv_name>", 3 "pull_secret": USDpull_secret[0] | tojson, "cluster_id": USDcluster_id, "ssh_authorized_key": USDssh_pub_key, "image_type": "<iso_image_type>" 4 }') 1 Replace <path_to_pull_secret_file> with the path to the local file containing the downloaded pull secret from Red Hat OpenShift Cluster Manager at console.redhat.com . 
2 Replace <path_to_ssh_pub_key> with the path to the public SSH key required to access the host. If you do not set this value, you cannot access the host while in discovery mode. 3 Replace <infraenv_name> with the plain text name for the InfraEnv resource. 4 Replace <iso_image_type> with the ISO image type, either full-iso or minimal-iso . Post the USDINFRA_ENV_REQUEST to the /v2/infra-envs API and set the USDINFRA_ENV_ID variable: USD INFRA_ENV_ID=USD(curl "USDAPI_URL/api/assisted-install/v2/infra-envs" -H "Authorization: Bearer USD{JWT_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' -d "USDINFRA_ENV_REQUEST" | tee /dev/stderr | jq -r '.id') Get the URL of the discovery ISO for the cluster worker node by running the following command: USD curl -s "USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID" -H "Authorization: Bearer USD{JWT_TOKEN}" | jq -r '.download_url' Example output https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=4.13 Download the ISO: USD curl -L -s '<iso_url>' --output rhcos-live-minimal.iso 1 1 Replace <iso_url> with the URL for the ISO from the step. Boot the new worker host from the downloaded rhcos-live-minimal.iso . Get the list of hosts in the cluster that are not installed. Keep running the following command until the new host shows up: USD curl -s "USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID" -H "Authorization: Bearer USD{JWT_TOKEN}" | jq -r '.hosts[] | select(.status != "installed").id' Example output 2294ba03-c264-4f11-ac08-2f1bb2f8c296 Set the USDHOST_ID variable for the new worker node, for example: USD HOST_ID=<host_id> 1 1 Replace <host_id> with the host ID from the step. Check that the host is ready to install by running the following command: Note Ensure that you copy the entire command including the complete jq expression. USD curl -s USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID -H "Authorization: Bearer USD{JWT_TOKEN}" | jq ' def host_name(USDhost): if (.suggested_hostname // "") == "" then if (.inventory // "") == "" then "Unknown hostname, please wait" else .inventory | fromjson | .hostname end else .suggested_hostname end; def is_notable(USDvalidation): ["failure", "pending", "error"] | any(. 
== USDvalidation.status); def notable_validations(USDvalidations_info): [ USDvalidations_info // "{}" | fromjson | to_entries[].value[] | select(is_notable(.)) ]; { "Hosts validations": { "Hosts": [ .hosts[] | select(.status != "installed") | { "id": .id, "name": host_name(.), "status": .status, "notable_validations": notable_validations(.validations_info) } ] }, "Cluster validations info": { "notable_validations": notable_validations(.validations_info) } } ' -r Example output { "Hosts validations": { "Hosts": [ { "id": "97ec378c-3568-460c-bc22-df54534ff08f", "name": "localhost.localdomain", "status": "insufficient", "notable_validations": [ { "id": "ntp-synced", "status": "failure", "message": "Host couldn't synchronize with any NTP server" }, { "id": "api-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" }, { "id": "api-int-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" }, { "id": "apps-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" } ] } ] }, "Cluster validations info": { "notable_validations": [] } } When the command shows that the host is ready, start the installation using the /v2/infra-envs/{infra_env_id}/hosts/{host_id}/actions/install API by running the following command: USD curl -X POST -s "USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/hosts/USDHOST_ID/actions/install" -H "Authorization: Bearer USD{JWT_TOKEN}" As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the worker node. Important You must approve the CSRs to complete the installation. Keep running the following API call to monitor the cluster installation: USD curl -s "USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID" -H "Authorization: Bearer USD{JWT_TOKEN}" | jq '{ "Cluster day-2 hosts": [ .hosts[] | select(.status != "installed") | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at} ] }' Example output { "Cluster day-2 hosts": [ { "id": "a1c52dde-3432-4f59-b2ae-0a530c851480", "requested_hostname": "control-plane-1", "status": "added-to-existing-cluster", "status_info": "Host has rebooted and no further updates will be posted. 
Please check console for progress and to possibly approve pending CSRs", "progress": { "current_stage": "Done", "installation_percentage": 100, "stage_started_at": "2022-07-08T10:56:20.476Z", "stage_updated_at": "2022-07-08T10:56:20.476Z" }, "status_updated_at": "2022-07-08T10:56:20.476Z", "updated_at": "2022-07-08T10:57:15.306369Z", "infra_env_id": "b74ec0c3-d5b5-4717-a866-5b6854791bd3", "cluster_id": "8f721322-419d-4eed-aa5b-61b50ea586ae", "created_at": "2022-07-06T22:54:57.161614Z" } ] } Optional: Run the following command to see all the events for the cluster: USD curl -s "USDAPI_URL/api/assisted-install/v2/events?cluster_id=USDCLUSTER_ID" -H "Authorization: Bearer USD{JWT_TOKEN}" | jq -c '.[] | {severity, message, event_time, host_id}' Example output {"severity":"info","message":"Host compute-0: updated status from insufficient to known (Host is ready to be installed)","event_time":"2022-07-08T11:21:46.346Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from known to installing (Installation is in progress)","event_time":"2022-07-08T11:28:28.647Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from installing to installing-in-progress (Starting installation)","event_time":"2022-07-08T11:28:52.068Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae","event_time":"2022-07-08T11:29:47.802Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs)","event_time":"2022-07-08T11:29:48.259Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host: compute-0, reached installation stage Rebooting","event_time":"2022-07-08T11:29:48.261Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} Log in to the cluster and approve the pending CSRs to complete the installation. Verification Check that the new worker node was successfully added to the cluster with a status of Ready : USD oc get nodes Example output NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.26.0 compute-1.example.com Ready worker 11m v1.26.0 Additional resources User-provisioned DNS requirements Approving the certificate signing requests for your machines 10.1.4. Adding worker nodes to single-node OpenShift clusters manually You can add a worker node to a single-node OpenShift cluster manually by booting the worker node from Red Hat Enterprise Linux CoreOS (RHCOS) ISO and by using the cluster worker.ign file to join the new worker node to the cluster. Prerequisites Install a single-node OpenShift cluster on bare metal. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Ensure that all the required DNS records exist for the cluster that you are adding the worker node to. Procedure Set the OpenShift Container Platform version: USD OCP_VERSION=<ocp_version> 1 1 Replace <ocp_version> with the current version, for example, latest-4.13 Set the host architecture: USD ARCH=<architecture> 1 1 Replace <architecture> with the target host architecture, for example, aarch64 or x86_64 . 
Get the worker.ign data from the running single-node cluster by running the following command: USD oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign Host the worker.ign file on a web server accessible from your network. Download the OpenShift Container Platform installer and make it available for use by running the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz > openshift-install-linux.tar.gz USD tar zxvf openshift-install-linux.tar.gz USD chmod +x openshift-install Retrieve the RHCOS ISO URL: USD ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep USDARCH | grep iso | cut -d\" -f4) Download the RHCOS ISO: USD curl -L USDISO_URL -o rhcos-live.iso Use the RHCOS ISO and the hosted worker.ign file to install the worker node: Boot the target host with the RHCOS ISO and your preferred method of installation. When the target host has booted from the RHCOS ISO, open a console on the target host. If your local network does not have DHCP enabled, you need to create an ignition file with the new hostname and configure the worker node static IP address before running the RHCOS installation. Perform the following steps: Configure the worker host network connection with a static IP. Run the following command on the target host console: USD nmcli con mod <network_interface> ipv4.method manual / ipv4.addresses <static_ip> ipv4.gateway <network_gateway> ipv4.dns <dns_server> / 802-3-ethernet.mtu 9000 where: <static_ip> Is the host static IP address and CIDR, for example, 10.1.101.50/24 <network_gateway> Is the network gateway, for example, 10.1.101.1 Activate the modified network interface: USD nmcli con up <network_interface> Create a new ignition file new-worker.ign that includes a reference to the original worker.ign and an additional instruction that the coreos-installer program uses to populate the /etc/hostname file on the new worker host. For example: { "ignition":{ "version":"3.2.0", "config":{ "merge":[ { "source":"<hosted_worker_ign_file>" 1 } ] } }, "storage":{ "files":[ { "path":"/etc/hostname", "contents":{ "source":"data:,<new_fqdn>" 2 }, "mode":420, "overwrite":true, "path":"/etc/hostname" } ] } } 1 <hosted_worker_ign_file> is the locally accessible URL for the original worker.ign file. For example, http://webserver.example.com/worker.ign 2 <new_fqdn> is the new FQDN that you set for the worker node. For example, new-worker.example.com . Host the new-worker.ign file on a web server accessible from your network. Run the following coreos-installer command, passing in the ignition-url and hard disk details: USD sudo coreos-installer install --copy-network / --ignition-url=<new_worker_ign_file> <hard_disk> --insecure-ignition where: <new_worker_ign_file> is the locally accessible URL for the hosted new-worker.ign file, for example, http://webserver.example.com/new-worker.ign <hard_disk> Is the hard disk where you install RHCOS, for example, /dev/sda For networks that have DHCP enabled, you do not need to set a static IP. 
Run the following coreos-installer command from the target host console to install the system: USD coreos-installer install --ignition-url=<hosted_worker_ign_file> <hard_disk> To manually enable DHCP, apply the following NMStateConfig CR to the single-node OpenShift cluster: apiVersion: agent-install.openshift.io/v1 kind: NMStateConfig metadata: name: nmstateconfig-dhcp namespace: example-sno labels: nmstate_config_cluster_name: <nmstate_config_cluster_label> spec: config: interfaces: - name: eth0 type: ethernet state: up ipv4: enabled: true dhcp: true ipv6: enabled: false interfaces: - name: "eth0" macAddress: "AA:BB:CC:DD:EE:11" Important The NMStateConfig CR is required for successful deployments of worker nodes with static IP addresses and for adding a worker node with a dynamic IP address if the single-node OpenShift was deployed with a static IP address. The cluster network DHCP does not automatically set these network settings for the new worker node. As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the worker node. When prompted, approve the pending CSRs to complete the installation. When the install is complete, reboot the host. The host joins the cluster as a new worker node. Verification Check that the new worker node was successfully added to the cluster with a status of Ready : USD oc get nodes Example output NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.26.0 compute-1.example.com Ready worker 11m v1.26.0 Additional resources User-provisioned DNS requirements Approving the certificate signing requests for your machines 10.1.5. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. 
After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . | [
"export OFFLINE_TOKEN=<copied_api_token>",
"export JWT_TOKEN=USD( curl --silent --header \"Accept: application/json\" --header \"Content-Type: application/x-www-form-urlencoded\" --data-urlencode \"grant_type=refresh_token\" --data-urlencode \"client_id=cloud-services\" --data-urlencode \"refresh_token=USD{OFFLINE_TOKEN}\" \"https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token\" | jq --raw-output \".access_token\" )",
"curl -s https://api.openshift.com/api/assisted-install/v2/component-versions -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq",
"{ \"release_tag\": \"v2.5.1\", \"versions\": { \"assisted-installer\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:v1.0.0-175\", \"assisted-installer-controller\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:v1.0.0-223\", \"assisted-installer-service\": \"quay.io/app-sre/assisted-service:ac87f93\", \"discovery-agent\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:v1.0.0-156\" } }",
"export API_URL=<api_url> 1",
"export OPENSHIFT_CLUSTER_ID=USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}')",
"export CLUSTER_REQUEST=USD(jq --null-input --arg openshift_cluster_id \"USDOPENSHIFT_CLUSTER_ID\" '{ \"api_vip_dnsname\": \"<api_vip>\", 1 \"openshift_cluster_id\": USDopenshift_cluster_id, \"name\": \"<openshift_cluster_name>\" 2 }')",
"CLUSTER_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/clusters/import\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDCLUSTER_REQUEST\" | tee /dev/stderr | jq -r '.id')",
"export INFRA_ENV_REQUEST=USD(jq --null-input --slurpfile pull_secret <path_to_pull_secret_file> \\ 1 --arg ssh_pub_key \"USD(cat <path_to_ssh_pub_key>)\" \\ 2 --arg cluster_id \"USDCLUSTER_ID\" '{ \"name\": \"<infraenv_name>\", 3 \"pull_secret\": USDpull_secret[0] | tojson, \"cluster_id\": USDcluster_id, \"ssh_authorized_key\": USDssh_pub_key, \"image_type\": \"<iso_image_type>\" 4 }')",
"INFRA_ENV_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/infra-envs\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDINFRA_ENV_REQUEST\" | tee /dev/stderr | jq -r '.id')",
"curl -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq -r '.download_url'",
"https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=4.13",
"curl -L -s '<iso_url>' --output rhcos-live-minimal.iso 1",
"curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq -r '.hosts[] | select(.status != \"installed\").id'",
"2294ba03-c264-4f11-ac08-2f1bb2f8c296",
"HOST_ID=<host_id> 1",
"curl -s USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq ' def host_name(USDhost): if (.suggested_hostname // \"\") == \"\" then if (.inventory // \"\") == \"\" then \"Unknown hostname, please wait\" else .inventory | fromjson | .hostname end else .suggested_hostname end; def is_notable(USDvalidation): [\"failure\", \"pending\", \"error\"] | any(. == USDvalidation.status); def notable_validations(USDvalidations_info): [ USDvalidations_info // \"{}\" | fromjson | to_entries[].value[] | select(is_notable(.)) ]; { \"Hosts validations\": { \"Hosts\": [ .hosts[] | select(.status != \"installed\") | { \"id\": .id, \"name\": host_name(.), \"status\": .status, \"notable_validations\": notable_validations(.validations_info) } ] }, \"Cluster validations info\": { \"notable_validations\": notable_validations(.validations_info) } } ' -r",
"{ \"Hosts validations\": { \"Hosts\": [ { \"id\": \"97ec378c-3568-460c-bc22-df54534ff08f\", \"name\": \"localhost.localdomain\", \"status\": \"insufficient\", \"notable_validations\": [ { \"id\": \"ntp-synced\", \"status\": \"failure\", \"message\": \"Host couldn't synchronize with any NTP server\" }, { \"id\": \"api-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"api-int-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"apps-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" } ] } ] }, \"Cluster validations info\": { \"notable_validations\": [] } }",
"curl -X POST -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/hosts/USDHOST_ID/actions/install\" -H \"Authorization: Bearer USD{JWT_TOKEN}\"",
"curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq '{ \"Cluster day-2 hosts\": [ .hosts[] | select(.status != \"installed\") | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at} ] }'",
"{ \"Cluster day-2 hosts\": [ { \"id\": \"a1c52dde-3432-4f59-b2ae-0a530c851480\", \"requested_hostname\": \"control-plane-1\", \"status\": \"added-to-existing-cluster\", \"status_info\": \"Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs\", \"progress\": { \"current_stage\": \"Done\", \"installation_percentage\": 100, \"stage_started_at\": \"2022-07-08T10:56:20.476Z\", \"stage_updated_at\": \"2022-07-08T10:56:20.476Z\" }, \"status_updated_at\": \"2022-07-08T10:56:20.476Z\", \"updated_at\": \"2022-07-08T10:57:15.306369Z\", \"infra_env_id\": \"b74ec0c3-d5b5-4717-a866-5b6854791bd3\", \"cluster_id\": \"8f721322-419d-4eed-aa5b-61b50ea586ae\", \"created_at\": \"2022-07-06T22:54:57.161614Z\" } ] }",
"curl -s \"USDAPI_URL/api/assisted-install/v2/events?cluster_id=USDCLUSTER_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq -c '.[] | {severity, message, event_time, host_id}'",
"{\"severity\":\"info\",\"message\":\"Host compute-0: updated status from insufficient to known (Host is ready to be installed)\",\"event_time\":\"2022-07-08T11:21:46.346Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from known to installing (Installation is in progress)\",\"event_time\":\"2022-07-08T11:28:28.647Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing to installing-in-progress (Starting installation)\",\"event_time\":\"2022-07-08T11:28:52.068Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae\",\"event_time\":\"2022-07-08T11:29:47.802Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs)\",\"event_time\":\"2022-07-08T11:29:48.259Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host: compute-0, reached installation stage Rebooting\",\"event_time\":\"2022-07-08T11:29:48.261Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"}",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.26.0 compute-1.example.com Ready worker 11m v1.26.0",
"OCP_VERSION=<ocp_version> 1",
"ARCH=<architecture> 1",
"oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign",
"curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz > openshift-install-linux.tar.gz",
"tar zxvf openshift-install-linux.tar.gz",
"chmod +x openshift-install",
"ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep USDARCH | grep iso | cut -d\\\" -f4)",
"curl -L USDISO_URL -o rhcos-live.iso",
"nmcli con mod <network_interface> ipv4.method manual / ipv4.addresses <static_ip> ipv4.gateway <network_gateway> ipv4.dns <dns_server> / 802-3-ethernet.mtu 9000",
"nmcli con up <network_interface>",
"{ \"ignition\":{ \"version\":\"3.2.0\", \"config\":{ \"merge\":[ { \"source\":\"<hosted_worker_ign_file>\" 1 } ] } }, \"storage\":{ \"files\":[ { \"path\":\"/etc/hostname\", \"contents\":{ \"source\":\"data:,<new_fqdn>\" 2 }, \"mode\":420, \"overwrite\":true, \"path\":\"/etc/hostname\" } ] } }",
"sudo coreos-installer install --copy-network / --ignition-url=<new_worker_ign_file> <hard_disk> --insecure-ignition",
"coreos-installer install --ignition-url=<hosted_worker_ign_file> <hard_disk>",
"apiVersion: agent-install.openshift.io/v1 kind: NMStateConfig metadata: name: nmstateconfig-dhcp namespace: example-sno labels: nmstate_config_cluster_name: <nmstate_config_cluster_label> spec: config: interfaces: - name: eth0 type: ethernet state: up ipv4: enabled: true dhcp: true ipv6: enabled: false interfaces: - name: \"eth0\" macAddress: \"AA:BB:CC:DD:EE:11\"",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.26.0 compute-1.example.com Ready worker 11m v1.26.0",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/nodes/worker-nodes-for-single-node-openshift-clusters |
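A note on the <hosted_worker_ign_file> placeholder in the coreos-installer commands above: the Ignition file must be reachable over HTTP from the booting host. As a hedged illustration only (the helper hostname, port, and disk path below are assumptions, not values from the original procedure), the worker.ign produced by the oc extract step could be served with Python's built-in web server:

# On a helper machine that holds worker.ign (produced by the earlier `oc extract` step):
python3 -m http.server 8080
# On the booted RHCOS live image, point coreos-installer at that URL, for example:
#   sudo coreos-installer install --ignition-url=http://helper.example.com:8080/worker.ign /dev/sda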
Chapter 21. Network File System (NFS) | Chapter 21. Network File System (NFS) Network File System (NFS) is a way to share files between machines on a network as if the files were located on the client's local hard drive. Red Hat Enterprise Linux can be both an NFS server and an NFS client, which means that it can export file systems to other systems and mount file systems exported from other machines. 21.1. Why Use NFS? NFS is useful for sharing directories of files between multiple users on the same network. For example, a group of users working on the same project can have access to the files for that project using a shared directory of the NFS file system (commonly known as an NFS share) mounted in the directory /myproject . To access the shared files, the user goes into the /myproject directory on his machine. There are no passwords to enter or special commands to remember. Users work as if the directory is on their local machines. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/network_file_system_nfs |
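The /myproject example above is described only in prose; the following sketch shows what exporting and mounting such a share typically looks like. The server hostname, subnet, and export options are assumptions for illustration and are not taken from the original chapter.

# On the NFS server (hypothetical host nfs.example.com), with the NFS service already running:
echo "/myproject 192.168.1.0/24(rw,sync)" >> /etc/exports
exportfs -ra                      # re-export everything listed in /etc/exports
# On each client:
mkdir -p /myproject
mount -t nfs nfs.example.com:/myproject /myproject

An entry in the client's /etc/fstab can make the mount persistent across reboots.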
Chapter 4. View OpenShift Data Foundation Topology | The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage Data Foundation Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close and return to the view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_and_managing_openshift_data_foundation_using_google_cloud/viewing-odf-topology_rhodf
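The topology view itself is console-only; as an optional CLI complement (the namespace and node label below assume a default OpenShift Data Foundation deployment and may differ in your environment), similar node, deployment, and pod information can be listed with oc:

oc get storagecluster -n openshift-storage                     # overall storage cluster status
oc get nodes -l cluster.ocs.openshift.io/openshift-storage=    # nodes that back the storage cluster
oc get deployments,pods -n openshift-storage -o wide           # workloads and the nodes they run on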
Chapter 1. Configuring an OpenShift cluster by deploying an application with cluster configurations | Chapter 1. Configuring an OpenShift cluster by deploying an application with cluster configurations With Red Hat OpenShift GitOps, you can configure Argo CD to recursively sync the content of a Git directory with an application that contains custom configurations for your cluster. 1.1. Prerequisites You have logged in to the OpenShift Container Platform cluster as an administrator. You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster. 1.2. Using an Argo CD instance to manage cluster-scoped resources Warning Do not elevate the permissions of Argo CD instances to be cluster-scoped unless you have a distinct use case that requires it. Only users with cluster-admin privileges should manage the instances you elevate. Anyone with access to the namespace of a cluster-scoped instance can elevate their privileges on the cluster to become a cluster administrator themselves. To manage cluster-scoped resources, update the existing Subscription object for the Red Hat OpenShift GitOps Operator and add the namespace of the Argo CD instance to the ARGOCD_CLUSTER_CONFIG_NAMESPACES environment variable in the spec section. Procedure In the Administrator perspective of the web console, navigate to Operators Installed Operators Red Hat OpenShift GitOps Subscription . Click the Actions list and then click Edit Subscription . On the openshift-gitops-operator Subscription details page, under the YAML tab, edit the Subscription YAML file by adding the namespace of the Argo CD instance to the ARGOCD_CLUSTER_CONFIG_NAMESPACES environment variable in the spec section: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-gitops-operator namespace: openshift-gitops-operator # ... spec: config: env: - name: ARGOCD_CLUSTER_CONFIG_NAMESPACES value: openshift-gitops, <list of namespaces of cluster-scoped Argo CD instances> # ... Click Save and Reload . To verify that the Argo CD instance is configured with a cluster role to manage cluster-scoped resources, perform the following steps: Navigate to User Management Roles and from the Filter list select Cluster-wide Roles . Search for the argocd-application-controller by using the Search by name field. The Roles page displays the created cluster role. Tip Alternatively, in the OpenShift CLI, run the following command: oc auth can-i create oauth -n openshift-gitops --as system:serviceaccount:openshift-gitops:openshift-gitops-argocd-application-controller The output yes verifies that the Argo instance is configured with a cluster role to manage cluster-scoped resources. Else, check your configurations and take necessary steps as required. 1.3. Default permissions of an Argo CD instance By default Argo CD instance has the following permissions: Argo CD instance has the admin privileges to manage resources only in the namespace where it is deployed. For instance, an Argo CD instance deployed in the foo namespace has the admin privileges to manage resources only for that namespace. 
Argo CD has the following cluster-scoped permissions because Argo CD requires cluster-wide read privileges on resources to function appropriately: - verbs: - get - list - watch apiGroups: - '*' resources: - '*' - verbs: - get - list nonResourceURLs: - '*' Note You can edit the cluster roles used by the argocd-server and argocd-application-controller components where Argo CD is running such that the write privileges are limited to only the namespaces and resources that you wish Argo CD to manage. USD oc edit clusterrole argocd-server USD oc edit clusterrole argocd-application-controller 1.4. Running the Argo CD instance at the cluster-level The default Argo CD instance and the accompanying controllers, installed by the Red Hat OpenShift GitOps Operator, can now run on the infrastructure nodes of the cluster by setting a simple configuration toggle. Procedure Label the existing nodes: USD oc label node <node-name> node-role.kubernetes.io/infra="" Optional: If required, you can also apply taints and isolate the workloads on infrastructure nodes and prevent other workloads from scheduling on these nodes: USD oc adm taint nodes -l node-role.kubernetes.io/infra \ infra=reserved:NoSchedule infra=reserved:NoExecute Add the runOnInfra toggle in the GitOpsService custom resource: apiVersion: pipelines.openshift.io/v1alpha1 kind: GitopsService metadata: name: cluster spec: runOnInfra: true Optional: If taints have been added to the nodes, then add tolerations to the GitOpsService custom resource. Example apiVersion: pipelines.openshift.io/v1alpha1 kind: GitopsService metadata: name: cluster spec: runOnInfra: true tolerations: - effect: NoSchedule key: infra value: reserved - effect: NoExecute key: infra value: reserved Verify that the workloads in the openshift-gitops namespace are now scheduled on the infrastructure nodes by viewing Pods Pod details for any pod in the console UI. Note Any nodeSelectors and tolerations manually added to the default Argo CD custom resource are overwritten by the toggle and tolerations in the GitOpsService custom resource. Additional resources To learn more about taints and tolerations, see Controlling pod placement using node taints . For more information on infrastructure machine sets, see Creating infrastructure machine sets . 1.5. Creating an application by using the Argo CD dashboard Argo CD provides a dashboard which allows you to create applications. This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform web console cluster configurations that add a link to the Red Hat Developer Blog - Kubernetes under the menu in the web console, and defines a namespace spring-petclinic on the cluster. Prerequisites You have logged in to the OpenShift Container Platform cluster as an administrator. You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster. You have logged in to Argo CD instance. Procedure In the Argo CD dashboard, click NEW APP to add a new Argo CD application. 
For this workflow, create a cluster-configs application with the following configurations: Application Name cluster-configs Project default Sync Policy Manual Repository URL https://github.com/redhat-developer/openshift-gitops-getting-started Revision HEAD Path cluster Destination https://kubernetes.default.svc Namespace spring-petclinic Directory Recurse checked Click CREATE to create your application. Open the Administrator perspective of the web console and expand Administration Namespaces . Search for and select the namespace, then enter argocd.argoproj.io/managed-by=openshift-gitops in the Label field so that the Argo CD instance in the openshift-gitops namespace can manage your namespace. 1.6. Creating an application by using the oc tool You can create Argo CD applications in your terminal by using the oc tool. Prerequisites You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster. You have logged in to an Argo CD instance. Procedure Download the sample application : USD git clone [email protected]:redhat-developer/openshift-gitops-getting-started.git Create the application: USD oc create -f openshift-gitops-getting-started/argo/app.yaml Run the oc get command to review the created application: USD oc get application -n openshift-gitops Add a label to the namespace your application is deployed in so that the Argo CD instance in the openshift-gitops namespace can manage it: USD oc label namespace spring-petclinic argocd.argoproj.io/managed-by=openshift-gitops 1.7. Creating an application in the default mode by using the GitOps CLI You can create applications in the default mode by using the GitOps argocd CLI. This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster. Prerequisites You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster. You have installed the OpenShift CLI ( oc ). You have installed the Red Hat OpenShift GitOps argocd CLI. You have logged in to Argo CD instance. Procedure Get the admin account password for the Argo CD server: USD ADMIN_PASSWD=USD(oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\.password}' | base64 -d) Get the Argo CD server URL: USD SERVER_URL=USD(oc get routes openshift-gitops-server -n openshift-gitops -o jsonpath='{.status.ingress[0].host}') Log in to the Argo CD server by using the admin account password and enclosing it in single quotes: Important Enclosing the password in single quotes ensures that special characters, such as USD , are not misinterpreted by the shell. Always use single quotes to enclose the literal value of the password. 
USD argocd login --username admin --password USD{ADMIN_PASSWD} USD{SERVER_URL} Example USD argocd login --username admin --password '<password>' openshift-gitops.openshift-gitops.apps-crc.testing Verify that you are able to run argocd commands in the default mode by listing all applications: USD argocd app list If the configuration is correct, then existing applications will be listed with the following header: Sample output NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET Create an application in the default mode: USD argocd app create app-cluster-configs \ --repo https://github.com/redhat-developer/openshift-gitops-getting-started.git \ --path cluster \ --revision main \ --dest-server https://kubernetes.default.svc \ --dest-namespace spring-petclinic \ --directory-recurse \ --sync-policy none \ --sync-option Prune=true \ --sync-option CreateNamespace=true Label the spring-petclinic destination namespace to be managed by the openshif-gitops Argo CD instance: USD oc label ns spring-petclinic "argocd.argoproj.io/managed-by=openshift-gitops" List the available applications to confirm that the application is created successfully: USD argocd app list Even though the cluster-configs Argo CD application has the Healthy status, it is not automatically synced due to its none sync policy, causing it to remain in the OutOfSync status. 1.8. Creating an application in core mode by using the GitOps CLI You can create applications in core mode by using the GitOps argocd CLI. This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster. Prerequisites You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster. You have installed the OpenShift CLI ( oc ). You have installed the Red Hat OpenShift GitOps argocd CLI. 
Procedure Log in to the OpenShift Container Platform cluster by using the oc CLI tool: USD oc login -u <username> -p <password> <server_url> Example USD oc login -u kubeadmin -p '<password>' https://api.crc.testing:6443 Check whether the context is set correctly in the kubeconfig file: USD oc config current-context Set the default namespace of the current context to openshift-gitops : USD oc config set-context --current --namespace openshift-gitops Set the following environment variable to override the Argo CD component names: USD export ARGOCD_REPO_SERVER_NAME=openshift-gitops-repo-server Verify that you are able to run argocd commands in core mode by listing all applications: USD argocd app list --core If the configuration is correct, then existing applications will be listed with the following header: Sample output NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET Create an application in core mode: USD argocd app create app-cluster-configs --core \ --repo https://github.com/redhat-developer/openshift-gitops-getting-started.git \ --path cluster \ --revision main \ --dest-server https://kubernetes.default.svc \ --dest-namespace spring-petclinic \ --directory-recurse \ --sync-policy none \ --sync-option Prune=true \ --sync-option CreateNamespace=true Label the spring-petclinic destination namespace to be managed by the openshif-gitops Argo CD instance: USD oc label ns spring-petclinic "argocd.argoproj.io/managed-by=openshift-gitops" List the available applications to confirm that the application is created successfully: USD argocd app list --core Even though the cluster-configs Argo CD application has the Healthy status, it is not automatically synced due to its none sync policy, causing it to remain in the OutOfSync status. 1.9. Synchronizing your application with your Git repository You can synchronize your application with your Git repository by modifying the synchronization policy for Argo CD. The policy modification automatically applies the changes in your cluster configurations from your Git repository to the cluster. Procedure In the Argo CD dashboard, notice that the cluster-configs Argo CD application has the statuses Missing and OutOfSync . Because the application was configured with a manual sync policy, Argo CD does not sync it automatically. Click SYNC on the cluster-configs tile, review the changes, and then click SYNCHRONIZE . Argo CD will detect any changes in the Git repository automatically. If the configurations are changed, Argo CD will change the status of the cluster-configs to OutOfSync . You can modify the synchronization policy for Argo CD to automatically apply changes from your Git repository to the cluster. Notice that the cluster-configs Argo CD application now has the statuses Healthy and Synced . Click the cluster-configs tile to check the details of the synchronized resources and their status on the cluster. Navigate to the OpenShift Container Platform web console and click to verify that a link to the Red Hat Developer Blog - Kubernetes is now present there. Navigate to the Project page and search for the spring-petclinic namespace to verify that it has been added to the cluster. Your cluster configurations have been successfully synchronized to the cluster. 1.10. Synchronizing an application in the default mode by using the GitOps CLI You can synchronize applications in the default mode by using the GitOps argocd CLI. 
This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster. Prerequisites You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster. You have logged in to Argo CD instance. You have installed the OpenShift CLI ( oc ). You have installed the Red Hat OpenShift GitOps argocd CLI. Procedure Get the admin account password for the Argo CD server: USD ADMIN_PASSWD=USD(oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\.password}' | base64 -d) Get the Argo CD server URL: USD SERVER_URL=USD(oc get routes openshift-gitops-server -n openshift-gitops -o jsonpath='{.status.ingress[0].host}') Log in to the Argo CD server by using the admin account password and enclosing it in single quotes: Important Enclosing the password in single quotes ensures that special characters, such as USD , are not misinterpreted by the shell. Always use single quotes to enclose the literal value of the password. USD argocd login --username admin --password USD{ADMIN_PASSWD} USD{SERVER_URL} Example USD argocd login --username admin --password '<password>' openshift-gitops.openshift-gitops.apps-crc.testing Because the application is configured with the none sync policy, you must manually trigger the sync operation: USD argocd app sync openshift-gitops/app-cluster-configs List the application to confirm that the application has the Healthy and Synced statuses: USD argocd app list 1.11. Synchronizing an application in core mode by using the GitOps CLI You can synchronize applications in core mode by using the GitOps argocd CLI. This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster. Prerequisites You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster. You have installed the OpenShift CLI ( oc ). You have installed the Red Hat OpenShift GitOps argocd CLI. Procedure Log in to the OpenShift Container Platform cluster by using the oc CLI tool: USD oc login -u <username> -p <password> <server_url> Example USD oc login -u kubeadmin -p '<password>' https://api.crc.testing:6443 Check whether the context is set correctly in the kubeconfig file: USD oc config current-context Set the default namespace of the current context to openshift-gitops : USD oc config set-context --current --namespace openshift-gitops Set the following environment variable to override the Argo CD component names: USD export ARGOCD_REPO_SERVER_NAME=openshift-gitops-repo-server Because the application is configured with the none sync policy, you must manually trigger the sync operation: USD argocd app sync --core openshift-gitops/app-cluster-configs List the application to confirm that the application has the Healthy and Synced statuses: USD argocd app list --core 1.12. In-built permissions for cluster configuration By default, the Argo CD instance has permissions to manage specific cluster-scoped resources such as cluster Operators, optional OLM Operators and user management. Note Argo CD does not have cluster-admin permissions. 
You can extend the permissions bound to any Argo CD instances managed by the GitOps Operator. However, you must not modify the permission resources, such as roles or cluster roles created by the GitOps Operator, because the Operator might reconcile them back to their initial state. Instead, create dedicated role and cluster role objects and bind them to the appropriate service account that the application controller uses. Permissions for the Argo CD instance: Resources Descriptions Resource Groups Configure the user or administrator operators.coreos.com Optional Operators managed by OLM user.openshift.io , rbac.authorization.k8s.io Groups, Users and their permissions config.openshift.io Control plane Operators managed by CVO used to configure cluster-wide build configuration, registry configuration and scheduler policies storage.k8s.io Storage console.openshift.io Console customization 1.13. Adding permissions for cluster configuration You can grant permissions for an Argo CD instance to manage cluster configuration. Create a cluster role with additional permissions and then create a new cluster role binding to associate the cluster role with a service account. Prerequisites You have access to an OpenShift Container Platform cluster with cluster-admin privileges and are logged in to the web console. You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster. Procedure In the web console, select User Management Roles Create Role . Use the following ClusterRole YAML template to add rules to specify the additional permissions. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: secrets-cluster-role rules: - apiGroups: [""] resources: ["secrets"] verbs: ["*"] Click Create to add the cluster role. To create the cluster role binding, select User Management Role Bindings Create Binding . Select All Projects from the Project list. Click Create binding . Select Binding type as Cluster-wide role binding (ClusterRoleBinding) . Enter a unique value for the RoleBinding name . Select the newly created cluster role or an existing cluster role from the drop-down list. Select the Subject as ServiceAccount and the provide the Subject namespace and name . Subject namespace : openshift-gitops Subject name : openshift-gitops-argocd-application-controller Note The value of Subject name depends on the GitOps control plane components for which you create the cluster roles and cluster role bindings. Click Create . The YAML file for the ClusterRoleBinding object is as follows: kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-role-binding subjects: - kind: ServiceAccount name: openshift-gitops-argocd-application-controller namespace: openshift-gitops roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: secrets-cluster-role Additional resources Customizing permissions by creating user-defined cluster roles for cluster-scoped instances Customizing permissions by creating aggregated cluster roles 1.14. Installing OLM Operators using Red Hat OpenShift GitOps Red Hat OpenShift GitOps with cluster configurations manages specific cluster-scoped resources and takes care of installing cluster Operators or any namespace-scoped OLM Operators. Consider a case where as a cluster administrator, you have to install an OLM Operator such as Tekton. 
You use the OpenShift Container Platform web console to manually install a Tekton Operator or the OpenShift CLI to manually install a Tekton subscription and Tekton Operator group on your cluster. Red Hat OpenShift GitOps places your Kubernetes resources in your Git repository. As a cluster administrator, use Red Hat OpenShift GitOps to manage and automate the installation of other OLM Operators without any manual procedures. For example, after you place the Tekton subscription in your Git repository by using Red Hat OpenShift GitOps, the Red Hat OpenShift GitOps automatically takes this Tekton subscription from your Git repository and installs the Tekton Operator on your cluster. 1.14.1. Installing cluster-scoped Operators Operator Lifecycle Manager (OLM) uses a default global-operators Operator group in the openshift-operators namespace for cluster-scoped Operators. Hence you do not have to manage the OperatorGroup resource in your Gitops repository. However, for namespace-scoped Operators, you must manage the OperatorGroup resource in that namespace. To install cluster-scoped Operators, create and place the Subscription resource of the required Operator in your Git repository. Example: Grafana Operator subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: grafana spec: channel: v4 installPlanApproval: Automatic name: grafana-operator source: redhat-operators sourceNamespace: openshift-marketplace 1.14.2. Installing namepace-scoped Operators To install namespace-scoped Operators, create and place the Subscription and OperatorGroup resources of the required Operator in your Git repository. Example: Ansible Automation Platform Resource Operator # ... apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" name: ansible-automation-platform # ... apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ansible-automation-platform-operator namespace: ansible-automation-platform spec: targetNamespaces: - ansible-automation-platform # ... apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ansible-automation-platform namespace: ansible-automation-platform spec: channel: patch-me installPlanApproval: Automatic name: ansible-automation-platform-operator source: redhat-operators sourceNamespace: openshift-marketplace # ... Important When deploying multiple Operators using Red Hat OpenShift GitOps, you must create only a single Operator group in the corresponding namespace. If more than one Operator group exists in a single namespace, any CSV created in that namespace transition to a failure state with the TooManyOperatorGroups reason. After the number of Operator groups in their corresponding namespaces reaches one, all the failure state CSVs transition to pending state. You must manually approve the pending install plan to complete the Operator installation. 1.15. Additional resources Installing the GitOps CLI Basic GitOps argocd commands Multitenancy support in GitOps | [
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-gitops-operator namespace: openshift-gitops-operator spec: config: env: - name: ARGOCD_CLUSTER_CONFIG_NAMESPACES value: openshift-gitops, <list of namespaces of cluster-scoped Argo CD instances>",
"auth can-i create oauth -n openshift-gitops --as system:serviceaccount:openshift-gitops:openshift-gitops-argocd-application-controller",
"- verbs: - get - list - watch apiGroups: - '*' resources: - '*' - verbs: - get - list nonResourceURLs: - '*'",
"oc edit clusterrole argocd-server oc edit clusterrole argocd-application-controller",
"oc label node <node-name> node-role.kubernetes.io/infra=\"\"",
"oc adm taint nodes -l node-role.kubernetes.io/infra infra=reserved:NoSchedule infra=reserved:NoExecute",
"apiVersion: pipelines.openshift.io/v1alpha1 kind: GitopsService metadata: name: cluster spec: runOnInfra: true",
"apiVersion: pipelines.openshift.io/v1alpha1 kind: GitopsService metadata: name: cluster spec: runOnInfra: true tolerations: - effect: NoSchedule key: infra value: reserved - effect: NoExecute key: infra value: reserved",
"git clone [email protected]:redhat-developer/openshift-gitops-getting-started.git",
"oc create -f openshift-gitops-getting-started/argo/app.yaml",
"oc get application -n openshift-gitops",
"oc label namespace spring-petclinic argocd.argoproj.io/managed-by=openshift-gitops",
"ADMIN_PASSWD=USD(oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\\.password}' | base64 -d)",
"SERVER_URL=USD(oc get routes openshift-gitops-server -n openshift-gitops -o jsonpath='{.status.ingress[0].host}')",
"argocd login --username admin --password USD{ADMIN_PASSWD} USD{SERVER_URL}",
"argocd login --username admin --password '<password>' openshift-gitops.openshift-gitops.apps-crc.testing",
"argocd app list",
"NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET",
"argocd app create app-cluster-configs --repo https://github.com/redhat-developer/openshift-gitops-getting-started.git --path cluster --revision main --dest-server https://kubernetes.default.svc --dest-namespace spring-petclinic --directory-recurse --sync-policy none --sync-option Prune=true --sync-option CreateNamespace=true",
"oc label ns spring-petclinic \"argocd.argoproj.io/managed-by=openshift-gitops\"",
"argocd app list",
"oc login -u <username> -p <password> <server_url>",
"oc login -u kubeadmin -p '<password>' https://api.crc.testing:6443",
"oc config current-context",
"oc config set-context --current --namespace openshift-gitops",
"export ARGOCD_REPO_SERVER_NAME=openshift-gitops-repo-server",
"argocd app list --core",
"NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET",
"argocd app create app-cluster-configs --core --repo https://github.com/redhat-developer/openshift-gitops-getting-started.git --path cluster --revision main --dest-server https://kubernetes.default.svc --dest-namespace spring-petclinic --directory-recurse --sync-policy none --sync-option Prune=true --sync-option CreateNamespace=true",
"oc label ns spring-petclinic \"argocd.argoproj.io/managed-by=openshift-gitops\"",
"argocd app list --core",
"ADMIN_PASSWD=USD(oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\\.password}' | base64 -d)",
"SERVER_URL=USD(oc get routes openshift-gitops-server -n openshift-gitops -o jsonpath='{.status.ingress[0].host}')",
"argocd login --username admin --password USD{ADMIN_PASSWD} USD{SERVER_URL}",
"argocd login --username admin --password '<password>' openshift-gitops.openshift-gitops.apps-crc.testing",
"argocd app sync openshift-gitops/app-cluster-configs",
"argocd app list",
"oc login -u <username> -p <password> <server_url>",
"oc login -u kubeadmin -p '<password>' https://api.crc.testing:6443",
"oc config current-context",
"oc config set-context --current --namespace openshift-gitops",
"export ARGOCD_REPO_SERVER_NAME=openshift-gitops-repo-server",
"argocd app sync --core openshift-gitops/app-cluster-configs",
"argocd app list --core",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: secrets-cluster-role rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"*\"]",
"kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-role-binding subjects: - kind: ServiceAccount name: openshift-gitops-argocd-application-controller namespace: openshift-gitops roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: secrets-cluster-role",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: grafana spec: channel: v4 installPlanApproval: Automatic name: grafana-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" name: ansible-automation-platform apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ansible-automation-platform-operator namespace: ansible-automation-platform spec: targetNamespaces: - ansible-automation-platform apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ansible-automation-platform namespace: ansible-automation-platform spec: channel: patch-me installPlanApproval: Automatic name: ansible-automation-platform-operator source: redhat-operators sourceNamespace: openshift-marketplace"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.15/html/declarative_cluster_configuration/configuring-an-openshift-cluster-by-deploying-an-application-with-cluster-configurations |
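Sections 1.5 to 1.7 create the cluster-configs application through the dashboard or the argocd CLI, and section 1.6 applies an app.yaml from the sample repository without showing it. The sketch below is a plausible declarative equivalent assembled from the settings listed in this chapter (repository URL, path, destination, and sync policy); treat it as an illustration rather than the exact file in the repository.

oc apply -n openshift-gitops -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-configs
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/redhat-developer/openshift-gitops-getting-started
    path: cluster
    targetRevision: HEAD
    directory:
      recurse: true
  destination:
    server: https://kubernetes.default.svc
    namespace: spring-petclinic
EOF

Because no syncPolicy is set, the application stays OutOfSync until it is synchronized manually, matching the behavior described in sections 1.9 to 1.11.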
Chapter 7. File and directory layouts | Chapter 7. File and directory layouts As a storage administrator, you can control how file or directory data is mapped to objects. This section describes how to: Understand file and directory layouts Set file and directory layouts View file and directory layout fields View individual layout fields Remove the directory layouts Prerequisites A running, and healthy Red Hat Ceph Storage cluster. Deployment of a Ceph File System. The installation of the attr package. 7.1. Overview of file and directory layouts This section explains what file and directory layouts are in the context for the Ceph File System. A layout of a file or directory controls how its content is mapped to Ceph RADOS objects. The directory layouts serve primarily for setting an inherited layout for new files in that directory. To view and set a file or directory layout, use virtual extended attributes or extended file attributes ( xattrs ). The name of the layout attributes depends on whether a file is a regular file or a directory: Regular files layout attributes are called ceph.file.layout . Directories layout attributes are called ceph.dir.layout . Layouts Inheritance Files inherit the layout of their parent directory when you create them. However, subsequent changes to the parent directory layout do not affect children. If a directory does not have any layouts set, files inherit the layout from the closest directory to the layout in the directory structure. 7.2. Setting file and directory layout fields Use the setfattr command to set layout fields on a file or directory. Important When you modify the layout fields of a file, the file must be empty, otherwise an error occurs. Prerequisites Root-level access to the node. Procedure To modify layout fields on a file or directory: Syntax Replace: TYPE with file or dir . FIELD with the name of the field. VALUE with the new value of the field. PATH with the path to the file or directory. Example Additional Resources See the table in the Overview of file and directory layouts section of the Red Hat Ceph Storage File System Guide for more details. See the setfattr(1) manual page. 7.3. Viewing file and directory layout fields To use the getfattr command to view layout fields on a file or directory. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all nodes in the storage cluster. Procedure To view layout fields on a file or directory as a single string: Syntax Replace PATH with the path to the file or directory. TYPE with file or dir . Example Note A directory does not have an explicit layout until you set it. Consequently, attempting to view the layout without first setting it fails because there are no changes to display. Additional Resources The getfattr(1) manual page. For more information, see Setting file and directory layout fields section in the Red Hat Ceph Storage File System Guide . 7.4. Viewing individual layout fields Use the getfattr command to view individual layout fields for a file or directory. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all nodes in the storage cluster. Procedure To view individual layout fields on a file or directory: Syntax Replace TYPE with file or dir . FIELD with the name of the field. PATH with the path to the file or directory. Example Note Pools in the pool field are indicated by name. However, newly created pools can be indicated by ID. Additional Resources The getfattr(1) manual page. 7.5. 
Removing directory layouts Use the setfattr command to remove layouts from a directory. Note When you set a file layout, you cannot change or remove it. Prerequisites A directory with a layout. Procedure To remove a layout from a directory: Syntax Example To remove the pool_namespace field: Syntax Example Note The pool_namespace field is the only field you can remove separately. Additional Resources The setfattr(1) manual page Additional Resources See the Deployment of the Ceph File System section in the Red Hat Ceph Storage File System Guide . See the getfattr(1) manual page for more information. See the setfattr(1) manual page for more information. | [
"setfattr -n ceph. TYPE .layout. FIELD -v VALUE PATH",
"setfattr -n ceph.file.layout.stripe_unit -v 1048576 test",
"getfattr -n ceph. TYPE .layout PATH",
"getfattr -n ceph.dir.layout /home/test ceph.dir.layout=\"stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs_data\"",
"getfattr -n ceph. TYPE .layout. FIELD _PATH",
"getfattr -n ceph.file.layout.pool test ceph.file.layout.pool=\"cephfs_data\"",
"setfattr -x ceph.dir.layout DIRECTORY_PATH",
"[user@client ~]USD setfattr -x ceph.dir.layout /home/cephfs",
"setfattr -x ceph.dir.layout.pool_namespace DIRECTORY_PATH",
"[user@client ~]USD setfattr -x ceph.dir.layout.pool_namespace /home/cephfs"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/file_system_guide/file-and-directory-layouts |
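The "Layouts Inheritance" overview above has no end-to-end example; the following sketch shows a directory layout being set and then inherited by a newly created file. The mount point and pool name are illustrative assumptions.

# Set a layout field on a directory; files created afterwards inherit it.
setfattr -n ceph.dir.layout.pool -v cephfs_data /mnt/cephfs/projects
touch /mnt/cephfs/projects/report.txt                      # new file inherits the directory layout
getfattr -n ceph.file.layout.pool /mnt/cephfs/projects/report.txt
# Later changes to the directory layout affect only files created after the change, not report.txt.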
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . The best way to become familiar with a new programming language or a technology is to create a "Hello World" application. You can create a "Hello World" application for JBoss EAP by using Maven as the project management tool. To create a Hello World application, deploy it and test the deployment, follow these procedures: Bare metal deployment Creating a Maven project for a hello world application Creating a hello world servlet Deploying an application to a bare metal installation Adding the Maven dependencies and profile required for integration tests Testing an application deployed on JBoss EAP that is running on bare metal OpenShift Container Platform deployment Creating a Maven project for a hello world application Creating a hello world servlet Deploying an application to OpenShift Container Platform Adding the Maven dependencies and profile required for integration tests Testing an application deployed to JBoss EAP on OpenShift Container Platform | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/getting_started_with_developing_applications_for_jboss_eap_deployment/con_making-open-source-more-inclusive |
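The procedures themselves are in the linked sections; purely as a hedged sketch of the Maven side of the workflow (the archetype coordinates and the wildfly:deploy goal are common conventions, not taken from this page), creating, building, and deploying the project usually reduces to a few commands:

# Generate a minimal web application skeleton from the standard Apache webapp archetype.
mvn archetype:generate -DgroupId=com.example -DartifactId=helloworld \
    -DarchetypeGroupId=org.apache.maven.archetypes \
    -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false
cd helloworld
mvn package            # builds target/helloworld.war
mvn wildfly:deploy     # deploys to a running JBoss EAP server, assuming the wildfly-maven-plugin is configured in pom.xml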
Chapter 1. Dynamically provisioned OpenShift Data Foundation deployed on AWS | Chapter 1. Dynamically provisioned OpenShift Data Foundation deployed on AWS 1.1. Replacing operational or failed storage devices on AWS user-provisioned infrastructure When you need to replace a device in a dynamically created storage cluster on an AWS user-provisioned infrastructure, you must replace the storage node. For information about how to replace nodes, see: Replacing an operational AWS node on user-provisioned infrastructure . Replacing a failed AWS node on user-provisioned infrastructure . 1.2. Replacing operational or failed storage devices on AWS installer-provisioned infrastructure When you need to replace a device in a dynamically created storage cluster on an AWS installer-provisioned infrastructure, you must replace the storage node. For information about how to replace nodes, see: Replacing an operational AWS node on installer-provisioned infrastructure . Replacing a failed AWS node on installer-provisioned infrastructure . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/replacing_devices/dynamically_provisioned_openshift_data_foundation_deployed_on_aws |
Chapter 12. Kernel | Chapter 12. Kernel Kernel version in RHEL 7.6 Red Hat Enterprise Linux 7.6 is distributed with the kernel version 3.10.0-957. (BZ#1801759) The kdump FCoE target has been added into the kexec-tools documents This update adds the kdump Fibre Channel over Ethernet (FCoE) target into the kexec-tools documents. As a result, users now have better understanding about the state and details of kdump on FCoE target support. (BZ#1352763) The SCHED_DEADLINE scheduler class enabled This update adds support for the SCHED_DEADLINE scheduler class for the Linux kernel. The scheduler enables predictable task scheduling based on application deadlines. SCHED_DEADLINE benefits periodic workloads by guaranteeing timing isolation, which is not based only on a fixed priority but also on the applications' timing requirements. (BZ#1344565) User mount namespaces now fully supported The mount namespaces feature, previously available as a Technology Preview, is now fully supported. (BZ#1350553) kernel.shmmax and kernel.shmall updated to kernel defaults on IBM Z Previously, applications that required a large amount of memory in some cases terminated unexpectedly due to low values of the kernel.shmmax and kernel.shmall parameters on IBM Z. This update aligns the values of kernel.shmmax and kernel.shmall with kernel defaults, which helps avoid the described crashes. (BZ# 1493069 ) Updated aQuantia Corporation atlantic Network driver The aQuantia Corporation Network driver, atlantic.ko.xz , has been updated to version 2.0.2.1-kern and it is now fully supported. (BZ#1451438) Thunderbolt 3 is now supported This update adds support for the Thunderbolt 3 interface. (BZ#1620372) Intel(R) Omni-Path Architecture (OPA) Host Software Intel Omni-Path Architecture (OPA) host software is fully supported in Red Hat Enterprise Linux 7.6. Intel OPA provides Host Fabric Interface (HFI) hardware with initialization and setup for high performance data transfers (high bandwidth, high message rate, low latency) between compute and I/O nodes in a clustered environment. For instructions on installing Intel Omni-Path Architecture documentation, see: https://www.intel.com/content/dam/support/us/en/documents/network-and-i-o/fabric-products/Intel_OP_Software_RHEL_7_6_RN_K34562.pdf (BZ# 1627126 ) opal-prd rebased to version 6.0.4 on the little-endian variant of IBM POWER Systems On the little-endian variant of IBM POWER Systems, the opal-prd packages have been upgraded to upstream version 6.0.4, which provides a number of bug fixes and enhancements over the version. For example: Performance in High Performance Computing (HPC) environments has been improved. The powernv_flash module is now explicitly loaded on systems based on Baseboard Management Controller (BMC), which ensures that the flash device is created before the opal-prd daemon starts. Error on the first failure for soft or hard offline is no longer displayed by the opal-prd daemon. (BZ# 1564097 , BZ#1537001) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/new_features_kernel |
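For the kernel.shmmax and kernel.shmall note on IBM Z, the running values and any lingering local override can be checked with sysctl; a small hedged example (the override location is only a typical default):

sysctl kernel.shmmax kernel.shmall                          # values currently in effect
grep -rs "kernel.shm" /etc/sysctl.conf /etc/sysctl.d/       # look for an old, low override
# After removing such an override, reload the settings so the kernel defaults apply again:
sysctl --system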
Chapter 25. Uninstalling the integrated IdM DNS service from an IdM server | Chapter 25. Uninstalling the integrated IdM DNS service from an IdM server If you have more than one server with integrated DNS in an Identity Management (IdM) deployment, you might decide to remove the integrated DNS service from one of the servers. To do this, you must first decommission the IdM server completely before re-installing IdM on it, this time without the integrated DNS. Note While you can add the DNS role to an IdM server, IdM does not provide a method to remove only the DNS role from an IdM server: the ipa-dns-install command does not have an --uninstall option. Prerequisites You have integrated DNS installed on an IdM server. This is not the last integrated DNS service in your IdM topology. Procedure Identify the redundant DNS service and follow the procedure in Uninstalling an IdM server on the IdM replica that hosts this service. On the same host, follow the procedure in either Without integrated DNS, with an integrated CA as the root CA or Without integrated DNS, with an external CA as the root CA , depending on your use case. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/installing_identity_management/uninstalling-the-integrated-idm-dns-service-from-an-idm-server_installing-identity-management |
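The linked procedures contain the authoritative steps; only as a rough, hedged outline (hostnames are placeholders and the exact reinstall options depend on your CA topology), the flow combines a full decommission with a DNS-less reinstall:

# On another IdM server: remove the replica that carries the redundant DNS role from the topology.
ipa server-del replica.idm.example.com
# On the decommissioned host: remove the old installation.
ipa-server-install --uninstall
# Re-enroll and reinstall the host as a replica, this time omitting --setup-dns,
# following the linked "Without integrated DNS" procedure for your CA setup.
ipa-replica-install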
Composing, installing, and managing RHEL for Edge images | Composing, installing, and managing RHEL for Edge images Red Hat Enterprise Linux 8 Creating, deploying, and managing Edge systems with Red Hat Enterprise Linux 8 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/composing_installing_and_managing_rhel_for_edge_images/index |
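This entry is only the guide's cover text; as a single hedged illustration of the workflow it introduces (the blueprint name is hypothetical and image builder is assumed to be installed), composing a RHEL for Edge commit looks like:

composer-cli blueprints push edge-blueprint.toml       # upload a blueprint describing the edge system
composer-cli compose start edge-blueprint edge-commit  # build a RHEL for Edge commit from it
composer-cli compose status                            # follow the build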
Chapter 16. Jakarta Authentication | Chapter 16. Jakarta Authentication 16.1. About Jakarta Authentication Security Jakarta Authentication is a pluggable interface for Java applications. For information about the specification, see the Jakarta Authentication specification . 16.2. Configure Jakarta Authentication You can authenticate a Jakarta Authentication provider by adding <authentication-jaspi> element to your security domain. The configuration is similar to that of a standard authentication module, but login module elements are enclosed in a <login-module-stack> element. The structure of the configuration is: Example: Structure of the authentication-jaspi Element <authentication-jaspi> <login-module-stack name="..."> <login-module code="..." flag="..."> <module-option name="..." value="..."/> </login-module> </login-module-stack> <auth-module code="..." login-module-stack-ref="..."> <module-option name="..." value="..."/> </auth-module> </authentication-jaspi> The login module itself is configured the same way as a standard authentication module. The web-based management console does not expose the configuration of JASPI authentication modules. You must stop the JBoss EAP running instance completely before adding the configuration directly to the EAP_HOME /domain/configuration/domain.xml file or the EAP_HOME /standalone/configuration/standalone.xml file. 16.3. Configure Jakarta Authentication Security Using Elytron Starting in JBoss EAP 7.3, the elytron subsystem provides an implementation of the Servlet profile from the Jakarta Authentication. This allows tighter integration with the security features provided by Elytron. Enabling Jakarta Authentication for a Web Application For the Jakarta Authentication integration to be enabled for a web application, the web application needs to be associated with either an Elytron http-authentication-factory or a security-domain . By doing this, the Elytron security handlers get installed for the deployment and the Elytron security framework gets activated for the deployment. When the Elytron security framework is activated for a deployment, the globally registered AuthConfigFactory is queried when requests are handled. It will identify if an AuthConfigProvider , which should be used for that deployment, has been registered. If an AuthConfigProvider is found, then JASPI authentication will be used instead of the deployment's authentication configuration. If no AuthConfigProvider is found, then the authentication configuration for the deployment will be used instead. This could result in one of the three possibilities: Use of authentication mechanisms from an http-authentication-factory . Use of mechanisms specified in the web.xml . No authentication is performed if the application does not have any mechanisms defined. Any updates made to the AuthConfigFactory are immediately available. This means if an AuthConfigProvider is registered and is a match for an existing application, it will start to be used immediately without requiring redeployment of the application. All web applications deployed to JBoss EAP have a security domain, which will be resolved in the following order: From the deployment descriptors or annotations of the deployment. The value defined on the default-security-domain attribute on the undertow subsystem. Default to other . 
Note It is assumed that this security domain is a reference to the PicketBox security domain, so the final step in activation is ensuring this is mapped to Elytron using an application-security-domain resource in the undertow subsystem. This mapping can do one of the following: Reference an elytron security domain directly, for example: Reference a http-authentication-factory resource to obtain instances of authentication mechanisms, for example: The minimal steps to enable the Jakarta Authentication integration are: Leave the default-security-domain attribute on the undertow subsystem undefined so that it defaults to other . Add an application-security-domain mapping from other to an Elytron security domain. The security domain associated with a deployment in these steps is the security domain that will be wrapped in a CallbackHandler to be passed into the ServerAuthModule instances used for authentication. Additional Options Two additional attributes have been added to the application-security-domain resource to allow some further control of the Jakarta Authentication behavior. Table 16.1. Attributes Added to the application-security-domain Resource Attribute Description enable-jaspi Can be set to false to disable Jakarta Authentication support for all deployments using this mapping. integrated-jaspi By default, all identities are loaded from the security domain. If set to false , ad-hoc identities will be created instead. Subsystem Configuration One way to register a configuration that will result in an AuthConfigProvider being returned for a deployment is to register a jaspi-configuration in the elytron subsystem. The following command demonstrates how to add a configuration containing two ServerAuthModule definitions. This results in the following configuration being persisted. <jaspi> <jaspi-configuration name="simple-configuration" layer="HttpServlet" application-context="default-host /webctx" description="Elytron Test Configuration"> <server-auth-modules> <server-auth-module class-name="org.wildfly.security.examples.jaspi.SimpleServerAuthModule" module="org.wildfly.security.examples.jaspi" flag="OPTIONAL"> <options> <property name="a" value="b"/> <property name="c" value="d"/> </options> </server-auth-module> <server-auth-module class-name="org.wildfly.security.examples.jaspi.SecondServerAuthModule" module="org.wildfly.security.examples.jaspi"/> </server-auth-modules> </jaspi-configuration> </jaspi> Note The name attribute is just a name that allows the resource to be referenced in the management model. The layer and application-context attributes are used when registering this configuration with the AuthConfigFactory . Both of these attributes can be omitted allowing wildcard matching. The description attribute is also optional and is used to provide a description to the AuthConfigFactory . Within the configuration, one or more server-auth-module instances can be defined with the following attributes. class-name - The fully qualified class name of the ServerAuthModule . module - The module to load the ServerAuthModule from. flag - The control flag to indicate how this module operates in relation to the other modules. options - Configuration options to be passed into the ServerAuthModule on initialization. Configuration defined in this way is immediately registered with the AuthConfigFactory . Any existing deployments using the Elytron security framework, that matches the layer and application-context , will immediately start making use of this configuration. 
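The management CLI command that the "Subsystem Configuration" text refers to ("The following command demonstrates...") is not reproduced in this extract. The sketch below is a reconstruction derived from the persisted <jaspi-configuration> XML shown above; treat the attribute syntax as an approximation rather than the original command.

# Run from EAP_HOME/bin against a running server; the operation mirrors the XML that is persisted.
./jboss-cli.sh --connect <<'EOF'
/subsystem=elytron/jaspi-configuration=simple-configuration:add(layer=HttpServlet, application-context="default-host /webctx", description="Elytron Test Configuration", server-auth-modules=[{class-name=org.wildfly.security.examples.jaspi.SimpleServerAuthModule, module=org.wildfly.security.examples.jaspi, flag=OPTIONAL, options={a=b, c=d}}, {class-name=org.wildfly.security.examples.jaspi.SecondServerAuthModule, module=org.wildfly.security.examples.jaspi}])
EOF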
Programmatic Configuration The APIs defined within the Jakarta Authentication specification allow for applications to dynamically register custom AuthConfigProvider instances. However, the specification does not provide the actual implementations to be used or any standard way to create instances of the implementations. The Elytron project contains a simple utility that deployments can use to help with this. The following code example demonstrates how to use this API to register a configuration similar to the one illustrated in the Subsystem Configuration above. String registrationId = org.wildfly.security.auth.jaspi.JaspiConfigurationBuilder.builder("HttpServlet", servletContext.getVirtualServerName() + " " + servletContext.getContextPath()) .addAuthModuleFactory(SimpleServerAuthModule::new, Flag.OPTIONAL, Collections.singletonMap("a", "b")) .addAuthModuleFactory(SecondServerAuthModule::new) .register(); As an example, this code could be executed within the init() method of a Servlet to register the AuthConfigProvider specific for that deployment. In this code example, the application context has also been assembled by consulting the ServletContext . The register() method returns the resulting registration ID that can also be used to subsequently remove this registration directly from the AuthConfigFactory . As with the Subsystem Configuration , this call also has an immediate effect and will be live for all web applications using the Elytron security framework. Authentication Process Based on the configuration of the application-security-domain resource in the undertow subsystem, the CallbackHandler passed to the ServerAuthModule can operate in either of the following modes: Integrated Mode Non-integrated Mode Integrated Mode When operating in the integrated mode, although the ServerAuthModule instances will be handling the actual authentication, the resulting identity will be loaded from the referenced SecurityDomain using the SecurityRealms referenced by that SecurityDomain . In this mode, it is still possible to override the roles that will be assigned within the servlet container. The advantage of this mode is that ServerAuthModules are able to take advantage of the Elytron configuration for the loading of identities, so that the identities stored in the usual locations, such as databases and LDAP, can be loaded without the ServerAuthModule needing to be aware of these locations. In addition, other Elytron configuration can be applied, such as role and permission mapping. The referenced SecurityDomain can also be referenced in other places, such as for SASL authentication or other non JASPI applications, all backed by a common repository of identities. Table 16.2. Operations of the CallbackHandlers method in the integrated mode. Operation Description PasswordValidationCallback The username and password will be used with the SecurityDomain to perform an authentication. If successful, there will be an authenticated identity. CallerPrincipalCallback This Callback is used to establish the authorized identity or the identity that will be available once the request reached the web application. Note If an authenticated identity has already been established via the PasswordValidationCallback , this Callback is interpreted as a run-as request. In this case, authorization checks are performed to ensure the authenticated identity is authorized to run as the identity specified in this Callback . 
If no authenticated identity has been established by a PasswordValidationCallback , it is assumed that the ServerAuthModule has handled the authentication step. If a Callback is received with a null Principal and name, then: If an authenticated identity has already been established, authorization will be performed as that identity. If no identity has been established, authorization of the anonymous identity will be performed. Where authorization of the anonymous identity is performed, the SecurityDomain must have been configured to grant the anonymous identity the LoginPermission . GroupPrincipalCallback In this mode, the attribute loading, role decoding, and role mapping configured on the security domain are used to establish the identity. If this Callback is received, the groups specified are used to determine the roles that are assigned to the identity. These roles apply while the request is in the servlet container and are visible in the servlet container only. Non-Integrated Mode When operating in non-integrated mode, the ServerAuthModules are completely responsible for all authentication and identity management. The specified Callbacks can be used to establish an identity. The resulting identity will be created on the SecurityDomain but it will be independent of any identities stored in referenced SecurityRealms . The advantage of this mode is that JASPI configurations that are able to completely handle the identities can be deployed to the application server without requiring anything beyond a simple SecurityDomain definition. There is no need for this SecurityDomain to actually contain the identities that will be used at runtime. The disadvantage of this mode is that the ServerAuthModule is now responsible for all identity handling, potentially making the implementation much more complex. Table 16.3. Operations of the CallbackHandler in the non-integrated mode. Operation Description PasswordValidationCallback The Callback is not supported in this mode. The purpose of this mode is for the ServerAuthModule to operate independently of the referenced SecurityDomain . Requesting a password to be validated would not be suitable. CallerPrincipalCallback This Callback is used to establish the Principal for the resulting identity. Because the ServerAuthModule is handling all of the identity checking requirements, no checks are performed to verify if the identity exists in the security domain and no authorization checks are performed. If a Callback is received with a null Principal and name, then the identity will be established as the anonymous identity. Because the ServerAuthModule is making the decisions, no authorization check will be performed with the SecurityDomain . GroupPrincipalCallback As the identity is created in this mode without loading from the SecurityDomain , it will by default have no roles assigned. If this Callback is received, the groups will be taken and assigned to the resulting identity while the request is in the servlet container. These roles will be visible in the servlet container only. validateRequest During the call to validateRequest on the ServerAuthContext , the individual ServerAuthModule instances will be called in the order in which they are defined. A control flag can also be specified for each module. This flag defines how the response should be interpreted and whether processing should continue to the next server authentication module or return immediately.
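To make the Callback handling described in the tables above concrete, the following is a minimal, illustrative sketch of a ServerAuthModule . The class name, the X-Username header it trusts, and the Users group are examples invented for this sketch and are not part of the server configuration or API; a real module would perform genuine credential validation before establishing the caller.
import java.security.Principal;
import java.util.Map;
import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.message.AuthException;
import javax.security.auth.message.AuthStatus;
import javax.security.auth.message.MessageInfo;
import javax.security.auth.message.MessagePolicy;
import javax.security.auth.message.callback.CallerPrincipalCallback;
import javax.security.auth.message.callback.GroupPrincipalCallback;
import javax.security.auth.message.module.ServerAuthModule;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HeaderServerAuthModule implements ServerAuthModule {

    private CallbackHandler handler;

    @Override
    public void initialize(MessagePolicy requestPolicy, MessagePolicy responsePolicy,
            CallbackHandler handler, Map options) throws AuthException {
        // The CallbackHandler supplied here is the handler described above.
        this.handler = handler;
    }

    @Override
    public Class[] getSupportedMessageTypes() {
        return new Class[] { HttpServletRequest.class, HttpServletResponse.class };
    }

    @Override
    public AuthStatus validateRequest(MessageInfo messageInfo, Subject clientSubject,
            Subject serviceSubject) throws AuthException {
        HttpServletRequest request = (HttpServletRequest) messageInfo.getRequestMessage();
        String username = request.getHeader("X-Username"); // hypothetical header, for illustration only
        try {
            if (username == null) {
                // Null Principal and name: anonymous or previously established identity handling as described above.
                handler.handle(new Callback[] {
                        new CallerPrincipalCallback(clientSubject, (Principal) null) });
            } else {
                // Establish the caller and a group; in integrated mode the security domain's
                // role decoding and mapping still apply to the resulting identity.
                handler.handle(new Callback[] {
                        new CallerPrincipalCallback(clientSubject, username),
                        new GroupPrincipalCallback(clientSubject, new String[] { "Users" }) });
            }
            return AuthStatus.SUCCESS;
        } catch (Exception e) {
            throw (AuthException) new AuthException("Validation failed").initCause(e);
        }
    }

    @Override
    public AuthStatus secureResponse(MessageInfo messageInfo, Subject serviceSubject) throws AuthException {
        // Nothing to do for the response in this sketch.
        return AuthStatus.SEND_SUCCESS;
    }

    @Override
    public void cleanSubject(MessageInfo messageInfo, Subject subject) throws AuthException {
        if (subject != null) {
            subject.getPrincipals().clear();
        }
    }
}
A module like this would be packaged as a JBoss EAP module and referenced from a server-auth-module definition or from the JaspiConfigurationBuilder call shown earlier.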
Control Flags Whether the configuration is provided within the elytron subsystem or using the JaspiConfigurationBuilder API, it is possible to associate a control flag with each ServerAuthModule . If one is not specified, it defaults to REQUIRED . The flags have the following meanings, depending on the result returned by the module. Flag AuthStatus.SEND_SUCCESS AuthStatus.SEND_FAILURE, AuthStatus.SEND_CONTINUE Required Validation will continue to the remaining modules. Provided the requirements of the remaining modules are satisfied, the request will be allowed to proceed to authorization. Validation will continue to the remaining modules; however, regardless of their outcomes, the validation will not be successful and control will return to the client. Requisite Validation will continue to the remaining modules. Provided the requirements of the remaining modules are satisfied, the request will be allowed to proceed to authorization. The request will return immediately to the client. Sufficient Validation is deemed successful and complete, provided no Required or Requisite module has returned an AuthStatus other than AuthStatus.SUCCESS . The request will proceed to authorization of the secured resource. Validation will continue down the list of remaining modules. This status will only affect the decision if there are no REQUIRED or REQUISITE modules. Optional Validation will continue to the remaining modules, provided any Required or Requisite modules have not returned SUCCESS . This will be sufficient for validation to be deemed successful and for the request to proceed to the authorization stage and the secured resource. Validation will continue down the list of remaining modules. This status will only affect the decision if there are no REQUIRED or REQUISITE modules. Note For all ServerAuthModule instances, if they throw an AuthException , an error will be immediately reported to the client with no further module calls. secureResponse During the call to secureResponse , each ServerAuthModule is called, but this time in reverse order. A module only undertakes an action in secureResponse if it undertook an action in validateRequest , and it is the responsibility of the module to track this. The control flag has no effect on secureResponse processing. Processing ends when one of the following is true: All of the ServerAuthModule instances have been called. A module returns AuthStatus.SEND_FAILURE . A module throws an AuthException . SecurityIdentity Creation Once the authentication process has completed, the org.wildfly.security.auth.server.SecurityIdentity for the deployment's SecurityDomain will have been created as a result of the Callbacks to the CallbackHandler . Depending on the Callbacks , this will either be an identity loaded directly from the SecurityDomain , or it will be an ad-hoc identity described by the callbacks. This SecurityIdentity will be associated with the request, in the same way it is done for other authentication mechanisms. | [
"<authentication-jaspi> <login-module-stack name=\"...\"> <login-module code=\"...\" flag=\"...\"> <module-option name=\"...\" value=\"...\"/> </login-module> </login-module-stack> <auth-module code=\"...\" login-module-stack-ref=\"...\"> <module-option name=\"...\" value=\"...\"/> </auth-module> </authentication-jaspi>",
"/subsystem=undertow/application-security-domain=MyAppSecurity:add(security-domain=ApplicationDomain)",
"/subsystem=undertow/application-security-domain=MyAppSecurity:add(http-authentication-factory=application-http-authentication)",
"/subsystem=elytron/jaspi-configuration=simple-configuration:add(layer=HttpServlet, application-context=\"default-host /webctx\", description=\"Elytron Test Configuration\", server-auth-modules=[{class-name=org.wildfly.security.examples.jaspi.SimpleServerAuthModule, module=org.wildfly.security.examples.jaspi, flag=OPTIONAL, options={a=b, c=d}}, {class-name=org.wildfly.security.examples.jaspi.SecondServerAuthModule, module=org.wildfly.security.examples.jaspi}])",
"<jaspi> <jaspi-configuration name=\"simple-configuration\" layer=\"HttpServlet\" application-context=\"default-host /webctx\" description=\"Elytron Test Configuration\"> <server-auth-modules> <server-auth-module class-name=\"org.wildfly.security.examples.jaspi.SimpleServerAuthModule\" module=\"org.wildfly.security.examples.jaspi\" flag=\"OPTIONAL\"> <options> <property name=\"a\" value=\"b\"/> <property name=\"c\" value=\"d\"/> </options> </server-auth-module> <server-auth-module class-name=\"org.wildfly.security.examples.jaspi.SecondServerAuthModule\" module=\"org.wildfly.security.examples.jaspi\"/> </server-auth-modules> </jaspi-configuration> </jaspi>",
"String registrationId = org.wildfly.security.auth.jaspi.JaspiConfigurationBuilder.builder(\"HttpServlet\", servletContext.getVirtualServerName() + \" \" + servletContext.getContextPath()) .addAuthModuleFactory(SimpleServerAuthModule::new, Flag.OPTIONAL, Collections.singletonMap(\"a\", \"b\")) .addAuthModuleFactory(SecondServerAuthModule::new) .register();"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/development_guide/jakarta_authentication |
3.6. Testing the Resource Configuration | 3.6. Testing the Resource Configuration You can validate your system configuration with the following procedure. You should be able to mount the exported file system with either NFSv3 or NFSv4. On a node outside of the cluster, residing in the same network as the deployment, verify that the NFS share can be seen. For this example, we are using the 192.168.122.0/24 network. To verify that you can mount the NFS share with NFSv4, mount the NFS share to a directory on the client node. After mounting, verify that the contents of the export directories are visible. Unmount the share after testing. Verify that you can mount the NFS share with NFSv3. After mounting, verify that the test file clientdatafile2 is visible. Unlike NFSv4, NFSv3 does not use the virtual file system, so you must mount a specific export. Unmount the share after testing. To test for failover, perform the following steps. On a node outside of the cluster, mount the NFS share and verify access to the clientdatafile1 we created in Section 3.3, "NFS Share Setup" . From a node within the cluster, determine which node in the cluster is running nfsgroup . In this example, nfsgroup is running on z1.example.com . From a node within the cluster, put the node that is running nfsgroup in standby mode. Verify that nfsgroup successfully starts on the other cluster node. From the node outside the cluster on which you have mounted the NFS share, verify that this outside node still continues to have access to the test file within the NFS mount. Service will be lost briefly for the client during the failover, but the client should recover with no user intervention. By default, clients using NFSv4 may take up to 90 seconds to recover the mount; this 90 seconds represents the NFSv4 file lease grace period observed by the server on startup. NFSv3 clients should recover access to the mount in a matter of a few seconds. From a node within the cluster, remove the node that was initially running nfsgroup from standby mode. This will not in itself move the cluster resources back to this node. Note Removing a node from standby mode does not in itself cause the resources to fail back over to that node. This will depend on the resource-stickiness value for the resources. For information on the resource-stickiness meta attribute, see Configuring a Resource to Prefer its Current Node in the Red Hat High Availability Add-On Reference . | [
"showmount -e 192.168.122.200 Export list for 192.168.122.200: /nfsshare/exports/export1 192.168.122.0/255.255.255.0 /nfsshare/exports 192.168.122.0/255.255.255.0 /nfsshare/exports/export2 192.168.122.0/255.255.255.0",
"mkdir nfsshare mount -o \"vers=4\" 192.168.122.200:export1 nfsshare ls nfsshare clientdatafile1 umount nfsshare",
"mkdir nfsshare mount -o \"vers=3\" 192.168.122.200:/nfsshare/exports/export2 nfsshare ls nfsshare clientdatafile2 umount nfsshare",
"mkdir nfsshare mount -o \"vers=4\" 192.168.122.200:export1 nfsshare ls nfsshare clientdatafile1",
"pcs status Full list of resources: myapc (stonith:fence_apc_snmp): Started z1.example.com Resource Group: nfsgroup my_lvm (ocf::heartbeat:LVM): Started z1.example.com nfsshare (ocf::heartbeat:Filesystem): Started z1.example.com nfs-daemon (ocf::heartbeat:nfsserver): Started z1.example.com nfs-root (ocf::heartbeat:exportfs): Started z1.example.com nfs-export1 (ocf::heartbeat:exportfs): Started z1.example.com nfs-export2 (ocf::heartbeat:exportfs): Started z1.example.com nfs_ip (ocf::heartbeat:IPaddr2): Started z1.example.com nfs-notify (ocf::heartbeat:nfsnotify): Started z1.example.com",
"pcs node standby z1.example.com",
"pcs status Full list of resources: Resource Group: nfsgroup my_lvm (ocf::heartbeat:LVM): Started z2.example.com nfsshare (ocf::heartbeat:Filesystem): Started z2.example.com nfs-daemon (ocf::heartbeat:nfsserver): Started z2.example.com nfs-root (ocf::heartbeat:exportfs): Started z2.example.com nfs-export1 (ocf::heartbeat:exportfs): Started z2.example.com nfs-export2 (ocf::heartbeat:exportfs): Started z2.example.com nfs_ip (ocf::heartbeat:IPaddr2): Started z2.example.com nfs-notify (ocf::heartbeat:nfsnotify): Started z2.example.com",
"ls nfsshare clientdatafile1",
"pcs node unstandby z1.example.com"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/s1-unittestNFS-HAAA |
7.2. Performing Remote Queries via the Hot Rod Java Client | Remote querying over Hot Rod can be enabled once the RemoteCacheManager has been configured with the Protobuf marshaller. The following procedure describes how to enable remote querying over its caches. Prerequisites RemoteCacheManager must be configured to use the Protobuf Marshaller. Procedure 7.1. Enabling Remote Querying via Hot Rod Add the infinispan-remote.jar The infinispan-remote.jar is an uberjar, and therefore no other dependencies are required for this feature. Enable indexing on the cache configuration. Indexing is not mandatory for Remote Queries, but it is highly recommended because it makes searches on caches that contain large amounts of data significantly faster. Indexing can be configured at any time. Enabling and configuring indexing is the same as for Library mode. Add the following configuration within the cache-container element located inside the Infinispan subsystem element. Register the Protobuf schema definition files Register the Protobuf schema definition files by adding them in the ___protobuf_metadata system cache. The cache key is a string that denotes the file name and the value is the contents of the .proto file, as a string. Alternatively, protobuf schemas can also be registered by invoking the registerProtofile methods of the server's ProtobufMetadataManager MBean. There is one instance of this MBean per cache container and it is backed by the ___protobuf_metadata cache, so the two approaches are equivalent. For an example of providing the protobuf schema via the ___protobuf_metadata system cache, see Example 7.6, "Registering a Protocol Buffers schema file" . The following example demonstrates how to invoke the registerProtofile methods of the ProtobufMetadataManager MBean. Example 7.1. Registering Protobuf schema definition files via JMX Result All data placed in the cache is immediately searchable, whether or not indexing is in use. Entries do not need to be annotated, unlike embedded queries. The entity classes are only meaningful to the Java client and do not exist on the server. Once remote querying has been enabled, the QueryFactory can be obtained using the following: Example 7.2. Obtaining the QueryFactory Queries can now be run over Hot Rod similarly to Library mode. | [
"<!-- A basic example of an indexed local cache that uses the RAM Lucene directory provider --> <local-cache name=\"an-indexed-cache\" start=\"EAGER\"> <!-- Enable indexing using the RAM Lucene directory provider --> <indexing index=\"ALL\"> <property name=\"default.directory_provider\">ram</property> </indexing> </local-cache>",
"import javax.management.MBeanServerConnection; import javax.management.ObjectName; import javax.management.remote.JMXConnector; import javax.management.remote.JMXServiceURL; String serverHost = ... // The address of your JDG server int serverJmxPort = ... // The JMX port of your server String cacheContainerName = ... // The name of your cache container String schemaFileName = ... // The name of the schema file String schemaFileContents = ... // The Protobuf schema file contents JMXConnector jmxConnector = JMXConnectorFactory.connect(new JMXServiceURL( \"service:jmx:remoting-jmx://\" + serverHost + \":\" + serverJmxPort)); MBeanServerConnection jmxConnection = jmxConnector.getMBeanServerConnection(); ObjectName protobufMetadataManagerObjName = new ObjectName(\"jboss.infinispan:type=RemoteQuery,name=\" + ObjectName.quote(cacheContainerName) + \",component=ProtobufMetadataManager\"); jmxConnection.invoke(protobufMetadataManagerObjName, \"registerProtofile\", new Object[]{schemaFileName, schemaFileContents}, new String[]{String.class.getName(), String.class.getName()}); jmxConnector.close();",
"import org.infinispan.client.hotrod.Search; import org.infinispan.query.dsl.QueryFactory; import org.infinispan.query.dsl.Query; import org.infinispan.query.dsl.SortOrder; remoteCache.put(2, new User(\"John\", 33)); remoteCache.put(3, new User(\"Alfred\", 40)); remoteCache.put(4, new User(\"Jack\", 56)); remoteCache.put(4, new User(\"Jerry\", 20)); QueryFactory qf = Search.getQueryFactory(remoteCache); Query query = qf.from(User.class) .orderBy(\"age\", SortOrder.ASC) .having(\"name\").like(\"J%\") .and().having(\"age\").gte(33) .toBuilder().build(); List<User> list = query.list(); assertEquals(2, list.size()); assertEquals(\"John\", list.get(0).getName()); assertEquals(33, list.get(0).getAge()); assertEquals(\"Jack\", list.get(1).getName()); assertEquals(56, list.get(1).getAge());"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/infinispan_query_guide/Remote_Querying_via_Hot_Rod |
Chapter 3. Configuring external alertmanager instances | Chapter 3. Configuring external alertmanager instances The OpenShift Container Platform monitoring stack includes a local Alertmanager instance that routes alerts from Prometheus. You can add external Alertmanager instances by configuring the cluster-monitoring-config config map in either the openshift-monitoring project or the user-workload-monitoring-config project. If you add the same external Alertmanager configuration for multiple clusters and disable the local instance for each cluster, you can then manage alert routing for multiple clusters by using a single external Alertmanager instance. Prerequisites You have installed the OpenShift CLI ( oc ). If you are configuring core OpenShift Container Platform monitoring components in the openshift-monitoring project : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config config map. If you are configuring components that monitor user-defined projects : You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have created the user-workload-monitoring-config config map. Procedure Edit the ConfigMap object. To configure additional Alertmanagers for routing alerts from core OpenShift Container Platform projects : Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add an additionalAlertmanagerConfigs: section under data/config.yaml/prometheusK8s . Add the configuration details for additional Alertmanagers in this section: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - <alertmanager_specification> For <alertmanager_specification> , substitute authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token ( bearerToken ) and client TLS ( tlsConfig ). The following sample config map configures an additional Alertmanager using a bearer token with client TLS authentication: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: "30s" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com To configure additional Alertmanager instances for routing alerts from user-defined projects : Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add a <component>/additionalAlertmanagerConfigs: section under data/config.yaml/ . 
Add the configuration details for additional Alertmanagers in this section: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: additionalAlertmanagerConfigs: - <alertmanager_specification> For <component> , substitute one of two supported external Alertmanager components: prometheus or thanosRuler . For <alertmanager_specification> , substitute authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token ( bearerToken ) and client TLS ( tlsConfig ). The following sample config map configures an additional Alertmanager using Thanos Ruler with a bearer token and client TLS authentication: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: "30s" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com Note Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects. Save the file to apply the changes to the ConfigMap object. The new component placement configuration is applied automatically. 3.1. Attaching additional labels to your time series and alerts Using the external labels feature of Prometheus, you can attach custom labels to all time series and alerts leaving Prometheus. Prerequisites If you are configuring core OpenShift Container Platform monitoring components : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. If you are configuring components that monitor user-defined projects : You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have created the user-workload-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the ConfigMap object: To attach custom labels to all time series and alerts leaving the Prometheus instance that monitors core OpenShift Container Platform projects : Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Define a map of labels you want to add for every metric under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: <key>: <value> 1 1 Substitute <key>: <value> with a map of key-value pairs where <key> is a unique name for the new label and <value> is its value. Warning Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten. 
For example, to add metadata about the region and environment to all time series and alerts, use: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: region: eu environment: prod To attach custom labels to all time series and alerts leaving the Prometheus instance that monitors user-defined projects : Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Define a map of labels you want to add for every metric under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: <key>: <value> 1 1 Substitute <key>: <value> with a map of key-value pairs where <key> is a unique name for the new label and <value> is its value. Warning Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten. Note In the openshift-user-workload-monitoring project, Prometheus handles metrics and Thanos Ruler handles alerting and recording rules. Setting externalLabels for prometheus in the user-workload-monitoring-config ConfigMap object will only configure external labels for metrics and not for any rules. For example, to add metadata about the region and environment to all time series and alerts related to user-defined projects, use: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: region: eu environment: prod Save the file to apply the changes. The new configuration is applied automatically. Note Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects. Warning When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted. Additional resources See Preparing to configure the monitoring stack for steps to create monitoring config maps. Enabling monitoring for user-defined projects 3.2. Setting log levels for monitoring components You can configure the log level for Alertmanager, Prometheus Operator, Prometheus, Thanos Querier, and Thanos Ruler. The following log levels can be applied to the relevant component in the cluster-monitoring-config and user-workload-monitoring-config ConfigMap objects: debug . Log debug, informational, warning, and error messages. info . Log informational, warning, and error messages. warn . Log warning and error messages only. error . Log error messages only. The default log level is info . Prerequisites If you are setting a log level for Alertmanager, Prometheus Operator, Prometheus, or Thanos Querier in the openshift-monitoring project : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. 
If you are setting a log level for Prometheus Operator, Prometheus, or Thanos Ruler in the openshift-user-workload-monitoring project : You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have created the user-workload-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the ConfigMap object: To set a log level for a component in the openshift-monitoring project : Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add logLevel: <log_level> for a component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2 1 The monitoring stack component for which you are setting a log level. For default platform monitoring, available component values are prometheusK8s , alertmanagerMain , prometheusOperator , and thanosQuerier . 2 The log level to set for the component. The available values are error , warn , info , and debug . The default value is info . To set a log level for a component in the openshift-user-workload-monitoring project : Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add logLevel: <log_level> for a component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2 1 The monitoring stack component for which you are setting a log level. For user workload monitoring, available component values are prometheus , prometheusOperator , and thanosRuler . 2 The log level to set for the component. The available values are error , warn , info , and debug . The default value is info . Save the file to apply the changes. The pods for the component restart automatically when you apply the log-level change. Note Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects. Warning When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted. Confirm that the log-level has been applied by reviewing the deployment or pod configuration in the related project. The following example checks the log level in the prometheus-operator deployment in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level" Example output - --log-level=debug Check that the pods for the component are running. The following example lists the status of pods in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get pods Note If an unrecognized logLevel value is included in the ConfigMap object, the pods for the component might not restart successfully.
You can do so for default platform monitoring and for user-defined workload monitoring. Important Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature. Prerequisites You have installed the OpenShift CLI ( oc ). If you are enabling the query log file feature for Prometheus in the openshift-monitoring project : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. If you are enabling the query log file feature for Prometheus in the openshift-user-workload-monitoring project : You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have created the user-workload-monitoring-config ConfigMap object. Procedure To set the query log file for Prometheus in the openshift-monitoring project : Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add queryLogFile: <path> for prometheusK8s under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: queryLogFile: <path> 1 1 The full path to the file in which queries will be logged. Save the file to apply the changes. Warning When you save changes to a monitoring config map, pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted. Verify that the pods for the component are running. The following sample command lists the status of pods in the openshift-monitoring project: USD oc -n openshift-monitoring get pods Read the query log: USD oc -n openshift-monitoring exec prometheus-k8s-0 -- cat <path> Important Revert the setting in the config map after you have examined the logged query information. To set the query log file for Prometheus in the openshift-user-workload-monitoring project : Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add queryLogFile: <path> for prometheus under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: queryLogFile: <path> 1 1 The full path to the file in which queries will be logged. Save the file to apply the changes. Note Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects. Warning When you save changes to a monitoring config map, pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted. Verify that the pods for the component are running. 
The following example command lists the status of pods in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get pods Read the query log: USD oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- cat <path> Important Revert the setting in the config map after you have examined the logged query information. Additional resources See Preparing to configure the monitoring stack for steps to create monitoring config maps See Enabling monitoring for user-defined projects for steps to enable user-defined monitoring. 3.4. Enabling query logging for Thanos Querier For default platform monitoring in the openshift-monitoring project, you can enable the Cluster Monitoring Operator to log all queries run by Thanos Querier. Important Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. Procedure You can enable query logging for Thanos Querier in the openshift-monitoring project: Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add a thanosQuerier section under data/config.yaml and add values as shown in the following example: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | thanosQuerier: enableRequestLogging: <value> 1 logLevel: <value> 2 1 Set the value to true to enable logging and false to disable logging. The default value is false . 2 Set the value to debug , info , warn , or error . If no value exists for logLevel , the log level defaults to error . Save the file to apply the changes. Warning When you save changes to a monitoring config map, pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted. Verification Verify that the Thanos Querier pods are running. The following sample command lists the status of pods in the openshift-monitoring project: USD oc -n openshift-monitoring get pods Run a test query using the following sample commands as a model: USD token=`oc create token prometheus-k8s -n openshift-monitoring` USD oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer USDtoken" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_version' Run the following command to read the query log: USD oc -n openshift-monitoring logs <thanos_querier_pod_name> -c thanos-query Note Because the thanos-querier pods are highly available (HA) pods, you might be able to see logs in only one pod. After you examine the logged query information, disable query logging by changing the enableRequestLogging value to false in the config map. Additional resources See Preparing to configure the monitoring stack for steps to create monitoring config maps. | [
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - <alertmanager_specification>",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: \"30s\" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: additionalAlertmanagerConfigs: - <alertmanager_specification>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: \"30s\" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: <key>: <value> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: region: eu environment: prod",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: <key>: <value> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: region: eu environment: prod",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2",
"oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"",
"- --log-level=debug",
"oc -n openshift-user-workload-monitoring get pods",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: queryLogFile: <path> 1",
"oc -n openshift-monitoring get pods",
"oc -n openshift-monitoring exec prometheus-k8s-0 -- cat <path>",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: queryLogFile: <path> 1",
"oc -n openshift-user-workload-monitoring get pods",
"oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- cat <path>",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | thanosQuerier: enableRequestLogging: <value> 1 logLevel: <value> 2",
"oc -n openshift-monitoring get pods",
"token=`oc create token prometheus-k8s -n openshift-monitoring` oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H \"Authorization: Bearer USDtoken\" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_version'",
"oc -n openshift-monitoring logs <thanos_querier_pod_name> -c thanos-query"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/monitoring/monitoring-configuring-external-alertmanagers_configuring-the-monitoring-stack |
Compiling your Red Hat build of Quarkus applications to native executables | Compiling your Red Hat build of Quarkus applications to native executables Red Hat build of Quarkus 3.2 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.2/html/compiling_your_red_hat_build_of_quarkus_applications_to_native_executables/index |
OpenStack Integration Test Suite Guide | OpenStack Integration Test Suite Guide Red Hat OpenStack Platform 17.0 Introduction to the OpenStack Integration Test Suite OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/openstack_integration_test_suite_guide/index |
Chapter 11. ThanosRuler [monitoring.coreos.com/v1] | Chapter 11. ThanosRuler [monitoring.coreos.com/v1] Description ThanosRuler defines a ThanosRuler deployment. Type object Required spec 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of the ThanosRuler cluster. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status status object Most recent observed status of the ThanosRuler cluster. Read-only. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 11.1.1. .spec Description Specification of the desired behavior of the ThanosRuler cluster. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Property Type Description additionalArgs array AdditionalArgs allows setting additional arguments for the ThanosRuler container. It is intended for e.g. activating hidden flags which are not supported by the dedicated configuration options yet. The arguments are passed as-is to the ThanosRuler container which may cause issues if they are invalid or not supported by the given ThanosRuler version. In case of an argument conflict (e.g. an argument which is already set by the operator itself) or when providing an invalid argument the reconciliation will fail and an error will be logged. additionalArgs[] object Argument as part of the AdditionalArgs list. affinity object If specified, the pod's scheduling constraints. alertDropLabels array (string) AlertDropLabels configure the label names which should be dropped in ThanosRuler alerts. The replica label thanos_ruler_replica will always be dropped in alerts. alertQueryUrl string The external Query URL the Thanos Ruler will set in the 'Source' field of all alerts. Maps to the '--alert.query-url' CLI arg. alertRelabelConfigFile string AlertRelabelConfigFile specifies the path of the alert relabeling configuration file. When used alongside with AlertRelabelConfigs, alertRelabelConfigFile takes precedence. alertRelabelConfigs object AlertRelabelConfigs configures alert relabeling in ThanosRuler. Alert relabel configurations must have the form as specified in the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alert_relabel_configs Alternative to AlertRelabelConfigFile, and lower order priority. alertmanagersConfig object Define configuration for connecting to alertmanager. Only available with thanos v0.10.0 and higher. Maps to the alertmanagers.config arg. alertmanagersUrl array (string) Define URLs to send alerts to Alertmanager. 
For Thanos v0.10.0 and higher, AlertManagersConfig should be used instead. Note: this field will be ignored if AlertManagersConfig is specified. Maps to the alertmanagers.url arg. containers array Containers allows injecting additional containers or modifying operator generated containers. This can be used to allow adding an authentication proxy to a ThanosRuler pod or to change the behavior of an operator generated container. Containers described here modify an operator generated container if they share the same name and modifications are done via a strategic merge patch. The current container names are: thanos-ruler and config-reloader . Overriding containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. containers[] object A single application container that you want to run within a pod. enforcedNamespaceLabel string EnforcedNamespaceLabel enforces adding a namespace label of origin for each alert and metric that is user created. The label value will always be the namespace of the object that is being created. evaluationInterval string Interval between consecutive evaluations. excludedFromEnforcement array List of references to PrometheusRule objects to be excluded from enforcing a namespace label of origin. Applies only if enforcedNamespaceLabel set to true. excludedFromEnforcement[] object ObjectReference references a PodMonitor, ServiceMonitor, Probe or PrometheusRule object. externalPrefix string The external URL the Thanos Ruler instances will be available under. This is necessary to generate correct URLs. This is necessary if Thanos Ruler is not served from root of a DNS name. grpcServerTlsConfig object GRPCServerTLSConfig configures the gRPC server from which Thanos Querier reads recorded rule data. Note: Currently only the CAFile, CertFile, and KeyFile fields are supported. Maps to the '--grpc-server-tls-*' CLI args. hostAliases array Pods' hostAliases configuration hostAliases[] object HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. image string Thanos container image URL. imagePullPolicy string Image pull policy for the 'thanos', 'init-config-reloader' and 'config-reloader' containers. See https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy for more details. imagePullSecrets array An optional list of references to secrets in the same namespace to use for pulling thanos images from registries see http://kubernetes.io/docs/user-guide/images#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. initContainers array InitContainers allows adding initContainers to the pod definition. Those can be used to e.g. fetch secrets for injection into the ThanosRuler configuration from external sources. Any errors during the execution of an initContainer will lead to a restart of the Pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ Using initContainers for any use case other then secret fetching is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. initContainers[] object A single application container that you want to run within a pod. labels object (string) Labels configure the external label pairs to ThanosRuler. 
A default replica label thanos_ruler_replica will be always added as a label with the value of the pod's name and it will be dropped in the alerts. listenLocal boolean ListenLocal makes the Thanos ruler listen on loopback, so that it does not bind against the Pod IP. logFormat string Log format for ThanosRuler to be configured with. logLevel string Log level for ThanosRuler to be configured with. minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its container crashing for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) This is an alpha field from kubernetes 1.22 until 1.24 which requires enabling the StatefulSetMinReadySeconds feature gate. nodeSelector object (string) Define which Nodes the Pods are scheduled on. objectStorageConfig object ObjectStorageConfig configures object storage in Thanos. Alternative to ObjectStorageConfigFile, and lower order priority. objectStorageConfigFile string ObjectStorageConfigFile specifies the path of the object storage configuration file. When used alongside with ObjectStorageConfig, ObjectStorageConfigFile takes precedence. paused boolean When a ThanosRuler deployment is paused, no actions except for deletion will be performed on the underlying objects. podMetadata object PodMetadata configures labels and annotations which are propagated to the ThanosRuler pods. The following items are reserved and cannot be overridden: * "app.kubernetes.io/name" label, set to "thanos-ruler". * "app.kubernetes.io/managed-by" label, set to "prometheus-operator". * "app.kubernetes.io/instance" label, set to the name of the ThanosRuler instance. * "thanos-ruler" label, set to the name of the ThanosRuler instance. * "kubectl.kubernetes.io/default-container" annotation, set to "thanos-ruler". portName string Port name used for the pods and governing service. Defaults to web . priorityClassName string Priority class assigned to the Pods prometheusRulesExcludedFromEnforce array PrometheusRulesExcludedFromEnforce - list of Prometheus rules to be excluded from enforcing of adding namespace labels. Works only if enforcedNamespaceLabel set to true. Make sure both ruleNamespace and ruleName are set for each pair Deprecated: use excludedFromEnforcement instead. prometheusRulesExcludedFromEnforce[] object PrometheusRuleExcludeConfig enables users to configure excluded PrometheusRule names and their namespaces to be ignored while enforcing namespace label for alerts and metrics. queryConfig object Define configuration for connecting to thanos query instances. If this is defined, the QueryEndpoints field will be ignored. Maps to the query.config CLI argument. Only available with thanos v0.11.0 and higher. queryEndpoints array (string) QueryEndpoints defines Thanos querier endpoints from which to query metrics. Maps to the --query flag of thanos ruler. replicas integer Number of thanos ruler instances to deploy. resources object Resources defines the resource requirements for single Pods. If not provided, no requests/limits will be set retention string Time duration ThanosRuler shall retain data for. Default is '24h', and must match the regular expression [0-9]+(ms|s|m|h|d|w|y) (milliseconds seconds minutes hours days weeks years). routePrefix string The route prefix ThanosRuler registers HTTP handlers for. This allows thanos UI to be served on a sub-path. ruleNamespaceSelector object Namespaces to be selected for Rules discovery. 
If unspecified, only the same namespace as the ThanosRuler object is in is used. ruleSelector object A label selector to select which PrometheusRules to mount for alerting and recording. securityContext object SecurityContext holds pod-level security attributes and common container settings. This defaults to the default PodSecurityContext. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to use to run the Thanos Ruler Pods. storage object Storage spec to specify how storage shall be used. tolerations array If specified, the pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. topologySpreadConstraints array If specified, the pod's topology spread constraints. topologySpreadConstraints[] object TopologySpreadConstraint specifies how to spread matching pods among the given topology. tracingConfig object TracingConfig configures tracing in Thanos. tracingConfigFile takes precedence over this field. This is an experimental feature , it may change in any upcoming release in a breaking way. tracingConfigFile string TracingConfig specifies the path of the tracing configuration file. This field takes precedence over tracingConfig . This is an experimental feature , it may change in any upcoming release in a breaking way. version string Version of Thanos to be deployed. volumeMounts array VolumeMounts allows configuration of additional VolumeMounts on the output StatefulSet definition. VolumeMounts specified will be appended to other VolumeMounts in the ruler container, that are generated as a result of StorageSpec objects. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. volumes array Volumes allows configuration of additional volumes on the output StatefulSet definition. Volumes specified will be appended to other volumes that are generated as a result of StorageSpec objects. volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod. web object Defines the configuration of the ThanosRuler web server. 11.1.2. .spec.additionalArgs Description AdditionalArgs allows setting additional arguments for the ThanosRuler container. It is intended for e.g. activating hidden flags which are not supported by the dedicated configuration options yet. The arguments are passed as-is to the ThanosRuler container which may cause issues if they are invalid or not supported by the given ThanosRuler version. In case of an argument conflict (e.g. an argument which is already set by the operator itself) or when providing an invalid argument the reconciliation will fail and an error will be logged. Type array 11.1.3. .spec.additionalArgs[] Description Argument as part of the AdditionalArgs list. Type object Required name Property Type Description name string Name of the argument, e.g. "scrape.discovery-reload-interval". value string Argument value, e.g. 30s. Can be empty for name-only arguments (e.g. --storage.tsdb.no-lockfile) 11.1.4. .spec.affinity Description If specified, the pod's scheduling constraints. Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. 
as some other pod(s)). 11.1.5. .spec.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 11.1.6. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 11.1.7. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 11.1.8. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 11.1.9. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 11.1.10. 
.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 11.1.11. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 11.1.12. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 11.1.13. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 11.1.14. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 11.1.15. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. 
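To make the preceding affinity subsections concrete, the following is a minimal sketch of a ThanosRuler resource that combines a few of the top-level spec fields with a node affinity rule; the remaining node selector fields continue after the example. All names and values shown here (the querier endpoint, the rule selector label, and the node labels) are illustrative assumptions rather than defaults or requirements.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ThanosRuler
metadata:
  name: example-thanos-ruler
spec:
  replicas: 2
  retention: 24h
  # Illustrative querier endpoint; maps to the --query flag of thanos ruler.
  queryEndpoints:
    - dnssrv+_http._tcp.thanos-querier.monitoring.svc
  # Mount PrometheusRule objects carrying this (assumed) label.
  ruleSelector:
    matchLabels:
      role: thanos-example
  affinity:
    nodeAffinity:
      # Hard requirement: only schedule onto amd64 nodes.
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                  - amd64
      # Soft preference: favour nodes carrying an (assumed) infra role label.
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 50
          preference:
            matchExpressions:
              - key: node-role.kubernetes.io/infra
                operator: Exists
```

The required term restricts scheduling to matching nodes, while the preferred term only biases the scheduler towards them.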
matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 11.1.16. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 11.1.17. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 11.1.18. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 11.1.19. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 11.1.20. .spec.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. 
If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 11.1.21. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 11.1.22. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 11.1.23. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. 
The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 11.1.24. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 11.1.25. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 11.1.26. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 11.1.27. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". 
An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 11.1.28. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 11.1.29. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 11.1.30. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 11.1.31. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. 
This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 11.1.32. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 11.1.33. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 11.1.34. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. 
If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 11.1.35. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 11.1.36. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 11.1.37. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 11.1.38. .spec.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. 
due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 11.1.39. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 11.1.40. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 11.1.41. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. 
Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 11.1.42. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 11.1.43. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 11.1.44. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 11.1.45. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. 
Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 11.1.46. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 11.1.47. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 11.1.48. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 11.1.49. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. 
This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 11.1.50. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 11.1.51. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 11.1.52. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. 
If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 11.1.53. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 11.1.54. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 11.1.55. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 11.1.56. .spec.alertRelabelConfigs Description AlertRelabelConfigs configures alert relabeling in ThanosRuler. Alert relabel configurations must have the form as specified in the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alert_relabel_configs Alternative to AlertRelabelConfigFile, and lower order priority. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 11.1.57. .spec.alertmanagersConfig Description Define configuration for connecting to alertmanager. Only available with thanos v0.10.0 and higher. Maps to the alertmanagers.config arg. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 11.1.58. 
.spec.containers Description Containers allows injecting additional containers or modifying operator generated containers. This can be used to allow adding an authentication proxy to a ThanosRuler pod or to change the behavior of an operator generated container. Containers described here modify an operator generated container if they share the same name and modifications are done via a strategic merge patch. The current container names are: thanos-ruler and config-reloader. Overriding containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. Type array 11.1.59. .spec.containers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness.
Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information, see https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ restartPolicy string RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is "Always". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. Setting the RestartPolicy as "Always" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy "Always" will be shut down. This lifecycle differs from normal init containers and is often referred to as a "sidecar" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the next init container. Instead, the next init container starts immediately after this init container is started, or after any startupProbe has successfully completed. securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false.
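As a minimal sketch of how the container fields listed so far can be combined, the example below injects an additional sidecar rather than overriding the operator-generated thanos-ruler or config-reloader containers, since the description above notes that overriding them is unsupported; the remaining fields of this table continue after the example. The container name, image, arguments, environment variable, and volume are assumptions made up for illustration.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ThanosRuler
metadata:
  name: example-thanos-ruler
spec:
  queryEndpoints:
    - dnssrv+_http._tcp.thanos-querier.monitoring.svc
  containers:
    # A new sidecar; it does not share a name with thanos-ruler or
    # config-reloader, so no operator-generated container is modified.
    - name: rules-sync                               # assumed name
      image: registry.example.com/rules-sync:1.0     # assumed image
      args:
        - --interval=5m
      env:
        - name: SYNC_BUCKET                          # assumed variable
          value: example-bucket
      resources:
        requests:
          cpu: 10m
          memory: 32Mi
      volumeMounts:
        - name: rules-cache                          # assumed volume, defined under spec.volumes
          mountPath: /var/cache/rules
  volumes:
    - name: rules-cache
      emptyDir: {}
```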
stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 11.1.60. .spec.containers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 11.1.61. .spec.containers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 11.1.62. .spec.containers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap.
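The sketch below shows how the valueFrom sources described in this and the following subsections (configMapKeyRef, fieldRef, resourceFieldRef, and secretKeyRef) look inside a container's env list; the remaining valueFrom fields continue after the example. The ConfigMap, Secret, and container names are assumptions.

```yaml
# Fragment of an (assumed) container entry under spec.containers,
# showing one environment variable per valueFrom source.
env:
  - name: LOG_LEVEL
    valueFrom:
      configMapKeyRef:
        name: example-config          # assumed ConfigMap
        key: log-level
        optional: true
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: MEMORY_LIMIT
    valueFrom:
      resourceFieldRef:
        containerName: rules-sync     # assumed container name
        resource: limits.memory
        divisor: 1Mi
  - name: API_TOKEN
    valueFrom:
      secretKeyRef:
        name: example-secret          # assumed Secret
        key: token
```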
fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 11.1.63. .spec.containers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 11.1.64. .spec.containers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 11.1.65. .spec.containers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 11.1.66. .spec.containers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 11.1.67. .spec.containers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 11.1.68. .spec.containers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 11.1.69. .spec.containers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 11.1.70. .spec.containers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 11.1.71. .spec.containers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 11.1.72. .spec.containers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 11.1.73. .spec.containers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 11.1.74. .spec.containers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. 
httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 11.1.75. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 11.1.76. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 11.1.77. .spec.containers[].lifecycle.postStart.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 11.1.78. .spec.containers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 11.1.79. .spec.containers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 11.1.80. .spec.containers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. 
Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 11.1.81. .spec.containers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 11.1.82. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 11.1.83. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 11.1.84. .spec.containers[].lifecycle.preStop.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 11.1.85. .spec.containers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 11.1.86. .spec.containers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. 
The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 11.1.87. .spec.containers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 11.1.88. .spec.containers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 11.1.89. .spec.containers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 11.1.90. .spec.containers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 11.1.91. .spec.containers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 11.1.92. .spec.containers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 
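To show how the env, lifecycle, and liveness probe fields documented above fit together, the following sketch sets them on a container override in a ThanosRuler resource. This is an illustration only, not a recommended configuration: the container name, image, Secret name, probe path, and port are placeholder values, and other fields required by a real ThanosRuler object are omitted for brevity.

apiVersion: monitoring.coreos.com/v1
kind: ThanosRuler
metadata:
  name: example
spec:
  containers:
  - name: example-sidecar                      # placeholder container name
    image: registry.example.com/sidecar:1.0    # placeholder image
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name             # one of the supported pod fields
    - name: API_TOKEN
      valueFrom:
        secretKeyRef:
          name: example-secret                 # placeholder Secret in the same namespace
          key: token
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 5"]   # the command is exec'd, so a shell is invoked explicitly
    livenessProbe:
      httpGet:
        path: /-/healthy                       # placeholder path
        port: 8080
        scheme: HTTP                           # HTTP is the default scheme
      periodSeconds: 10                        # default probe period
      failureThreshold: 3                      # default failure threshold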
11.1.93. .spec.containers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 11.1.94. .spec.containers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 11.1.95. .spec.containers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 11.1.96. 
.spec.containers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 11.1.97. .spec.containers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 11.1.98. .spec.containers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 11.1.99. .spec.containers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 11.1.100. .spec.containers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 11.1.101. .spec.containers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 11.1.102. .spec.containers[].resizePolicy Description Resources resize policy for the container. Type array 11.1.103. .spec.containers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 11.1.104. .spec.containers[].resources Description Compute Resources required by this container. Cannot be updated. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 11.1.105. .spec.containers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 11.1.106. .spec.containers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 11.1.107. .spec.containers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. 
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 11.1.108. .spec.containers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 11.1.109. .spec.containers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 11.1.110. .spec.containers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. 
The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 11.1.111. .spec.containers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 11.1.112. .spec.containers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. 
The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 11.1.113. .spec.containers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 11.1.114. .spec.containers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 11.1.115. .spec.containers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 11.1.116. .spec.containers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 11.1.117. .spec.containers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 11.1.118. .spec.containers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 
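Continuing the placeholder container from the earlier sketch, the securityContext and startupProbe fields described in the preceding sections might be set as shown below. All values are illustrative assumptions rather than defaults, and only the spec fragment that changes is shown.

spec:
  containers:
  - name: example-sidecar            # same placeholder container as in the earlier sketch
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      capabilities:
        drop: ["ALL"]                # drop the capabilities granted by the container runtime
      seccompProfile:
        type: RuntimeDefault         # use the container runtime's default seccomp profile
    startupProbe:
      tcpSocket:
        port: 8080                   # placeholder port; a number in 1-65535 or an IANA_SVC_NAME
      failureThreshold: 30           # tolerate a slow start; other probes wait until this succeeds
      periodSeconds: 10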
11.1.119. .spec.containers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 11.1.120. .spec.containers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 11.1.121. .spec.containers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 11.1.122. .spec.containers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 11.1.123. .spec.excludedFromEnforcement Description List of references to PrometheusRule objects to be excluded from enforcing a namespace label of origin. Applies only if enforcedNamespaceLabel set to true. Type array 11.1.124. .spec.excludedFromEnforcement[] Description ObjectReference references a PodMonitor, ServiceMonitor, Probe or PrometheusRule object. Type object Required namespace resource Property Type Description group string Group of the referent. When not specified, it defaults to monitoring.coreos.com name string Name of the referent. When not set, all resources in the namespace are matched. namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resource string Resource of the referent. 11.1.125. .spec.grpcServerTlsConfig Description GRPCServerTLSConfig configures the gRPC server from which Thanos Querier reads recorded rule data. Note: Currently only the CAFile, CertFile, and KeyFile fields are supported. Maps to the '--grpc-server-tls-*' CLI args. Type object Property Type Description ca object Certificate authority used when verifying server certificates. caFile string Path to the CA cert in the Prometheus container to use for the targets. cert object Client certificate to present when doing client-authentication. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 11.1.126. .spec.grpcServerTlsConfig.ca Description Certificate authority used when verifying server certificates. 
Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 11.1.127. .spec.grpcServerTlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 11.1.128. .spec.grpcServerTlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 11.1.129. .spec.grpcServerTlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 11.1.130. .spec.grpcServerTlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 11.1.131. .spec.grpcServerTlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 11.1.132. .spec.grpcServerTlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 11.1.133. .spec.hostAliases Description Pods' hostAliases configuration Type array 11.1.134. .spec.hostAliases[] Description HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Type object Required hostnames ip Property Type Description hostnames array (string) Hostnames for the above IP address. ip string IP address of the host file entry. 11.1.135. .spec.imagePullSecrets Description An optional list of references to secrets in the same namespace to use for pulling thanos images from registries see http://kubernetes.io/docs/user-guide/images#specifying-imagepullsecrets-on-a-pod Type array 11.1.136. 
.spec.imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 11.1.137. .spec.initContainers Description InitContainers allows adding initContainers to the pod definition. Those can be used to e.g. fetch secrets for injection into the ThanosRuler configuration from external sources. Any errors during the execution of an initContainer will lead to a restart of the Pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ Using initContainers for any use case other than secret fetching is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. Type array 11.1.138. .spec.initContainers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USD symbols are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USD symbols are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent.
Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ restartPolicy string RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is "Always". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. Setting the RestartPolicy as "Always" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy "Always" will be shut down. This lifecycle differs from normal init containers and is often referred to as a "sidecar" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the init container. Instead, the init container starts immediately after this init container is started, or after any startupProbe has successfully completed. securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 11.1.139. .spec.initContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 11.1.140. .spec.initContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USD symbols are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 11.1.141.
.spec.initContainers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 11.1.142. .spec.initContainers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 11.1.143. .spec.initContainers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 11.1.144. .spec.initContainers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 11.1.145. .spec.initContainers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 11.1.146. .spec.initContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 11.1.147. .spec.initContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. 
secretRef object The Secret to select from 11.1.148. .spec.initContainers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 11.1.149. .spec.initContainers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 11.1.150. .spec.initContainers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 11.1.151. .spec.initContainers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 11.1.152. .spec.initContainers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 11.1.153. 
.spec.initContainers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 11.1.154. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 11.1.155. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 11.1.156. .spec.initContainers[].lifecycle.postStart.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 11.1.157. .spec.initContainers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 11.1.158. .spec.initContainers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 11.1.159. .spec.initContainers[].lifecycle.preStop.exec Description Exec specifies the action to take. 
Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 11.1.160. .spec.initContainers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 11.1.161. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 11.1.162. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 11.1.163. .spec.initContainers[].lifecycle.preStop.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 11.1.164. .spec.initContainers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 11.1.165. .spec.initContainers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. 
successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 11.1.166. .spec.initContainers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 11.1.167. .spec.initContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 11.1.168. .spec.initContainers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 11.1.169. .spec.initContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 11.1.170. .spec.initContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. 
value string The header field value 11.1.171. .spec.initContainers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 11.1.172. .spec.initContainers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 11.1.173. .spec.initContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 11.1.174. .spec.initContainers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. 
The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 11.1.175. .spec.initContainers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 11.1.176. .spec.initContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 11.1.177. .spec.initContainers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 11.1.178. .spec.initContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 11.1.179. .spec.initContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 11.1.180. .spec.initContainers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 11.1.181. .spec.initContainers[].resizePolicy Description Resources resize policy for the container. Type array 11.1.182. .spec.initContainers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. 
Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 11.1.183. .spec.initContainers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 11.1.184. .spec.initContainers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 11.1.185. .spec.initContainers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 11.1.186. .spec.initContainers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. 
This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.
11.1.187. .spec.initContainers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities
11.1.188. .spec.initContainers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container.
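As a concrete illustration of the container-level securityContext fields above, the following sketch applies a restricted profile to a ThanosRuler init container; the resource name, container name, image, and UID value are illustrative only.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ThanosRuler
metadata:
  name: example-thanos-ruler              # placeholder name
spec:
  queryEndpoints:
    - dnssrv+_http._tcp.thanos-querier.openshift-monitoring.svc    # placeholder endpoint
  initContainers:
    - name: setup                         # illustrative init container
      image: registry.example.com/setup:latest                     # placeholder image
      command: ["sh", "-c", "true"]
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000                   # example UID, not a required value
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop:
            - ALL                         # drop all Linux capabilities
        seccompProfile:
          type: RuntimeDefault            # use the container runtime's default profile
```

On OpenShift Container Platform, security context constraints normally assign the UID and SELinux context for restricted workloads, so an explicit runAsUser value such as the one above may be unnecessary or may need to fall within the namespace's assigned range.

11.1.189.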
.spec.initContainers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 11.1.190. .spec.initContainers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 11.1.191. .spec.initContainers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
11.1.192. .spec.initContainers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
11.1.193. .spec.initContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC.
11.1.194. .spec.initContainers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP.
11.1.195. .spec.initContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array
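The probe fields above use the same schema as probes on regular containers. Kubernetes accepts probes on an init container only when it is declared as a restartable sidecar (restartPolicy: Always), so the following sketch assumes a cluster and CRD schema recent enough to support that field; the resource name, image, port, and timing values are illustrative.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ThanosRuler
metadata:
  name: example-thanos-ruler                  # placeholder name
spec:
  queryEndpoints:
    - dnssrv+_http._tcp.thanos-querier.openshift-monitoring.svc    # placeholder endpoint
  initContainers:
    - name: example-sidecar                   # illustrative sidecar-style init container
      image: registry.example.com/sidecar:latest                   # placeholder image
      restartPolicy: Always                   # assumption: restartable (sidecar) init containers are supported
      ports:
        - name: http                          # named port referenced by the probes below
          containerPort: 8080
      startupProbe:
        httpGet:
          path: /healthz                      # illustrative path
          port: http
          scheme: HTTP
          httpHeaders:
            - name: X-Probe-Source            # example custom header
              value: startup
        periodSeconds: 5
        failureThreshold: 30                  # allow up to 150 seconds for startup
      livenessProbe:
        tcpSocket:
          port: http
        periodSeconds: 10
```

Regular, run-to-completion init containers do not accept probes, so omit these fields unless the init container is a sidecar.

11.1.196.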
.spec.initContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 11.1.197. .spec.initContainers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 11.1.198. .spec.initContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 11.1.199. .spec.initContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 11.1.200. .spec.initContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 11.1.201. .spec.initContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 11.1.202. .spec.objectStorageConfig Description ObjectStorageConfig configures object storage in Thanos. Alternative to ObjectStorageConfigFile, and lower order priority. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 11.1.203. .spec.podMetadata Description PodMetadata configures labels and annotations which are propagated to the ThanosRuler pods. The following items are reserved and cannot be overridden: * "app.kubernetes.io/name" label, set to "thanos-ruler". * "app.kubernetes.io/managed-by" label, set to "prometheus-operator". * "app.kubernetes.io/instance" label, set to the name of the ThanosRuler instance. * "thanos-ruler" label, set to the name of the ThanosRuler instance. 
* "kubectl.kubernetes.io/default-container" annotation, set to "thanos-ruler". Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names
11.1.204. .spec.prometheusRulesExcludedFromEnforce Description PrometheusRulesExcludedFromEnforce - list of Prometheus rules to be excluded from enforcing of adding namespace labels. Works only if enforcedNamespaceLabel is set to true. Make sure both ruleNamespace and ruleName are set for each pair. Deprecated: use excludedFromEnforcement instead. Type array
11.1.205. .spec.prometheusRulesExcludedFromEnforce[] Description PrometheusRuleExcludeConfig enables users to configure excluded PrometheusRule names and their namespaces to be ignored while enforcing namespace label for alerts and metrics. Type object Required ruleName ruleNamespace Property Type Description ruleName string Name of the excluded PrometheusRule object. ruleNamespace string Namespace of the excluded PrometheusRule object.
11.1.206. .spec.queryConfig Description Define configuration for connecting to thanos query instances. If this is defined, the QueryEndpoints field will be ignored. Maps to the query.config CLI argument. Only available with thanos v0.11.0 and higher. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined
11.1.207. .spec.resources Description Resources defines the resource requirements for single Pods. If not provided, no requests/limits will be set. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
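The following sketch combines several of the ThanosRuler-level fields described above: podMetadata for labels and annotations propagated to the pods, resources for the ruler pods, and queryConfig referencing a key in a Secret. The Secret name and key, the label values, and the resource amounts are placeholders.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ThanosRuler
metadata:
  name: example-thanos-ruler         # placeholder name
spec:
  podMetadata:
    labels:
      environment: example           # propagated to the ThanosRuler pods
    annotations:
      owner: observability-team      # illustrative annotation
  resources:
    requests:
      cpu: 100m                      # example request
      memory: 256Mi
    limits:
      memory: 512Mi                  # example limit
  queryConfig:
    name: thanos-query-config        # assumed Secret containing a Thanos query configuration
    key: query.yaml                  # assumed key within that Secret
```

Because queryConfig is set in this sketch, any queryEndpoints value would be ignored, as noted in the queryConfig description above.

11.1.208.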
.spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 11.1.209. .spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 11.1.210. .spec.ruleNamespaceSelector Description Namespaces to be selected for Rules discovery. If unspecified, only the same namespace as the ThanosRuler object is in is used. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 11.1.211. .spec.ruleNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 11.1.212. .spec.ruleNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 11.1.213. .spec.ruleSelector Description A label selector to select which PrometheusRules to mount for alerting and recording. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 11.1.214. .spec.ruleSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 11.1.215. .spec.ruleSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. 
values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 11.1.216. .spec.securityContext Description SecurityContext holds pod-level security attributes and common container settings. This defaults to the default PodSecurityContext. Type object Property Type Description fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. supplementalGroups array (integer) A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. 
Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows. sysctls array Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. sysctls[] object Sysctl defines a kernel parameter to be set windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 11.1.217. .spec.securityContext.seLinuxOptions Description The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 11.1.218. .spec.securityContext.seccompProfile Description The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 11.1.219. .spec.securityContext.sysctls Description Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. Type array 11.1.220. .spec.securityContext.sysctls[] Description Sysctl defines a kernel parameter to be set Type object Required name value Property Type Description name string Name of a property to set value string Value of a property to set 11.1.221. .spec.securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 
Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
11.1.222. .spec.storage Description Storage spec to specify how storage shall be used. Type object Property Type Description disableMountSubPath boolean Deprecated: subPath usage will be removed in a future release. emptyDir object EmptyDirVolumeSource to be used by the StatefulSet. If specified, it takes precedence over ephemeral and volumeClaimTemplate. More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir ephemeral object EphemeralVolumeSource to be used by the StatefulSet. This is a beta field in k8s 1.21 and GA in 1.23. For lower versions, starting with k8s 1.19, it requires enabling the GenericEphemeralVolume feature gate. More info: https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes volumeClaimTemplate object Defines the PVC spec to be used by the Prometheus StatefulSets. The easiest way to use a volume that cannot be automatically provisioned is to use a label selector alongside manually created PersistentVolumes.
11.1.223. .spec.storage.emptyDir Description EmptyDirVolumeSource to be used by the StatefulSet. If specified, it takes precedence over ephemeral and volumeClaimTemplate. More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir
11.1.224. .spec.storage.ephemeral Description EphemeralVolumeSource to be used by the StatefulSet. This is a beta field in k8s 1.21 and GA in 1.23. For lower versions, starting with k8s 1.19, it requires enabling the GenericEphemeralVolume feature gate. More info: https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume.
The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil.
11.1.225. .spec.storage.ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here.
11.1.226. .spec.storage.ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object
11.1.227. .spec.storage.ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have.
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeAttributesClassName string volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set. 
If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#volumeattributesclass (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 11.1.228. .spec.storage.ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 11.1.229. .spec.storage.ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 
Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 11.1.230. .spec.storage.ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 11.1.231. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 11.1.232. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 11.1.233. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 11.1.234. 
.spec.storage.volumeClaimTemplate Description Defines the PVC spec to be used by the Prometheus StatefulSets. The easiest way to use a volume that cannot be automatically provisioned is to use a label selector alongside manually created PersistentVolumes. Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata object EmbeddedMetadata contains metadata relevant to an EmbeddedResource. spec object Defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims status object Deprecated: this field is never set. 11.1.235. .spec.storage.volumeClaimTemplate.metadata Description EmbeddedMetadata contains metadata relevant to an EmbeddedResource. Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names 11.1.236. .spec.storage.volumeClaimTemplate.spec Description Defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. 
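For illustration, a claim template that pre-populates the volume from an existing snapshot could set dataSourceRef as in the sketch below; the snapshot name and size are hypothetical, and the remaining differences between dataSource and dataSourceRef are spelled out in the description that continues below.

```yaml
# Sketch: populating the claim from an existing VolumeSnapshot (illustrative name and size).
spec:
  storage:
    volumeClaimTemplate:
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 20Gi
        dataSourceRef:
          apiGroup: snapshot.storage.k8s.io
          kind: VolumeSnapshot
          name: ruler-data-snapshot   # hypothetical snapshot name
```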
This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeAttributesClassName string volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#volumeattributesclass (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 11.1.237. 
.spec.storage.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 11.1.238. .spec.storage.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 11.1.239. 
.spec.storage.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 11.1.240. .spec.storage.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 11.1.241. .spec.storage.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 11.1.242. .spec.storage.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 11.1.243. .spec.storage.volumeClaimTemplate.status Description Deprecated: this field is never set. Type object Property Type Description accessModes array (string) accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResourceStatuses object (string) allocatedResourceStatuses stores status of resource being resized for the given PVC. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. 
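Tying the resources and selector fields together, a claim template that binds only to manually created PersistentVolumes carrying a specific label might look like the following sketch; the label key and value are assumptions.

```yaml
# Sketch: restricting binding to pre-created PersistentVolumes via a label selector.
spec:
  storage:
    volumeClaimTemplate:
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 20Gi
        selector:
          matchLabels:
            role: thanos-ruler-storage   # assumed label on the pre-created PVs
```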
ClaimResourceStatus can be in any of following states: - ControllerResizeInProgress: State set when resize controller starts resizing the volume in control-plane. - ControllerResizeFailed: State set when resize has failed in resize controller with a terminal error. - NodeResizePending: State set when resize controller has finished resizing the volume but further resizing of volume is needed on the node. - NodeResizeInProgress: State set when kubelet starts resizing the volume. - NodeResizeFailed: State set when resizing has failed in kubelet with a terminal error. Transient errors don't set NodeResizeFailed. For example: if expanding a PVC for more capacity - this field can be one of the following states: - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeFailed" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizePending" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeFailed" When this field is not set, it means that no resize operation is in progress for the given PVC. A controller that receives PVC update with previously unknown resourceName or ClaimResourceStatus should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. allocatedResources integer-or-string allocatedResources tracks the resources allocated to a PVC including its capacity. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. Capacity reported here may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. A controller that receives PVC update with previously unknown resourceName should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. capacity integer-or-string capacity represents the actual resources of the underlying volume. conditions array conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. conditions[] object PersistentVolumeClaimCondition contains details about state of pvc currentVolumeAttributesClassName string currentVolumeAttributesClassName is the current name of the VolumeAttributesClass the PVC is using. 
When unset, there is no VolumeAttributesClass applied to this PersistentVolumeClaim. This is an alpha field and requires enabling VolumeAttributesClass feature. modifyVolumeStatus object ModifyVolumeStatus represents the status object of ControllerModifyVolume operation. When this is unset, there is no ModifyVolume operation being attempted. This is an alpha field and requires enabling VolumeAttributesClass feature. phase string phase represents the current phase of PersistentVolumeClaim. 11.1.244. .spec.storage.volumeClaimTemplate.status.conditions Description conditions is the current Condition of the persistent volume claim. If the underlying persistent volume is being resized, the Condition will be set to 'ResizeStarted'. Type array 11.1.245. .spec.storage.volumeClaimTemplate.status.conditions[] Description PersistentVolumeClaimCondition contains details about the state of the PVC. Type object Required status type Property Type Description lastProbeTime string lastProbeTime is the time we probed the condition. lastTransitionTime string lastTransitionTime is the time the condition transitioned from one status to another. message string message is the human-readable message indicating details about the last transition. reason string reason is a unique, short, machine-understandable string that gives the reason for the condition's last transition. If it reports "ResizeStarted", the underlying persistent volume is being resized. status string type string PersistentVolumeClaimConditionType is a valid value of PersistentVolumeClaimCondition.Type 11.1.246. .spec.storage.volumeClaimTemplate.status.modifyVolumeStatus Description ModifyVolumeStatus represents the status object of ControllerModifyVolume operation. When this is unset, there is no ModifyVolume operation being attempted. This is an alpha field and requires enabling VolumeAttributesClass feature. Type object Required status Property Type Description status string status is the status of the ControllerModifyVolume operation. It can be in any of the following states: - Pending Pending indicates that the PersistentVolumeClaim cannot be modified due to unmet requirements, such as the specified VolumeAttributesClass not existing. - InProgress InProgress indicates that the volume is being modified. - Infeasible Infeasible indicates that the request has been rejected as invalid by the CSI driver. To resolve the error, a valid VolumeAttributesClass needs to be specified. Note: New statuses can be added in the future. Consumers should check for unknown statuses and fail appropriately. targetVolumeAttributesClassName string targetVolumeAttributesClassName is the name of the VolumeAttributesClass that the PVC is currently being reconciled with. 11.1.247. .spec.tolerations Description If specified, the pod's tolerations. Type array 11.1.248. .spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal.
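For instance, a toleration that uses the default Equal operator to let ruler pods schedule onto tainted, dedicated nodes could look like the following sketch; the taint key and value are assumptions.

```yaml
# Sketch: tolerating an assumed dedicated-node taint.
spec:
  tolerations:
  - key: dedicated
    operator: Equal
    value: monitoring
    effect: NoSchedule
```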
Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 11.1.249. .spec.topologySpreadConstraints Description If specified, the pod's topology spread constraints. Type array 11.1.250. .spec.topologySpreadConstraints[] Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description labelSelector object LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn't set. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule , it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway , it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. 
When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default). nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it more imbalanced. It's a required field. 11.1.251. .spec.topologySpreadConstraints[].labelSelector Description LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. 
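Putting the constraint fields together, an even spread of ruler pods across zones might be declared as in this sketch; the pod label used by the selector is an assumption, and the structure of the label selector itself is detailed below.

```yaml
# Sketch: spreading pods evenly across zones (label value is illustrative).
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: thanos-ruler
```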
Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 11.1.252. .spec.topologySpreadConstraints[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 11.1.253. .spec.topologySpreadConstraints[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 11.1.254. .spec.tracingConfig Description TracingConfig configures tracing in Thanos. tracingConfigFile takes precedence over this field. This is an experimental feature , it may change in any upcoming release in a breaking way. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 11.1.255. .spec.volumeMounts Description VolumeMounts allows configuration of additional VolumeMounts on the output StatefulSet definition. VolumeMounts specified will be appended to other VolumeMounts in the ruler container, that are generated as a result of StorageSpec objects. Type array 11.1.256. .spec.volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 11.1.257. 
.spec.volumes Description Volumes allows configuration of additional volumes on the output StatefulSet definition. Volumes specified will be appended to other volumes that are generated as a result of StorageSpec objects. Type array 11.1.258. .spec.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod. Type object Required name Property Type Description awsElasticBlockStore object awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk object azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object azureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs object cephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder object cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md configMap object configMap represents a configMap that should populate this volume csi object csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). downwardAPI object downwardAPI represents downward API about the pod that should populate this volume emptyDir object emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir ephemeral object ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. fc object fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume object flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running gcePersistentDisk object gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk gitRepo object gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. 
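Most of these volume sources follow the same declaration pattern. For example, an additional Secret-backed volume could be declared as below (the secret name is an assumption); the remaining source types are listed next.

```yaml
# Sketch: an additional Secret-backed volume (assumed secret name).
spec:
  volumes:
  - name: objstore-config
    secret:
      secretName: thanos-objstore-config
```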
glusterfs object glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath object hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. iscsi object iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim object persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk object photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume object portworxVolume represents a portworx volume attached and mounted on kubelets host machine projected object projected items for all in one resources secrets, configmaps, and downward API quobyte object quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd object rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO object scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret object secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos object storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. vsphereVolume object vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 11.1.259. .spec.volumes[].awsElasticBlockStore Description awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 11.1.260. .spec.volumes[].azureDisk Description azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 11.1.261. .spec.volumes[].azureFile Description azureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 11.1.262. .spec.volumes[].cephfs Description cephFS represents a Ceph FS mount on the host that shares a pod's lifetime Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 11.1.263. .spec.volumes[].cephfs.secretRef Description secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 11.1.264. .spec.volumes[].cinder Description cinder represents a cinder volume attached and mounted on kubelets host machine. 
More info: https://examples.k8s.io/mysql-cinder-pd/README.md Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 11.1.265. .spec.volumes[].cinder.secretRef Description secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 11.1.266. .spec.volumes[].configMap Description configMap represents a configMap that should populate this volume Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 11.1.267. .spec.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 11.1.268. .spec.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. 
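As an illustration of the configMap volume and its items and mode fields, a projection of a single key into the ruler container might look like the following sketch; the ConfigMap name, key, and mount path are assumptions, and the mode semantics continue in the description below.

```yaml
# Sketch: projecting one ConfigMap key into the container (illustrative names and paths).
spec:
  volumes:
  - name: extra-rules
    configMap:
      name: thanos-ruler-extra
      defaultMode: 0644            # octal form; JSON would require the decimal value 420
      items:
      - key: extra.rules.yaml
        path: extra.rules.yaml
  volumeMounts:
  - name: extra-rules
    mountPath: /etc/thanos/extra
    readOnly: true
```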
YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 11.1.269. .spec.volumes[].csi Description csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef object nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 11.1.270. .spec.volumes[].csi.nodePublishSecretRef Description nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 11.1.271. .spec.volumes[].downwardAPI Description downwardAPI represents downward API information about the pod that should populate this volume. Type object Property Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume files. items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 11.1.272. .spec.volumes[].downwardAPI.items Description Items is a list of downward API volume files. Type array
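To make the downward API volume concrete, the following sketch exposes the pod's labels and the container's memory limit as files; the container name is an assumption.

```yaml
# Sketch: downward API volume exposing pod labels and a resource limit (assumed container name).
spec:
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
      - path: mem_limit
        resourceFieldRef:
          containerName: thanos-ruler
          resource: limits.memory
```

11.1.273.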
.spec.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 11.1.274. .spec.volumes[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 11.1.275. .spec.volumes[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 11.1.276. .spec.volumes[].emptyDir Description emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 11.1.277. .spec.volumes[].ephemeral Description ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. 
Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 11.1.278. .spec.volumes[].ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here.
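As a sketch of the generic ephemeral volume described above, the following excerpt requests a dedicated 10Gi scratch volume for each ThanosRuler pod. The storage class name is an assumption and must match a StorageClass available in your cluster; the resulting PVC is named <pod name>-scratch and is deleted together with the pod.

spec:
  volumes:
  - name: scratch
    ephemeral:
      volumeClaimTemplate:
        metadata:
          labels:
            app.kubernetes.io/part-of: thanos-ruler
        spec:
          accessModes:
          - ReadWriteOnce
          storageClassName: standard-csi   # assumed StorageClass name
          resources:
            requests:
              storage: 10Gi

11.1.279.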
.spec.volumes[].ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 11.1.280. .spec.volumes[].ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. 
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeAttributesClassName string volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#volumeattributesclass (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 11.1.281. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 11.1.282. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. 
There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 11.1.283. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 11.1.284. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 11.1.285. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 11.1.286. 
.spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 11.1.287. .spec.volumes[].fc Description fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 11.1.288. .spec.volumes[].flexVolume Description flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. 11.1.289. .spec.volumes[].flexVolume.secretRef Description secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 11.1.290. .spec.volumes[].flocker Description flocker represents a Flocker volume attached to a kubelet's host machine. 
This depends on the Flocker control service being running Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 11.1.291. .spec.volumes[].gcePersistentDisk Description gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 11.1.292. .spec.volumes[].gitRepo Description gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 11.1.293. .spec.volumes[].glusterfs Description glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 11.1.294. .spec.volumes[].hostPath Description hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. 
Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath 11.1.295. .spec.volumes[].iscsi Description iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md Type object Required iqn lun targetPortal Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object secretRef is the CHAP Secret for iSCSI target and initiator authentication targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 11.1.296. .spec.volumes[].iscsi.secretRef Description secretRef is the CHAP Secret for iSCSI target and initiator authentication Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 11.1.297. .spec.volumes[].nfs Description nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs Type object Required path server Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs
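As a minimal sketch of the nfs volume source documented above, the following excerpt mounts an export read-only; the server name and export path are placeholders.

spec:
  volumes:
  - name: rules-archive
    nfs:
      server: nfs.example.com          # placeholder NFS server
      path: /exports/thanos-ruler      # placeholder export path
      readOnly: true

11.1.298.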
.spec.volumes[].persistentVolumeClaim Description persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 11.1.299. .spec.volumes[].photonPersistentDisk Description photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 11.1.300. .spec.volumes[].portworxVolume Description portworxVolume represents a portworx volume attached and mounted on kubelets host machine Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 11.1.301. .spec.volumes[].projected Description projected items for all in one resources secrets, configmaps, and downward API Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array sources is the list of volume projections sources[] object Projection that may be projected along with other supported volume types 11.1.302. .spec.volumes[].projected.sources Description sources is the list of volume projections Type array 11.1.303. .spec.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types Type object Property Type Description clusterTrustBundle object ClusterTrustBundle allows a pod to access the .spec.trustBundle field of ClusterTrustBundle objects in an auto-updating file. Alpha, gated by the ClusterTrustBundleProjection feature gate. ClusterTrustBundle objects can either be selected by name, or by the combination of signer name and a label selector. Kubelet performs aggressive normalization of the PEM contents written into the pod filesystem. Esoteric PEM features such as inter-block comments and block headers are stripped. Certificates are deduplicated. The ordering of certificates within the file is arbitrary, and Kubelet may change the order over time. 
configMap object configMap information about the configMap data to project downwardAPI object downwardAPI information about the downwardAPI data to project secret object secret information about the secret data to project serviceAccountToken object serviceAccountToken is information about the serviceAccountToken data to project 11.1.304. .spec.volumes[].projected.sources[].clusterTrustBundle Description ClusterTrustBundle allows a pod to access the .spec.trustBundle field of ClusterTrustBundle objects in an auto-updating file. Alpha, gated by the ClusterTrustBundleProjection feature gate. ClusterTrustBundle objects can either be selected by name, or by the combination of signer name and a label selector. Kubelet performs aggressive normalization of the PEM contents written into the pod filesystem. Esoteric PEM features such as inter-block comments and block headers are stripped. Certificates are deduplicated. The ordering of certificates within the file is arbitrary, and Kubelet may change the order over time. Type object Required path Property Type Description labelSelector object Select all ClusterTrustBundles that match this label selector. Only has effect if signerName is set. Mutually-exclusive with name. If unset, interpreted as "match nothing". If set but empty, interpreted as "match everything". name string Select a single ClusterTrustBundle by object name. Mutually-exclusive with signerName and labelSelector. optional boolean If true, don't block pod startup if the referenced ClusterTrustBundle(s) aren't available. If using name, then the named ClusterTrustBundle is allowed not to exist. If using signerName, then the combination of signerName and labelSelector is allowed to match zero ClusterTrustBundles. path string Relative path from the volume root to write the bundle. signerName string Select all ClusterTrustBundles that match this signer name. Mutually-exclusive with name. The contents of all selected ClusterTrustBundles will be unified and deduplicated. 11.1.305. .spec.volumes[].projected.sources[].clusterTrustBundle.labelSelector Description Select all ClusterTrustBundles that match this label selector. Only has effect if signerName is set. Mutually-exclusive with name. If unset, interpreted as "match nothing". If set but empty, interpreted as "match everything". Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 11.1.306. .spec.volumes[].projected.sources[].clusterTrustBundle.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 11.1.307. .spec.volumes[].projected.sources[].clusterTrustBundle.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. 
Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 11.1.308. .spec.volumes[].projected.sources[].configMap Description configMap information about the configMap data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 11.1.309. .spec.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 11.1.310. .spec.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 11.1.311. .spec.volumes[].projected.sources[].downwardAPI Description downwardAPI information about the downwardAPI data to project Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 11.1.312. .spec.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 11.1.313. .spec.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. 
mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 11.1.314. .spec.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 11.1.315. .spec.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 11.1.316. .spec.volumes[].projected.sources[].secret Description secret information about the secret data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional field specify whether the Secret or its key must be defined 11.1.317. .spec.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 11.1.318. .spec.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. 
mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 11.1.319. .spec.volumes[].projected.sources[].serviceAccountToken Description serviceAccountToken is information about the serviceAccountToken data to project Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 11.1.320. .spec.volumes[].quobyte Description quobyte represents a Quobyte mount on the host that shares a pod's lifetime Type object Required registry volume Property Type Description group string group to map volume access to. Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend. Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to. Defaults to serviceaccount user volume string volume is a string that references an already created Quobyte volume by name. 11.1.321. .spec.volumes[].rbd Description rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md Type object Required image monitors Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors.
More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 11.1.322. .spec.volumes[].rbd.secretRef Description secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 11.1.323. .spec.volumes[].scaleIO Description scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. Type object Required gateway secretRef system Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 11.1.324. .spec.volumes[].scaleIO.secretRef Description secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 11.1.325. .spec.volumes[].secret Description secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. 
This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. optional boolean optional field specifies whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 11.1.326. .spec.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 11.1.327. .spec.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 11.1.328. .spec.volumes[].storageos Description storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created.
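Returning to the secret volume source documented in .spec.volumes[].secret, the following excerpt is a minimal sketch that projects a single key from an assumed Secret named thanos-objstore with restrictive file permissions.

spec:
  volumes:
  - name: object-store-config
    secret:
      secretName: thanos-objstore      # assumed Secret in the same namespace as the pod
      defaultMode: 0400                # octal form; JSON clients must send the decimal value 256
      items:
      - key: objstore.yml              # assumed key in the Secret
        path: objstore.yml

11.1.329.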
.spec.volumes[].storageos.secretRef Description secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 11.1.330. .spec.volumes[].vsphereVolume Description vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 11.1.331. .spec.web Description Defines the configuration of the ThanosRuler web server. Type object Property Type Description httpConfig object Defines HTTP parameters for web server. tlsConfig object Defines the TLS parameters for HTTPS. 11.1.332. .spec.web.httpConfig Description Defines HTTP parameters for web server. Type object Property Type Description headers object List of headers that can be added to HTTP responses. http2 boolean Enable HTTP/2 support. Note that HTTP/2 is only supported with TLS. When TLSConfig is not configured, HTTP/2 will be disabled. Whenever the value of the field changes, a rolling update will be triggered. 11.1.333. .spec.web.httpConfig.headers Description List of headers that can be added to HTTP responses. Type object Property Type Description contentSecurityPolicy string Set the Content-Security-Policy header to HTTP responses. Unset if blank. strictTransportSecurity string Set the Strict-Transport-Security header to HTTP responses. Unset if blank. Please make sure that you use this with care as this header might force browsers to load Prometheus and the other applications hosted on the same domain and subdomains over HTTPS. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security xContentTypeOptions string Set the X-Content-Type-Options header to HTTP responses. Unset if blank. Accepted value is nosniff. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options xFrameOptions string Set the X-Frame-Options header to HTTP responses. Unset if blank. Accepted values are deny and sameorigin. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options xXSSProtection string Set the X-XSS-Protection header to all responses. Unset if blank. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection 11.1.334. .spec.web.tlsConfig Description Defines the TLS parameters for HTTPS. Type object Required cert keySecret Property Type Description cert object Contains the TLS certificate for the server. cipherSuites array (string) List of supported cipher suites for TLS versions up to TLS 1.2. If empty, Go default cipher suites are used. Available cipher suites are documented in the go documentation: https://golang.org/pkg/crypto/tls/#pkg-constants clientAuthType string Server policy for client authentication. Maps to ClientAuth Policies. 
For more detail on clientAuth options: https://golang.org/pkg/crypto/tls/#ClientAuthType client_ca object Contains the CA certificate for client certificate authentication to the server. curvePreferences array (string) Elliptic curves that will be used in an ECDHE handshake, in preference order. Available curves are documented in the go documentation: https://golang.org/pkg/crypto/tls/#CurveID keySecret object Secret containing the TLS key for the server. maxVersion string Maximum TLS version that is acceptable. Defaults to TLS13. minVersion string Minimum TLS version that is acceptable. Defaults to TLS12. preferServerCipherSuites boolean Controls whether the server selects the client's most preferred cipher suite, or the server's most preferred cipher suite. If true then the server's preference, as expressed in the order of elements in cipherSuites, is used. 11.1.335. .spec.web.tlsConfig.cert Description Contains the TLS certificate for the server. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 11.1.336. .spec.web.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 11.1.337. .spec.web.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 11.1.338. .spec.web.tlsConfig.client_ca Description Contains the CA certificate for client certificate authentication to the server. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 11.1.339. .spec.web.tlsConfig.client_ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 11.1.340. .spec.web.tlsConfig.client_ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 11.1.341. .spec.web.tlsConfig.keySecret Description Secret containing the TLS key for the server. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 11.1.342. .status Description Most recent observed status of the ThanosRuler cluster. Read-only. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Required availableReplicas paused replicas unavailableReplicas updatedReplicas Property Type Description availableReplicas integer Total number of available pods (ready for at least minReadySeconds) targeted by this ThanosRuler deployment. conditions array The current state of the ThanosRuler object. conditions[] object Condition represents the state of the resources associated with the Prometheus, Alertmanager or ThanosRuler resource. paused boolean Represents whether any actions on the underlying managed objects are being performed. Only delete actions will be performed. replicas integer Total number of non-terminated pods targeted by this ThanosRuler deployment (their labels match the selector). unavailableReplicas integer Total number of unavailable pods targeted by this ThanosRuler deployment. updatedReplicas integer Total number of non-terminated pods targeted by this ThanosRuler deployment that have the desired version spec. 11.1.343. .status.conditions Description The current state of the ThanosRuler object. Type array 11.1.344. .status.conditions[] Description Condition represents the state of the resources associated with the Prometheus, Alertmanager or ThanosRuler resource. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the time of the last update to the current status property. message string Human-readable message indicating details for the condition's last transition. observedGeneration integer ObservedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string Reason for the condition's last transition. status string Status of the condition. type string Type of the condition being reported. 11.2. API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1/thanosrulers GET : list objects of kind ThanosRuler /apis/monitoring.coreos.com/v1/namespaces/{namespace}/thanosrulers DELETE : delete collection of ThanosRuler GET : list objects of kind ThanosRuler POST : create a ThanosRuler /apis/monitoring.coreos.com/v1/namespaces/{namespace}/thanosrulers/{name} DELETE : delete a ThanosRuler GET : read the specified ThanosRuler PATCH : partially update the specified ThanosRuler PUT : replace the specified ThanosRuler /apis/monitoring.coreos.com/v1/namespaces/{namespace}/thanosrulers/{name}/status GET : read status of the specified ThanosRuler PATCH : partially update status of the specified ThanosRuler PUT : replace status of the specified ThanosRuler 11.2.1. /apis/monitoring.coreos.com/v1/thanosrulers HTTP method GET Description list objects of kind ThanosRuler Table 11.1. HTTP responses HTTP code Response body 200 - OK ThanosRulerList schema 401 - Unauthorized Empty
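For reference, a minimal ThanosRuler manifest such as the following can serve as the request body for the create (POST) operation described below. The queryEndpoints value and the rule selector labels are illustrative assumptions; point them at your own Thanos Querier service and rule labeling scheme. The namespace is taken from the request path, or can be set in metadata.namespace.

apiVersion: monitoring.coreos.com/v1
kind: ThanosRuler
metadata:
  name: example-thanosruler
spec:
  queryEndpoints:
  - dnssrv+_http._tcp.thanos-querier.example-namespace.svc.cluster.local   # placeholder query endpoint
  ruleSelector:
    matchLabels:
      role: thanos-rules               # assumed label on PrometheusRule objects
  web:
    httpConfig:
      headers:
        strictTransportSecurity: max-age=31536000

11.2.2.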
/apis/monitoring.coreos.com/v1/namespaces/{namespace}/thanosrulers HTTP method DELETE Description delete collection of ThanosRuler Table 11.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ThanosRuler Table 11.3. HTTP responses HTTP code Reponse body 200 - OK ThanosRulerList schema 401 - Unauthorized Empty HTTP method POST Description create a ThanosRuler Table 11.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.5. Body parameters Parameter Type Description body ThanosRuler schema Table 11.6. HTTP responses HTTP code Reponse body 200 - OK ThanosRuler schema 201 - Created ThanosRuler schema 202 - Accepted ThanosRuler schema 401 - Unauthorized Empty 11.2.3. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/thanosrulers/{name} Table 11.7. Global path parameters Parameter Type Description name string name of the ThanosRuler HTTP method DELETE Description delete a ThanosRuler Table 11.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 11.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ThanosRuler Table 11.10. HTTP responses HTTP code Reponse body 200 - OK ThanosRuler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ThanosRuler Table 11.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.12. HTTP responses HTTP code Reponse body 200 - OK ThanosRuler schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ThanosRuler Table 11.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.14. Body parameters Parameter Type Description body ThanosRuler schema Table 11.15. HTTP responses HTTP code Reponse body 200 - OK ThanosRuler schema 201 - Created ThanosRuler schema 401 - Unauthorized Empty 11.2.4. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/thanosrulers/{name}/status Table 11.16. Global path parameters Parameter Type Description name string name of the ThanosRuler HTTP method GET Description read status of the specified ThanosRuler Table 11.17. HTTP responses HTTP code Reponse body 200 - OK ThanosRuler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ThanosRuler Table 11.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.19. HTTP responses HTTP code Reponse body 200 - OK ThanosRuler schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ThanosRuler Table 11.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.21. Body parameters Parameter Type Description body ThanosRuler schema Table 11.22. HTTP responses HTTP code Reponse body 200 - OK ThanosRuler schema 201 - Created ThanosRuler schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/monitoring_apis/thanosruler-monitoring-coreos-com-v1 |
function::cpu_clock_s | function::cpu_clock_s Name function::cpu_clock_s - Number of seconds on the given cpu's clock Synopsis Arguments cpu Which processor's clock to read Description This function returns the number of seconds on the given cpu's clock. This is always monotonic comparing on the same cpu, but may have some drift between cpus (within about a jiffy). | [
"cpu_clock_s:long(cpu:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-cpu-clock-s |
Chapter 2. Jenkins agent | Chapter 2. Jenkins agent OpenShift Container Platform provides a base image for use as a Jenkins agent. The Base image for Jenkins agents does the following: Pulls in both the required tools, headless Java, the Jenkins JNLP client, and the useful ones, including git , tar , zip , and nss , among others. Establishes the JNLP agent as the entry point. Includes the oc client tool for invoking command line operations from within Jenkins jobs. Provides Dockerfiles for both Red Hat Enterprise Linux (RHEL) and localdev images. Important Use a version of the agent image that is appropriate for your OpenShift Container Platform release version. Embedding an oc client version that is not compatible with the OpenShift Container Platform version can cause unexpected behavior. The OpenShift Container Platform Jenkins image also defines the following sample java-builder pod template to illustrate how you can use the agent image with the Jenkins Kubernetes plugin. The java-builder pod template employs two containers: A jnlp container that uses the OpenShift Container Platform Base agent image and handles the JNLP contract for starting and stopping Jenkins agents. A java container that uses the java OpenShift Container Platform Sample ImageStream, which contains the various Java binaries, including the Maven binary mvn , for building code. 2.1. Jenkins agent images The OpenShift Container Platform Jenkins agent images are available on Quay.io or registry.redhat.io . Jenkins images are available through the Red Hat Registry: USD docker pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag> USD docker pull registry.redhat.io/ocp-tools-4/jenkins-agent-base-rhel8:<image_tag> To use these images, you can either access them directly from Quay.io or registry.redhat.io or push them into your OpenShift Container Platform container image registry. 2.2. Jenkins agent environment variables Each Jenkins agent container can be configured with the following environment variables. Variable Definition Example values and settings JAVA_MAX_HEAP_PARAM , CONTAINER_HEAP_PERCENT , JENKINS_MAX_HEAP_UPPER_BOUND_MB These values control the maximum heap size of the Jenkins JVM. If JAVA_MAX_HEAP_PARAM is set, its value takes precedence. Otherwise, the maximum heap size is dynamically calculated as CONTAINER_HEAP_PERCENT of the container memory limit, optionally capped at JENKINS_MAX_HEAP_UPPER_BOUND_MB MiB. By default, the maximum heap size of the Jenkins JVM is set to 50% of the container memory limit with no cap. JAVA_MAX_HEAP_PARAM example setting: -Xmx512m CONTAINER_HEAP_PERCENT default: 0.5 , or 50% JENKINS_MAX_HEAP_UPPER_BOUND_MB example setting: 512 MiB JAVA_INITIAL_HEAP_PARAM , CONTAINER_INITIAL_PERCENT These values control the initial heap size of the Jenkins JVM. If JAVA_INITIAL_HEAP_PARAM is set, its value takes precedence. Otherwise, the initial heap size is dynamically calculated as CONTAINER_INITIAL_PERCENT of the dynamically calculated maximum heap size. By default, the JVM sets the initial heap size. JAVA_INITIAL_HEAP_PARAM example setting: -Xms32m CONTAINER_INITIAL_PERCENT example setting: 0.1 , or 10% CONTAINER_CORE_LIMIT If set, specifies an integer number of cores used for sizing numbers of internal JVM threads. Example setting: 2 JAVA_TOOL_OPTIONS Specifies options to apply to all JVMs running in this container. It is not recommended to override this value. 
Default: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true JAVA_GC_OPTS Specifies Jenkins JVM garbage collection parameters. It is not recommended to override this value. Default: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 JENKINS_JAVA_OVERRIDES Specifies additional options for the Jenkins JVM. These options are appended to all other options, including the Java options above, and can be used to override any of them, if necessary. Separate each additional option with a space and if any option contains space characters, escape them with a backslash. Example settings: -Dfoo -Dbar ; -Dfoo=first\ value -Dbar=second\ value USE_JAVA_VERSION Specifies the version of Java version to use to run the agent in its container. The container base image has two versions of java installed: java-11 and java-1.8.0 . If you extend the container base image, you can specify any alternative version of java using its associated suffix. The default value is java-11 . Example setting: java-1.8.0 2.3. Jenkins agent memory requirements A JVM is used in all Jenkins agents to host the Jenkins JNLP agent as well as to run any Java applications such as javac , Maven, or Gradle. By default, the Jenkins JNLP agent JVM uses 50% of the container memory limit for its heap. This value can be modified by the CONTAINER_HEAP_PERCENT environment variable. It can also be capped at an upper limit or overridden entirely. By default, any other processes run in the Jenkins agent container, such as shell scripts or oc commands run from pipelines, cannot use more than the remaining 50% memory limit without provoking an OOM kill. By default, each further JVM process that runs in a Jenkins agent container uses up to 25% of the container memory limit for its heap. It might be necessary to tune this limit for many build workloads. 2.4. Jenkins agent Gradle builds Hosting Gradle builds in the Jenkins agent on OpenShift Container Platform presents additional complications because in addition to the Jenkins JNLP agent and Gradle JVMs, Gradle spawns a third JVM to run tests if they are specified. The following settings are suggested as a starting point for running Gradle builds in a memory constrained Jenkins agent on OpenShift Container Platform. You can modify these settings as required. Ensure the long-lived Gradle daemon is disabled by adding org.gradle.daemon=false to the gradle.properties file. Disable parallel build execution by ensuring org.gradle.parallel=true is not set in the gradle.properties file and that --parallel is not set as a command line argument. To prevent Java compilations running out-of-process, set java { options.fork = false } in the build.gradle file. Disable multiple additional test processes by ensuring test { maxParallelForks = 1 } is set in the build.gradle file. Override the Gradle JVM memory parameters by the GRADLE_OPTS , JAVA_OPTS or JAVA_TOOL_OPTIONS environment variables. Set the maximum heap size and JVM arguments for any Gradle test JVM by defining the maxHeapSize and jvmArgs settings in build.gradle , or through the -Dorg.gradle.jvmargs command line argument. 2.5. Jenkins agent pod retention Jenkins agent pods, are deleted by default after the build completes or is stopped. This behavior can be changed by the Kubernetes plugin pod retention setting. Pod retention can be set for all Jenkins builds, with overrides for each pod template. 
The following behaviors are supported: Always keeps the build pod regardless of build result. Default uses the plugin value, which is the pod template only. Never always deletes the pod. On Failure keeps the pod if it fails during the build. You can override pod retention in the pipeline Jenkinsfile: podTemplate(label: "mypod", cloud: "openshift", inheritFrom: "maven", podRetention: onFailure(), 1 containers: [ ... ]) { node("mypod") { ... } } 1 Allowed values for podRetention are never() , onFailure() , always() , and default() . Warning Pods that are kept might continue to run and count against resource quotas. | [
"docker pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag>",
"docker pull registry.redhat.io/ocp-tools-4/jenkins-agent-base-rhel8:<image_tag>",
"podTemplate(label: \"mypod\", cloud: \"openshift\", inheritFrom: \"maven\", podRetention: onFailure(), 1 containers: [ ]) { node(\"mypod\") { } }"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/jenkins/images-other-jenkins-agent |
Chapter 2. Using IdM user vaults: storing and retrieving secrets | Chapter 2. Using IdM user vaults: storing and retrieving secrets This chapter describes how to use user vaults in Identity Management. Specifically, it describes how a user can store a secret in an IdM vault, and how the user can retrieve it. The user can do the storing and the retrieving from two different IdM clients. Prerequisites The Key Recovery Authority (KRA) Certificate System component has been installed on one or more of the servers in your IdM domain. For details, see Installing the Key Recovery Authority in IdM . 2.1. Storing a secret in a user vault Follow this procedure to create a vault container with one or more private vaults to securely store files with sensitive information. In the example used in the procedure below, the idm_user user creates a vault of the standard type. The standard vault type ensures that idm_user will not be required to authenticate when accessing the file. idm_user will be able to retrieve the file from any IdM client to which the user is logged in. In the procedure: idm_user is the user who wants to create the vault. my_vault is the vault used to store the user's certificate. The vault type is standard , so that accessing the archived certificate does not require the user to provide a vault password. secret.txt is the file containing the certificate that the user wants to store in the vault. Prerequisites You know the password of idm_user . You are logged in to a host that is an IdM client. Procedure Obtain the Kerberos ticket granting ticket (TGT) for idm_user : Use the ipa vault-add command with the --type standard option to create a standard vault: Important Make sure the first user vault for a user is created by the same user. Creating the first vault for a user also creates the user's vault container. The agent of the creation becomes the owner of the vault container. For example, if another user, such as admin , creates the first user vault for user1 , the owner of the user's vault container will also be admin , and user1 will be unable to access the user vault or create new user vaults. Use the ipa vault-archive command with the --in option to archive the secret.txt file into the vault: 2.2. Retrieving a secret from a user vault As an Identity Management (IdM), you can retrieve a secret from your user private vault onto any IdM client to which you are logged in. Follow this procedure to retrieve, as an IdM user named idm_user , a secret from the user private vault named my_vault onto idm_client.idm.example.com . Prerequisites idm_user is the owner of my_vault . idm_user has archived a secret in the vault . my_vault is a standard vault, which means that idm_user does not have to enter any password to access the contents of the vault. Procedure SSH to idm_client as idm_user : Log in as idm_user : Use the ipa vault-retrieve --out command with the --out option to retrieve the contents of the vault and save them into the secret_exported.txt file. 2.3. Additional resources See Using Ansible to manage IdM service vaults: storing and retrieving secrets . | [
"kinit idm_user",
"ipa vault-add my_vault --type standard ---------------------- Added vault \"my_vault\" ---------------------- Vault name: my_vault Type: standard Owner users: idm_user Vault user: idm_user",
"ipa vault-archive my_vault --in secret.txt ----------------------------------- Archived data into vault \"my_vault\" -----------------------------------",
"ssh idm_user@idm_client.idm.example.com",
"kinit user",
"ipa vault-retrieve my_vault --out secret_exported.txt -------------------------------------- Retrieved data from vault \"my_vault\" --------------------------------------"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/working_with_vaults_in_identity_management/using-idm-user-vaults-storing-and-retrieving-secrets_working-with-vaults-in-identity-management |
Chapter 29. dynamic | Chapter 29. dynamic This chapter describes the commands under the dynamic command. 29.1. dynamic action create Create new action. Usage: Table 29.1. Positional arguments Value Summary name Dynamic action name class_name Dynamic action class name code_source Code source id or name Table 29.2. Command arguments Value Summary -h, --help Show this help message and exit --public With this flag an action will be marked as "public". --namespace [NAMESPACE] Namespace to create the action within. Table 29.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 29.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 29.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 29.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 29.2. dynamic action delete Delete action. Usage: Table 29.7. Positional arguments Value Summary identifier Dynamic action name or id (can be repeated multiple times). Table 29.8. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace of the dynamic action(s). 29.3. dynamic action list List all dynamic actions. Usage: Table 29.9. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. --namespace [NAMESPACE] Namespace of dynamic actions. Table 29.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 29.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 29.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 29.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 29.4. dynamic action show Show specific dynamic action. Usage: Table 29.14. Positional arguments Value Summary identifier Dynamic action identifier (name or id) Table 29.15. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to create the dynamic action within. Table 29.16. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 29.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 29.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 29.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 29.5. dynamic action update Update dynamic action. Usage: Table 29.20. Positional arguments Value Summary identifier Dynamic action identifier (id or name) Table 29.21. Command arguments Value Summary -h, --help Show this help message and exit --class-name [CLASS_NAME] Dynamic action class name. --code-source [CODE_SOURCE] Code source identifier (id or name). --public With this flag action will be marked as "public". --namespace [NAMESPACE] Namespace of the action. Table 29.22. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 29.23. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 29.24. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 29.25. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack dynamic action create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--public] [--namespace [NAMESPACE]] name class_name code_source",
"openstack dynamic action delete [-h] [--namespace [NAMESPACE]] identifier [identifier ...]",
"openstack dynamic action list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] [--namespace [NAMESPACE]]",
"openstack dynamic action show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--namespace [NAMESPACE]] identifier",
"openstack dynamic action update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--class-name [CLASS_NAME]] [--code-source [CODE_SOURCE]] [--public] [--namespace [NAMESPACE]] identifier"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/dynamic |
2.2. Installing Virtualization Packages on an Existing Red Hat Enterprise Linux System | 2.2. Installing Virtualization Packages on an Existing Red Hat Enterprise Linux System This section describes the steps for installing the KVM hypervisor on an existing Red Hat Enterprise Linux 7 system. To install the packages, your machine must be registered and subscribed to the Red Hat Customer Portal. To register using Red Hat Subscription Manager, run the subscription-manager register command and follow the prompts. Alternatively, run the Red Hat Subscription Manager application from Applications System Tools on the desktop to register. If you do not have a valid Red Hat subscription, visit the Red Hat online store to obtain one. For more information on registering and subscribing a system to the Red Hat Customer Portal, see https://access.redhat.com/solutions/253273 . 2.2.1. Installing Virtualization Packages Manually To use virtualization on Red Hat Enterprise Linux, at minimum, you need to install the following packages: qemu-kvm : This package provides the user-level KVM emulator and facilitates communication between hosts and guest virtual machines. qemu-img : This package provides disk management for guest virtual machines. Note The qemu-img package is installed as a dependency of the qemu-kvm package. libvirt : This package provides the server and host-side libraries for interacting with hypervisors and host systems, and the libvirtd daemon that handles the library calls, manages virtual machines, and controls the hypervisor. To install these packages, enter the following command: Several additional virtualization management packages are also available and are recommended when using virtualization: virt-install : This package provides the virt-install command for creating virtual machines from the command line. libvirt-python : This package contains a module that permits applications written in the Python programming language to use the interface supplied by the libvirt API. virt-manager : This package provides the virt-manager tool, also known as Virtual Machine Manager . This is a graphical tool for administering virtual machines. It uses the libvirt-client library as the management API. libvirt-client : This package provides the client-side APIs and libraries for accessing libvirt servers. The libvirt-client package includes the virsh command-line tool to manage and control virtual machines and hypervisors from the command line or a special virtualization shell. You can install all of these recommended virtualization packages with the following command: 2.2.2. Installing Virtualization Package Groups The virtualization packages can also be installed from package groups. You can view the list of available groups by running the yum grouplist hidden commad. Out of the complete list of available package groups, the following table describes the virtualization package groups and what they provide. Table 2.1. 
Virtualization Package Groups Package Group Description Mandatory Packages Optional Packages Virtualization Hypervisor Smallest possible virtualization host installation libvirt, qemu-kvm, qemu-img qemu-kvm-tools Virtualization Client Clients for installing and managing virtualization instances gnome-boxes, virt-install, virt-manager, virt-viewer, qemu-img virt-top, libguestfs-tools, libguestfs-tools-c Virtualization Platform Provides an interface for accessing and controlling virtual machines and containers libvirt, libvirt-client, virt-who, qemu-img fence-virtd-libvirt, fence-virtd-multicast, fence-virtd-serial, libvirt-cim, libvirt-java, libvirt-snmp, perl-Sys-Virt Virtualization Tools Tools for offline virtual image management libguestfs, qemu-img libguestfs-java, libguestfs-tools, libguestfs-tools-c To install a package group, run the yum group install package_group command. For example, to install the Virtualization Tools package group with all the package types, run: For more information on installing package groups, see How to install a group of packages with yum on Red Hat Enterprise Linux? Knowledgebase article. | [
"yum install qemu-kvm libvirt",
"yum install virt-install libvirt-python virt-manager virt-install libvirt-client",
"yum group install \"Virtualization Tools\" --setopt=group_package_types=mandatory,default,optional"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-installing_the_virtualization_packages-installing_virtualization_packages_on_an_existing_red_hat_enterprise_linux_system |
40.2.2.2. Unit Masks | 40.2.2.2. Unit Masks If the cpu_type is not timer , unit masks may also be required to further define the event. Unit masks for each event are listed with the op_help command. The values for each unit mask are listed in hexadecimal format. To specify more than one unit mask, the hexadecimal values must be combined using a bitwise or operation. | [
"opcontrol --event= <event-name> : <sample-rate> : <unit-mask>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Setting_Events_to_Monitor-Unit_Masks |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_llvm_16.0.6_toolset/making-open-source-more-inclusive |
27.2. S3_PING Configuration Options | 27.2. S3_PING Configuration Options Red Hat JBoss Data Grid works with Amazon Web Services in two ways: In Library mode, use JGroups' default-configs/default-jgroups-ec2.xml file (see Section 26.2.2.3, "default-jgroups-ec2.xml" for details) or use the S3_PING protocol. In Remote Client-Server mode, use JGroups' S3_PING protocol. In Library and Remote Client-Server mode, there are three ways to configure the S3_PING protocol for clustering to work in Amazon AWS: Use Private S3 Buckets. These buckets use Amazon AWS credentials. Use Pre-Signed URLs. These pre-assigned URLs are assigned to buckets with private write and public read rights. Use Public S3 Buckets. These buckets do not have any credentials. Report a bug 27.2.1. Using Private S3 Buckets This configuration requires access to a private bucket that can only be accessed with the appropriate AWS credentials. To confirm that the appropriate permissions are available, confirm that the user has the following permissions for the bucket: List Upload/Delete View Permissions Edit Permissions Ensure that the S3_PING configuration includes the following properties: either the location or the prefix property to specify the bucket, but not both. If the prefix property is set, S3_PING searches for a bucket with a name that starts with the prefix value. If a bucket with the prefix at the beginning of the name is found, S3_PING uses that bucket. If a bucket with the prefix is not found, S3_PING creates a bucket using the AWS credentials and names it based on the prefix and a UUID (the naming format is {prefix value} - {UUID} ). the access_key and secret_access_key properties for the AWS user. Note If a 403 error displays when using this configuration, verify that the properties have the correct values. If the problem persists, confirm that the system time in the EC2 node is correct. Amazon S3 rejects requests with a time stamp that is more than 15 minutes old compared to their server's times for security purposes. Example 27.1. Start the Red Hat JBoss Data Grid Server with a Private Bucket Run the following command from the top level of the server directory to start the Red Hat JBoss Data Grid server using a private S3 bucket: Replace {server_ip_address} with the server's IP address. Replace {s3_bucket_name} with the appropriate bucket name. Replace {access_key} with the user's access key. Replace {secret_access_key} with the user's secret access key. Report a bug 27.2.2. Using Pre-Signed URLs For this configuration, create a publically readable bucket in S3 by setting the List permissions to Everyone to allow public read access. Each node in the cluster generates a pre-signed URL for put and delete operations, as required by the S3_PING protocol. This URL points to a unique file and can include a folder path within the bucket. Note Longer paths will cause errors in S3_PING . For example, a path such as my_bucket/DemoCluster/node1 works while a longer path such as my_bucket/Demo/Cluster/node1 will not. Report a bug 27.2.2.1. Generating Pre-Signed URLs JGroup's S3_PING class includes a utility method to generate pre-signed URLs. The last argument for this method is the time when the URL expires expressed in the number of seconds since the Unix epoch (January 1, 1970). The syntax to generate a pre-signed URL is as follows: Replace {operation} with either PUT or DELETE . Replace {access_key} with the user's access key. Replace {secret_access_key} with the user's secret access key. 
Replace {bucket_name} with the name of the bucket. Replace {path} with the desired path to the file within the bucket. Replace {seconds} with the number of seconds since the Unix epoch (January 1, 1970) that the path remains valid. Example 27.2. Generate a Pre-Signed URL Ensure that the S3_PING configuration includes the pre_signed_put_url and pre_signed_delete_url properties generated by the call to S3_PING.generatePreSignedUrl() . This configuration is more secure than one using private S3 buckets, because the AWS credentials are not stored on each node in the cluster Note If a pre-signed URL is entered into an XML file, then the & characters in the URL must be replaced with its XML entity ( & ). Report a bug 27.2.2.2. Set Pre-Signed URLs Using the Command Line To set the pre-signed URLs using the command line, use the following guidelines: Enclose the URL in double quotation marks ( " " ). In the URL, each occurrence of the ampersand ( & ) character must be escaped with a backslash ( \ ) Example 27.3. Start a JBoss Data Grid Server with a Pre-Signed URL In the provided example, the {signatures} values are generated by the S3_PING.generatePreSignedUrl() method. Additionally, the {expiration_time} values are the expiration time for the URL that are passed into the S3_PING.generatePreSignedUrl() method. Report a bug 27.2.3. Using Public S3 Buckets This configuration involves an S3 bucket that has public read and write permissions, which means that Everyone has permissions to List , Upload/Delete , View Permissions , and Edit Permissions for the bucket. The location property must be specified with the bucket name for this configuration. This configuration method is the least secure because any user who knows the name of the bucket can upload and store data in the bucket and the bucket creator's account is charged for this data. To start the Red Hat JBoss Data Grid server, use the following command: Report a bug 27.2.4. Troubleshooting S3_PING Warnings Depending on the S3_PING configuration type used, the following warnings may appear when starting the JBoss Data Grid Server: In each case, ensure that the property listed as missing in the warning is not needed by the S3_PING configuration. Report a bug | [
"bin/clustered.sh -Djboss.bind.address= {server_ip_address} -Djboss.bind.address.management= {server_ip_address} -Djboss.default.jgroups.stack=s3 -Djgroups.s3.bucket= {s3_bucket_name} -Djgroups.s3.access_key= {access_key} -Djgroups.s3.secret_access_key= {secret_access_key}",
"String Url = S3_PING.generatePreSignedUrl(\" {access_key} \", \" {secret_access_key} \", \" {operation} \", \" {bucket_name} \", \" {path} \", {seconds} );",
"String putUrl = S3_PING.generatePreSignedUrl(\" access_key \", \" secret_access_key \", \"put\", \" my_bucket \", \" DemoCluster/node1 \", 1234567890 );",
"bin/clustered.sh -Djboss.bind.address= {server_ip_address} -Djboss.bind.address.management= {server_ip_address} -Djboss.default.jgroups.stack=s3 -Djgroups.s3.pre_signed_put_url=\"http:// {s3_bucket_name} .s3.amazonaws.com/ node1?AWSAccessKeyId= {access_key} \\&Expires= {expiration_time} \\&Signature= {signature} \"-Djgroups.s3.pre_signed_delete_url=\"http:// {s3_bucket_name} .s3.amazonaws.com/ node1?AWSAccessKeyId= {access_key} \\&Expires= {expiration_time} \\&Signature= {signature} \"",
"bin/clustered.sh -Djboss.bind.address= {server_ip_address} -Djboss.bind.address.management= {server_ip_address} -Djboss.default.jgroups.stack=s3 -Djgroups.s3.bucket= {s3_bucket_name}",
"15:46:03,468 WARN [org.jgroups.conf.ProtocolConfiguration] (MSC service thread 1-7) variable \"USD{jgroups.s3.pre_signed_put_url}\" in S3_PING could not be substituted; pre_signed_put_url is removed from properties",
"15:46:03,469 WARN [org.jgroups.conf.ProtocolConfiguration] (MSC service thread 1-7) variable \"USD{jgroups.s3.prefix}\" in S3_PING could not be substituted; prefix is removed from properties",
"15:46:03,469 WARN [org.jgroups.conf.ProtocolConfiguration] (MSC service thread 1-7) variable \"USD{jgroups.s3.pre_signed_delete_url}\" in S3_PING could not be substituted; pre_signed_delete_url is removed from properties"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-S3_PING_Configuration_Options |
Chapter 9. Uninstalling a cluster on IBM Cloud | Chapter 9. Uninstalling a cluster on IBM Cloud You can remove a cluster that you deployed to IBM Cloud(R). 9.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. You have configured the ccoctl binary. You have installed the IBM Cloud(R) CLI and installed or updated the VPC infrastructure service plugin. For more information see "Prerequisites" in the IBM Cloud(R) CLI documentation . Procedure If the following conditions are met, this step is required: The installer created a resource group as part of the installation process. You or one of your applications created persistent volume claims (PVCs) after the cluster was deployed. In which case, the PVCs are not removed when uninstalling the cluster, which might prevent the resource group from being successfully removed. To prevent a failure: Log in to the IBM Cloud(R) using the CLI. To list the PVCs, run the following command: USD ibmcloud is volumes --resource-group-name <infrastructure_id> For more information about listing volumes, see the IBM Cloud(R) CLI documentation . To delete the PVCs, run the following command: USD ibmcloud is volume-delete --force <volume_id> For more information about deleting volumes, see the IBM Cloud(R) CLI documentation . Export the API key that was created as part of the installation process. USD export IC_API_KEY=<api_key> Note You must set the variable name exactly as specified. The installation program expects the variable name to be present to remove the service IDs that were created when the cluster was installed. From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Remove the manual CCO credentials that were created for the cluster: USD ccoctl ibmcloud delete-service-id \ --credentials-requests-dir <path_to_credential_requests_directory> \ --name <cluster_name> Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. | [
"ibmcloud is volumes --resource-group-name <infrastructure_id>",
"ibmcloud is volume-delete --force <volume_id>",
"export IC_API_KEY=<api_key>",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"ccoctl ibmcloud delete-service-id --credentials-requests-dir <path_to_credential_requests_directory> --name <cluster_name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_ibm_cloud/uninstalling-cluster-ibm-cloud |
Chapter 140. KafkaBridgeTemplate schema reference | Chapter 140. KafkaBridgeTemplate schema reference Used in: KafkaBridgeSpec Property Property type Description deployment DeploymentTemplate Template for Kafka Bridge Deployment . pod PodTemplate Template for Kafka Bridge Pods . apiService InternalServiceTemplate Template for Kafka Bridge API Service . podDisruptionBudget PodDisruptionBudgetTemplate Template for Kafka Bridge PodDisruptionBudget . bridgeContainer ContainerTemplate Template for the Kafka Bridge container. clusterRoleBinding ResourceTemplate Template for the Kafka Bridge ClusterRoleBinding. serviceAccount ResourceTemplate Template for the Kafka Bridge service account. initContainer ContainerTemplate Template for the Kafka Bridge init container. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkabridgetemplate-reference |
Chapter 1. Introduction to Red Hat Satellite | Chapter 1. Introduction to Red Hat Satellite Red Hat Satellite is a system management solution that enables you to deploy, configure, and maintain your systems across physical, virtual, and cloud environments. Satellite provides provisioning, remote management and monitoring of multiple Red Hat Enterprise Linux deployments with a single, centralized tool. Satellite Server synchronizes the content from Red Hat Customer Portal and other sources, and provides functionality including fine-grained life cycle management, user and group role-based access control, integrated subscription management, as well as advanced GUI, CLI, or API access. Capsule Server mirrors content from Satellite Server to facilitate content federation across various geographical locations. Host systems can pull content and configuration from Capsule Server in their location and not from the central Satellite Server. Capsule Server also provides localized services such as Puppet server, DHCP, DNS, or TFTP. Capsule Servers assist you in scaling your Satellite environment as the number of your managed systems increases. Capsule Servers decrease the load on the central server, increase redundancy, and reduce bandwidth usage. For more information, see Chapter 2, Capsule Server Overview . 1.1. System Architecture The following diagram represents the high-level architecture of Red Hat Satellite. Figure 1.1. Red Hat Satellite System Architecture There are four stages through which content flows in this architecture: External Content Sources The Satellite Server can consume diverse types of content from various sources. The Red Hat Customer Portal is the primary source of software packages, errata, and container images. In addition, you can use other supported content sources (Git repositories, Docker Hub, Puppet Forge, SCAP repositories) as well as your organization's internal data store. Satellite Server The Satellite Server enables you to plan and manage the content life cycle and the configuration of Capsule Servers and hosts through GUI, CLI, or API. Satellite Server organizes the life cycle management by using organizations as principal division units. Organizations isolate content for groups of hosts with specific requirements and administration tasks. For example, the OS build team can use a different organization than the web development team. Satellite Server also contains a fine-grained authentication system to provide Satellite operators with permissions to access precisely the parts of the infrastructure that lie in their area of responsibility. Capsule Servers Capsule Servers mirror content from Satellite Server to establish content sources in various geographical locations. This enables host systems to pull content and configuration from Capsule Servers in their location and not from the central Satellite Server. The recommended minimum number of Capsule Servers is therefore given by the number of geographic regions where the organization that uses Satellite operates. Using Content Views, you can specify the exact subset of content that Capsule Server makes available to hosts. See Figure 1.2, "Content Life Cycle in Red Hat Satellite" for a closer look at life cycle management with the use of Content Views. The communication between managed hosts and Satellite Server is routed through Capsule Server that can also manage multiple services on behalf of hosts. 
Many of these services use dedicated network ports, but Capsule Server ensures that a single source IP address is used for all communications from the host to Satellite Server, which simplifies firewall administration. For more information on Capsule Servers see Chapter 2, Capsule Server Overview . Managed Hosts Hosts are the recipients of content from Capsule Servers. Hosts can be either physical or virtual. Satellite Server can have directly managed hosts. The base system running a Capsule Server is also a managed host of Satellite Server. The following diagram provides a closer look at the distribution of content from Satellite Server to Capsules. Figure 1.2. Content Life Cycle in Red Hat Satellite By default, each organization has a Library of content from external sources. Content Views are subsets of content from the Library created by intelligent filtering. You can publish and promote Content Views into life cycle environments (typically Dev, QA, and Production). When creating a Capsule Server, you can choose which life cycle environments will be copied to that Capsule and made available to managed hosts. Content Views can be combined to create Composite Content Views. It can be beneficial to have a separate Content View for a repository of packages required by an operating system and a separate one for a repository of packages required by an application. One advantage is that any updates to packages in one repository only requires republishing the relevant Content View. You can then use Composite Content Views to combine published Content Views for ease of management. Which Content Views should be promoted to which Capsule Server depends on the Capsule's intended functionality. Any Capsule Server can run DNS, DHCP, and TFTP as infrastructure services that can be supplemented, for example, with content or configuration services. You can update Capsule Server by creating a new version of a Content View using synchronized content from the Library. The new Content View version is then promoted through life cycle environments. You can also create in-place updates of Content Views. This means creating a minor version of the Content View in its current life cycle environment without promoting it from the Library. For example, if you need to apply a security erratum to a Content View used in Production, you can update the Content View directly without promoting to other life cycles. For more information on content management, see Managing Content . 1.2. System Components Red Hat Satellite consists of several open source projects which are integrated, verified, delivered and supported as Satellite. This information is maintained and regularly updated on the Red Hat Customer Portal; see Satellite 6 Component Versions . Red Hat Satellite consists of the following open source projects: Foreman Foreman is an open source application used for provisioning and life cycle management of physical and virtual systems. Foreman automatically configures these systems using various methods, including kickstart and Puppet modules. Foreman also provides historical data for reporting, auditing, and troubleshooting. Katello Katello is a Foreman plug-in for subscription and repository management. It provides a means to subscribe to Red Hat repositories and download content. You can create and manage different versions of this content and apply them to specific systems within user-defined stages of the application life cycle. Candlepin Candlepin is a service within Katello that handles subscription management. 
Pulp Pulp is a service within Katello that handles repository and content management. Pulp ensures efficient storage space by not duplicating RPM packages even when requested by Content Views in different organizations. Hammer Hammer is a CLI tool that provides command line and shell equivalents of most Satellite web UI functions. REST API Red Hat Satellite includes a RESTful API service that allows system administrators and developers to write custom scripts and third-party applications that interface with Red Hat Satellite. The terminology used in Red Hat Satellite and its components is extensive. For explanations of frequently used terms, see Appendix B, Glossary of Terms . 1.3. Supported Usage Each Red Hat Satellite subscription includes one supported instance of Red Hat Enterprise Linux Server. This instance should be reserved solely for the purpose of running Red Hat Satellite. Using the operating system included with Satellite to run other daemons, applications, or services within your environment is not supported. Support for Red Hat Satellite components is described below. SELinux must be either in enforcing or permissive mode, installation with disabled SELinux is not supported. Puppet Red Hat Satellite includes supported Puppet packages. The installation program allows users to install and configure Puppet servers as a part of Capsule Servers. A Puppet module, running on a Puppet server on the Satellite Server or Satellite Capsule Server, is also supported by Red Hat. For information on what versions of Puppet are supported, see the Red Hat Knowledgebase article Satellite 6 Component Versions . Red Hat supports many different scripting and other frameworks, including Puppet modules. Support for these frameworks is based on the Red Hat Knowledgebase article How does Red Hat support scripting frameworks . Pulp Pulp usage is only supported via Satellite web UI, CLI, and API. Direct modification or interaction with Pulp's local API or database is not supported, as this can cause irreparable damage to the Red Hat Satellite databases. Foreman Foreman can be extended using plug-ins, but only plug-ins packaged with Red Hat Satellite are supported. Red Hat does not support plug-ins in the Red Hat Satellite Optional repository. Red Hat Satellite also includes components, configuration and functionality to provision and configure operating systems other than Red Hat Enterprise Linux. While these features are included and can be employed, Red Hat supports their usage for Red Hat Enterprise Linux. Candlepin The only supported methods of using Candlepin are through the Satellite web UI, CLI, and API. Red Hat does not support direct interaction with Candlepin, its local API or database, as this can cause irreparable damage to the Red Hat Satellite databases. Embedded Tomcat Application Server The only supported methods of using the embedded Tomcat application server are through the Satellite web UI, API, and database. Red Hat does not support direct interaction with the embedded Tomcat application server's local API or database. Note Usage of all Red Hat Satellite components is supported within the context of Red Hat Satellite only. Third-party usage of any components falls beyond supported usage. 1.4. Supported Client Architectures 1.4.1. Content Management Supported combinations of major versions of Red Hat Enterprise Linux and hardware architectures for registering and managing hosts with Satellite. This includes the Satellite Client 6 repositories. Table 1.1. 
Content Management Support

Platform                         Architectures
Red Hat Enterprise Linux 9       x86_64, ppc64le, s390x, aarch64
Red Hat Enterprise Linux 8       x86_64, ppc64le, s390x
Red Hat Enterprise Linux 7       x86_64, ppc64 (BE), ppc64le, aarch64, s390x
Red Hat Enterprise Linux 6       x86_64, i386, s390x, ppc64 (BE)

1.4.2. Host Provisioning Supported combinations of major versions of Red Hat Enterprise Linux and hardware architectures for host provisioning with Satellite. Table 1.2. Host Provisioning Support

Platform                         Architectures
Red Hat Enterprise Linux 9       x86_64
Red Hat Enterprise Linux 8       x86_64
Red Hat Enterprise Linux 7       x86_64
Red Hat Enterprise Linux 6       x86_64, i386

1.4.3. Configuration Management Supported combinations of major versions of Red Hat Enterprise Linux and hardware architectures for configuration management with Satellite. Table 1.3. Puppet Agent Support

Platform                         Architectures
Red Hat Enterprise Linux 9       x86_64
Red Hat Enterprise Linux 8       x86_64, aarch64
Red Hat Enterprise Linux 7       x86_64
Red Hat Enterprise Linux 6       x86_64, i386 | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/satellite_overview_concepts_and_deployment_considerations/introduction_to_server_planning |
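For readers working from the command line, the content life cycle described above (publishing a Content View from the Library and promoting it through life cycle environments) can also be driven with the hammer CLI. The following sketch is illustrative only: the organization, Content View, and life cycle environment names are hypothetical, and option names can vary between Satellite releases, so verify them with hammer content-view --help on your system.

hammer content-view publish --organization "Example Org" --name "RHEL-Base"
hammer content-view version promote --organization "Example Org" --content-view "RHEL-Base" --version 2.0 --to-lifecycle-environment "Production"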
Chapter 2. Configuring user authentication using authselect authselect is a utility that allows you to configure system identity and authentication sources by selecting a specific profile. A profile is a set of files that describes what the resulting Pluggable Authentication Modules (PAM) and Network Security Services (NSS) configuration will look like. You can choose the default profile set or create a custom profile. 2.1. What is authselect used for You can use the authselect utility to configure user authentication on a Red Hat Enterprise Linux 9 host. You can configure identity information and authentication sources and providers by selecting one of the ready-made profiles: The default sssd profile enables the System Security Services Daemon (SSSD) for systems that use LDAP authentication. The winbind profile enables the Winbind utility for systems directly integrated with Microsoft Active Directory. The minimal profile serves only local users and groups directly from system files, which allows administrators to remove network authentication services that are no longer needed. After selecting an authselect profile for a given host, the profile is applied to every user logging into the host. Red Hat recommends using authselect in semi-centralized identity management environments, for example if your organization uses LDAP or Winbind databases to authenticate users to use services in your domain. Warning You do not need to use authselect if: Your host is part of Red Hat Enterprise Linux Identity Management (IdM). Joining your host to an IdM domain with the ipa-client-install command automatically configures SSSD authentication on your host. Your host is part of Active Directory via SSSD. Calling the realm join command to join your host to an Active Directory domain automatically configures SSSD authentication on your host. Red Hat recommends against changing the authselect profiles configured by ipa-client-install or realm join . If you need to modify them, display the current settings before making any modifications, so you can revert to them if necessary: 2.1.1. Files and directories authselect modifies The authconfig utility, used in earlier Red Hat Enterprise Linux versions, created and modified many different configuration files, making troubleshooting more difficult. Authselect simplifies testing and troubleshooting because it only modifies the following files and directories: /etc/nsswitch.conf The GNU C Library and other applications use this Name Service Switch (NSS) configuration file to determine the sources from which to obtain name-service information in a range of categories, and in what order. Each category of information is identified by a database name. /etc/pam.d/* files Linux-PAM (Pluggable Authentication Modules) is a system of modules that handle the authentication tasks of applications (services) on the system. The nature of the authentication is dynamically configurable: the system administrator can choose how individual service-providing applications will authenticate users. The configuration files in the /etc/pam.d/ directory list the PAMs that will perform authentication tasks required by a service, and the appropriate behavior of the PAM-API in the event that individual PAMs fail.
Among other things, these files contain information about: User password lockout conditions The ability to authenticate with a smart card The ability to authenticate with a fingerprint reader /etc/dconf/db/distro.d/* files This directory holds configuration profiles for the dconf utility, which you can use to manage settings for the GNOME Desktop Graphical User Interface (GUI). 2.1.2. Data providers in /etc/nsswitch.conf The default sssd profile establishes SSSD as a source of information by creating sss entries in /etc/nsswitch.conf : This means that the system first looks to SSSD if information concerning one of those items is requested: passwd for user information group for user group information netgroup for NIS netgroup information automount for NFS automount information services for information regarding services Only if the requested information is not found in the sssd cache and on the server providing authentication, or if sssd is not running, the system looks at the local files, that is /etc/* . For example, if information is requested about a user ID, the user ID is first searched in the sssd cache. If it is not found there, the /etc/passwd file is consulted. Analogically, if a user's group affiliation is requested, it is first searched in the sssd cache and only if not found there, the /etc/group file is consulted. In practice, the local files database is not normally consulted. The most important exception is the case of the root user, which is never handled by sssd but by files . 2.2. Choosing an authselect profile As a system administrator, you can select a profile for the authselect utility for a specific host. The profile will be applied to every user logging into the host. Prerequisites You need root credentials to run authselect commands Procedure Select the authselect profile that is appropriate for your authentication provider. For example, for logging into the network of a company that uses LDAP, choose sssd . Optional: You can modify the default profile settings by adding the following options to the authselect select sssd or authselect select winbind command, for example: with-faillock with-smartcard with-fingerprint To see the full list of available options, see Converting your scripts from authconfig to authselect or the authselect-migration(7) man page on your system. Note Make sure that the configuration files that are relevant for your profile are configured properly before finishing the authselect select procedure. For example, if the sssd daemon is not configured correctly and active, running authselect select results in only local users being able to authenticate, using pam_unix . Verification Verify sss entries for SSSD are present in /etc/nsswitch.conf : Review the contents of the /etc/pam.d/system-auth file for pam_sss.so entries: Additional Resources What is authselect used for Modifying a ready-made authselect profile Creating and deploying your own authselect profile 2.3. Modifying a ready-made authselect profile As a system administrator, you can modify one of the default profiles to suit your needs. You can modify any of the items in the /etc/authselect/user-nsswitch.conf file with the exception of: passwd group netgroup automount services Running authselect select profile_name afterwards will result in transferring permissible changes from /etc/authselect/user-nsswitch.conf to the /etc/nsswitch.conf file. Unacceptable changes are overwritten by the default profile configuration. Important Do not modify the /etc/nsswitch.conf file directly. 
Procedure Select an authselect profile, for example: Edit the /etc/authselect/user-nsswitch.conf file with your desired changes. Apply the changes from the /etc/authselect/user-nsswitch.conf file: Verification Review the /etc/nsswitch.conf file to verify that the changes from /etc/authselect/user-nsswitch.conf have been propagated there. Additional Resources What is authselect used for 2.4. Creating and deploying your own authselect profile As a system administrator, you can create and deploy a custom profile by making a customized copy of one of the default profiles. This is particularly useful if Modifying a ready-made authselect profile is not enough for your needs. When you deploy a custom profile, the profile is applied to every user logging into the given host. Procedure Create your custom profile by using the authselect create-profile command. For example, to create a custom profile called user-profile based on the ready-made sssd profile but one in which you can configure the items in the /etc/nsswitch.conf file yourself: Warning If you are planning to modify /etc/authselect/custom/user-profile/{password-auth,system-auth,fingerprint-auth,smartcard-auth,postlogin} , then enter the command above without the --symlink-pam option. This is to ensure that the modification persists during the upgrade of authselect-libs . Including the --symlink-pam option in the command means that PAM templates will be symbolic links to the origin profile files instead of their copy; including the --symlink-meta option means that meta files, such as README and REQUIREMENTS, will be symbolic links to the origin profile files instead of their copy. This ensures that all future updates to the PAM templates and meta files in the original profile will be reflected in your custom profile, too. The command creates a copy of the /etc/nsswitch.conf file in the /etc/authselect/custom/user-profile/ directory. Configure the /etc/authselect/custom/user-profile/nsswitch.conf file. Select the custom profile by running the authselect select command, and adding custom/ name_of_the_profile as a parameter. For example, to select the user-profile profile: Selecting the user-profile profile for your machine means that if the sssd profile is subsequently updated by Red Hat, you will benefit from all the updates with the exception of updates made to the /etc/nsswitch.conf file. Example 2.1. Creating a profile The following procedure shows how to create a profile based on the sssd profile which only consults the local static table lookup for hostnames in the /etc/hosts file, not in the dns or myhostname databases. Edit the /etc/nsswitch.conf file by changing the following line: Create a custom profile based on sssd that excludes changes to /etc/nsswitch.conf : Select the profile: Optional: Check that selecting the custom profile has created the /etc/pam.d/system-auth file according to the chosen sssd profile and left the configuration in /etc/nsswitch.conf unchanged: Note Running authselect select sssd would, in contrast, result in hosts: files dns myhostname Additional Resources What is authselect used for 2.5. Converting your scripts from authconfig to authselect If you use ipa-client-install or realm join to join a domain, you can safely remove any authconfig call in your scripts. If this is not possible, replace each authconfig call with its equivalent authselect call. In doing that, select the correct profile and the appropriate options.
In addition, edit the necessary configuration files:

/etc/krb5.conf
/etc/sssd/sssd.conf (for the sssd profile) or /etc/samba/smb.conf (for the winbind profile)

Relation of authconfig options to authselect profiles and Authselect profile option equivalents of authconfig options show the authselect equivalents of authconfig options.

Table 2.1. Relation of authconfig options to authselect profiles

Authconfig options                        Authselect profile
--enableldap --enableldapauth             sssd
--enablesssd --enablesssdauth             sssd
--enablekrb5                              sssd
--enablewinbind --enablewinbindauth       winbind

Table 2.2. Authselect profile option equivalents of authconfig options

Authconfig option          Authselect profile feature
--enablesmartcard          with-smartcard
--enablefingerprint        with-fingerprint
--enableecryptfs           with-ecryptfs
--enablemkhomedir          with-mkhomedir
--enablefaillock           with-faillock
--enablepamaccess          with-pamaccess
--enablewinbindkrb5        with-krb5

Examples of authselect command equivalents to authconfig commands shows example transformations of Kickstart calls to authconfig into Kickstart calls to authselect .

Table 2.3. Examples of authselect command equivalents to authconfig commands

authconfig command: authconfig --enableldap --enableldapauth --enablefaillock --updateall
authselect equivalent: authselect select sssd with-faillock

authconfig command: authconfig --enablesssd --enablesssdauth --enablesmartcard --smartcardmodule=sssd --updateall
authselect equivalent: authselect select sssd with-smartcard

authconfig command: authconfig --enableecryptfs --enablepamaccess --updateall
authselect equivalent: authselect select sssd with-ecryptfs with-pamaccess

authconfig command: authconfig --enablewinbind --enablewinbindauth --winbindjoin=Administrator --updateall
authselect equivalent: realm join -U Administrator --client-software=winbind WINBINDDOMAIN

2.6. Additional resources What is pam_faillock and how to use it in Red Hat Enterprise Linux 8 & 9? (Red Hat Knowledgebase) Set Password Policy/Complexity in Red Hat Enterprise Linux 8 (Red Hat Knowledgebase) | [
"authselect current Profile ID: sssd Enabled features: - with-sudo - with-mkhomedir - with-smartcard",
"passwd: sss files group: sss files netgroup: sss files automount: sss files services: sss files",
"authselect select sssd",
"passwd: sss files group: sss files netgroup: sss files automount: sss files services: sss files",
"Generated by authselect on Tue Sep 11 22:59:06 2018 Do not modify this file manually. auth required pam_env.so auth required pam_faildelay.so delay=2000000 auth [default=1 ignore=ignore success=ok] pam_succeed_if.so uid >= 1000 quiet auth [default=1 ignore=ignore success=ok] pam_localuser.so auth sufficient pam_unix.so nullok try_first_pass auth requisite pam_succeed_if.so uid >= 1000 quiet_success auth sufficient pam_sss.so forward_pass auth required pam_deny.so account required pam_unix.so account sufficient pam_localuser.so",
"authselect select sssd",
"authselect apply-changes",
"authselect create-profile user-profile -b sssd --symlink-meta --symlink-pam New profile was created at /etc/authselect/custom/user-profile",
"authselect select custom/ user-profile",
"hosts: files",
"authselect create-profile user-profile -b sssd --symlink-meta --symlink-pam",
"authselect select custom/ user-profile",
"hosts: files"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_authentication_and_authorization_in_rhel/configuring-user-authentication-using-authselect_configuring-authentication-and-authorization-in-rhel |
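A minimal end-to-end sketch of the procedure in Section 2.2, assuming the sssd profile suits your environment and that the optional home directory creation and account locking features are wanted; adjust the feature list to your needs and verify the result as described in the Verification steps above:

authselect select sssd with-mkhomedir with-faillock
authselect current
grep pam_sss.so /etc/pam.d/system-auth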
1.3. Scheduling Policies A scheduling policy is a set of rules that defines the logic by which virtual machines are distributed amongst hosts in the cluster that the scheduling policy is applied to. Scheduling policies determine this logic via a combination of filters, weightings, and a load balancing policy. The filter modules apply hard enforcement and filter out hosts that do not meet the conditions specified by that filter. The weights modules apply soft enforcement, and are used to control the relative priority of factors considered when determining the hosts in a cluster on which a virtual machine can run. The Red Hat Virtualization Manager provides five default scheduling policies: Evenly_Distributed , Cluster_Maintenance , None , Power_Saving , and VM_Evenly_Distributed . You can also define new scheduling policies that provide fine-grained control over the distribution of virtual machines. Regardless of the scheduling policy, a virtual machine will not start on a host with an overloaded CPU. By default, a host's CPU is considered overloaded if it has a load of more than 80% for 5 minutes, but these values can be changed using scheduling policies. See Section 8.2.5, "Scheduling Policy Settings Explained" for more information about the properties of each scheduling policy. Figure 1.4. Evenly Distributed Scheduling Policy The Evenly_Distributed scheduling policy distributes the memory and CPU processing load evenly across all hosts in the cluster. Additional virtual machines attached to a host will not start if that host has reached the defined CpuOverCommitDurationMinutes , HighUtilization , or MaxFreeMemoryForOverUtilized . The VM_Evenly_Distributed scheduling policy distributes virtual machines evenly between hosts based on a count of the virtual machines. The cluster is considered unbalanced if any host is running more virtual machines than the HighVmCount and there is at least one host with a virtual machine count that falls outside of the MigrationThreshold . Figure 1.5. Power Saving Scheduling Policy The Power_Saving scheduling policy distributes the memory and CPU processing load across a subset of available hosts to reduce power consumption on underutilized hosts. Hosts with a CPU load below the low utilization value for longer than the defined time interval will migrate all virtual machines to other hosts so that those underutilized hosts can be powered down. Additional virtual machines attached to a host will not start if that host has reached the defined high utilization value. Set the None policy to have no load or power sharing between hosts for running virtual machines. This is the default mode. When a virtual machine is started, the memory and CPU processing load is spread evenly across all hosts in the cluster. Additional virtual machines attached to a host will not start if that host has reached the defined CpuOverCommitDurationMinutes , HighUtilization , or MaxFreeMemoryForOverUtilized . The Cluster_Maintenance scheduling policy limits activity in a cluster during maintenance tasks. When the Cluster_Maintenance policy is set, no new virtual machines may be started, except highly available virtual machines. If host failure occurs, highly available virtual machines will restart properly and any virtual machine can migrate. 1.3.1. Creating a Scheduling Policy You can create new scheduling policies to control the logic by which virtual machines are distributed amongst the hosts in a given cluster in your Red Hat Virtualization environment.
Creating a Scheduling Policy Click Administration Configure . Click the Scheduling Policies tab. Click New . Enter a Name and Description for the scheduling policy. Configure filter modules: In the Filter Modules section, drag and drop the preferred filter modules to apply to the scheduling policy from the Disabled Filters section into the Enabled Filters section. Specific filter modules can also be set as the First , to be given highest priority, or Last , to be given lowest priority, for basic optimization. To set the priority, right-click any filter module, hover the cursor over Position and select First or Last . Configure weight modules: In the Weights Modules section, drag and drop the preferred weights modules to apply to the scheduling policy from the Disabled Weights section into the Enabled Weights & Factors section. Use the + and - buttons to the left of the enabled weight modules to increase or decrease the weight of those modules. Specify a load balancing policy: From the drop-down menu in the Load Balancer section, select the load balancing policy to apply to the scheduling policy. From the drop-down menu in the Properties section, select a load balancing property to apply to the scheduling policy and use the text field to the right of that property to specify a value. Use the + and - buttons to add or remove additional properties. Click OK . 1.3.2. Explanation of Settings in the New Scheduling Policy and Edit Scheduling Policy Window The following table details the options available in the New Scheduling Policy and Edit Scheduling Policy windows. Table 1.12. New Scheduling Policy and Edit Scheduling Policy Settings Field Name Description Name The name of the scheduling policy. This is the name used to refer to the scheduling policy in the Red Hat Virtualization Manager. Description A description of the scheduling policy. This field is recommended but not mandatory. Filter Modules A set of filters for controlling the hosts on which a virtual machine in a cluster can run. Enabling a filter will filter out hosts that do not meet the conditions specified by that filter, as outlined below: CpuPinning : Hosts which do not satisfy the CPU pinning definition. Migration : Prevent migration to the same host. PinToHost : Hosts other than the host to which the virtual machine is pinned. CPU-Level : Hosts that do not meet the CPU topology of the virtual machine. CPU : Hosts with fewer CPUs than the number assigned to the virtual machine. Memory : Hosts that do not have sufficient memory to run the virtual machine. VmAffinityGroups : Hosts that do not meet the conditions specified for a virtual machine that is a member of an affinity group. For example, that virtual machines in an affinity group must run on the same host or on separate hosts. VmToHostsAffinityGroups : Group of hosts that do not meet the conditions specified for a virtual machine that is a member of an affinity group. For example, that virtual machines in an affinity group must run on one of the hosts in a group or on a separate host that is excluded from the group. InClusterUpgrade : Hosts that are running an earlier operating system than the host that the virtual machine currently runs on. HostDevice : Hosts that do not support host devices required by the virtual machine. HA : Forces the Manager virtual machine in a self-hosted engine environment to run only on hosts with a positive high availability score. Emulated-Machine : Hosts which do not have proper emulated machine support. 
Network : Hosts on which networks required by the network interface controller of a virtual machine are not installed, or on which the cluster's display network is not installed. HostedEnginesSpares : Reserves space for the Manager virtual machine on a specified number of self-hosted engine nodes. Label : Hosts that do not have the required affinity labels. Compatibility-Version : Runs virtual machines only on hosts with the correct compatibility version support. CPUOverloaded : Hosts that are CPU overloaded. Weights Modules A set of weightings for controlling the relative priority of factors considered when determining the hosts in a cluster on which a virtual machine can run. InClusterUpgrade : Weight hosts in accordance with their operating system version. The weight penalizes hosts with earlier operating systems more than hosts with the same operating system as the host that the virtual machine is currently running on. This ensures that priority is always given to hosts with later operating systems. OptimalForHaReservation : Weights hosts in accordance with their high availability score. None : Weights hosts in accordance with the even distribution module. OptimalForEvenGuestDistribution : Weights hosts in accordance with the number of virtual machines running on those hosts. VmAffinityGroups : Weights hosts in accordance with the affinity groups defined for virtual machines. This weight module determines how likely virtual machines in an affinity group are to run on the same host or on separate hosts in accordance with the parameters of that affinity group. VmToHostsAffinityGroups : Weights hosts in accordance with the affinity groups defined for virtual machines. This weight module determines how likely virtual machines in an affinity group are to run on one of the hosts in a group or on a separate host that is excluded from the group. OptimalForCPUPowerSaving : Weights hosts in accordance with their CPU usage, giving priority to hosts with higher CPU usage. OptimalForEvenCpuDistribution : Weights hosts in accordance with their CPU usage, giving priority to hosts with lower CPU usage. HA : Weights hosts in accordance with their high availability score. PreferredHosts : Preferred hosts have priority during virtual machine setup. OptimalForMemoryPowerSaving : Weights hosts in accordance with their memory usage, giving priority to hosts with lower available memory. OptimalForMemoryEvenDistribution : Weights hosts in accordance with their memory usage, giving priority to hosts with higher available memory. Load Balancer This drop-down menu allows you to select a load balancing module to apply. Load balancing modules determine the logic used to migrate virtual machines from hosts experiencing high usage to hosts experiencing lower usage. Properties This drop-down menu allows you to add or remove properties for load balancing modules, and is only available when you have selected a load balancing module for the scheduling policy. No properties are defined by default, and the properties that are available are specific to the load balancing module that is selected. Use the + and - buttons to add or remove additional properties to or from the load balancing module. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-Scheduling_Policies |
Chapter 17. Searching and filtering | Chapter 17. Searching and filtering The ability to instantly find resources is important to safeguard your cluster. Use Red Hat Advanced Cluster Security for Kubernetes search feature to find relevant resources faster. For example, you can use it to find deployments that are exposed to a newly published CVE or find all deployments that have external network exposure. 17.1. Search syntax A search query is made up of two parts: An attribute that identifies the resource type you want to search for. A search term that finds the matching resource. For example, to find all violations in the visa-processor deployment, the search query is Deployment:visa-processor . In this search query, Deployment is the attribute and visa-processor is the search term. Note You must select an attribute before you can use search terms. However, in some views, such as the Risk view and the Violations view, Red Hat Advanced Cluster Security for Kubernetes automatically applies the relevant attribute based on the search term you enter. You can use multiple attributes in your query. When you use more than one attribute, the results only include the items that match all attributes. Example When you search for Namespace:frontend CVE:CVE-2018-11776 , it returns only those resources which violate CVE-2018-11776 in the frontend namespace. You can use more than one search term with each attribute. When you use more than one search term, the results include all items that match any of the search terms. Example If you use the search query Namespace: frontend backend , it returns matching results from the namespace frontend or backend . You can combine multiple attribute and search term pairs. Example The search query Cluster:production Namespace:frontend CVE:CVE-2018-11776 returns all resources which violate CVE-2018-11776 in the frontend namespace in the production cluster. Search terms can be part of a word, in which case Red Hat Advanced Cluster Security for Kubernetes returns all matching results. Example If you search for Deployment:def , the results include all deployments starting with def . To explicitly search for a specific term, use the search terms inside quotes. Example When you search for Deployment:"def" , the results only include the deployment def . You can also use regular expressions by using r/ before your search term. Example When you search for Namespace:r/st.*x , the results include matches from namespace stackrox and stix . Use ! to indicate the search terms that you do not want in results. Example If you search for Namespace:!stackrox , the results include matches from all namespaces except the stackrox namespace. Use the comparison operators > , < , = , >= , or <= to match a specific value or range of values. Example If you search for CVSS:>=6 , the results include all vulnerabilities with Common Vulnerability Scoring System (CVSS) score 6 or higher. 17.2. Search autocomplete As you enter your query, Red Hat Advanced Cluster Security for Kubernetes automatically displays relevant suggestions for the attributes and the search terms. 17.3. Using global search By using global search you can search across all resources in your environment. 
Based on the resource type you use in your search query, the results are grouped in the following categories: All results (Lists matching results across all categories) Clusters Deployments Images Namespaces Nodes Policies Policy categories [1] Roles Role bindings Secrets Service accounts Users and groups Violations The Policy categories option is only available if you use the following: PostgreSQL as a backend database in Red Hat Advanced Cluster Security for Kubernetes (RHACS). Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service). These categories are listed as a table on the RHACS portal global search page and you can click on the category name to identify results belonging to the selected category. To do a global search, in the RHACS portal, select Search . 17.4. Using local page filtering You can use local page filtering from within all views in the RHACS portal. Local page filtering works similar to the global search, but only relevant attributes are available. You can select the search bar to show all available attributes for a specific view. 17.5. Common search queries Here are some common search queries you can run with Red Hat Advanced Cluster Security for Kubernetes. Finding deployments that are affected by a specific CVE Query Example CVE:<CVE_number> CVE:CVE-2018-11776 Finding privileged running deployments Query Example Privileged:<true_or_false> Privileged:true Finding deployments that have external network exposure Query Example Exposure Level:<level> Exposure Level:External Finding deployments that are running specific processes Query Example Process Name:<process_name> Process Name:bash Finding deployments that have serious but fixable vulnerabilities Query Example CVSS:<expression_and_score> CVSS:>=6 Fixable:.* Finding deployments that use passwords exposed through environment variables Query Example Environment Key:<query> Environment Key:r/.*pass.* Finding running deployments that have particular software components in them Query Example Component:<component_name> Component:libgpg-error or Component:sudo Finding users or groups Use Kubernetes Labels and Selectors , and Annotations to attach metadata to your deployments. You can then query based on the applied annotations and labels to identify individuals or groups. Finding who owns a particular deployment Query Example Deployment:<deployment_name> Label:<key_value> or Deployment:<deployment_name> Annotation:<key_value> Deployment:app-server Label:team=backend Finding who is deploying images from public registries Query Example Image Registry:<registry_name> Label:<key_value> or Image Registry:<registry_name> Annotation:<key_value> Image Registry:docker.io Label:team=backend Finding who is deploying into the default namespace Query Example Namespace:default Label:<key_value> or Namespace:default Annotation:<key_value> Namespace:default Label:team=backend 17.6. Search attributes Following is the list of search attributes that you can use while searching and filtering in Red Hat Advanced Cluster Security for Kubernetes. Attribute Description Add Capabilities Provides the container with additional Linux capabilities, for instance the ability to modify files or perform network operations. Annotation Arbitrary non-identifying metadata attached to an orchestrator object. CPU Cores Limit Maximum number of cores that a resource is allowed to use. CPU Cores Request Minimum number of cores to be reserved for a given resource. CVE Common Vulnerabilities and Exposures, use it with specific CVE numbers. 
CVSS Common Vulnerability Scoring System, use it with the CVSS score and greater than ( > ), less than ( < ), or equal to ( = ) symbols. Category Policy categories include DevOps Best Practices, Security Best Practices, Privileges, Vulnerability Management, Multiple, and any custom policy categories that you create. Cert Expiration Certificate expiration date. Cluster Name of a Kubernetes or OpenShift Container Platform cluster. Cluster ID Unique ID for a Kubernetes or OpenShift Container Platform cluster. Cluster Role Use true to search for cluster-wide roles and false for namespace-scoped roles. Component Software (daemond, docker), objects (images, containers, services), registries (repository for Docker images). Component Count Number of components in the image. Component version The version of software, objects, or registries. Created Time Time and date when the secret object was created. Deployment Name of the deployment. Deployment Type The type of Kubernetes controller on which the deployment is based. Description Description of the deployment. Dockerfile Instruction Keyword Keyword in the Dockerfile instructions in an image. Dockerfile Instruction Value Value in the Dockerfile instructions in an image. Drop Capabilities Linux capabilities that have been dropped from the container. For example CAP_SETUID or CAP_NET_RAW . Enforcement Type of enforcement assigned to the deployment. For example, None , Scale to Zero Replicas , or Add an Unsatisfiable Node Constraint . Environment Key Key portion of a label key-value string that is metadata for further identifying and organizing the environment of a container. Environment Value Value portion of a label key-value string that is metadata for further identifying and organizing the environment of a container. Exposed Node Port Port number of the exposed node port. Exposing Service Name of the exposed service. Exposing Service Port Port number of the exposed service. Exposure Level The type of exposure for a deployment port, for example external or node . External Hostname The hostname for an external port exposure for a deployment. External IP The IP address for an external port exposure for a deployment. Fixable CVE Count Number of fixable CVEs on an image. Fixed By The version string of a package that fixes a flagged vulnerability in an image. Image The name of the image. Image Command The command specified in the image. Image Created Time The time and date when the image was created. Image Entrypoint The entrypoint command specified in the image. Image Pull Secret The name of the secret to use when pulling the image, as specified in the deployment. Image Pull Secret Registry The name of the registry for an image pull secret. Image Registry The name of the image registry. Image Remote Indication of an image that is remotely accessible. Image Scan Time The time and date when the image was last scanned. Image Tag Identifier for an image. Image Users Name of the user or group that a container image is configured to use when it runs. Image Volumes Names of the configured volumes in the container image. Inactive Deployment Use true to search for inactive deployments and false for active deployments. Label The key portion of a label key-value string that is metadata for further identifying and organizing images, containers, daemons, volumes, networks, and other resources. Lifecycle Stage The type of lifecycle stage where this policy is configured or alert was triggered. 
Max Exposure Level For a deployment, the maximum level of network exposure for all given ports/services. Memory Limit (MB) Maximum amount of memory that a resource is allowed to use. Memory Request (MB) Minimum amount of memory to be reserved for a given resource. Namespace The name of the namespace. Namespace ID Unique ID for the containing namespace object on a deployment. Node Name of a node. Node ID Unique ID for a node. Pod Label Single piece of identifying metadata attached to an individual pod. Policy The name of the security policy. Port Port numbers exposed by a deployment. Port Protocol IP protocol such as TCP or UDP used by exposed port. Priority Risk priority for a deployment. (Only available in Risks view.) Privileged Use true to search for privileged running deployments, or false otherwise. Process Ancestor Name of any parent process for a process indicator in a deployment. Process Arguments Command arguments for a process indicator in a deployment. Process Name Name of the process for a process indicator in a deployment. Process Path Path to the binary in the container for a process indicator in a deployment. Process UID Unix user ID for the process indicator in a deployment. Read Only Root Filesystem Use true to search for containers running with the root file system configured as read only. Role Name of a Kubernetes RBAC role. Role Binding Name of a Kubernetes RBAC role binding. Role ID Role ID to which a Kubernetes RBAC role binding is bound. Secret Name of the secret object that holds the sensitive information. Secret Path Path to the secret object in the file system. Secret Type Type of the secret, for example, certificate or RSA public key. Service Account Service account name for a service account or deployment. Severity Indication of level of importance of a violation: Critical, High, Medium, Low. Subject Name for a subject in Kubernetes RBAC. Subject Kind Type of subject in Kubernetes RBAC, such as SERVICE_ACCOUNT , USER or GROUP . Taint Effect Type of taint currently applied to a node. Taint Key Key for a taint currently applied to a node. Taint Value Allowed value for a taint currently applied to a node. Toleration Key Key for a toleration applied to a deployment. Toleration Value Value for a toleration applied to a deployment. Violation A notification displayed in the Violations page when the conditions specified by a policy have not been met. Violation State Use it to search for resolved violations. Violation Time Time and date that a violation first occurred. Volume Destination Mount path of the data volume. Volume Name Name of the storage. Volume ReadOnly Use true to search for volumes that are mounted as read only. Volume Source Indicates the form in which the volume is provisioned (for example, persistentVolumeClaim or hostPath ). Volume Type The type of volume. | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/operating/search-filter |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.382/making-open-source-more-inclusive |
Chapter 32. Uninstalling the IdM CA service from an IdM server | Chapter 32. Uninstalling the IdM CA service from an IdM server If you have more than four Identity Management (IdM) replicas with the CA Role in your topology and you run into performance problems due to redundant certificate replication, remove redundant CA service instances from IdM replicas. To do this, you must first decommission the affected IdM replicas completely before re-installing IdM on them, this time without the CA service. Note While you can add the CA role to an IdM replica, IdM does not provide a method to remove only the CA role from an IdM replica: the ipa-ca-install command does not have an --uninstall option. Prerequisites You have the IdM CA service installed on more than four IdM servers in your topology. Procedure Identify the redundant CA service and follow the procedure in Uninstalling an IdM server on the IdM replica that hosts this service. On the same host, follow the procedure in Installing an IdM server: With integrated DNS, without a CA . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_identity_management/uninstalling-the-idm-ca-service-from-an-idm-server_installing-identity-management |
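As a rough sketch of the flow described above, you would first confirm which servers currently hold the CA role, then run the uninstall on the redundant replica itself before re-installing it without the CA service. The commands below are illustrative only; follow the linked uninstall and install procedures for the complete steps, including the removal of replication agreements and DNS records:

ipa server-role-find --role "CA server"
ipa-server-install --uninstall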
Chapter 2. Learn more about ROSA with HCP | Chapter 2. Learn more about ROSA with HCP Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) offers a reduced-cost solution to create a managed ROSA cluster with a focus on efficiency. You can quickly create a new cluster and deploy applications in minutes. 2.1. Key features of ROSA with HCP ROSA with HCP requires a minimum of only two nodes, making it ideal for smaller projects while still being able to scale to support larger projects and enterprises. The underlying control plane infrastructure is fully managed. Control plane components, such as the API server and etcd database, are hosted in a Red Hat-owned AWS account. Provisioning time is approximately 10 minutes. Customers can upgrade the control plane and machine pools separately, which means they do not have to shut down the entire cluster during upgrades. 2.2. Getting started with ROSA with HCP Use the following sections to find content to help you learn about and use ROSA with HCP. 2.2.1. Architect Learn about ROSA with HCP Plan ROSA with HCP deployment Additional resources Architecture overview Back up and restore ROSA with HCP life cycle ROSA with HCP architecture ROSA with HCP service definition Getting support 2.2.2. Cluster Administrator Learn about ROSA with HCP Deploy ROSA with HCP Manage ROSA with HCP Additional resources ROSA with HCP architecture Installing ROSA with HCP Getting Support OpenShift Interactive Learning Portal Storage Monitoring overview ROSA with HCP life cycle Back up and restore 2.2.3. Developer Learn about application development in ROSA with HCP Deploy applications Additional resources Red Hat Developers site Building applications overview Getting support Red Hat OpenShift Dev Spaces (formerly Red Hat CodeReady Workspaces) Operators overview Images Developer-focused CLI | null | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/about/about-hcp |
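To give a sense of how a ROSA with HCP cluster is requested, the following rosa CLI sketch shows the general shape of the create command. It is not a complete recipe: a hosted control plane cluster also requires STS account roles, an OIDC configuration, and existing VPC subnets, and the cluster name and bracketed values here are hypothetical placeholders, so follow the installation documentation for the full prerequisites.

rosa create cluster --cluster-name my-hcp-cluster --sts --hosted-cp --mode auto --oidc-config-id <oidc_config_id> --subnet-ids <subnet_ids>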
Chapter 11. VolumeSnapshot [snapshot.storage.k8s.io/v1] | Chapter 11. VolumeSnapshot [snapshot.storage.k8s.io/v1] Description VolumeSnapshot is a user's request for either creating a point-in-time snapshot of a persistent volume, or binding to a pre-existing snapshot. Type object Required spec 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec defines the desired characteristics of a snapshot requested by a user. More info: https://kubernetes.io/docs/concepts/storage/volume-snapshots#volumesnapshots Required. status object status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. 11.1.1. .spec Description spec defines the desired characteristics of a snapshot requested by a user. More info: https://kubernetes.io/docs/concepts/storage/volume-snapshots#volumesnapshots Required. Type object Required source Property Type Description source object source specifies where a snapshot will be created from. This field is immutable after creation. Required. volumeSnapshotClassName string VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field. 11.1.2. .spec.source Description source specifies where a snapshot will be created from. This field is immutable after creation. Required. Type object Property Type Description persistentVolumeClaimName string persistentVolumeClaimName specifies the name of the PersistentVolumeClaim object representing the volume from which a snapshot should be created. This PVC is assumed to be in the same namespace as the VolumeSnapshot object. This field should be set if the snapshot does not exists, and needs to be created. This field is immutable. volumeSnapshotContentName string volumeSnapshotContentName specifies the name of a pre-existing VolumeSnapshotContent object representing an existing volume snapshot. This field should be set if the snapshot already exists and only needs a representation in Kubernetes. This field is immutable. 11.1.3. 
.status Description status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. Type object Property Type Description boundVolumeSnapshotContentName string boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. creationTime string creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown. error object error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared. readyToUse boolean readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. restoreSize integer-or-string restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. 11.1.4. .status.error Description error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared. 
Type object Property Type Description message string message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information. time string time is the timestamp when the error was encountered. 11.2. API endpoints The following API endpoints are available: /apis/snapshot.storage.k8s.io/v1/volumesnapshots GET : list objects of kind VolumeSnapshot /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots DELETE : delete collection of VolumeSnapshot GET : list objects of kind VolumeSnapshot POST : create a VolumeSnapshot /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots/{name} DELETE : delete a VolumeSnapshot GET : read the specified VolumeSnapshot PATCH : partially update the specified VolumeSnapshot PUT : replace the specified VolumeSnapshot /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots/{name}/status GET : read status of the specified VolumeSnapshot PATCH : partially update status of the specified VolumeSnapshot PUT : replace status of the specified VolumeSnapshot 11.2.1. /apis/snapshot.storage.k8s.io/v1/volumesnapshots HTTP method GET Description list objects of kind VolumeSnapshot Table 11.1. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotList schema 401 - Unauthorized Empty 11.2.2. /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots HTTP method DELETE Description delete collection of VolumeSnapshot Table 11.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind VolumeSnapshot Table 11.3. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotList schema 401 - Unauthorized Empty HTTP method POST Description create a VolumeSnapshot Table 11.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.5. Body parameters Parameter Type Description body VolumeSnapshot schema Table 11.6. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 201 - Created VolumeSnapshot schema 202 - Accepted VolumeSnapshot schema 401 - Unauthorized Empty 11.2.3. /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots/{name} Table 11.7. 
Global path parameters Parameter Type Description name string name of the VolumeSnapshot HTTP method DELETE Description delete a VolumeSnapshot Table 11.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 11.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified VolumeSnapshot Table 11.10. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified VolumeSnapshot Table 11.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.12. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified VolumeSnapshot Table 11.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.14. Body parameters Parameter Type Description body VolumeSnapshot schema Table 11.15. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 201 - Created VolumeSnapshot schema 401 - Unauthorized Empty 11.2.4. /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots/{name}/status Table 11.16. Global path parameters Parameter Type Description name string name of the VolumeSnapshot HTTP method GET Description read status of the specified VolumeSnapshot Table 11.17. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified VolumeSnapshot Table 11.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.19. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified VolumeSnapshot Table 11.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.21. 
Body parameters Parameter Type Description body VolumeSnapshot schema Table 11.22. HTTP responses HTTP code Response body 200 - OK VolumeSnapshot schema 201 - Created VolumeSnapshot schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/storage_apis/volumesnapshot-snapshot-storage-k8s-io-v1
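The VolumeSnapshot endpoints documented above can be exercised directly against the API server. The following is a minimal sketch, not part of the original reference: it dry-runs a label patch with strict field validation and assumes an active oc login session; the namespace my-namespace, the snapshot name my-snapshot, and the label are placeholders.
# Placeholder names; -k skips TLS certificate verification for brevity.
TOKEN=$(oc whoami -t)
APISERVER=$(oc whoami --show-server)
# PATCH the documented path with dryRun=All and fieldValidation=Strict.
curl -k -X PATCH \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/merge-patch+json" \
  "$APISERVER/apis/snapshot.storage.k8s.io/v1/namespaces/my-namespace/volumesnapshots/my-snapshot?dryRun=All&fieldValidation=Strict" \
  -d '{"metadata":{"labels":{"backup":"daily"}}}'
Because dryRun=All is set, the server processes the patch through all stages and reports any validation errors without persisting the change.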
probe::nfs.aop.set_page_dirty | probe::nfs.aop.set_page_dirty Name probe::nfs.aop.set_page_dirty - NFS client marking page as dirty Synopsis nfs.aop.set_page_dirty Values __page the address of page page_flag page flags Description This probe attaches to the generic __set_page_dirty_nobuffers function. Thus, this probe is going to fire on many other file systems in addition to the NFS client. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nfs-aop-set-page-dirty |
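As an illustrative sketch that is not part of the original tapset reference, the probe described above can be watched with a one-line SystemTap script that prints the two documented values; running stap requires root privileges and matching kernel debuginfo.
# Print the page address and the page flags each time the probe fires.
stap -v -e 'probe nfs.aop.set_page_dirty { printf("set_page_dirty: page=0x%x flags=0x%x\n", __page, page_flag) }'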
Chapter 1. Getting support | Chapter 1. Getting support If you experience difficulty with a procedure described in this documentation, or with Red Hat Quay in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your deployment, you can use the Red Hat Quay debugging tool, or check the health endpoint of your deployment to obtain information about your problem. After you have debugged or obtained health information about your deployment, you can search the Red Hat Knowledgebase for a solution or file a support ticket. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue to the ProjectQuay project. Provide specific details, such as the section name and Red Hat Quay version. 1.1. About the Red Hat Knowledgebase The Red Hat Knowledgebase provides rich content aimed at helping you make the most of Red Hat's products and technologies. The Red Hat Knowledgebase consists of articles, product documentation, and videos outlining best practices on installing, configuring, and using Red Hat products. In addition, you can search for solutions to known issues, each providing concise root cause descriptions and remedial steps. The Red Hat Quay Support Team also maintains a Consolidate troubleshooting article for Red Hat Quay that details solutions to common problems. This is an evolving document that can help users navigate various issues effectively and efficiently. 1.2. Searching the Red Hat Knowledgebase In the event of an Red Hat Quay issue, you can perform an initial search to determine if a solution already exists within the Red Hat Knowledgebase. Prerequisites You have a Red Hat Customer Portal account. Procedure Log in to the Red Hat Customer Portal . In the main Red Hat Customer Portal search field, input keywords and strings relating to the problem, including: Red Hat Quay components (such as database ) Related procedure (such as installation ) Warnings, error messages, and other outputs related to explicit failures Click Search . Select the Red Hat Quay product filter. Select the Knowledgebase content type filter. 1.3. Submitting a support case Prerequisites You have a Red Hat Customer Portal account. You have a Red Hat standard or premium Subscription. Procedure Log in to the Red Hat Customer Portal and select Open a support case . Select the Troubleshoot tab. For Summary , enter a concise but descriptive problem summary and further details about the symptoms being experienced, as well as your expectations. Review the list of suggested Red Hat Knowledgebase solutions for a potential match against the problem that is being reported. If the suggested articles do not address the issue, continue to the following step. For Product , select Red Hat Quay . Select the version of Red Hat Quay that you are using. Click Continue . Optional. Drag and drop, paste, or browse to upload a file. This could be debug logs gathered from your Red Hat Quay deployment. Click Get support to file your ticket. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/troubleshooting_red_hat_quay/getting-support |
Chapter 8. Dynamic provisioning | Chapter 8. Dynamic provisioning 8.1. About dynamic provisioning The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators ( cluster-admin ) or Storage Administrators ( storage-admin ) define and create the StorageClass objects that users can request without needing any detailed knowledge about the underlying storage volume sources. The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure. Many storage types are available for use as persistent volumes in OpenShift Container Platform. While all of them can be statically provisioned by an administrator, some types of storage are created dynamically using the built-in provider and plugin APIs. 8.2. Available dynamic provisioning plugins OpenShift Container Platform provides the following provisioner plugins, which have generic implementations for dynamic provisioning that use the cluster's configured provider's API to create new storage resources: Storage type Provisioner plugin name Notes Red Hat OpenStack Platform (RHOSP) Cinder kubernetes.io/cinder RHOSP Manila Container Storage Interface (CSI) manila.csi.openstack.org Once installed, the OpenStack Manila CSI Driver Operator and ManilaDriver automatically create the required storage classes for all available Manila share types needed for dynamic provisioning. Amazon Elastic Block Store (Amazon EBS) kubernetes.io/aws-ebs For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id> where <cluster_name> and <cluster_id> are unique per cluster. Azure Disk kubernetes.io/azure-disk Azure File kubernetes.io/azure-file The persistent-volume-binder service account requires permissions to create and get secrets to store the Azure storage account and keys. GCE Persistent Disk (gcePD) kubernetes.io/gce-pd In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to avoid PVs from being created in zones where no node in the current cluster exists. IBM Power(R) Virtual Server Block powervs.csi.ibm.com After installation, the IBM Power(R) Virtual Server Block CSI Driver Operator and IBM Power(R) Virtual Server Block CSI Driver automatically create the required storage classes for dynamic provisioning. VMware vSphere kubernetes.io/vsphere-volume Important Any chosen provisioner plugin also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation. 8.3. Defining a storage class StorageClass objects are currently a globally scoped object and must be created by cluster-admin or storage-admin users. Important The Cluster Storage Operator might install a default storage class depending on the platform in use. This storage class is owned and controlled by the Operator. It cannot be deleted or modified beyond defining annotations and labels. If different behavior is desired, you must define a custom storage class. 
The following sections describe the basic definition for a StorageClass object and specific examples for each of the supported plugin types. 8.3.1. Basic StorageClass object definition The following resource shows the parameters and default values that you use to configure a storage class. This example uses the AWS ElasticBlockStore (EBS) object definition. Sample StorageClass definition kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' ... provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp3 ... 1 (required) The API object type. 2 (required) The current apiVersion. 3 (required) The name of the storage class. 4 (optional) Annotations for the storage class. 5 (required) The type of provisioner associated with this storage class. 6 (optional) The parameters required for the specific provisioner, this will change from plug-in to plug-in. 8.3.2. Storage class annotations To set a storage class as the cluster-wide default, add the following annotation to your storage class metadata: storageclass.kubernetes.io/is-default-class: "true" For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: "true" ... This enables any persistent volume claim (PVC) that does not specify a specific storage class to automatically be provisioned through the default storage class. However, your cluster can have more than one storage class, but only one of them can be the default storage class. Note The beta annotation storageclass.beta.kubernetes.io/is-default-class is still working; however, it will be removed in a future release. To set a storage class description, add the following annotation to your storage class metadata: kubernetes.io/description: My Storage Class Description For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description ... 8.3.3. RHOSP Cinder object definition cinder-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/cinder parameters: type: fast 2 availability: nova 3 fsType: ext4 4 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Volume type created in Cinder. Default is empty. 3 Availability Zone. If not specified, volumes are generally round-robined across all active zones where the OpenShift Container Platform cluster has a node. 4 File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4 . 8.3.4. RHOSP Manila Container Storage Interface (CSI) object definition Once installed, the OpenStack Manila CSI Driver Operator and ManilaDriver automatically create the required storage classes for all available Manila share types needed for dynamic provisioning. 8.3.5. AWS Elastic Block Store (EBS) object definition aws-ebs-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: "10" 3 encrypted: "true" 4 kmsKeyId: keyvalue 5 fsType: ext4 6 1 (required) Name of the storage class. 
The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 (required) Select from io1 , gp3 , sc1 , st1 . The default is gp3 . See the AWS documentation for valid Amazon Resource Name (ARN) values. 3 Optional: Only for io1 volumes. I/O operations per second per GiB. The AWS volume plugin multiplies this with the size of the requested volume to compute IOPS of the volume. The value cap is 20,000 IOPS, which is the maximum supported by AWS. See the AWS documentation for further details. 4 Optional: Denotes whether to encrypt the EBS volume. Valid values are true or false . 5 Optional: The full ARN of the key to use when encrypting the volume. If none is supplied, but encrypted is set to true , then AWS generates a key. See the AWS documentation for a valid ARN value. 6 Optional: File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4 . 8.3.6. Azure Disk object definition azure-advanced-disk-storageclass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 2 allowVolumeExpansion: true parameters: kind: Managed 3 storageaccounttype: Premium_LRS 4 reclaimPolicy: Delete 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Using WaitForFirstConsumer is strongly recommended. This provisions the volume while allowing enough storage to schedule the pod on a free worker node from an available zone. 3 Possible values are Shared (default), Managed , and Dedicated . Important Red Hat only supports the use of kind: Managed in the storage class. With Shared and Dedicated , Azure creates unmanaged disks, while OpenShift Container Platform creates a managed disk for machine OS (root) disks. But because Azure Disk does not allow the use of both managed and unmanaged disks on a node, unmanaged disks created with Shared or Dedicated cannot be attached to OpenShift Container Platform nodes. 4 Azure storage account SKU tier. Default is empty. Note that Premium VMs can attach both Standard_LRS and Premium_LRS disks, Standard VMs can only attach Standard_LRS disks, Managed VMs can only attach managed disks, and unmanaged VMs can only attach unmanaged disks. If kind is set to Shared , Azure creates all unmanaged disks in a few shared storage accounts in the same resource group as the cluster. If kind is set to Managed , Azure creates new managed disks. If kind is set to Dedicated and a storageAccount is specified, Azure uses the specified storage account for the new unmanaged disk in the same resource group as the cluster. For this to work: The specified storage account must be in the same region. Azure Cloud Provider must have write access to the storage account. If kind is set to Dedicated and a storageAccount is not specified, Azure creates a new dedicated storage account for the new unmanaged disk in the same resource group as the cluster. 8.3.7. Azure File object definition The Azure File storage class uses secrets to store the Azure storage account name and the storage account key that are required to create an Azure Files share. These permissions are created as part of the following procedure.
Procedure Define a ClusterRole object that allows access to create and view secrets: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: # name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create'] 1 The name of the cluster role to view and create secrets. Add the cluster role to the service account: USD oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> system:serviceaccount:kube-system:persistent-volume-binder Create the Azure File StorageClass object: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Location of the Azure storage account, such as eastus . Default is empty, meaning that a new Azure storage account will be created in the OpenShift Container Platform cluster's location. 3 SKU tier of the Azure storage account, such as Standard_LRS . Default is empty, meaning that a new Azure storage account will be created with the Standard_LRS SKU. 4 Name of the Azure storage account. If a storage account is provided, then skuName and location are ignored. If no storage account is provided, then the storage class searches for any storage account that is associated with the resource group for any accounts that match the defined skuName and location . 8.3.7.1. Considerations when using Azure File The following file system features are not supported by the default Azure File storage class: Symlinks Hard links Extended attributes Sparse files Named pipes Additionally, the owner user identifier (UID) of the Azure File mounted directory is different from the process UID of the container. The uid mount option can be specified in the StorageClass object to define a specific user identifier to use for the mounted directory. The following StorageClass object demonstrates modifying the user and group identifier, along with enabling symlinks for the mounted directory. kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate 1 Specifies the user identifier to use for the mounted directory. 2 Specifies the group identifier to use for the mounted directory. 3 Enables symlinks. 8.3.8. GCE PersistentDisk (gcePD) object definition gce-pd-storageclass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 2 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Select either pd-standard or pd-ssd . The default is pd-standard . 8.3.9. VMware vSphere object definition vsphere-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: csi.vsphere.vmware.com 2 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 
2 For more information about using VMware vSphere CSI with OpenShift Container Platform, see the Kubernetes documentation . 8.4. Changing the default storage class Use the following procedure to change the default storage class. For example, if you have two defined storage classes, gp3 and standard , and you want to change the default storage class from gp3 to standard . Prerequisites Access to the cluster with cluster-admin privileges. Procedure To change the default storage class: List the storage classes: USD oc get storageclass Example output NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs 1 (default) indicates the default storage class. Make the desired storage class the default. For the desired storage class, set the storageclass.kubernetes.io/is-default-class annotation to true by running the following command: USD oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}' Note You can have multiple default storage classes for a short time. However, you should ensure that only one default storage class exists eventually. With multiple default storage classes present, any persistent volume claim (PVC) requesting the default storage class ( pvc.spec.storageClassName =nil) gets the most recently created default storage class, regardless of the default status of that storage class, and the administrator receives an alert in the alerts dashboard that there are multiple default storage classes, MultipleDefaultStorageClasses . Remove the default storage class setting from the old default storage class. For the old default storage class, change the value of the storageclass.kubernetes.io/is-default-class annotation to false by running the following command: USD oc patch storageclass gp3 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}' Verify the changes: USD oc get storageclass Example output NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs | [
"kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp3",
"storageclass.kubernetes.io/is-default-class: \"true\"",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\"",
"kubernetes.io/description: My Storage Class Description",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/cinder parameters: type: fast 2 availability: nova 3 fsType: ext4 4",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: \"10\" 3 encrypted: \"true\" 4 kmsKeyId: keyvalue 5 fsType: ext4 6",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 2 allowVolumeExpansion: true parameters: kind: Managed 3 storageaccounttype: Premium_LRS 4 reclaimPolicy: Delete",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create']",
"oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> system:serviceaccount:kube-system:persistent-volume-binder",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 2 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: csi.vsphere.vmware.com 2",
"oc get storageclass",
"NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs",
"oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'",
"oc patch storageclass gp3 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'",
"oc get storageclass",
"NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/storage/dynamic-provisioning |
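As a usage sketch that is not part of the original chapter, any of the storage class definitions above can be saved to a file and created with the CLI; the file name gce-pd-storageclass.yaml matches the gcePD example and cluster-admin privileges are assumed.
# Create the storage class from the saved definition, then confirm it is registered.
oc create -f gce-pd-storageclass.yaml
oc get storageclass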
Service Mesh | Service Mesh OpenShift Container Platform 4.9 Service Mesh installation, usage, and release notes Red Hat OpenShift Documentation Team | [
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: cluster-wide namespace: istio-system spec: version: v2.3 techPreview: controlPlaneMode: ClusterScoped 1",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - '*' 1",
"kubectl get crd gateways.gateway.networking.k8s.io || { kubectl kustomize \"github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.4.0\" | kubectl apply -f -; }",
"spec: runtime: components: pilot: container: env: PILOT_ENABLE_GATEWAY_API: \"true\" PILOT_ENABLE_GATEWAY_API_STATUS: \"true\" # and optionally, for the deployment controller PILOT_ENABLE_GATEWAY_API_DEPLOYMENT_CONTROLLER: \"true\"",
"apiVersion: gateway.networking.k8s.io/v1alpha2 kind: Gateway metadata: name: gateway spec: addresses: - value: ingress.istio-gateways.svc.cluster.local type: Hostname",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: trust: manageNetworkPolicy: false",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: techPreview: meshConfig: defaultConfig: proxyMetadata: HTTP_STRIP_FRAGMENT_FROM_PATH_UNSAFE_IF_DISABLED: \"false\"",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: [\"dev\"] to: - operation: hosts: [\"httpbin.com\",\"httpbin.com:*\"]",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: default spec: action: DENY rules: - to: - operation: hosts: [\"httpbin.example.com:*\"]",
"spec: techPreview: global: pathNormalization: <option>",
"oc create -f <myEnvoyFilterFile>",
"apiVersion: networking.istio.io/v1alpha3 kind: EnvoyFilter metadata: name: ingress-case-insensitive namespace: istio-system spec: configPatches: - applyTo: HTTP_FILTER match: context: GATEWAY listener: filterChain: filter: name: \"envoy.filters.network.http_connection_manager\" subFilter: name: \"envoy.filters.http.router\" patch: operation: INSERT_BEFORE value: name: envoy.lua typed_config: \"@type\": \"type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua\" inlineCode: | function envoy_on_request(request_handle) local path = request_handle:headers():get(\":path\") request_handle:headers():replace(\":path\", string.lower(path)) end",
"apiVersion: networking.istio.io/v1beta1 kind: ProxyConfig metadata: name: mesh-wide-concurrency namespace: <istiod-namespace> spec: concurrency: 0",
"api: namespaces: exclude: - \"^istio-operator\" - \"^kube-.*\" - \"^openshift.*\" - \"^ibm.*\" - \"^kiali-operator\"",
"spec: proxy: networking: trafficControl: inbound: excludedPorts: - 15020",
"{\"level\":\"warn\",\"ts\":1642438880.918793,\"caller\":\"channelz/logging.go:62\",\"msg\":\"[core]grpc: Server.Serve failed to create ServerTransport: connection error: desc = \\\"transport: http2Server.HandleStreams received bogus greeting from client: \\\\\\\"\\\\\\\\x16\\\\\\\\x03\\\\\\\\x01\\\\\\\\x02\\\\\\\\x00\\\\\\\\x01\\\\\\\\x00\\\\\\\\x01\\\\\\\\xfc\\\\\\\\x03\\\\\\\\x03vw\\\\\\\\x1a\\\\\\\\xc9T\\\\\\\\xe7\\\\\\\\xdaCj\\\\\\\\xb7\\\\\\\\x8dK\\\\\\\\xa6\\\\\\\"\\\"\",\"system\":\"grpc\",\"grpc_log\":true}",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin-usernamepolicy spec: action: ALLOW rules: - when: - key: 'request.regex.headers[username]' values: - \"allowed.*\" selector: matchLabels: app: httpbin",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc new-project istio-system",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.3 tracing: type: Jaeger sampling: 10000 addons: jaeger: name: jaeger install: storage: type: Memory kiali: enabled: true name: kiali grafana: enabled: true",
"oc create -n istio-system -f <istio_installation.yaml>",
"oc get pods -n istio-system -w",
"NAME READY STATUS RESTARTS AGE grafana-b4d59bd7-mrgbr 2/2 Running 0 65m istio-egressgateway-678dc97b4c-wrjkp 1/1 Running 0 108s istio-ingressgateway-b45c9d54d-4qg6n 1/1 Running 0 108s istiod-basic-55d78bbbcd-j5556 1/1 Running 0 108s jaeger-67c75bd6dc-jv6k6 2/2 Running 0 65m kiali-6476c7656c-x5msp 1/1 Running 0 43m prometheus-58954b8d6b-m5std 2/2 Running 0 66m",
"oc login https://<HOSTNAME>:6443",
"oc get smcp -n istio-system",
"NAME READY STATUS PROFILES VERSION AGE basic 10/10 ComponentsReady [\"default\"] 2.1.1 66m",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.3 security: identity: type: ThirdParty #required setting for ROSA tracing: type: Jaeger sampling: 10000 policy: type: Istiod addons: grafana: enabled: true jaeger: install: storage: type: Memory kiali: enabled: true prometheus: enabled: true telemetry: type: Istiod",
"apiVersion: kiali.io/v1alpha1 kind: Kiali metadata: name: kiali namespace: istio-system spec: auth: strategy: openshift deployment: accessible_namespaces: #restricted setting for ROSA - istio-system image_pull_policy: '' ingress_enabled: true namespace: istio-system",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc new-project <your-project>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name",
"oc create -n istio-system -f servicemeshmemberroll-default.yaml",
"oc get smmr -n istio-system default",
"oc edit smmr -n <controlplane-namespace>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name",
"oc new-project bookinfo",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - bookinfo",
"oc create -n istio-system -f servicemeshmemberroll-default.yaml",
"oc get smmr -n istio-system -o wide",
"NAME READY STATUS AGE MEMBERS default 1/1 Configured 70s [\"bookinfo\"]",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.3/samples/bookinfo/platform/kube/bookinfo.yaml",
"service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.3/samples/bookinfo/networking/bookinfo-gateway.yaml",
"gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.3/samples/bookinfo/networking/destination-rule-all.yaml",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.3/samples/bookinfo/networking/destination-rule-all-mtls.yaml",
"destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created",
"oc get pods -n bookinfo",
"NAME READY STATUS RESTARTS AGE details-v1-55b869668-jh7hb 2/2 Running 0 12m productpage-v1-6fc77ff794-nsl8r 2/2 Running 0 12m ratings-v1-7d7d8d8b56-55scn 2/2 Running 0 12m reviews-v1-868597db96-bdxgq 2/2 Running 0 12m reviews-v2-5b64f47978-cvssp 2/2 Running 0 12m reviews-v3-6dfd49b55b-vcwpf 2/2 Running 0 12m",
"echo \"http://USDGATEWAY_URL/productpage\"",
"oc delete project bookinfo",
"oc -n istio-system patch --type='json' smmr default -p '[{\"op\": \"remove\", \"path\": \"/spec/members\", \"value\":[\"'\"bookinfo\"'\"]}]'",
"oc get deployment -n <namespace>",
"get deployment -n bookinfo ratings-v1 -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: annotations: sidecar.istio.io/inject: 'true'",
"oc apply -n <namespace> -f deployment.yaml",
"oc apply -n bookinfo -f deployment-ratings-v1.yaml",
"oc get deployment -n <namespace> <deploymentName> -o yaml",
"oc get deployment -n bookinfo ratings-v1 -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: resource spec: replicas: 7 selector: matchLabels: app: resource template: metadata: annotations: sidecar.maistra.io/proxyEnv: \"{ \\\"maistra_test_env\\\": \\\"env_value\\\", \\\"maistra_test_env_2\\\": \\\"env_value_2\\\" }\"",
"oc patch deployment/<deployment> -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\": \"'`date -Iseconds`'\"}}}}}'",
"An error occurred admission webhook smcp.validation.maistra.io denied the request: [support for policy.type \"Mixer\" and policy.Mixer options have been removed in v2.1, please use another alternative, support for telemetry.type \"Mixer\" and telemetry.Mixer options have been removed in v2.1, please use another alternative]\"",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: policy: type: Istiod telemetry: type: Istiod version: v2.3",
"oc project istio-system",
"oc get smcp -o yaml",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3",
"oc get smcp -o yaml",
"oc get smcp.v1.maistra.io <smcp_name> > smcp-resource.yaml #Edit the smcp-resource.yaml file. oc replace -f smcp-resource.yaml",
"oc patch smcp.v1.maistra.io <smcp_name> --type json --patch '[{\"op\": \"replace\",\"path\":\"/spec/path/to/bad/setting\",\"value\":\"corrected-value\"}]'",
"oc edit smcp.v1.maistra.io <smcp_name>",
"oc project istio-system",
"oc get servicemeshcontrolplanes.v1.maistra.io <smcp_name> -o yaml > <smcp_name>.v1.yaml",
"oc get smcp <smcp_name> -o yaml > <smcp_name>.v2.yaml",
"oc new-project istio-system-upgrade",
"oc create -n istio-system-upgrade -f <smcp_name>.v2.yaml",
"spec: policy: type: Mixer",
"spec: telemetry: type: Mixer",
"apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: productpage-mTLS-disable namespace: <namespace> spec: targets: - name: productpage",
"apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: productpage-mTLS-disable namespace: <namespace> spec: mtls: mode: DISABLE selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage",
"apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: targets: - name: productpage ports: - number: 9000 peers: - mtls: origins: - jwt: issuer: \"https://securetoken.google.com\" audiences: - \"productpage\" jwksUri: \"https://www.googleapis.com/oauth2/v1/certs\" jwtHeaders: - \"x-goog-iap-jwt-assertion\" triggerRules: - excludedPaths: - exact: /health_check principalBinding: USE_ORIGIN",
"#require mtls for productpage:9000 apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage portLevelMtls: 9000: mode: STRICT --- #JWT authentication for productpage apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage jwtRules: - issuer: \"https://securetoken.google.com\" audiences: - \"productpage\" jwksUri: \"https://www.googleapis.com/oauth2/v1/certs\" fromHeaders: - name: \"x-goog-iap-jwt-assertion\" --- #Require JWT token to access product page service from #any client to all paths except /health_check apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: action: ALLOW selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage rules: - to: # require JWT token to access all other paths - operation: notPaths: - /health_check from: - source: # if using principalBinding: USE_PEER in the Policy, # then use principals, e.g. # principals: # - \"*\" requestPrincipals: - \"*\" - to: # no JWT token required to access health_check - operation: paths: - /health_check",
"spec: tracing: sampling: 100 # 1% type: Jaeger",
"spec: addons: jaeger: name: jaeger install: storage: type: Memory # or Elasticsearch for production mode memory: maxTraces: 100000 elasticsearch: # the following values only apply if storage:type:=Elasticsearch storage: # specific storageclass configuration for the Jaeger Elasticsearch (optional) size: \"100G\" storageClassName: \"storageclass\" nodeCount: 3 redundancyPolicy: SingleRedundancy runtime: components: tracing.jaeger: {} # general Jaeger specific runtime configuration (optional) tracing.jaeger.elasticsearch: #runtime configuration for Jaeger Elasticsearch deployment (optional) container: resources: requests: memory: \"1Gi\" cpu: \"500m\" limits: memory: \"1Gi\"",
"spec: addons: grafana: enabled: true install: {} # customize install kiali: enabled: true name: kiali install: {} # customize install",
"oc rollout restart <deployment>",
"oc policy add-role-to-user -n istio-system --role-namespace istio-system mesh-user <user_name>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMember metadata: name: default spec: controlPlaneRef: namespace: istio-system name: basic",
"oc policy add-role-to-user",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: namespace: istio-system name: mesh-users roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: mesh-user subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice",
"oc create configmap --from-file=<profiles-directory> smcp-templates -n openshift-operators",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: profiles: - default",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: version: v2.3 security: dataPlane: mtls: true",
"apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: default namespace: <namespace> spec: mtls: mode: STRICT",
"oc create -n <namespace> -f <policy.yaml>",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: default namespace: <namespace> spec: host: \"*.<namespace>.svc.cluster.local\" trafficPolicy: tls: mode: ISTIO_MUTUAL",
"oc create -n <namespace> -f <destination-rule.yaml>",
"kind: ServiceMeshControlPlane spec: security: controlPlane: tls: minProtocolVersion: TLSv1_2",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: ingress-policy namespace: istio-system spec: selector: matchLabels: app: istio-ingressgateway action: DENY rules: - from: - source: ipBlocks: [\"1.2.3.4\"]",
"oc create -n istio-system -f <filename>",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin-deny namespace: bookinfo spec: selector: matchLabels: app: httpbin version: v1 action: DENY rules: - from: - source: notNamespaces: [\"bookinfo\"]",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: allow-all namespace: bookinfo spec: action: ALLOW rules: - {}",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: deny-all namespace: bookinfo spec: {}",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: ingress-policy namespace: istio-system spec: selector: matchLabels: app: istio-ingressgateway action: ALLOW rules: - from: - source: ipBlocks: [\"1.2.3.4\", \"5.6.7.0/24\"]",
"apiVersion: \"security.istio.io/v1beta1\" kind: \"RequestAuthentication\" metadata: name: \"jwt-example\" namespace: bookinfo spec: selector: matchLabels: app: httpbin jwtRules: - issuer: \"http://localhost:8080/auth/realms/master\" jwksUri: \"http://keycloak.default.svc:8080/auth/realms/master/protocol/openid-connect/certs\"",
"apiVersion: \"security.istio.io/v1beta1\" kind: \"AuthorizationPolicy\" metadata: name: \"frontend-ingress\" namespace: bookinfo spec: selector: matchLabels: app: httpbin action: DENY rules: - from: - source: notRequestPrincipals: [\"*\"]",
"oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem --from-file=<path>/cert-chain.pem",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: dataPlane: mtls: true certificateAuthority: type: Istiod istiod: type: PrivateKey privateKey: rootCADir: /etc/cacerts",
"oc -n istio-system delete pods -l 'app in (istiod,istio-ingressgateway, istio-egressgateway)'",
"oc -n bookinfo delete pods --all",
"pod \"details-v1-6cd699df8c-j54nh\" deleted pod \"productpage-v1-5ddcb4b84f-mtmf2\" deleted pod \"ratings-v1-bdbcc68bc-kmng4\" deleted pod \"reviews-v1-754ddd7b6f-lqhsv\" deleted pod \"reviews-v2-675679877f-q67r2\" deleted pod \"reviews-v3-79d7549c7-c2gjs\" deleted",
"oc get pods -n bookinfo",
"sleep 60 oc -n bookinfo exec \"USD(oc -n bookinfo get pod -l app=productpage -o jsonpath={.items..metadata.name})\" -c istio-proxy -- openssl s_client -showcerts -connect details:9080 > bookinfo-proxy-cert.txt sed -n '/-----BEGIN CERTIFICATE-----/{:start /-----END CERTIFICATE-----/!{N;b start};/.*/p}' bookinfo-proxy-cert.txt > certs.pem awk 'BEGIN {counter=0;} /BEGIN CERT/{counter++} { print > \"proxy-cert-\" counter \".pem\"}' < certs.pem",
"openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt",
"openssl x509 -in ./proxy-cert-3.pem -text -noout > /tmp/pod-root-cert.crt.txt",
"diff -s /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt",
"openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt",
"openssl x509 -in ./proxy-cert-2.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt",
"diff -s /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt",
"openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) ./proxy-cert-1.pem",
"oc delete secret cacerts -n istio-system",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: dataPlane: mtls: true",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ext-host-gwy spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 443 name: https protocol: HTTPS hosts: - ext-host.example.com tls: mode: SIMPLE serverCertificate: /tmp/tls.crt privateKey: /tmp/tls.key",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-svc spec: hosts: - ext-host.example.com gateways: - ext-host-gwy",
"apiVersion: v1 kind: Service metadata: name: istio-ingressgateway namespace: istio-ingress spec: type: ClusterIP selector: istio: ingressgateway ports: - name: http port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 --- apiVersion: apps/v1 kind: Deployment metadata: name: istio-ingressgateway namespace: istio-ingress spec: selector: matchLabels: istio: ingressgateway template: metadata: annotations: inject.istio.io/templates: gateway labels: istio: ingressgateway sidecar.istio.io/inject: \"true\" 1 spec: containers: - name: istio-proxy image: auto 2",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: istio-ingressgateway-sds namespace: istio-ingress rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: istio-ingressgateway-sds namespace: istio-ingress roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: istio-ingressgateway-sds subjects: - kind: ServiceAccount name: default",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: gatewayingress namespace: istio-ingress spec: podSelector: matchLabels: istio: ingressgateway ingress: - {} policyTypes: - Ingress",
"apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: labels: istio: ingressgateway release: istio name: ingressgatewayhpa namespace: istio-ingress spec: maxReplicas: 5 metrics: - resource: name: cpu target: averageUtilization: 80 type: Utilization type: Resource minReplicas: 2 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: istio-ingressgateway",
"apiVersion: policy/v1 kind: PodDisruptionBudget metadata: labels: istio: ingressgateway release: istio name: ingressgatewaypdb namespace: istio-ingress spec: minAvailable: 1 selector: matchLabels: istio: ingressgateway",
"oc get svc istio-ingressgateway -n istio-system",
"export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')",
"export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].port}')",
"export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].port}')",
"export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].port}')",
"export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')",
"export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].nodePort}')",
"export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].nodePort}')",
"export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].nodePort}')",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - \"*\"",
"oc apply -f gateway.yaml",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - \"*\" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080",
"oc apply -f vs.yaml",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"export TARGET_PORT=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}')",
"curl -s -I \"USDGATEWAY_URL/productpage\"",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway1 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.bookinfo.com - bookinfo.example.com",
"oc -n istio-system get routes",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD gateway1-lvlfn bookinfo.example.com istio-ingressgateway <all> None gateway1-scqhv www.bookinfo.com istio-ingressgateway <all> None",
"apiVersion: maistra.io/v1alpha1 kind: ServiceMeshControlPlane metadata: namespace: istio-system spec: gateways: openshiftRoute: enabled: false",
"apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: svc-entry spec: hosts: - ext-svc.example.com ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - match: - headers: end-user: exact: jason route: - destination: host: reviews subset: v2 - route: - destination: host: reviews subset: v3",
"oc apply -f <VirtualService.yaml>",
"spec: hosts:",
"spec: http: - match:",
"spec: http: - match: - destination:",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: manageNetworkPolicy: false",
"apiVersion: networking.istio.io/v1alpha3 kind: Sidecar metadata: name: default namespace: bookinfo spec: egress: - hosts: - \"./*\" - \"istio-system/*\"",
"oc apply -f sidecar.yaml",
"oc get sidecar",
"oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.3/samples/bookinfo/networking/virtual-service-all-v1.yaml",
"oc get virtualservices -o yaml",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"echo \"http://USDGATEWAY_URL/productpage\"",
"oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.3/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml",
"oc get virtualservice reviews -o yaml",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc project istio-system",
"oc get routes",
"NAME HOST/PORT SERVICES PORT TERMINATION bookinfo-gateway bookinfo-gateway-yourcompany.com istio-ingressgateway http2 grafana grafana-yourcompany.com grafana <all> reencrypt/Redirect istio-ingressgateway istio-ingress-yourcompany.com istio-ingressgateway 8080 jaeger jaeger-yourcompany.com jaeger-query <all> reencrypt kiali kiali-yourcompany.com kiali 20001 reencrypt/Redirect prometheus prometheus-yourcompany.com prometheus <all> reencrypt/Redirect",
"curl \"http://USDGATEWAY_URL/productpage\"",
"spec: addons: jaeger: name: distr-tracing-production",
"spec: tracing: sampling: 100",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"export JAEGER_URL=USD(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}')",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.3 proxy: runtime: container: resources: requests: cpu: 600m memory: 50Mi limits: {} runtime: components: pilot: container: resources: requests: cpu: 1000m memory: 1.6Gi limits: {}",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3 tracing: sampling: 100 type: Jaeger addons: jaeger: name: MyJaeger install: storage: type: Elasticsearch ingress: enabled: true runtime: components: tracing.jaeger.elasticsearch: # only supports resources and image name container: resources: {}",
"oc get smcp basic -o yaml",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: red-mesh namespace: red-mesh-system spec: version: v2.3 runtime: defaults: container: imagePullPolicy: Always gateways: additionalEgress: egress-green-mesh: enabled: true requestedNetworkView: - green-network routerMode: sni-dnat service: metadata: labels: federation.maistra.io/egress-for: egress-green-mesh ports: - port: 15443 name: tls - port: 8188 name: http-discovery #note HTTP here egress-blue-mesh: enabled: true requestedNetworkView: - blue-network routerMode: sni-dnat service: metadata: labels: federation.maistra.io/egress-for: egress-blue-mesh ports: - port: 15443 name: tls - port: 8188 name: http-discovery #note HTTP here additionalIngress: ingress-green-mesh: enabled: true routerMode: sni-dnat service: type: LoadBalancer metadata: labels: federation.maistra.io/ingress-for: ingress-green-mesh ports: - port: 15443 name: tls - port: 8188 name: https-discovery #note HTTPS here ingress-blue-mesh: enabled: true routerMode: sni-dnat service: type: LoadBalancer metadata: labels: federation.maistra.io/ingress-for: ingress-blue-mesh ports: - port: 15443 name: tls - port: 8188 name: https-discovery #note HTTPS here security: trust: domain: red-mesh.local",
"spec: cluster: name:",
"spec: cluster: network:",
"spec: gateways: additionalEgress: <egressName>:",
"spec: gateways: additionalEgress: <egressName>: enabled:",
"spec: gateways: additionalEgress: <egressName>: requestedNetworkView:",
"spec: gateways: additionalEgress: <egressName>: routerMode:",
"spec: gateways: additionalEgress: <egressName>: service: metadata: labels: federation.maistra.io/egress-for:",
"spec: gateways: additionalEgress: <egressName>: service: ports:",
"spec: gateways: additionalIngress:",
"spec: gateways: additionalIgress: <ingressName>: enabled:",
"spec: gateways: additionalIngress: <ingressName>: routerMode:",
"spec: gateways: additionalIngress: <ingressName>: service: type:",
"spec: gateways: additionalIngress: <ingressName>: service: type:",
"spec: gateways: additionalIngress: <ingressName>: service: metadata: labels: federation.maistra.io/ingress-for:",
"spec: gateways: additionalIngress: <ingressName>: service: ports:",
"spec: gateways: additionalIngress: <ingressName>: service: ports: nodePort:",
"gateways: additionalIngress: ingress-green-mesh: enabled: true routerMode: sni-dnat service: type: NodePort metadata: labels: federation.maistra.io/ingress-for: ingress-green-mesh ports: - port: 15443 nodePort: 30510 name: tls - port: 8188 nodePort: 32359 name: https-discovery",
"kind: ServiceMeshControlPlane metadata: name: red-mesh namespace: red-mesh-system spec: security: trust: domain: red-mesh.local",
"spec: security: trust: domain:",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc project red-mesh-system",
"oc edit -n red-mesh-system smcp red-mesh",
"oc get smcp -n red-mesh-system",
"NAME READY STATUS PROFILES VERSION AGE red-mesh 10/10 ComponentsReady [\"default\"] 2.1.0 4m25s",
"kind: ServiceMeshPeer apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: remote: addresses: - ingress-red-mesh.green-mesh-system.apps.domain.com gateways: ingress: name: ingress-green-mesh egress: name: egress-green-mesh security: trustDomain: green-mesh.local clientID: green-mesh.local/ns/green-mesh-system/sa/egress-red-mesh-service-account certificateChain: kind: ConfigMap name: green-mesh-ca-root-cert",
"metadata: name:",
"metadata: namespace:",
"spec: remote: addresses:",
"spec: remote: discoveryPort:",
"spec: remote: servicePort:",
"spec: gateways: ingress: name:",
"spec: gateways: egress: name:",
"spec: security: trustDomain:",
"spec: security: clientID:",
"spec: security: certificateChain: kind: ConfigMap name:",
"oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443",
"oc project red-mesh-system",
"kind: ServiceMeshPeer apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: remote: addresses: - ingress-red-mesh.green-mesh-system.apps.domain.com gateways: ingress: name: ingress-green-mesh egress: name: egress-green-mesh security: trustDomain: green-mesh.local clientID: green-mesh.local/ns/green-mesh-system/sa/egress-red-mesh-service-account certificateChain: kind: ConfigMap name: green-mesh-ca-root-cert",
"oc create -n red-mesh-system -f servicemeshpeer.yaml",
"oc -n red-mesh-system get servicemeshpeer green-mesh -o yaml",
"status: discoveryStatus: active: - pod: istiod-red-mesh-b65457658-9wq5j remotes: - connected: true lastConnected: \"2021-10-05T13:02:25Z\" lastFullSync: \"2021-10-05T13:02:25Z\" source: 10.128.2.149 watch: connected: true lastConnected: \"2021-10-05T13:02:55Z\" lastDisconnectStatus: 503 Service Unavailable lastFullSync: \"2021-10-05T13:05:43Z\"",
"kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: # export ratings.mesh-x-bookinfo as ratings.bookinfo - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: red-ratings alias: namespace: bookinfo name: ratings # export any service in red-mesh-bookinfo namespace with label export-service=true - type: LabelSelector labelSelector: namespace: red-mesh-bookinfo selector: matchLabels: export-service: \"true\" aliases: # export all matching services as if they were in the bookinfo namespace - namespace: \"*\" name: \"*\" alias: namespace: bookinfo",
"metadata: name:",
"metadata: namespace:",
"spec: exportRules: - type:",
"spec: exportRules: - type: NameSelector nameSelector: namespace: name:",
"spec: exportRules: - type: NameSelector nameSelector: alias: namespace: name:",
"spec: exportRules: - type: LabelSelector labelSelector: namespace: <exportingMesh> selector: matchLabels: <labelKey>: <labelValue>",
"spec: exportRules: - type: LabelSelector labelSelector: namespace: <exportingMesh> selector: matchLabels: <labelKey>: <labelValue> aliases: - namespace: name: alias: namespace: name:",
"kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: blue-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: \"*\" name: ratings",
"kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: west-data-center name: \"*\"",
"oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443",
"oc project red-mesh-system",
"apiVersion: federation.maistra.io/v1 kind: ExportedServiceSet metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: ratings alias: namespace: bookinfo name: red-ratings - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: reviews",
"oc create -n <ControlPlaneNamespace> -f <ExportedServiceSet.yaml>",
"oc create -n red-mesh-system -f export-to-green-mesh.yaml",
"oc get exportedserviceset <PeerMeshExportedTo> -o yaml",
"oc get exportedserviceset green-mesh -o yaml",
"oc get exportedserviceset <PeerMeshExportedTo> -o yaml",
"oc -n red-mesh-system get exportedserviceset green-mesh -o yaml",
"status: exportedServices: - exportedName: red-ratings.bookinfo.svc.green-mesh-exports.local localService: hostname: ratings.red-mesh-bookinfo.svc.cluster.local name: ratings namespace: red-mesh-bookinfo - exportedName: reviews.red-mesh-bookinfo.svc.green-mesh-exports.local localService: hostname: reviews.red-mesh-bookinfo.svc.cluster.local name: reviews namespace: red-mesh-bookinfo",
"kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh #name of mesh that exported the service namespace: green-mesh-system #mesh namespace that service is being imported into spec: importRules: # first matching rule is used # import ratings.bookinfo as ratings.bookinfo - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: ratings alias: # service will be imported as ratings.bookinfo.svc.red-mesh-imports.local namespace: bookinfo name: ratings",
"metadata: name:",
"metadata: namespace:",
"spec: importRules: - type:",
"spec: importRules: - type: NameSelector nameSelector: namespace: name:",
"spec: importRules: - type: NameSelector importAsLocal:",
"spec: importRules: - type: NameSelector nameSelector: namespace: name: alias: namespace: name:",
"kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: blue-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: ratings",
"kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: green-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: west-data-center name: \"*\"",
"oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443",
"oc project green-mesh-system",
"kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: green-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: red-ratings alias: namespace: bookinfo name: ratings",
"oc create -n <ControlPlaneNamespace> -f <ImportedServiceSet.yaml>",
"oc create -n green-mesh-system -f import-from-red-mesh.yaml",
"oc get importedserviceset <PeerMeshImportedInto> -o yaml",
"oc get importedserviceset green-mesh -o yaml",
"oc get importedserviceset <PeerMeshImportedInto> -o yaml",
"oc -n green-mesh-system get importedserviceset/red-mesh -o yaml",
"status: importedServices: - exportedName: red-ratings.bookinfo.svc.green-mesh-exports.local localService: hostname: ratings.bookinfo.svc.red-mesh-imports.local name: ratings namespace: bookinfo - exportedName: reviews.red-mesh-bookinfo.svc.green-mesh-exports.local localService: hostname: \"\" name: \"\" namespace: \"\"",
"kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh #name of mesh that exported the service namespace: green-mesh-system #mesh namespace that service is being imported into spec: importRules: # first matching rule is used # import ratings.bookinfo as ratings.bookinfo - type: NameSelector importAsLocal: true nameSelector: namespace: bookinfo name: ratings alias: # service will be imported as ratings.bookinfo.svc.red-mesh-imports.local namespace: bookinfo name: ratings #Locality within which imported services should be associated. locality: region: us-west",
"oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443",
"oc project <smcp-system>",
"oc project green-mesh-system",
"oc edit -n <smcp-system> -f <ImportedServiceSet.yaml>",
"oc edit -n green-mesh-system -f import-from-red-mesh.yaml",
"oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443",
"oc project <smcp-system>",
"oc project green-mesh-system",
"apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: default-failover namespace: bookinfo spec: host: \"ratings.bookinfo.svc.cluster.local\" trafficPolicy: loadBalancer: localityLbSetting: enabled: true failover: - from: us-east to: us-west outlierDetection: consecutive5xxErrors: 3 interval: 10s baseEjectionTime: 1m",
"oc create -n <application namespace> -f <DestinationRule.yaml>",
"oc create -n bookinfo -f green-mesh-us-west-DestinationRule.yaml",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-ingress spec: selector: matchLabels: istio: ingressgateway url: file:///opt/filters/openid.wasm sha256: 1ef0c9a92b0420cf25f7fe5d481b231464bc88f486ca3b9c83ed5cc21d2f6210 phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-system spec: selector: matchLabels: istio: ingressgateway url: oci://private-registry:5000/openid-connect/openid:latest imagePullPolicy: IfNotPresent imagePullSecret: private-registry-pull-secret phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-system spec: selector: matchLabels: istio: ingressgateway url: oci://private-registry:5000/openid-connect/openid:latest imagePullPolicy: IfNotPresent imagePullSecret: private-registry-pull-secret phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress",
"oc apply -f plugin.yaml",
"schemaVersion: 1 name: <your-extension> description: <description> version: 1.0.0 phase: PreAuthZ priority: 100 module: extension.wasm",
"apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: header-append namespace: istio-system spec: workloadSelector: labels: app: httpbin config: first-header: some-value another-header: another-value image: quay.io/maistra-dev/header-append-filter:2.1 phase: PostAuthZ priority: 100",
"oc apply -f <extension>.yaml",
"apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: header-append namespace: istio-system spec: workloadSelector: labels: app: httpbin config: first-header: some-value another-header: another-value image: quay.io/maistra-dev/header-append-filter:2.2 phase: PostAuthZ priority: 100",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: header-append namespace: istio-system spec: selector: matchLabels: app: httpbin url: oci://quay.io/maistra-dev/header-append-filter:2.2 phase: STATS pluginConfig: first-header: some-value another-header: another-value",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> namespace: <bookinfo> 1 spec: selector: 2 labels: app: <product_page> pluginConfig: <yaml_configuration> url: oci://registry.redhat.io/3scale-amp2/3scale-auth-wasm-rhel8:0.0.3 phase: AUTHZ priority: 100",
"oc apply -f threescale-wasm-auth-bookinfo.yaml",
"apiVersion: networking.istio.io/v1beta1 kind: ServiceEntry metadata: name: service-entry-threescale-saas-backend spec: hosts: - su1.3scale.net ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS",
"apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: destination-rule-threescale-saas-backend spec: host: su1.3scale.net trafficPolicy: tls: mode: SIMPLE sni: su1.3scale.net",
"oc apply -f service-entry-threescale-saas-backend.yml",
"oc apply -f destination-rule-threescale-saas-backend.yml",
"apiVersion: networking.istio.io/v1beta1 kind: ServiceEntry metadata: name: service-entry-threescale-saas-system spec: hosts: - multitenant.3scale.net ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS",
"apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: destination-rule-threescale-saas-system spec: host: multitenant.3scale.net trafficPolicy: tls: mode: SIMPLE sni: multitenant.3scale.net",
"oc apply -f service-entry-threescale-saas-system.yml",
"oc apply -f <destination-rule-threescale-saas-system.yml>",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> namespace: <bookinfo> spec: pluginConfig: api: v1",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: system: name: <saas_porta> upstream: <object> token: <my_account_token> ttl: 300",
"apiVersion: maistra.io/v1 upstream: name: outbound|443||multitenant.3scale.net url: \"https://myaccount-admin.3scale.net/\" timeout: 5000",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: backend: name: backend upstream: <object>",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: services: - id: \"2555417834789\" token: service_token authorities: - \"*.app\" - 0.0.0.0 - \"0.0.0.0:8443\" credentials: <object> mapping_rules: <object>",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: services: - credentials: user_key: <array_of_lookup_queries> app_id: <array_of_lookup_queries> app_key: <array_of_lookup_queries>",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: services: - credentials: user_key: - <source_type>: <object> - <source_type>: <object> app_id: - <source_type>: <object> app_key: - <source_type>: <object>",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: mapping_rules: - method: GET pattern: / usages: - name: hits delta: 1 - method: GET pattern: /products/ usages: - name: products delta: 1 - method: ANY pattern: /products/{id}/sold usages: - name: sales delta: 1 - name: products delta: 1",
"credentials: user_key: - query_string: keys: - user_key - header: keys: - user_key",
"credentials: app_id: - header: keys: - app_id - query_string: keys: - app_id app_key: - header: keys: - app_key - query_string: keys: - app_key",
"aladdin:opensesame: Authorization: Basic YWxhZGRpbjpvcGVuc2VzYW1l",
"credentials: app_id: - header: keys: - authorization ops: - split: separator: \" \" max: 2 - length: min: 2 - drop: head: 1 - base64_urlsafe - split: max: 2 app_key: - header: keys: - app_key",
"credentials: app_id: - header: keys: - authorization ops: - split: separator: \" \" max: 2 - length: min: 2 - reverse - glob: - Basic - drop: tail: 1 - base64_urlsafe - split: max: 2 - test: if: length: min: 2 then: - strlen: max: 63 - or: - strlen: min: 1 - drop: tail: 1 - assert: - and: - reverse - or: - strlen: min: 8 - glob: - aladdin - admin",
"apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs",
"credentials: app_id: - filter: path: - envoy.filters.http.jwt_authn - \"0\" keys: - azp - aud ops: - take: head: 1",
"credentials: app_id: - header: keys: - x-jwt-payload ops: - base64_urlsafe - json: - keys: - azp - aud - take: head: 1",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: url: oci://registry.redhat.io/3scale-amp2/3scale-auth-wasm-rhel8:0.0.3 imagePullSecret: <optional_pull_secret_resource> phase: AUTHZ priority: 100 selector: labels: app: <product_page> pluginConfig: api: v1 system: name: <system_name> upstream: name: outbound|443||multitenant.3scale.net url: https://istiodevel-admin.3scale.net/ timeout: 5000 token: <token> backend: name: <backend_name> upstream: name: outbound|443||su1.3scale.net url: https://su1.3scale.net/ timeout: 5000 extensions: - no_body services: - id: '2555417834780' authorities: - \"*\" credentials: user_key: - query_string: keys: - <user_key> - header: keys: - <user_key> app_id: - query_string: keys: - <app_id> - header: keys: - <app_id> app_key: - query_string: keys: - <app_key> - header: keys: - <app_key>",
"apiVersion: \"config.istio.io/v1alpha2\" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: \"https://<organization>-admin.3scale.net/\" access_token: \"<ACCESS_TOKEN>\" connection: address: \"threescale-istio-adapter:3333\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: rule metadata: name: threescale spec: match: destination.labels[\"service-mesh.3scale.net\"] == \"true\" actions: - handler: threescale.handler instances: - threescale-authorization.instance",
"3scale-config-gen --name=admin-credentials --url=\"https://<organization>-admin.3scale.net:443\" --token=\"[redacted]\"",
"3scale-config-gen --url=\"https://<organization>-admin.3scale.net\" --name=\"my-unique-id\" --service=\"123456789\" --token=\"[redacted]\"",
"export NS=\"istio-system\" URL=\"https://replaceme-admin.3scale.net:443\" NAME=\"name\" TOKEN=\"token\" exec -n USD{NS} USD(oc get po -n USD{NS} -o jsonpath='{.items[?(@.metadata.labels.app==\"3scale-istio-adapter\")].metadata.name}') -it -- ./3scale-config-gen --url USD{URL} --name USD{NAME} --token USD{TOKEN} -n USD{NS}",
"export CREDENTIALS_NAME=\"replace-me\" export SERVICE_ID=\"replace-me\" export DEPLOYMENT=\"replace-me\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" --template='{\"spec\":{\"template\":{\"metadata\":{\"labels\":{ {{ range USDk,USDv := .spec.template.metadata.labels }}\"{{ USDk }}\":\"{{ USDv }}\",{{ end }}\"service-mesh.3scale.net/service-id\":\"'\"USD{SERVICE_ID}\"'\",\"service-mesh.3scale.net/credentials\":\"'\"USD{CREDENTIALS_NAME}\"'\"}}}}}' )\" patch deployment \"USD{DEPLOYMENT}\" --patch ''\"USD{patch}\"''",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"",
"apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | properties: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"",
"oc get pods -n <istio-system>",
"oc logs <istio-system>",
"oc get pods -n openshift-operators",
"NAME READY STATUS RESTARTS AGE istio-operator-bb49787db-zgr87 1/1 Running 0 15s jaeger-operator-7d5c4f57d8-9xphf 1/1 Running 0 2m42s kiali-operator-f9c8d84f4-7xh2v 1/1 Running 0 64s",
"oc get pods -n openshift-operators-redhat",
"NAME READY STATUS RESTARTS AGE elasticsearch-operator-d4f59b968-796vq 1/1 Running 0 15s",
"oc logs -n openshift-operators <podName>",
"oc logs -n openshift-operators istio-operator-bb49787db-zgr87",
"oc get pods -n istio-system",
"NAME READY STATUS RESTARTS AGE grafana-6776785cfc-6fz7t 2/2 Running 0 102s istio-egressgateway-5f49dd99-l9ppq 1/1 Running 0 103s istio-ingressgateway-6dc885c48-jjd8r 1/1 Running 0 103s istiod-basic-6c9cc55998-wg4zq 1/1 Running 0 2m14s jaeger-6865d5d8bf-zrfss 2/2 Running 0 100s kiali-579799fbb7-8mwc8 1/1 Running 0 46s prometheus-5c579dfb-6qhjk 2/2 Running 0 115s",
"oc get smcp -n <istio-system>",
"NAME READY STATUS PROFILES VERSION AGE basic 10/10 ComponentsReady [\"default\"] 2.1.3 4m2s",
"NAME READY STATUS TEMPLATE VERSION AGE basic-install 10/10 UpdateSuccessful default v1.1 3d16h",
"oc describe smcp <smcp-name> -n <controlplane-namespace>",
"oc describe smcp basic -n istio-system",
"oc get jaeger -n <istio-system>",
"NAME STATUS VERSION STRATEGY STORAGE AGE jaeger Running 1.30.0 allinone memory 15m",
"oc get kiali -n <istio-system>",
"NAME AGE kiali 15m",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"export JAEGER_URL=USD(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}')",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc project istio-system",
"oc edit smcp <smcp_name>",
"spec: proxy: accessLogging: file: name: /dev/stdout #file name",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.0",
"oc adm must-gather -- /usr/bin/gather_audit_logs",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s",
"oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.3",
"oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.3 gather <namespace>",
"oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3 proxy: runtime: container: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi tracing: type: Jaeger gateways: ingress: # istio-ingressgateway service: type: ClusterIP ports: - name: status-port port: 15020 - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 meshExpansionPorts: [] egress: # istio-egressgateway service: type: ClusterIP ports: - name: status-port port: 15020 - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 additionalIngress: some-other-ingress-gateway: {} additionalEgress: some-other-egress-gateway: {} policy: type: Mixer mixer: # only applies if policy.type: Mixer enableChecks: true failOpen: false telemetry: type: Istiod # or Mixer mixer: # only applies if telemetry.type: Mixer, for v1 telemetry sessionAffinity: false batching: maxEntries: 100 maxTime: 1s adapters: kubernetesenv: true stdio: enabled: true outputAsJSON: true addons: grafana: enabled: true install: config: env: {} envSecrets: {} persistence: enabled: true storageClassName: \"\" accessMode: ReadWriteOnce capacity: requests: storage: 5Gi service: ingress: contextPath: /grafana tls: termination: reencrypt kiali: name: kiali enabled: true install: # install kiali CR if not present dashboard: viewOnly: false enableGrafana: true enableTracing: true enablePrometheus: true service: ingress: contextPath: /kiali jaeger: name: jaeger install: storage: type: Elasticsearch # or Memory memory: maxTraces: 100000 elasticsearch: nodeCount: 3 storage: {} redundancyPolicy: SingleRedundancy indexCleaner: {} ingress: {} # jaeger ingress configuration runtime: components: pilot: deployment: replicas: 2 pod: affinity: {} container: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi grafana: deployment: {} pod: {} kiali: deployment: {} pod: {}",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: general: logging: componentLevels: {} # misc: error logAsJSON: false validationMessages: true",
"logging:",
"logging: componentLevels:",
"logging: logLevels:",
"logging: logAsJSON:",
"validationMessages:",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: profiles: - YourProfileName",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3 tracing: sampling: 100 type: Jaeger",
"tracing: sampling:",
"tracing: type:",
"spec: addons: 3Scale: enabled: false PARAM_THREESCALE_LISTEN_ADDR: 3333 PARAM_THREESCALE_LOG_LEVEL: info PARAM_THREESCALE_LOG_JSON: true PARAM_THREESCALE_LOG_GRPC: false PARAM_THREESCALE_REPORT_METRICS: true PARAM_THREESCALE_METRICS_PORT: 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS: 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN: false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60 PARAM_USE_CACHED_BACKEND: false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: kiali: name: kiali enabled: true install: dashboard: viewOnly: false enableGrafana: true enableTracing: true enablePrometheus: true service: ingress: contextPath: /kiali",
"spec: addons: kiali: name:",
"kiali: enabled:",
"kiali: install:",
"kiali: install: dashboard:",
"kiali: install: dashboard: viewOnly:",
"kiali: install: dashboard: enableGrafana:",
"kiali: install: dashboard: enablePrometheus:",
"kiali: install: dashboard: enableTracing:",
"kiali: install: service:",
"kiali: install: service: metadata:",
"kiali: install: service: metadata: annotations:",
"kiali: install: service: metadata: labels:",
"kiali: install: service: ingress:",
"kiali: install: service: ingress: metadata: annotations:",
"kiali: install: service: ingress: metadata: labels:",
"kiali: install: service: ingress: enabled:",
"kiali: install: service: ingress: contextPath:",
"install: service: ingress: hosts:",
"install: service: ingress: tls:",
"kiali: install: service: nodePort:",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3 tracing: sampling: 100 type: Jaeger",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3 tracing: sampling: 10000 type: Jaeger addons: jaeger: name: jaeger install: storage: type: Memory",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3 tracing: sampling: 10000 type: Jaeger addons: jaeger: name: jaeger #name of Jaeger CR install: storage: type: Elasticsearch ingress: enabled: true runtime: components: tracing.jaeger.elasticsearch: # only supports resources and image name container: resources: {}",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3 tracing: sampling: 1000 type: Jaeger addons: jaeger: name: MyJaegerInstance #name of Jaeger CR install: storage: type: Elasticsearch ingress: enabled: true",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3 tracing: sampling: 1000 type: Jaeger addons: jaeger: name: MyJaegerInstance #name of Jaeger CR",
"apiVersion: jaegertracing.io/v1 kind: Jaeger spec: ingress: enabled: true openshift: htpasswdFile: /etc/proxy/htpasswd/auth sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' options: {} resources: {} security: oauth-proxy volumes: - name: secret-htpasswd secret: secretName: htpasswd - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: trusted-ca-bundle optional: true name: trusted-ca-bundle volumeMounts: - mountPath: /etc/proxy/htpasswd name: secret-htpasswd - mountPath: /etc/pki/ca-trust/extracted/pem/ name: trusted-ca-bundle readOnly: true",
"oc login https://<HOSTNAME>:6443",
"oc project istio-system",
"oc edit -n tracing-system -f jaeger.yaml",
"apiVersion: jaegertracing.io/v1 kind: Jaeger spec: ingress: enabled: true openshift: htpasswdFile: /etc/proxy/htpasswd/auth sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' options: {} resources: {} security: oauth-proxy volumes: - name: secret-htpasswd secret: secretName: htpasswd - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: trusted-ca-bundle optional: true name: trusted-ca-bundle volumeMounts: - mountPath: /etc/proxy/htpasswd name: secret-htpasswd - mountPath: /etc/pki/ca-trust/extracted/pem/ name: trusted-ca-bundle readOnly: true",
"oc apply -n tracing-system -f <jaeger.yaml>",
"oc get pods -n tracing-system -w",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: name spec: strategy: <deployment_strategy> allInOne: options: {} resources: {} agent: options: {} resources: {} collector: options: {} resources: {} sampling: options: {} storage: type: options: {} query: options: {} resources: {} ingester: options: {} resources: {} options: {}",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory",
"collector: replicas:",
"spec: collector: options: {}",
"options: collector: num-workers:",
"options: collector: queue-size:",
"options: kafka: producer: topic: jaeger-spans",
"options: kafka: producer: brokers: my-cluster-kafka-brokers.kafka:9092",
"options: log-level:",
"spec: sampling: options: {} default_strategy: service_strategy:",
"default_strategy: type: service_strategy: type:",
"default_strategy: param: service_strategy: param:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: with-sampling spec: sampling: options: default_strategy: type: probabilistic param: 0.5 service_strategies: - service: alpha type: probabilistic param: 0.8 operation_strategies: - operation: op1 type: probabilistic param: 0.2 - operation: op2 type: probabilistic param: 0.4 - service: beta type: ratelimiting param: 5",
"spec: sampling: options: default_strategy: type: probabilistic param: 1",
"spec: storage: type:",
"storage: secretname:",
"storage: options: {}",
"storage: esIndexCleaner: enabled:",
"storage: esIndexCleaner: numberOfDays:",
"storage: esIndexCleaner: schedule:",
"elasticsearch: properties: doNotProvision:",
"elasticsearch: properties: name:",
"elasticsearch: nodeCount:",
"elasticsearch: resources: requests: cpu:",
"elasticsearch: resources: requests: memory:",
"elasticsearch: resources: limits: cpu:",
"elasticsearch: resources: limits: memory:",
"elasticsearch: redundancyPolicy:",
"elasticsearch: useCertManagement:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 3 resources: requests: cpu: 1 memory: 16Gi limits: memory: 16Gi",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 1 storage: 1 storageClassName: gp2 size: 5Gi resources: requests: cpu: 200m memory: 4Gi limits: memory: 4Gi redundancyPolicy: ZeroRedundancy",
"es: server-urls:",
"es: max-doc-count:",
"es: max-num-spans:",
"es: max-span-age:",
"es: sniffer:",
"es: sniffer-tls-enabled:",
"es: timeout:",
"es: username:",
"es: password:",
"es: version:",
"es: num-replicas:",
"es: num-shards:",
"es: create-index-templates:",
"es: index-prefix:",
"es: bulk: actions:",
"es: bulk: flush-interval:",
"es: bulk: size:",
"es: bulk: workers:",
"es: tls: ca:",
"es: tls: cert:",
"es: tls: enabled:",
"es: tls: key:",
"es: tls: server-name:",
"es: token-file:",
"es-archive: bulk: actions:",
"es-archive: bulk: flush-interval:",
"es-archive: bulk: size:",
"es-archive: bulk: workers:",
"es-archive: create-index-templates:",
"es-archive: enabled:",
"es-archive: index-prefix:",
"es-archive: max-doc-count:",
"es-archive: max-num-spans:",
"es-archive: max-span-age:",
"es-archive: num-replicas:",
"es-archive: num-shards:",
"es-archive: password:",
"es-archive: server-urls:",
"es-archive: sniffer:",
"es-archive: sniffer-tls-enabled:",
"es-archive: timeout:",
"es-archive: tls: ca:",
"es-archive: tls: cert:",
"es-archive: tls: enabled:",
"es-archive: tls: key:",
"es-archive: tls: server-name:",
"es-archive: token-file:",
"es-archive: username:",
"es-archive: version:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 index-prefix: my-prefix tls: ca: /es/certificates/ca.crt secretName: tracing-secret volumeMounts: - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 1 index-prefix: my-prefix tls: 2 ca: /es/certificates/ca.crt secretName: tracing-secret 3 volumeMounts: 4 - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public",
"apiVersion: logging.openshift.io/v1 kind: Elasticsearch metadata: annotations: logging.openshift.io/elasticsearch-cert-management: \"true\" logging.openshift.io/elasticsearch-cert.jaeger-custom-es: \"user.jaeger\" logging.openshift.io/elasticsearch-cert.curator-custom-es: \"system.logging.curator\" name: custom-es spec: managementState: Managed nodeSpec: resources: limits: memory: 16Gi requests: cpu: 1 memory: 16Gi nodes: - nodeCount: 3 proxyResources: {} resources: {} roles: - master - client - data storage: {} redundancyPolicy: ZeroRedundancy",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-prod spec: strategy: production storage: type: elasticsearch elasticsearch: name: custom-es doNotProvision: true useCertManagement: true",
"spec: query: replicas:",
"spec: query: options: {}",
"options: log-level:",
"options: query: base-path:",
"apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"my-jaeger\" spec: strategy: allInOne allInOne: options: log-level: debug query: base-path: /jaeger",
"spec: ingester: options: {}",
"options: deadlockInterval:",
"options: kafka: consumer: topic:",
"options: kafka: consumer: brokers:",
"options: log-level:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: deadlockInterval: 5 storage: type: elasticsearch options: es: server-urls: http://elasticsearch:9200",
"oc delete smmr -n istio-system default",
"oc get smcp -n istio-system",
"oc delete smcp -n istio-system <name_of_custom_resource>",
"oc delete validatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io",
"oc delete mutatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io",
"oc delete svc maistra-admission-controller -n openshift-operators",
"oc -n openshift-operators delete ds -lmaistra-version",
"oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni",
"oc delete clusterrole istio-view istio-edit",
"oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view",
"oc get crds -o name | grep '.*\\.istio\\.io' | xargs -r -n 1 oc delete",
"oc get crds -o name | grep '.*\\.maistra\\.io' | xargs -r -n 1 oc delete",
"oc get crds -o name | grep '.*\\.kiali\\.io' | xargs -r -n 1 oc delete",
"oc delete crds jaegers.jaegertracing.io",
"oc delete cm -n openshift-operators maistra-operator-cabundle",
"oc delete cm -n openshift-operators istio-cni-config istio-cni-config-v2-3",
"oc delete sa -n openshift-operators istio-cni",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.0",
"oc adm must-gather -- /usr/bin/gather_audit_logs",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s",
"oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.3",
"oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.3 gather <namespace>",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: [\"dev\"] to: - operation: hosts: [\"httpbin.com\",\"httpbin.com:*\"]",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: default spec: action: DENY rules: - to: - operation: hosts: [\"httpbin.example.com:*\"]",
"spec: global: pathNormalization: <option>",
"{ \"runtime\": { \"symlink_root\": \"/var/lib/istio/envoy/runtime\" } }",
"oc create secret generic -n <SMCPnamespace> gateway-bootstrap --from-file=bootstrap-override.json",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap",
"oc create secret generic -n <SMCPnamespace> gateway-settings --from-literal=overload.global_downstream_max_connections=10000",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: template: default #Change the version to \"v1.0\" if you are on the 1.0 stream. version: v1.1 istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap # below is the new secret mount - mountPath: /var/lib/istio/envoy/runtime name: gateway-settings secretName: gateway-settings",
"oc get jaeger -n istio-system",
"NAME AGE jaeger 3d21h",
"oc get jaeger jaeger -oyaml -n istio-system > /tmp/jaeger-cr.yaml",
"oc delete jaeger jaeger -n istio-system",
"oc create -f /tmp/jaeger-cr.yaml -n istio-system",
"rm /tmp/jaeger-cr.yaml",
"oc delete -f <jaeger-cr-file>",
"oc delete -f jaeger-prod-elasticsearch.yaml",
"oc create -f <jaeger-cr-file>",
"oc get pods -n jaeger-system -w",
"spec: version: v1.1",
"{\"level\":\"warn\",\"ts\":1642438880.918793,\"caller\":\"channelz/logging.go:62\",\"msg\":\"[core]grpc: Server.Serve failed to create ServerTransport: connection error: desc = \\\"transport: http2Server.HandleStreams received bogus greeting from client: \\\\\\\"\\\\\\\\x16\\\\\\\\x03\\\\\\\\x01\\\\\\\\x02\\\\\\\\x00\\\\\\\\x01\\\\\\\\x00\\\\\\\\x01\\\\\\\\xfc\\\\\\\\x03\\\\\\\\x03vw\\\\\\\\x1a\\\\\\\\xc9T\\\\\\\\xe7\\\\\\\\xdaCj\\\\\\\\xb7\\\\\\\\x8dK\\\\\\\\xa6\\\\\\\"\\\"\",\"system\":\"grpc\",\"grpc_log\":true}",
"apiVersion: \"rbac.istio.io/v1alpha1\" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: \"cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account\" properties: request.headers[<header>]: \"value\"",
"apiVersion: \"rbac.istio.io/v1alpha1\" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: \"cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account\" properties: request.regex.headers[<header>]: \"<regular expression>\"",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc new-project istio-system",
"oc create -n istio-system -f istio-installation.yaml",
"oc get smcp -n istio-system",
"NAME READY STATUS PROFILES VERSION AGE basic-install 11/11 ComponentsReady [\"default\"] v1.1.18 4m25s",
"oc get pods -n istio-system -w",
"NAME READY STATUS RESTARTS AGE grafana-7bf5764d9d-2b2f6 2/2 Running 0 28h istio-citadel-576b9c5bbd-z84z4 1/1 Running 0 28h istio-egressgateway-5476bc4656-r4zdv 1/1 Running 0 28h istio-galley-7d57b47bb7-lqdxv 1/1 Running 0 28h istio-ingressgateway-dbb8f7f46-ct6n5 1/1 Running 0 28h istio-pilot-546bf69578-ccg5x 2/2 Running 0 28h istio-policy-77fd498655-7pvjw 2/2 Running 0 28h istio-sidecar-injector-df45bd899-ctxdt 1/1 Running 0 28h istio-telemetry-66f697d6d5-cj28l 2/2 Running 0 28h jaeger-896945cbc-7lqrr 2/2 Running 0 11h kiali-78d9c5b87c-snjzh 1/1 Running 0 22h prometheus-6dff867c97-gr2n5 2/2 Running 0 28h",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc new-project <your-project>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name",
"oc create -n istio-system -f servicemeshmemberroll-default.yaml",
"oc get smmr -n istio-system default",
"oc edit smmr -n <controlplane-namespace>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name",
"oc patch deployment/<deployment> -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\": \"'`date -Iseconds`'\"}}}}}'",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true",
"apiVersion: \"authentication.istio.io/v1alpha1\" kind: \"Policy\" metadata: name: default namespace: <NAMESPACE> spec: peers: - mtls: {}",
"apiVersion: \"networking.istio.io/v1alpha3\" kind: \"DestinationRule\" metadata: name: \"default\" namespace: <CONTROL_PLANE_NAMESPACE>> spec: host: \"*.local\" trafficPolicy: tls: mode: ISTIO_MUTUAL",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: tls: minProtocolVersion: TLSv1_2 maxProtocolVersion: TLSv1_3",
"oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem --from-file=<path>/cert-chain.pem",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: false",
"oc delete secret istio.default",
"RATINGSPOD=`oc get pods -l app=ratings -o jsonpath='{.items[0].metadata.name}'`",
"oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/root-cert.pem > /tmp/pod-root-cert.pem",
"oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/cert-chain.pem > /tmp/pod-cert-chain.pem",
"openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt",
"openssl x509 -in /tmp/pod-root-cert.pem -text -noout > /tmp/pod-root-cert.crt.txt",
"diff /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt",
"sed '0,/^-----END CERTIFICATE-----/d' /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-ca.pem",
"openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt",
"openssl x509 -in /tmp/pod-cert-chain-ca.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt",
"diff /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt",
"head -n 21 /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-workload.pem",
"openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) /tmp/pod-cert-chain-workload.pem",
"/tmp/pod-cert-chain-workload.pem: OK",
"oc delete secret cacerts -n istio-system",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: true",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ext-host-gwy spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 443 name: https protocol: HTTPS hosts: - ext-host.example.com tls: mode: SIMPLE serverCertificate: /tmp/tls.crt privateKey: /tmp/tls.key",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-svc spec: hosts: - ext-host.example.com gateways: - ext-host-gwy",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - \"*\"",
"oc apply -f gateway.yaml",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - \"*\" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080",
"oc apply -f vs.yaml",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"export TARGET_PORT=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}')",
"curl -s -I \"USDGATEWAY_URL/productpage\"",
"oc get svc istio-ingressgateway -n istio-system",
"export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')",
"export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].port}')",
"export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].port}')",
"export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].port}')",
"export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')",
"export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].nodePort}')",
"export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].nodePort}')",
"export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].nodePort}')",
"spec: istio: gateways: istio-egressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 istio-ingressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 ior_enabled: true",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway1 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.bookinfo.com - bookinfo.example.com",
"oc -n <control_plane_namespace> get routes",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD gateway1-lvlfn bookinfo.example.com istio-ingressgateway <all> None gateway1-scqhv www.bookinfo.com istio-ingressgateway <all> None",
"apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: svc-entry spec: hosts: - ext-svc.example.com ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - match: - headers: end-user: exact: jason route: - destination: host: reviews subset: v2 - route: - destination: host: reviews subset: v3",
"oc apply -f <VirtualService.yaml>",
"spec: hosts:",
"spec: http: - match:",
"spec: http: - match: - destination:",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3",
"oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.3/samples/bookinfo/networking/virtual-service-all-v1.yaml",
"oc get virtualservices -o yaml",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"echo \"http://USDGATEWAY_URL/productpage\"",
"oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.3/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml",
"oc get virtualservice reviews -o yaml",
"oc create configmap --from-file=<templates-directory> smcp-templates -n openshift-operators",
"oc get clusterserviceversion -n openshift-operators | grep 'Service Mesh'",
"maistra.v1.0.0 Red Hat OpenShift Service Mesh 1.0.0 Succeeded",
"oc edit clusterserviceversion -n openshift-operators maistra.v1.0.0",
"deployments: - name: istio-operator spec: template: spec: containers: volumeMounts: - name: discovery-cache mountPath: /home/istio-operator/.kube/cache/discovery - name: smcp-templates mountPath: /usr/local/share/istio-operator/templates/ volumes: - name: discovery-cache emptyDir: medium: Memory - name: smcp-templates configMap: name: smcp-templates",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: minimal-install spec: template: default",
"oc get deployment -n <namespace>",
"get deployment -n bookinfo ratings-v1 -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: annotations: sidecar.istio.io/inject: 'true'",
"oc apply -n <namespace> -f deployment.yaml",
"oc apply -n bookinfo -f deployment-ratings-v1.yaml",
"oc get deployment -n <namespace> <deploymentName> -o yaml",
"oc get deployment -n bookinfo ratings-v1 -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: resource spec: replicas: 7 selector: matchLabels: app: resource template: metadata: annotations: sidecar.maistra.io/proxyEnv: \"{ \\\"maistra_test_env\\\": \\\"env_value\\\", \\\"maistra_test_env_2\\\": \\\"env_value_2\\\" }\"",
"oc get cm -n <istio-system> istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks",
"oc edit cm -n <istio-system> istio",
"oc new-project bookinfo",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - bookinfo",
"oc create -n istio-system -f servicemeshmemberroll-default.yaml",
"oc get smmr -n istio-system -o wide",
"NAME READY STATUS AGE MEMBERS default 1/1 Configured 70s [\"bookinfo\"]",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.3/samples/bookinfo/platform/kube/bookinfo.yaml",
"service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.3/samples/bookinfo/networking/bookinfo-gateway.yaml",
"gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.3/samples/bookinfo/networking/destination-rule-all.yaml",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.3/samples/bookinfo/networking/destination-rule-all-mtls.yaml",
"destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created",
"oc get pods -n bookinfo",
"NAME READY STATUS RESTARTS AGE details-v1-55b869668-jh7hb 2/2 Running 0 12m productpage-v1-6fc77ff794-nsl8r 2/2 Running 0 12m ratings-v1-7d7d8d8b56-55scn 2/2 Running 0 12m reviews-v1-868597db96-bdxgq 2/2 Running 0 12m reviews-v2-5b64f47978-cvssp 2/2 Running 0 12m reviews-v3-6dfd49b55b-vcwpf 2/2 Running 0 12m",
"echo \"http://USDGATEWAY_URL/productpage\"",
"oc delete project bookinfo",
"oc -n istio-system patch --type='json' smmr default -p '[{\"op\": \"remove\", \"path\": \"/spec/members\", \"value\":[\"'\"bookinfo\"'\"]}]'",
"curl \"http://USDGATEWAY_URL/productpage\"",
"export JAEGER_URL=USD(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}')",
"echo USDJAEGER_URL",
"curl \"http://USDGATEWAY_URL/productpage\"",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: basic-install spec: istio: global: proxy: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi gateways: istio-egressgateway: autoscaleEnabled: false istio-ingressgateway: autoscaleEnabled: false ior_enabled: false mixer: policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 100m memory: 1G limits: cpu: 500m memory: 4G pilot: autoscaleEnabled: false traceSampling: 100 kiali: enabled: true grafana: enabled: true tracing: enabled: true jaeger: template: all-in-one",
"istio: global: tag: 1.1.0 hub: registry.redhat.io/openshift-service-mesh/ proxy: resources: requests: cpu: 10m memory: 128Mi limits: mtls: enabled: false disablePolicyChecks: true policyCheckFailOpen: false imagePullSecrets: - MyPullSecret",
"gateways: egress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1 enabled: true ingress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1",
"mixer: enabled: true policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 10m memory: 128Mi limits:",
"spec: runtime: components: pilot: deployment: autoScaling: enabled: true minReplicas: 1 maxReplicas: 5 targetCPUUtilizationPercentage: 85 pod: tolerations: - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 60 affinity: podAntiAffinity: requiredDuringScheduling: - key: istio topologyKey: kubernetes.io/hostname operator: In values: - pilot container: resources: limits: cpu: 100m memory: 128M",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: kiali: enabled: true dashboard: viewOnlyMode: false ingress: enabled: true",
"enabled",
"dashboard viewOnlyMode",
"ingress enabled",
"spec: kiali: enabled: true dashboard: viewOnlyMode: false grafanaURL: \"https://grafana-istio-system.127.0.0.1.nip.io\" ingress: enabled: true",
"spec: kiali: enabled: true dashboard: viewOnlyMode: false jaegerURL: \"http://jaeger-query-istio-system.127.0.0.1.nip.io\" ingress: enabled: true",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: version: v1.1 istio: tracing: enabled: true jaeger: template: all-in-one",
"tracing: enabled:",
"jaeger: template:",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: \"1\" memory: \"16Gi\" limits: cpu: \"1\" memory: \"16Gi\"",
"tracing: enabled:",
"ingress: enabled:",
"jaeger: template:",
"elasticsearch: nodeCount:",
"requests: cpu:",
"requests: memory:",
"limits: cpu:",
"limits: memory:",
"oc get route -n istio-system external-jaeger",
"NAME HOST/PORT PATH SERVICES [...] external-jaeger external-jaeger-istio-system.apps.test external-jaeger-query [...]",
"apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"external-jaeger\" # Deploy to the Control Plane Namespace namespace: istio-system spec: # Set Up Authentication ingress: enabled: true security: oauth-proxy openshift: # This limits user access to the Jaeger instance to users who have access # to the control plane namespace. Make sure to set the correct namespace here sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' htpasswdFile: /etc/proxy/htpasswd/auth volumeMounts: - name: secret-htpasswd mountPath: /etc/proxy/htpasswd volumes: - name: secret-htpasswd secret: secretName: htpasswd",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: external-jaeger namespace: istio-system spec: version: v1.1 istio: tracing: # Disable Jaeger deployment by service mesh operator enabled: false global: tracer: zipkin: # Set Endpoint for Trace Collection address: external-jaeger-collector.istio-system.svc.cluster.local:9411 kiali: # Set Jaeger dashboard URL dashboard: jaegerURL: https://external-jaeger-istio-system.apps.test # Set Endpoint for Trace Querying jaegerInClusterURL: external-jaeger-query.istio-system.svc.cluster.local",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: \"1\" memory: \"16Gi\" limits: cpu: \"1\" memory: \"16Gi\"",
"tracing: enabled:",
"ingress: enabled:",
"jaeger: template:",
"elasticsearch: nodeCount:",
"requests: cpu:",
"requests: memory:",
"limits: cpu:",
"limits: memory:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger spec: strategy: production storage: type: elasticsearch esIndexCleaner: enabled: false numberOfDays: 7 schedule: \"55 23 * * *\"",
"spec: addons: 3Scale: enabled: false PARAM_THREESCALE_LISTEN_ADDR: 3333 PARAM_THREESCALE_LOG_LEVEL: info PARAM_THREESCALE_LOG_JSON: true PARAM_THREESCALE_LOG_GRPC: false PARAM_THREESCALE_REPORT_METRICS: true PARAM_THREESCALE_METRICS_PORT: 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS: 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN: false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60 PARAM_USE_CACHED_BACKEND: false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true",
"apiVersion: \"config.istio.io/v1alpha2\" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: \"https://<organization>-admin.3scale.net/\" access_token: \"<ACCESS_TOKEN>\" connection: address: \"threescale-istio-adapter:3333\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: rule metadata: name: threescale spec: match: destination.labels[\"service-mesh.3scale.net\"] == \"true\" actions: - handler: threescale.handler instances: - threescale-authorization.instance",
"3scale-config-gen --name=admin-credentials --url=\"https://<organization>-admin.3scale.net:443\" --token=\"[redacted]\"",
"3scale-config-gen --url=\"https://<organization>-admin.3scale.net\" --name=\"my-unique-id\" --service=\"123456789\" --token=\"[redacted]\"",
"export NS=\"istio-system\" URL=\"https://replaceme-admin.3scale.net:443\" NAME=\"name\" TOKEN=\"token\" exec -n USD{NS} USD(oc get po -n USD{NS} -o jsonpath='{.items[?(@.metadata.labels.app==\"3scale-istio-adapter\")].metadata.name}') -it -- ./3scale-config-gen --url USD{URL} --name USD{NAME} --token USD{TOKEN} -n USD{NS}",
"export CREDENTIALS_NAME=\"replace-me\" export SERVICE_ID=\"replace-me\" export DEPLOYMENT=\"replace-me\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" --template='{\"spec\":{\"template\":{\"metadata\":{\"labels\":{ {{ range USDk,USDv := .spec.template.metadata.labels }}\"{{ USDk }}\":\"{{ USDv }}\",{{ end }}\"service-mesh.3scale.net/service-id\":\"'\"USD{SERVICE_ID}\"'\",\"service-mesh.3scale.net/credentials\":\"'\"USD{CREDENTIALS_NAME}\"'\"}}}}}' )\" patch deployment \"USD{DEPLOYMENT}\" --patch ''\"USD{patch}\"''",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"",
"apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | properties: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"",
"oc get pods -n <istio-system>",
"oc logs <istio-system>",
"oc delete smmr -n istio-system default",
"oc get smcp -n istio-system",
"oc delete smcp -n istio-system <name_of_custom_resource>",
"oc delete validatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io",
"oc delete mutatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io",
"oc delete -n openshift-operators daemonset/istio-node",
"oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni",
"oc delete clusterrole istio-view istio-edit",
"oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view",
"oc get crds -o name | grep '.*\\.istio\\.io' | xargs -r -n 1 oc delete",
"oc get crds -o name | grep '.*\\.maistra\\.io' | xargs -r -n 1 oc delete",
"oc get crds -o name | grep '.*\\.kiali\\.io' | xargs -r -n 1 oc delete",
"oc delete crds jaegers.jaegertracing.io",
"oc delete svc admission-controller -n <operator-project>",
"oc delete project <istio-system-project>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html-single/service_mesh/index |
Chapter 50. ReportConfigurationService | Chapter 50. ReportConfigurationService 50.1. CountReportConfigurations GET /v1/report-configurations-count CountReportConfigurations returns the number of report configurations. 50.1.1. Description 50.1.2. Parameters 50.1.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 50.1.3. Return Type V1CountReportConfigurationsResponse 50.1.4. Content Type application/json 50.1.5. Responses Table 50.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1CountReportConfigurationsResponse 0 An unexpected error response. RuntimeError 50.1.6. Samples 50.1.7. Common object reference 50.1.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 50.1.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 
value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 50.1.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 50.1.7.3. V1CountReportConfigurationsResponse Field Name Required Nullable Type Description Format count Integer int32 50.2. GetReportConfigurations GET /v1/report/configurations 50.2.1. Description 50.2.2. Parameters 50.2.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 50.2.3. Return Type V1GetReportConfigurationsResponse 50.2.4. Content Type application/json 50.2.5. Responses Table 50.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetReportConfigurationsResponse 0 An unexpected error response. RuntimeError 50.2.6. Samples 50.2.7. Common object reference 50.2.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 50.2.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. 
As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 50.2.7.2. ReportConfigurationReportType Enum Values VULNERABILITY 50.2.7.3. ReportLastRunStatusRunStatus Enum Values SUCCESS FAILURE 50.2.7.4. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 50.2.7.5. ScheduleDaysOfMonth Field Name Required Nullable Type Description Format days List of integer int32 50.2.7.6. ScheduleDaysOfWeek Field Name Required Nullable Type Description Format days List of integer int32 50.2.7.7. ScheduleIntervalType Enum Values UNSET DAILY WEEKLY MONTHLY 50.2.7.8. ScheduleWeeklyInterval Field Name Required Nullable Type Description Format day Integer int32 50.2.7.9. SimpleAccessScopeRules Each element of any repeated field is an individual rule. Rules are joined by logical OR: if there exists a rule allowing resource x , x is in the access scope. Field Name Required Nullable Type Description Format includedClusters List of string includedNamespaces List of SimpleAccessScopeRulesNamespace clusterLabelSelectors List of StorageSetBasedLabelSelector namespaceLabelSelectors List of StorageSetBasedLabelSelector 50.2.7.10. SimpleAccessScopeRulesNamespace Field Name Required Nullable Type Description Format clusterName String Both fields must be set. namespaceName String 50.2.7.11. StorageEmailNotifierConfiguration Field Name Required Nullable Type Description Format notifierId String mailingLists List of string customSubject String customBody String 50.2.7.12. StorageNotifierConfiguration Field Name Required Nullable Type Description Format emailConfig StorageEmailNotifierConfiguration id String 50.2.7.13. StorageReportConfiguration Field Name Required Nullable Type Description Format id String name String description String type ReportConfigurationReportType VULNERABILITY, vulnReportFilters StorageVulnerabilityReportFilters scopeId String emailConfig StorageEmailNotifierConfiguration schedule StorageSchedule lastRunStatus StorageReportLastRunStatus lastSuccessfulRunTime Date date-time resourceScope StorageResourceScope notifiers List of StorageNotifierConfiguration creator StorageSlimUser version Integer int32 50.2.7.14. StorageReportLastRunStatus Field Name Required Nullable Type Description Format reportStatus ReportLastRunStatusRunStatus SUCCESS, FAILURE, lastRunTime Date date-time errorMsg String 50.2.7.15. StorageResourceScope Field Name Required Nullable Type Description Format collectionId String 50.2.7.16. StorageSchedule Field Name Required Nullable Type Description Format intervalType ScheduleIntervalType UNSET, DAILY, WEEKLY, MONTHLY, hour Integer int32 minute Integer int32 weekly ScheduleWeeklyInterval daysOfWeek ScheduleDaysOfWeek daysOfMonth ScheduleDaysOfMonth 50.2.7.17. StorageSetBasedLabelSelector SetBasedLabelSelector only allows set-based label requirements. available tag: 3 Field Name Required Nullable Type Description Format requirements List of StorageSetBasedLabelSelectorRequirement 50.2.7.18. StorageSetBasedLabelSelectorOperator Enum Values UNKNOWN IN NOT_IN EXISTS NOT_EXISTS 50.2.7.19. 
StorageSetBasedLabelSelectorRequirement Field Name Required Nullable Type Description Format key String op StorageSetBasedLabelSelectorOperator UNKNOWN, IN, NOT_IN, EXISTS, NOT_EXISTS, values List of string 50.2.7.20. StorageSlimUser Field Name Required Nullable Type Description Format id String name String 50.2.7.21. StorageVulnerabilityReportFilters Field Name Required Nullable Type Description Format fixability VulnerabilityReportFiltersFixability BOTH, FIXABLE, NOT_FIXABLE, sinceLastReport Boolean severities List of StorageVulnerabilitySeverity imageTypes List of VulnerabilityReportFiltersImageType allVuln Boolean sinceLastSentScheduledReport Boolean sinceStartDate Date date-time accessScopeRules List of SimpleAccessScopeRules 50.2.7.22. StorageVulnerabilitySeverity Enum Values UNKNOWN_VULNERABILITY_SEVERITY LOW_VULNERABILITY_SEVERITY MODERATE_VULNERABILITY_SEVERITY IMPORTANT_VULNERABILITY_SEVERITY CRITICAL_VULNERABILITY_SEVERITY 50.2.7.23. V1GetReportConfigurationsResponse Field Name Required Nullable Type Description Format reportConfigs List of StorageReportConfiguration 50.2.7.24. VulnerabilityReportFiltersFixability Enum Values BOTH FIXABLE NOT_FIXABLE 50.2.7.25. VulnerabilityReportFiltersImageType Enum Values DEPLOYED WATCHED 50.3. DeleteReportConfiguration DELETE /v1/report/configurations/{id} DeleteReportConfiguration removes a report configuration given its id 50.3.1. Description 50.3.2. Parameters 50.3.2.1. Path Parameters Name Description Required Default Pattern id X null 50.3.3. Return Type Object 50.3.4. Content Type application/json 50.3.5. Responses Table 50.3. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 50.3.6. Samples 50.3.7. Common object reference 50.3.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 50.3.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. 
However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 50.3.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 50.4. GetReportConfiguration GET /v1/report/configurations/{id} 50.4.1. Description 50.4.2. Parameters 50.4.2.1. Path Parameters Name Description Required Default Pattern id X null 50.4.3. Return Type V1GetReportConfigurationResponse 50.4.4. Content Type application/json 50.4.5. Responses Table 50.4. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetReportConfigurationResponse 0 An unexpected error response. RuntimeError 50.4.6. Samples 50.4.7. Common object reference 50.4.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 50.4.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. 
* An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 50.4.7.2. ReportConfigurationReportType Enum Values VULNERABILITY 50.4.7.3. ReportLastRunStatusRunStatus Enum Values SUCCESS FAILURE 50.4.7.4. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 50.4.7.5. ScheduleDaysOfMonth Field Name Required Nullable Type Description Format days List of integer int32 50.4.7.6. ScheduleDaysOfWeek Field Name Required Nullable Type Description Format days List of integer int32 50.4.7.7. ScheduleIntervalType Enum Values UNSET DAILY WEEKLY MONTHLY 50.4.7.8. ScheduleWeeklyInterval Field Name Required Nullable Type Description Format day Integer int32 50.4.7.9. SimpleAccessScopeRules Each element of any repeated field is an individual rule. Rules are joined by logical OR: if there exists a rule allowing resource x , x is in the access scope. Field Name Required Nullable Type Description Format includedClusters List of string includedNamespaces List of SimpleAccessScopeRulesNamespace clusterLabelSelectors List of StorageSetBasedLabelSelector namespaceLabelSelectors List of StorageSetBasedLabelSelector 50.4.7.10. SimpleAccessScopeRulesNamespace Field Name Required Nullable Type Description Format clusterName String Both fields must be set. namespaceName String 50.4.7.11. StorageEmailNotifierConfiguration Field Name Required Nullable Type Description Format notifierId String mailingLists List of string customSubject String customBody String 50.4.7.12. StorageNotifierConfiguration Field Name Required Nullable Type Description Format emailConfig StorageEmailNotifierConfiguration id String 50.4.7.13. StorageReportConfiguration Field Name Required Nullable Type Description Format id String name String description String type ReportConfigurationReportType VULNERABILITY, vulnReportFilters StorageVulnerabilityReportFilters scopeId String emailConfig StorageEmailNotifierConfiguration schedule StorageSchedule lastRunStatus StorageReportLastRunStatus lastSuccessfulRunTime Date date-time resourceScope StorageResourceScope notifiers List of StorageNotifierConfiguration creator StorageSlimUser version Integer int32 50.4.7.14. StorageReportLastRunStatus Field Name Required Nullable Type Description Format reportStatus ReportLastRunStatusRunStatus SUCCESS, FAILURE, lastRunTime Date date-time errorMsg String 50.4.7.15. StorageResourceScope Field Name Required Nullable Type Description Format collectionId String 50.4.7.16. 
StorageSchedule Field Name Required Nullable Type Description Format intervalType ScheduleIntervalType UNSET, DAILY, WEEKLY, MONTHLY, hour Integer int32 minute Integer int32 weekly ScheduleWeeklyInterval daysOfWeek ScheduleDaysOfWeek daysOfMonth ScheduleDaysOfMonth 50.4.7.17. StorageSetBasedLabelSelector SetBasedLabelSelector only allows set-based label requirements. available tag: 3 Field Name Required Nullable Type Description Format requirements List of StorageSetBasedLabelSelectorRequirement 50.4.7.18. StorageSetBasedLabelSelectorOperator Enum Values UNKNOWN IN NOT_IN EXISTS NOT_EXISTS 50.4.7.19. StorageSetBasedLabelSelectorRequirement Field Name Required Nullable Type Description Format key String op StorageSetBasedLabelSelectorOperator UNKNOWN, IN, NOT_IN, EXISTS, NOT_EXISTS, values List of string 50.4.7.20. StorageSlimUser Field Name Required Nullable Type Description Format id String name String 50.4.7.21. StorageVulnerabilityReportFilters Field Name Required Nullable Type Description Format fixability VulnerabilityReportFiltersFixability BOTH, FIXABLE, NOT_FIXABLE, sinceLastReport Boolean severities List of StorageVulnerabilitySeverity imageTypes List of VulnerabilityReportFiltersImageType allVuln Boolean sinceLastSentScheduledReport Boolean sinceStartDate Date date-time accessScopeRules List of SimpleAccessScopeRules 50.4.7.22. StorageVulnerabilitySeverity Enum Values UNKNOWN_VULNERABILITY_SEVERITY LOW_VULNERABILITY_SEVERITY MODERATE_VULNERABILITY_SEVERITY IMPORTANT_VULNERABILITY_SEVERITY CRITICAL_VULNERABILITY_SEVERITY 50.4.7.23. V1GetReportConfigurationResponse Field Name Required Nullable Type Description Format reportConfig StorageReportConfiguration 50.4.7.24. VulnerabilityReportFiltersFixability Enum Values BOTH FIXABLE NOT_FIXABLE 50.4.7.25. VulnerabilityReportFiltersImageType Enum Values DEPLOYED WATCHED 50.5. UpdateReportConfiguration PUT /v1/report/configurations/{id} UpdateReportConfiguration updates a report configuration 50.5.1. Description 50.5.2. Parameters 50.5.2.1. Path Parameters Name Description Required Default Pattern id X null 50.5.2.2. Body Parameter Name Description Required Default Pattern body V1UpdateReportConfigurationRequest X 50.5.3. Return Type Object 50.5.4. Content Type application/json 50.5.5. Responses Table 50.5. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 50.5.6. Samples 50.5.7. Common object reference 50.5.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 50.5.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. 
Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 50.5.7.2. ReportConfigurationReportType Enum Values VULNERABILITY 50.5.7.3. ReportLastRunStatusRunStatus Enum Values SUCCESS FAILURE 50.5.7.4. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 50.5.7.5. ScheduleDaysOfMonth Field Name Required Nullable Type Description Format days List of integer int32 50.5.7.6. ScheduleDaysOfWeek Field Name Required Nullable Type Description Format days List of integer int32 50.5.7.7. ScheduleIntervalType Enum Values UNSET DAILY WEEKLY MONTHLY 50.5.7.8. ScheduleWeeklyInterval Field Name Required Nullable Type Description Format day Integer int32 50.5.7.9. SimpleAccessScopeRules Each element of any repeated field is an individual rule. Rules are joined by logical OR: if there exists a rule allowing resource x , x is in the access scope. Field Name Required Nullable Type Description Format includedClusters List of string includedNamespaces List of SimpleAccessScopeRulesNamespace clusterLabelSelectors List of StorageSetBasedLabelSelector namespaceLabelSelectors List of StorageSetBasedLabelSelector 50.5.7.10. SimpleAccessScopeRulesNamespace Field Name Required Nullable Type Description Format clusterName String Both fields must be set. namespaceName String 50.5.7.11. StorageEmailNotifierConfiguration Field Name Required Nullable Type Description Format notifierId String mailingLists List of string customSubject String customBody String 50.5.7.12. StorageNotifierConfiguration Field Name Required Nullable Type Description Format emailConfig StorageEmailNotifierConfiguration id String 50.5.7.13. 
StorageReportConfiguration Field Name Required Nullable Type Description Format id String name String description String type ReportConfigurationReportType VULNERABILITY, vulnReportFilters StorageVulnerabilityReportFilters scopeId String emailConfig StorageEmailNotifierConfiguration schedule StorageSchedule lastRunStatus StorageReportLastRunStatus lastSuccessfulRunTime Date date-time resourceScope StorageResourceScope notifiers List of StorageNotifierConfiguration creator StorageSlimUser version Integer int32 50.5.7.14. StorageReportLastRunStatus Field Name Required Nullable Type Description Format reportStatus ReportLastRunStatusRunStatus SUCCESS, FAILURE, lastRunTime Date date-time errorMsg String 50.5.7.15. StorageResourceScope Field Name Required Nullable Type Description Format collectionId String 50.5.7.16. StorageSchedule Field Name Required Nullable Type Description Format intervalType ScheduleIntervalType UNSET, DAILY, WEEKLY, MONTHLY, hour Integer int32 minute Integer int32 weekly ScheduleWeeklyInterval daysOfWeek ScheduleDaysOfWeek daysOfMonth ScheduleDaysOfMonth 50.5.7.17. StorageSetBasedLabelSelector SetBasedLabelSelector only allows set-based label requirements. available tag: 3 Field Name Required Nullable Type Description Format requirements List of StorageSetBasedLabelSelectorRequirement 50.5.7.18. StorageSetBasedLabelSelectorOperator Enum Values UNKNOWN IN NOT_IN EXISTS NOT_EXISTS 50.5.7.19. StorageSetBasedLabelSelectorRequirement Field Name Required Nullable Type Description Format key String op StorageSetBasedLabelSelectorOperator UNKNOWN, IN, NOT_IN, EXISTS, NOT_EXISTS, values List of string 50.5.7.20. StorageSlimUser Field Name Required Nullable Type Description Format id String name String 50.5.7.21. StorageVulnerabilityReportFilters Field Name Required Nullable Type Description Format fixability VulnerabilityReportFiltersFixability BOTH, FIXABLE, NOT_FIXABLE, sinceLastReport Boolean severities List of StorageVulnerabilitySeverity imageTypes List of VulnerabilityReportFiltersImageType allVuln Boolean sinceLastSentScheduledReport Boolean sinceStartDate Date date-time accessScopeRules List of SimpleAccessScopeRules 50.5.7.22. StorageVulnerabilitySeverity Enum Values UNKNOWN_VULNERABILITY_SEVERITY LOW_VULNERABILITY_SEVERITY MODERATE_VULNERABILITY_SEVERITY IMPORTANT_VULNERABILITY_SEVERITY CRITICAL_VULNERABILITY_SEVERITY 50.5.7.23. V1UpdateReportConfigurationRequest Field Name Required Nullable Type Description Format id String reportConfig StorageReportConfiguration 50.5.7.24. VulnerabilityReportFiltersFixability Enum Values BOTH FIXABLE NOT_FIXABLE 50.5.7.25. VulnerabilityReportFiltersImageType Enum Values DEPLOYED WATCHED 50.6. PostReportConfiguration POST /v1/report/configurations PostReportConfiguration creates a report configuration 50.6.1. Description 50.6.2. Parameters 50.6.2.1. Body Parameter Name Description Required Default Pattern body V1PostReportConfigurationRequest X 50.6.3. Return Type V1PostReportConfigurationResponse 50.6.4. Content Type application/json 50.6.5. Responses Table 50.6. HTTP Response Codes Code Message Datatype 200 A successful response. V1PostReportConfigurationResponse 0 An unexpected error response. RuntimeError 50.6.6. Samples 50.6.7. Common object reference 50.6.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. 
Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 50.6.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 50.6.7.2. ReportConfigurationReportType Enum Values VULNERABILITY 50.6.7.3. ReportLastRunStatusRunStatus Enum Values SUCCESS FAILURE 50.6.7.4. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 50.6.7.5. ScheduleDaysOfMonth Field Name Required Nullable Type Description Format days List of integer int32 50.6.7.6. ScheduleDaysOfWeek Field Name Required Nullable Type Description Format days List of integer int32 50.6.7.7. ScheduleIntervalType Enum Values UNSET DAILY WEEKLY MONTHLY 50.6.7.8. ScheduleWeeklyInterval Field Name Required Nullable Type Description Format day Integer int32 50.6.7.9. SimpleAccessScopeRules Each element of any repeated field is an individual rule. Rules are joined by logical OR: if there exists a rule allowing resource x , x is in the access scope. 
Field Name Required Nullable Type Description Format includedClusters List of string includedNamespaces List of SimpleAccessScopeRulesNamespace clusterLabelSelectors List of StorageSetBasedLabelSelector namespaceLabelSelectors List of StorageSetBasedLabelSelector 50.6.7.10. SimpleAccessScopeRulesNamespace Field Name Required Nullable Type Description Format clusterName String Both fields must be set. namespaceName String 50.6.7.11. StorageEmailNotifierConfiguration Field Name Required Nullable Type Description Format notifierId String mailingLists List of string customSubject String customBody String 50.6.7.12. StorageNotifierConfiguration Field Name Required Nullable Type Description Format emailConfig StorageEmailNotifierConfiguration id String 50.6.7.13. StorageReportConfiguration Field Name Required Nullable Type Description Format id String name String description String type ReportConfigurationReportType VULNERABILITY, vulnReportFilters StorageVulnerabilityReportFilters scopeId String emailConfig StorageEmailNotifierConfiguration schedule StorageSchedule lastRunStatus StorageReportLastRunStatus lastSuccessfulRunTime Date date-time resourceScope StorageResourceScope notifiers List of StorageNotifierConfiguration creator StorageSlimUser version Integer int32 50.6.7.14. StorageReportLastRunStatus Field Name Required Nullable Type Description Format reportStatus ReportLastRunStatusRunStatus SUCCESS, FAILURE, lastRunTime Date date-time errorMsg String 50.6.7.15. StorageResourceScope Field Name Required Nullable Type Description Format collectionId String 50.6.7.16. StorageSchedule Field Name Required Nullable Type Description Format intervalType ScheduleIntervalType UNSET, DAILY, WEEKLY, MONTHLY, hour Integer int32 minute Integer int32 weekly ScheduleWeeklyInterval daysOfWeek ScheduleDaysOfWeek daysOfMonth ScheduleDaysOfMonth 50.6.7.17. StorageSetBasedLabelSelector SetBasedLabelSelector only allows set-based label requirements. available tag: 3 Field Name Required Nullable Type Description Format requirements List of StorageSetBasedLabelSelectorRequirement 50.6.7.18. StorageSetBasedLabelSelectorOperator Enum Values UNKNOWN IN NOT_IN EXISTS NOT_EXISTS 50.6.7.19. StorageSetBasedLabelSelectorRequirement Field Name Required Nullable Type Description Format key String op StorageSetBasedLabelSelectorOperator UNKNOWN, IN, NOT_IN, EXISTS, NOT_EXISTS, values List of string 50.6.7.20. StorageSlimUser Field Name Required Nullable Type Description Format id String name String 50.6.7.21. StorageVulnerabilityReportFilters Field Name Required Nullable Type Description Format fixability VulnerabilityReportFiltersFixability BOTH, FIXABLE, NOT_FIXABLE, sinceLastReport Boolean severities List of StorageVulnerabilitySeverity imageTypes List of VulnerabilityReportFiltersImageType allVuln Boolean sinceLastSentScheduledReport Boolean sinceStartDate Date date-time accessScopeRules List of SimpleAccessScopeRules 50.6.7.22. StorageVulnerabilitySeverity Enum Values UNKNOWN_VULNERABILITY_SEVERITY LOW_VULNERABILITY_SEVERITY MODERATE_VULNERABILITY_SEVERITY IMPORTANT_VULNERABILITY_SEVERITY CRITICAL_VULNERABILITY_SEVERITY 50.6.7.23. V1PostReportConfigurationRequest Field Name Required Nullable Type Description Format reportConfig StorageReportConfiguration 50.6.7.24. V1PostReportConfigurationResponse Field Name Required Nullable Type Description Format reportConfig StorageReportConfiguration 50.6.7.25. VulnerabilityReportFiltersFixability Enum Values BOTH FIXABLE NOT_FIXABLE 50.6.7.26. 
VulnerabilityReportFiltersImageType Enum Values DEPLOYED WATCHED | [
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"1 for 1st, 2 for 2nd .... 31 for 31st",
"Sunday = 0, Monday = 1, .... Saturday = 6",
"Next available tag: 4",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"1 for 1st, 2 for 2nd .... 31 for 31st",
"Sunday = 0, Monday = 1, .... Saturday = 6",
"Next available tag: 4",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"1 for 1st, 2 for 2nd .... 31 for 31st",
"Sunday = 0, Monday = 1, .... Saturday = 6",
"Next available tag: 4",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"1 for 1st, 2 for 2nd .... 31 for 31st",
"Sunday = 0, Monday = 1, .... Saturday = 6",
"Next available tag: 4"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/reportconfigurationservice |
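As a quick illustration of the endpoints and schemas documented above, the following sketch lists report configurations and creates a minimal vulnerability report configuration. It is not taken from this document: the Central hostname central.example.com, the ROX_API_TOKEN variable, and the <collection-id>, <notifier-id>, and mailing-list placeholders are assumptions, and only field names defined in V1PostReportConfigurationRequest and StorageReportConfiguration are used.

# List existing report configurations (GET /v1/report/configurations)
curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "https://central.example.com/v1/report/configurations"

# Create a minimal vulnerability report configuration (POST /v1/report/configurations)
curl -sk -X POST "https://central.example.com/v1/report/configurations" \
  -H "Authorization: Bearer ${ROX_API_TOKEN}" -H "Content-Type: application/json" \
  -d '{
    "reportConfig": {
      "name": "weekly-fixable-critical",
      "type": "VULNERABILITY",
      "vulnReportFilters": {
        "fixability": "FIXABLE",
        "severities": ["CRITICAL_VULNERABILITY_SEVERITY"],
        "imageTypes": ["DEPLOYED"]
      },
      "resourceScope": { "collectionId": "<collection-id>" },
      "notifiers": [
        { "emailConfig": { "notifierId": "<notifier-id>", "mailingLists": ["secops@example.com"] } }
      ],
      "schedule": { "intervalType": "WEEKLY", "hour": 6, "minute": 0, "daysOfWeek": { "days": [1] } }
    }
  }'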
Chapter 10. ServiceAccount [v1] | Chapter 10. ServiceAccount [v1] Description ServiceAccount binds together: * a name, understood by users, and perhaps by peripheral systems, for an identity * a principal that can be authenticated and authorized * a set of secrets Type object 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether pods running as this service account should have an API token automatically mounted. Can be overridden at the pod level. imagePullSecrets array ImagePullSecrets is a list of references to secrets in the same namespace to use for pulling any images in pods that reference this ServiceAccount. ImagePullSecrets are distinct from Secrets because Secrets can be mounted in the pod, but ImagePullSecrets are only accessed by the kubelet. More info: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata secrets array Secrets is a list of the secrets in the same namespace that pods running using this ServiceAccount are allowed to use. Pods are only limited to this list if this service account has a "kubernetes.io/enforce-mountable-secrets" annotation set to "true". This field should not be used to find auto-generated service account token secrets for use outside of pods. Instead, tokens can be requested directly using the TokenRequest API, or service account token secrets can be manually created. More info: https://kubernetes.io/docs/concepts/configuration/secret secrets[] object ObjectReference contains enough information to let you inspect or modify the referred object. 10.1.1. .imagePullSecrets Description ImagePullSecrets is a list of references to secrets in the same namespace to use for pulling any images in pods that reference this ServiceAccount. ImagePullSecrets are distinct from Secrets because Secrets can be mounted in the pod, but ImagePullSecrets are only accessed by the kubelet. More info: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod Type array 10.1.2. .imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 10.1.3. .secrets Description Secrets is a list of the secrets in the same namespace that pods running using this ServiceAccount are allowed to use. 
Pods are only limited to this list if this service account has a "kubernetes.io/enforce-mountable-secrets" annotation set to "true". This field should not be used to find auto-generated service account token secrets for use outside of pods. Instead, tokens can be requested directly using the TokenRequest API, or service account token secrets can be manually created. More info: https://kubernetes.io/docs/concepts/configuration/secret Type array 10.1.4. .secrets[] Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 10.2. API endpoints The following API endpoints are available: /api/v1/serviceaccounts GET : list or watch objects of kind ServiceAccount /api/v1/watch/serviceaccounts GET : watch individual changes to a list of ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/serviceaccounts DELETE : delete collection of ServiceAccount GET : list or watch objects of kind ServiceAccount POST : create a ServiceAccount /api/v1/watch/namespaces/{namespace}/serviceaccounts GET : watch individual changes to a list of ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/serviceaccounts/{name} DELETE : delete a ServiceAccount GET : read the specified ServiceAccount PATCH : partially update the specified ServiceAccount PUT : replace the specified ServiceAccount /api/v1/watch/namespaces/{namespace}/serviceaccounts/{name} GET : watch changes to an object of kind ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 10.2.1. /api/v1/serviceaccounts HTTP method GET Description list or watch objects of kind ServiceAccount Table 10.1. HTTP responses HTTP code Response body 200 - OK ServiceAccountList schema 401 - Unauthorized Empty 10.2.2. /api/v1/watch/serviceaccounts HTTP method GET Description watch individual changes to a list of ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead. Table 10.2.
HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 10.2.3. /api/v1/namespaces/{namespace}/serviceaccounts HTTP method DELETE Description delete collection of ServiceAccount Table 10.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 10.4. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ServiceAccount Table 10.5. HTTP responses HTTP code Response body 200 - OK ServiceAccountList schema 401 - Unauthorized Empty HTTP method POST Description create a ServiceAccount Table 10.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.7. Body parameters Parameter Type Description body ServiceAccount schema Table 10.8. HTTP responses HTTP code Response body 200 - OK ServiceAccount schema 201 - Created ServiceAccount schema 202 - Accepted ServiceAccount schema 401 - Unauthorized Empty 10.2.4. /api/v1/watch/namespaces/{namespace}/serviceaccounts HTTP method GET Description watch individual changes to a list of ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead. Table 10.9. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 10.2.5. /api/v1/namespaces/{namespace}/serviceaccounts/{name} Table 10.10. Global path parameters Parameter Type Description name string name of the ServiceAccount HTTP method DELETE Description delete a ServiceAccount Table 10.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 10.12. HTTP responses HTTP code Response body 200 - OK ServiceAccount schema 202 - Accepted ServiceAccount schema 401 - Unauthorized Empty HTTP method GET Description read the specified ServiceAccount Table 10.13.
HTTP responses HTTP code Response body 200 - OK ServiceAccount schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ServiceAccount Table 10.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.15. HTTP responses HTTP code Response body 200 - OK ServiceAccount schema 201 - Created ServiceAccount schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ServiceAccount Table 10.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.17. Body parameters Parameter Type Description body ServiceAccount schema Table 10.18. HTTP responses HTTP code Response body 200 - OK ServiceAccount schema 201 - Created ServiceAccount schema 401 - Unauthorized Empty 10.2.6. /api/v1/watch/namespaces/{namespace}/serviceaccounts/{name} Table 10.19. Global path parameters Parameter Type Description name string name of the ServiceAccount HTTP method GET Description watch changes to an object of kind ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter.
Table 10.20. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/security_apis/serviceaccount-v1
Chapter 5. Postinstallation machine configuration tasks | Chapter 5. Postinstallation machine configuration tasks There are times when you need to make changes to the operating systems running on OpenShift Container Platform nodes. This can include changing settings for network time service, adding kernel arguments, or configuring journaling in a specific way. Aside from a few specialized features, most changes to operating systems on OpenShift Container Platform nodes can be done by creating what are referred to as MachineConfig objects that are managed by the Machine Config Operator. Tasks in this section describe how to use features of the Machine Config Operator to configure operating system features on OpenShift Container Platform nodes. 5.1. About the Machine Config Operator OpenShift Container Platform 4.13 integrates both operating system and cluster management. Because the cluster manages its own updates, including updates to Red Hat Enterprise Linux CoreOS (RHCOS) on cluster nodes, OpenShift Container Platform provides an opinionated lifecycle management experience that simplifies the orchestration of node upgrades. OpenShift Container Platform employs three daemon sets and controllers to simplify node management. These daemon sets orchestrate operating system updates and configuration changes to the hosts by using standard Kubernetes-style constructs. They include: The machine-config-controller , which coordinates machine upgrades from the control plane. It monitors all of the cluster nodes and orchestrates their configuration updates. The machine-config-daemon daemon set, which runs on each node in the cluster and updates a machine to configuration as defined by machine config and as instructed by the MachineConfigController. When the node detects a change, it drains off its pods, applies the update, and reboots. These changes come in the form of Ignition configuration files that apply the specified machine configuration and control kubelet configuration. The update itself is delivered in a container. This process is key to the success of managing OpenShift Container Platform and RHCOS updates together. The machine-config-server daemon set, which provides the Ignition config files to control plane nodes as they join the cluster. The machine configuration is a subset of the Ignition configuration. The machine-config-daemon reads the machine configuration to see if it needs to do an OSTree update or if it must apply a series of systemd kubelet file changes, configuration changes, or other changes to the operating system or OpenShift Container Platform configuration. When you perform node management operations, you create or modify a KubeletConfig custom resource (CR). Important When changes are made to a machine configuration, the Machine Config Operator (MCO) automatically reboots all corresponding nodes in order for the changes to take effect. To prevent the nodes from automatically rebooting after machine configuration changes, before making the changes, you must pause the autoreboot process by setting the spec.paused field to true in the corresponding machine config pool. When paused, machine configuration changes are not applied until you set the spec.paused field to false and the nodes have rebooted into the new configuration. 
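For example, one way to pause and later resume the worker pool is to patch its spec.paused field directly. This is a minimal sketch, assuming the default worker machine config pool name and cluster-admin access; adjust the pool name for custom pools:
$ oc patch machineconfigpool/worker --type merge --patch '{"spec":{"paused":true}}'
$ oc patch machineconfigpool/worker --type merge --patch '{"spec":{"paused":false}}'
While the pool is paused, you can create or edit machine configs and review the resulting rendered configuration; the nodes apply the changes and reboot only after you unpause the pool.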
The following modifications do not trigger a node reboot: When the MCO detects any of the following changes, it applies the update without draining or rebooting the node: Changes to the SSH key in the spec.config.passwd.users.sshAuthorizedKeys parameter of a machine config. Changes to the global pull secret or pull secret in the openshift-config namespace. Automatic rotation of the /etc/kubernetes/kubelet-ca.crt certificate authority (CA) by the Kubernetes API Server Operator. When the MCO detects changes to the /etc/containers/registries.conf file, such as adding or editing an ImageDigestMirrorSet , ImageTagMirrorSet , or ImageContentSourcePolicy object, it drains the corresponding nodes, applies the changes, and uncordons the nodes. The node drain does not happen for the following changes: The addition of a registry with the pull-from-mirror = "digest-only" parameter set for each mirror. The addition of a mirror with the pull-from-mirror = "digest-only" parameter set in a registry. The addition of items to the unqualified-search-registries list. There might be situations where the configuration on a node does not fully match what the currently-applied machine config specifies. This state is called configuration drift . The Machine Config Daemon (MCD) regularly checks the nodes for configuration drift. If the MCD detects configuration drift, the MCO marks the node degraded until an administrator corrects the node configuration. A degraded node is online and operational, but, it cannot be updated. 5.1.1. Machine Config overview The Machine Config Operator (MCO) manages updates to systemd, CRI-O and Kubelet, the kernel, Network Manager and other system features. It also offers a MachineConfig CRD that can write configuration files onto the host (see machine-config-operator ). Understanding what the MCO does and how it interacts with other components is critical to making advanced, system-level changes to an OpenShift Container Platform cluster. Here are some things you should know about the MCO, machine configs, and how they are used: Machine configs are processed alphabetically, in lexicographically increasing order, of their name. The render controller uses the first machine config in the list as the base and appends the rest to the base machine config. A machine config can make a specific change to a file or service on the operating system of each system representing a pool of OpenShift Container Platform nodes. MCO applies changes to operating systems in pools of machines. All OpenShift Container Platform clusters start with worker and control plane node pools. By adding more role labels, you can configure custom pools of nodes. For example, you can set up a custom pool of worker nodes that includes particular hardware features needed by an application. However, examples in this section focus on changes to the default pool types. Important A node can have multiple labels applied that indicate its type, such as master or worker , however it can be a member of only a single machine config pool. After a machine config change, the MCO updates the affected nodes alphabetically by zone, based on the topology.kubernetes.io/zone label. If a zone has more than one node, the oldest nodes are updated first. For nodes that do not use zones, such as in bare metal deployments, the nodes are upgraded by age, with the oldest nodes updated first. The MCO updates the number of nodes as specified by the maxUnavailable field on the machine configuration pool at a time. 
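For example, to let the MCO update two nodes in the worker pool in parallel instead of the default of one, you could raise maxUnavailable on that pool. This is a sketch, assuming the default worker pool and that your workloads tolerate two nodes being drained at the same time:
$ oc patch machineconfigpool/worker --type merge --patch '{"spec":{"maxUnavailable":2}}'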
Some machine configuration must be in place before OpenShift Container Platform is installed to disk. In most cases, this can be accomplished by creating a machine config that is injected directly into the OpenShift Container Platform installer process, instead of running as a postinstallation machine config. In other cases, you might need to do bare metal installation where you pass kernel arguments at OpenShift Container Platform installer startup, to do such things as setting per-node individual IP addresses or advanced disk partitioning. MCO manages items that are set in machine configs. Manual changes you do to your systems will not be overwritten by MCO, unless MCO is explicitly told to manage a conflicting file. In other words, MCO only makes specific updates you request, it does not claim control over the whole node. Manual changes to nodes are strongly discouraged. If you need to decommission a node and start a new one, those direct changes would be lost. MCO is only supported for writing to files in /etc and /var directories, although there are symbolic links to some directories that can be writeable by being symbolically linked to one of those areas. The /opt and /usr/local directories are examples. Ignition is the configuration format used in MachineConfigs. See the Ignition Configuration Specification v3.2.0 for details. Although Ignition config settings can be delivered directly at OpenShift Container Platform installation time, and are formatted in the same way that MCO delivers Ignition configs, MCO has no way of seeing what those original Ignition configs are. Therefore, you should wrap Ignition config settings into a machine config before deploying them. When a file managed by MCO changes outside of MCO, the Machine Config Daemon (MCD) sets the node as degraded . It will not overwrite the offending file, however, and should continue to operate in a degraded state. A key reason for using a machine config is that it will be applied when you spin up new nodes for a pool in your OpenShift Container Platform cluster. The machine-api-operator provisions a new machine and MCO configures it. MCO uses Ignition as the configuration format. OpenShift Container Platform 4.6 moved from Ignition config specification version 2 to version 3. 5.1.1.1. What can you change with machine configs? The kinds of components that MCO can change include: config : Create Ignition config objects (see the Ignition configuration specification ) to do things like modify files, systemd services, and other features on OpenShift Container Platform machines, including: Configuration files : Create or overwrite files in the /var or /etc directory. systemd units : Create and set the status of a systemd service or add to an existing systemd service by dropping in additional settings. users and groups : Change SSH keys in the passwd section postinstallation. Important Changing SSH keys by using a machine config is supported only for the core user. Adding new users by using a machine config is not supported. kernelArguments : Add arguments to the kernel command line when OpenShift Container Platform nodes boot. kernelType : Optionally identify a non-standard kernel to use instead of the standard kernel. Use realtime to use the RT kernel (for RAN). This is only supported on select platforms. extensions : Extend RHCOS features by adding selected pre-packaged software. For this feature, available extensions include usbguard and kernel modules. 
Custom resources (for ContainerRuntime and Kubelet ) : Outside of machine configs, MCO manages two special custom resources for modifying CRI-O container runtime settings ( ContainerRuntime CR) and the Kubelet service ( Kubelet CR). The MCO is not the only Operator that can change operating system components on OpenShift Container Platform nodes. Other Operators can modify operating system-level features as well. One example is the Node Tuning Operator, which allows you to do node-level tuning through Tuned daemon profiles. Tasks for the MCO configuration that can be done postinstallation are included in the following procedures. See descriptions of RHCOS bare metal installation for system configuration tasks that must be done during or before OpenShift Container Platform installation. There might be situations where the configuration on a node does not fully match what the currently-applied machine config specifies. This state is called configuration drift . The Machine Config Daemon (MCD) regularly checks the nodes for configuration drift. If the MCD detects configuration drift, the MCO marks the node degraded until an administrator corrects the node configuration. A degraded node is online and operational, but, it cannot be updated. For more information on configuration drift, see Understanding configuration drift detection . 5.1.1.2. Project See the openshift-machine-config-operator GitHub site for details. 5.1.2. Understanding the Machine Config Operator node drain behavior When you use a machine config to change a system feature, such as adding new config files, modifying systemd units or kernel arguments, or updating SSH keys, the Machine Config Operator (MCO) applies those changes and ensures that each node is in the desired configuration state. After you make the changes, the MCO generates a new rendered machine config. In the majority of cases, when applying the new rendered machine config, the Operator performs the following steps on each affected node until all of the affected nodes have the updated configuration: Cordon. The MCO marks the node as not schedulable for additional workloads. Drain. The MCO terminates all running workloads on the node, causing the workloads to be rescheduled onto other nodes. Apply. The MCO writes the new configuration to the nodes as needed. Reboot. The MCO restarts the node. Uncordon. The MCO marks the node as schedulable for workloads. Throughout this process, the MCO maintains the required number of pods based on the MaxUnavailable value set in the machine config pool. If the MCO drains pods on the master node, note the following conditions: In single-node OpenShift clusters, the MCO skips the drain operation. The MCO does not drain static pods in order to prevent interference with services, such as etcd. Note In certain cases the nodes are not drained. For more information, see "About the Machine Config Operator." You can mitigate the disruption caused by drain and reboot cycles by disabling control plane reboots. For more information, see "Disabling the Machine Config Operator from automatically rebooting." Additional resources About the Machine Config Operator Disabling the Machine Config Operator from automatically rebooting 5.1.3. Understanding configuration drift detection There might be situations when the on-disk state of a node differs from what is configured in the machine config. This is known as configuration drift . 
For example, a cluster admin might manually modify a file, a systemd unit file, or a file permission that was configured through a machine config. This causes configuration drift. Configuration drift can cause problems between nodes in a Machine Config Pool or when the machine configs are updated. The Machine Config Operator (MCO) uses the Machine Config Daemon (MCD) to check nodes for configuration drift on a regular basis. If detected, the MCO sets the node and the machine config pool (MCP) to Degraded and reports the error. A degraded node is online and operational, but, it cannot be updated. The MCD performs configuration drift detection upon each of the following conditions: When a node boots. After any of the files (Ignition files and systemd drop-in units) specified in the machine config are modified outside of the machine config. Before a new machine config is applied. Note If you apply a new machine config to the nodes, the MCD temporarily shuts down configuration drift detection. This shutdown is needed because the new machine config necessarily differs from the machine config on the nodes. After the new machine config is applied, the MCD restarts detecting configuration drift using the new machine config. When performing configuration drift detection, the MCD validates that the file contents and permissions fully match what the currently-applied machine config specifies. Typically, the MCD detects configuration drift in less than a second after the detection is triggered. If the MCD detects configuration drift, the MCD performs the following tasks: Emits an error to the console logs Emits a Kubernetes event Stops further detection on the node Sets the node and MCP to degraded You can check if you have a degraded node by listing the MCPs: USD oc get mcp worker If you have a degraded MCP, the DEGRADEDMACHINECOUNT field is non-zero, similar to the following output: Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-404caf3180818d8ac1f50c32f14b57c3 False True True 2 1 1 1 5h51m You can determine if the problem is caused by configuration drift by examining the machine config pool: USD oc describe mcp worker Example output ... Last Transition Time: 2021-12-20T18:54:00Z Message: Node ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4 is reporting: "content mismatch for file \"/etc/mco-test-file\"" 1 Reason: 1 nodes are reporting degraded status on sync Status: True Type: NodeDegraded 2 ... 1 This message shows that a node's /etc/mco-test-file file, which was added by the machine config, has changed outside of the machine config. 2 The state of the node is NodeDegraded . Or, if you know which node is degraded, examine that node: USD oc describe node/ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4 Example output ... 
Annotations: cloud.network.openshift.io/egress-ipconfig: [{"interface":"nic0","ifaddr":{"ipv4":"10.0.128.0/17"},"capacity":{"ip":10}}] csi.volume.kubernetes.io/nodeid: {"pd.csi.storage.gke.io":"projects/openshift-gce-devel-ci/zones/us-central1-a/instances/ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4"} machine.openshift.io/machine: openshift-machine-api/ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4 machineconfiguration.openshift.io/controlPlaneTopology: HighlyAvailable machineconfiguration.openshift.io/currentConfig: rendered-worker-67bd55d0b02b0f659aef33680693a9f9 machineconfiguration.openshift.io/desiredConfig: rendered-worker-67bd55d0b02b0f659aef33680693a9f9 machineconfiguration.openshift.io/reason: content mismatch for file "/etc/mco-test-file" 1 machineconfiguration.openshift.io/state: Degraded 2 ... 1 The error message indicating that configuration drift was detected between the node and the listed machine config. Here the error message indicates that the contents of the /etc/mco-test-file , which was added by the machine config, has changed outside of the machine config. 2 The state of the node is Degraded . You can correct configuration drift and return the node to the Ready state by performing one of the following remediations: Ensure that the contents and file permissions of the files on the node match what is configured in the machine config. You can manually rewrite the file contents or change the file permissions. Generate a force file on the degraded node. The force file causes the MCD to bypass the usual configuration drift detection and reapplies the current machine config. Note Generating a force file on a node causes that node to reboot. Additional resources Disabling Machine Config Operator from automatically rebooting . 5.1.4. Checking machine config pool status To see the status of the Machine Config Operator (MCO), its sub-components, and the resources it manages, use the following oc commands: Procedure To see the number of MCO-managed nodes available on your cluster for each machine config pool (MCP), run the following command: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4... True False False 3 3 3 0 4h42m worker rendered-worker-f4b64... False True False 3 2 2 0 4h42m where: UPDATED The True status indicates that the MCO has applied the current machine config to the nodes in that MCP. The current machine config is specified in the STATUS field in the oc get mcp output. The False status indicates a node in the MCP is updating. UPDATING The True status indicates that the MCO is applying the desired machine config, as specified in the MachineConfigPool custom resource, to at least one of the nodes in that MCP. The desired machine config is the new, edited machine config. Nodes that are updating might not be available for scheduling. The False status indicates that all nodes in the MCP are updated. DEGRADED A True status indicates the MCO is blocked from applying the current or desired machine config to at least one of the nodes in that MCP, or the configuration is failing. Nodes that are degraded might not be available for scheduling. A False status indicates that all nodes in the MCP are ready. MACHINECOUNT Indicates the total number of machines in that MCP. READYMACHINECOUNT Indicates the total number of machines in that MCP that are ready for scheduling. 
UPDATEDMACHINECOUNT Indicates the total number of machines in that MCP that have the current machine config. DEGRADEDMACHINECOUNT Indicates the total number of machines in that MCP that are marked as degraded or unreconcilable. In the output, there are three control plane (master) nodes and three worker nodes. The control plane MCP and the associated nodes are updated to the current machine config. The nodes in the worker MCP are being updated to the desired machine config. Two of the nodes in the worker MCP are updated and one is still updating, as indicated by the UPDATEDMACHINECOUNT being 2 . There are no issues, as indicated by the DEGRADEDMACHINECOUNT being 0 and DEGRADED being False . While the nodes in the MCP are updating, the machine config listed under CONFIG is the current machine config, which the MCP is being updated from. When the update is complete, the listed machine config is the desired machine config, which the MCP was updated to. Note If a node is being cordoned, that node is not included in the READYMACHINECOUNT , but is included in the MACHINECOUNT . Also, the MCP status is set to UPDATING . Because the node has the current machine config, it is counted in the UPDATEDMACHINECOUNT total: Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4... True False False 3 3 3 0 4h42m worker rendered-worker-c1b41a... False True False 3 2 3 0 4h42m To check the status of the nodes in an MCP by examining the MachineConfigPool custom resource, run the following command: : USD oc describe mcp worker Example output ... Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 2 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3 Events: <none> Note If a node is being cordoned, the node is not included in the Ready Machine Count . It is included in the Unavailable Machine Count : Example output ... Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 2 Ready Machine Count: 2 Unavailable Machine Count: 1 Updated Machine Count: 3 To see each existing MachineConfig object, run the following command: USD oc get machineconfigs Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 00-worker 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 01-master-container-runtime 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 01-master-kubelet 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m ... rendered-master-dde... 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m rendered-worker-fde... 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m Note that the MachineConfig objects listed as rendered are not meant to be changed or deleted. To view the contents of a particular machine config (in this case, 01-master-kubelet ), run the following command: USD oc describe machineconfigs 01-master-kubelet The output from the command shows that this MachineConfig object contains both configuration files ( cloud.conf and kubelet.conf ) and a systemd service (Kubernetes Kubelet): Example output Name: 01-master-kubelet ... Spec: Config: Ignition: Version: 3.2.0 Storage: Files: Contents: Source: data:, Mode: 420 Overwrite: true Path: /etc/kubernetes/cloud.conf Contents: Source: data:,kind%3A%20KubeletConfiguration%0AapiVersion%3A%20kubelet.config.k8s.io%2Fv1beta1%0Aauthentication%3A%0A%20%20x509%3A%0A%20%20%20%20clientCAFile%3A%20%2Fetc%2Fkubernetes%2Fkubelet-ca.crt%0A%20%20anonymous... 
Mode: 420 Overwrite: true Path: /etc/kubernetes/kubelet.conf Systemd: Units: Contents: [Unit] Description=Kubernetes Kubelet Wants=rpc-statd.service network-online.target crio.service After=network-online.target crio.service ExecStart=/usr/bin/hyperkube \ kubelet \ --config=/etc/kubernetes/kubelet.conf \ ... If something goes wrong with a machine config that you apply, you can always back out that change. For example, if you had run oc create -f ./myconfig.yaml to apply a machine config, you could remove that machine config by running the following command: USD oc delete -f ./myconfig.yaml If that was the only problem, the nodes in the affected pool should return to a non-degraded state. This actually causes the rendered configuration to roll back to its previously rendered state. If you add your own machine configs to your cluster, you can use the commands shown in the example to check their status and the related status of the pool to which they are applied. 5.2. Using MachineConfig objects to configure nodes You can use the tasks in this section to create MachineConfig objects that modify files, systemd unit files, and other operating system features running on OpenShift Container Platform nodes. For more ideas on working with machine configs, see content related to updating SSH authorized keys, verifying image signatures , enabling SCTP , and configuring iSCSI initiatornames for OpenShift Container Platform. OpenShift Container Platform supports Ignition specification version 3.2 . All new machine configs you create going forward should be based on Ignition specification version 3.2. If you are upgrading your OpenShift Container Platform cluster, any existing Ignition specification version 2.x machine configs will be translated automatically to specification version 3.2. There might be situations where the configuration on a node does not fully match what the currently-applied machine config specifies. This state is called configuration drift . The Machine Config Daemon (MCD) regularly checks the nodes for configuration drift. If the MCD detects configuration drift, the MCO marks the node degraded until an administrator corrects the node configuration. A degraded node is online and operational, but, it cannot be updated. For more information on configuration drift, see Understanding configuration drift detection . Tip Use the following "Configuring chrony time service" procedure as a model for how to go about adding other configuration files to OpenShift Container Platform nodes. 5.2.1. Configuring chrony time service You can set the time server and related settings used by the chrony time service ( chronyd ) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config. Procedure Create a Butane config including the contents of the chrony.conf file. For example, to configure chrony on worker nodes, create a 99-worker-chrony.bu file. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.13.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony 1 2 On control plane nodes, substitute master for worker in both of these locations. 3 Specify an octal value mode for the mode field in the machine config file. 
After creating the file and applying the changes, the mode is converted to a decimal value. You can check the YAML file with the command oc get mc <mc-name> -o yaml . 4 Specify any valid, reachable time source, such as the one provided by your DHCP server. Alternately, you can specify any of the following NTP servers: 1.rhel.pool.ntp.org , 2.rhel.pool.ntp.org , or 3.rhel.pool.ntp.org . Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-chrony.bu -o 99-worker-chrony.yaml Apply the configurations in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. If the cluster is already running, apply the file: USD oc apply -f ./99-worker-chrony.yaml Additional resources Creating machine configs with Butane 5.2.2. Disabling the chrony time service You can disable the chrony time service ( chronyd ) for nodes with a specific role by using a MachineConfig custom resource (CR). Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create the MachineConfig CR that disables chronyd for the specified node role. Save the following YAML in the disable-chronyd.yaml file: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: <node_role> 1 name: disable-chronyd spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=NTP client/server Documentation=man:chronyd(8) man:chrony.conf(5) After=ntpdate.service sntp.service ntpd.service Conflicts=ntpd.service systemd-timesyncd.service ConditionCapability=CAP_SYS_TIME [Service] Type=forking PIDFile=/run/chrony/chronyd.pid EnvironmentFile=-/etc/sysconfig/chronyd ExecStart=/usr/sbin/chronyd USDOPTIONS ExecStartPost=/usr/libexec/chrony-helper update-daemon PrivateTmp=yes ProtectHome=yes ProtectSystem=full [Install] WantedBy=multi-user.target enabled: false name: "chronyd.service" 1 Node role where you want to disable chronyd , for example, master . Create the MachineConfig CR by running the following command: USD oc create -f disable-chronyd.yaml 5.2.3. Adding kernel arguments to nodes In some special cases, you might want to add kernel arguments to a set of nodes in your cluster. This should only be done with caution and clear understanding of the implications of the arguments you set. Warning Improper use of kernel arguments can result in your systems becoming unbootable. Examples of kernel arguments you could set include: nosmt : Disables symmetric multithreading (SMT) in the kernel. Multithreading allows multiple logical threads for each CPU. You could consider nosmt in multi-tenant environments to reduce risks from potential cross-thread attacks. By disabling SMT, you essentially choose security over performance. systemd.unified_cgroup_hierarchy : Enables Linux control group version 2 (cgroup v2). cgroup v2 is the version of the kernel control group and offers multiple improvements. enforcing=0 : Configures Security Enhanced Linux (SELinux) to run in permissive mode. In permissive mode, the system acts as if SELinux is enforcing the loaded security policy, including labeling objects and emitting access denial entries in the logs, but it does not actually deny any operations. 
While not supported for production systems, permissive mode can be helpful for debugging. Warning Disabling SELinux on RHCOS in production is not supported. Once SELinux has been disabled on a node, it must be re-provisioned before re-inclusion in a production cluster. See Kernel.org kernel parameters for a list and descriptions of kernel arguments. In the following procedure, you create a MachineConfig object that identifies: A set of machines to which you want to add the kernel argument. In this case, machines with a worker role. Kernel arguments that are appended to the end of the existing kernel arguments. A label that indicates where in the list of machine configs the change is applied. Prerequisites Have administrative privilege to a working OpenShift Container Platform cluster. Procedure List existing MachineConfig objects for your OpenShift Container Platform cluster to determine how to label your machine config: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Create a MachineConfig object file that identifies the kernel argument (for example, 05-worker-kernelarg-selinuxpermissive.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: kernelArguments: - enforcing=0 3 1 Applies the new kernel argument only to worker nodes. 2 Named to identify where it fits among the machine configs (05) and what it does (adds a kernel argument to configure SELinux permissive mode). 3 Identifies the exact kernel argument as enforcing=0 . 
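The kernelArguments field is a list, so a single machine config can append several arguments at once if you need more than one. The following fragment is only an illustrative sketch, not part of this procedure; it assumes you also want to disable symmetric multithreading:
spec:
  kernelArguments:
    - enforcing=0
    - nosmt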
Create the new machine config: USD oc create -f 05-worker-kernelarg-selinuxpermissive.yaml Check the machine configs to see that the new one was added: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Check the nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.26.0 ip-10-0-136-243.ec2.internal Ready master 34m v1.26.0 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.26.0 ip-10-0-142-249.ec2.internal Ready master 34m v1.26.0 ip-10-0-153-11.ec2.internal Ready worker 28m v1.26.0 ip-10-0-153-150.ec2.internal Ready master 34m v1.26.0 You can see that scheduling on each worker node is disabled as the change is being applied. Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16... coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit You should see the enforcing=0 argument added to the other kernel arguments. 5.2.4. Enabling multipathing with kernel arguments on RHCOS Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. Postinstallation support is available by activating multipathing via the machine config. Important Enabling multipathing during installation is supported and recommended for nodes provisioned in OpenShift Container Platform. In setups where any I/O to non-optimized paths results in I/O system errors, you must enable multipathing at installation time. For more information about enabling multipathing during installation time, see "Enabling multipathing post installation" in the Installing on bare metal documentation. Important On IBM Z and IBM(R) LinuxONE, you can enable multipathing only if you configured your cluster for it during installation. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on IBM Z and IBM(R) LinuxONE . 
Important When an OpenShift Container Platform cluster is installed or configured as a postinstallation activity on a single VIOS host with "vSCSI" storage on IBM Power(R) with multipath configured, the CoreOS nodes with multipath enabled fail to boot. This behavior is expected, as only one path is available to the node. Prerequisites You have a running OpenShift Container Platform cluster. You are logged in to the cluster as a user with administrative privileges. You have confirmed that the disk is enabled for multipathing. Multipathing is only supported on hosts that are connected to a SAN via an HBA adapter. Procedure To enable multipathing postinstallation on control plane nodes: Create a machine config file, such as 99-master-kargs-mpath.yaml , that instructs the cluster to add the master label and that identifies the multipath kernel argument, for example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "master" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root' To enable multipathing postinstallation on worker nodes: Create a machine config file, such as 99-worker-kargs-mpath.yaml , that instructs the cluster to add the worker label and that identifies the multipath kernel argument, for example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "worker" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root' Create the new machine config by using either the master or worker YAML file you previously created: USD oc create -f ./99-worker-kargs-mpath.yaml Check the machine configs to see that the new one was added: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-kargs-mpath 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 105s 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Check the nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.26.0 ip-10-0-136-243.ec2.internal Ready master 34m v1.26.0 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.26.0 ip-10-0-142-249.ec2.internal Ready master 34m v1.26.0 ip-10-0-153-11.ec2.internal Ready worker 28m v1.26.0 ip-10-0-153-150.ec2.internal Ready master 34m v1.26.0 You can see that scheduling on each worker node is disabled as the change is being applied. 
Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline ... rd.multipath=default root=/dev/disk/by-label/dm-mpath-root ... sh-4.2# exit You should see the added kernel arguments. Additional resources See Enabling multipathing with kernel arguments on RHCOS for more information about enabling multipathing during installation time. 5.2.5. Adding a real-time kernel to nodes Some OpenShift Container Platform workloads require a high degree of determinism. While Linux is not a real-time operating system, the Linux real-time kernel includes a preemptive scheduler that provides the operating system with real-time characteristics. If your OpenShift Container Platform workloads require these real-time characteristics, you can switch your machines to the Linux real-time kernel. For OpenShift Container Platform 4.13, you can make this switch using a MachineConfig object. Although making the change is as simple as changing a machine config kernelType setting to realtime , there are a few other considerations before making the change: Currently, the real-time kernel is supported only on worker nodes, and only for radio access network (RAN) use. The following procedure is fully supported with bare metal installations that use systems that are certified for Red Hat Enterprise Linux for Real Time 8. Real-time support in OpenShift Container Platform is limited to specific subscriptions. The following procedure is also supported for use with Google Cloud Platform. Prerequisites Have a running OpenShift Container Platform cluster (version 4.4 or later). Log in to the cluster as a user with administrative privileges. Procedure Create a machine config for the real-time kernel: Create a YAML file (for example, 99-worker-realtime.yaml ) that contains a MachineConfig object for the realtime kernel type. This example tells the cluster to use a real-time kernel for all worker nodes: USD cat << EOF > 99-worker-realtime.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "worker" name: 99-worker-realtime spec: kernelType: realtime EOF Add the machine config to the cluster. Type the following to add the machine config to the cluster: USD oc create -f 99-worker-realtime.yaml Check the real-time kernel: Once each impacted node reboots, log in to the cluster and run the following commands to make sure that the real-time kernel has replaced the regular kernel for the set of nodes you configured: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.26.0 ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.26.0 ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.26.0 USD oc debug node/ip-10-0-143-147.us-east-2.compute.internal Example output Starting pod/ip-10-0-143-147us-east-2computeinternal-debug ... To use host binaries, run `chroot /host` sh-4.4# uname -a Linux <worker_node> 4.18.0-147.3.1.rt24.96.el8_1.x86_64 #1 SMP PREEMPT RT Wed Nov 27 18:29:55 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux The kernel name contains rt and the text "PREEMPT RT" indicates that this is a real-time kernel.
To go back to the regular kernel, delete the MachineConfig object: USD oc delete -f 99-worker-realtime.yaml 5.2.6. Configuring journald settings If you need to configure settings for the journald service on OpenShift Container Platform nodes, you can do that by modifying the appropriate configuration file and passing the file to the appropriate pool of nodes as a machine config. This procedure describes how to modify journald rate limiting settings in the /etc/systemd/journald.conf file and apply them to worker nodes. See the journald.conf man page for information on how to use that file. Prerequisites Have a running OpenShift Container Platform cluster. Log in to the cluster as a user with administrative privileges. Procedure Create a Butane config file, 40-worker-custom-journald.bu , that includes an /etc/systemd/journald.conf file with the required settings. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.13.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/systemd/journald.conf mode: 0644 overwrite: true contents: inline: | # Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s Use Butane to generate a MachineConfig object file, 40-worker-custom-journald.yaml , containing the configuration to be delivered to the worker nodes: USD butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml Apply the machine config to the pool: USD oc apply -f 40-worker-custom-journald.yaml Check that the new machine config is applied and that the nodes are not in a degraded state. It might take a few minutes. The worker pool will show the updates in progress, as each node successfully has the new machine config applied: USD oc get machineconfigpool NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m To check that the change was applied, you can log in to a worker node: USD oc get node | grep worker ip-10-0-0-1.us-east-2.compute.internal Ready worker 39m v0.0.0-master+USDFormat:%hUSD USD oc debug node/ip-10-0-0-1.us-east-2.compute.internal Starting pod/ip-10-0-141-142us-east-2computeinternal-debug ... ... sh-4.2# chroot /host sh-4.4# cat /etc/systemd/journald.conf # Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s sh-4.4# exit Additional resources Creating machine configs with Butane 5.2.7. Adding extensions to RHCOS RHCOS is a minimal container-oriented RHEL operating system, designed to provide a common set of capabilities to OpenShift Container Platform clusters across all platforms. While adding software packages to RHCOS systems is generally discouraged, the MCO provides an extensions feature you can use to add a minimal set of features to RHCOS nodes. Currently, the following extensions are available: usbguard : Adding the usbguard extension protects RHCOS systems from attacks from intrusive USB devices. See USBGuard for details. kerberos : Adding the kerberos extension provides a mechanism that allows both users and machines to identify themselves to the network to receive defined, limited access to the areas and services that an administrator has configured. 
See Using Kerberos for details, including how to set up a Kerberos client and mount a Kerberized NFS share. The following procedure describes how to use a machine config to add one or more extensions to your RHCOS nodes. Prerequisites Have a running OpenShift Container Platform cluster (version 4.6 or later). Log in to the cluster as a user with administrative privileges. Procedure Create a machine config for extensions: Create a YAML file (for example, 80-extensions.yaml ) that contains a MachineConfig extensions object. This example tells the cluster to add the usbguard extension. USD cat << EOF > 80-extensions.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 80-worker-extensions spec: config: ignition: version: 3.2.0 extensions: - usbguard EOF Add the machine config to the cluster. Type the following to add the machine config to the cluster: USD oc create -f 80-extensions.yaml This sets all worker nodes to have rpm packages for usbguard installed. Check that the extensions were applied: USD oc get machineconfig 80-worker-extensions Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 80-worker-extensions 3.2.0 57s Check that the new machine config is now applied and that the nodes are not in a degraded state. It may take a few minutes. The worker pool will show the updates in progress, as each machine successfully has the new machine config applied: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m Check the extensions. To check that the extension was applied, run: USD oc get node | grep worker Example output NAME STATUS ROLES AGE VERSION ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.26.0 USD oc debug node/ip-10-0-169-2.us-east-2.compute.internal Example output ... To use host binaries, run `chroot /host` sh-4.4# chroot /host sh-4.4# rpm -q usbguard usbguard-0.7.4-4.el8.x86_64.rpm 5.2.8. Loading custom firmware blobs in the machine config manifest Because the default location for firmware blobs in /usr/lib is read-only, you can locate a custom firmware blob by updating the search path. This enables you to load local firmware blobs in the machine config manifest when the blobs are not managed by RHCOS. Procedure Create a Butane config file, 98-worker-firmware-blob.bu , that updates the search path so that it is root-owned and writable to local storage. The following example places the custom blob file from your local workstation onto nodes under /var/lib/firmware . Note See "Creating machine configs with Butane" for information about Butane. Butane config file for custom firmware blob variant: openshift version: 4.13.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-worker-firmware-blob storage: files: - path: /var/lib/firmware/<package_name> 1 contents: local: <package_name> 2 mode: 0644 3 openshift: kernel_arguments: - 'firmware_class.path=/var/lib/firmware' 4 1 Sets the path on the node where the firmware package is copied to. 2 Specifies a file with contents that are read from a local file directory on the system running Butane. The path of the local file is relative to a files-dir directory, which must be specified by using the --files-dir option with Butane in the following step. 
3 Sets the permissions for the file on the RHCOS node. It is recommended to set 0644 permissions. 4 The firmware_class.path parameter customizes the kernel search path of where to look for the custom firmware blob that was copied from your local workstation onto the root file system of the node. This example uses /var/lib/firmware as the customized path. Run Butane to generate a MachineConfig object file that uses a copy of the firmware blob on your local workstation named 98-worker-firmware-blob.yaml . The firmware blob contains the configuration to be delivered to the nodes. The following example uses the --files-dir option to specify the directory on your workstation where the local file or files are located: USD butane 98-worker-firmware-blob.bu -o 98-worker-firmware-blob.yaml --files-dir <directory_including_package_name> Apply the configurations to the nodes in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. If the cluster is already running, apply the file: USD oc apply -f 98-worker-firmware-blob.yaml A MachineConfig object YAML file is created for you to finish configuring your machines. Save the Butane config in case you need to update the MachineConfig object in the future. Additional resources Creating machine configs with Butane 5.3. Configuring MCO-related custom resources Besides managing MachineConfig objects, the MCO manages two custom resources (CRs): KubeletConfig and ContainerRuntimeConfig . Those CRs let you change node-level settings impacting how the Kubelet and CRI-O container runtime services behave. 5.3.1. Creating a KubeletConfig CRD to edit kubelet parameters The kubelet configuration is currently serialized as an Ignition configuration, so it can be directly edited. However, there is also a new kubelet-config-controller added to the Machine Config Controller (MCC). This lets you use a KubeletConfig custom resource (CR) to edit the kubelet parameters. Note As the fields in the kubeletConfig object are passed directly to the kubelet from upstream Kubernetes, the kubelet validates those values directly. Invalid values in the kubeletConfig object might cause cluster nodes to become unavailable. For valid values, see the Kubernetes documentation . Consider the following guidance: Edit an existing KubeletConfig CR to modify existing settings or add new settings, instead of creating a CR for each change. It is recommended that you create a CR only to modify a different machine config pool, or for changes that are intended to be temporary, so that you can revert the changes. Create one KubeletConfig CR for each machine config pool with all the config changes you want for that pool. As needed, create multiple KubeletConfig CRs with a limit of 10 per cluster. For the first KubeletConfig CR, the Machine Config Operator (MCO) creates a machine config appended with kubelet . With each subsequent CR, the controller creates another kubelet machine config with a numeric suffix. For example, if you have a kubelet machine config with a -2 suffix, the kubelet machine config is appended with -3 . Note If you are applying a kubelet or container runtime config to a custom machine config pool, the custom role in the machineConfigSelector must match the name of the custom machine config pool. 
For example, because the following custom machine config pool is named infra , the custom role must also be infra : apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} # ... If you want to delete the machine configs, delete them in reverse order to avoid exceeding the limit. For example, you delete the kubelet-3 machine config before deleting the kubelet-2 machine config. Note If you have a machine config with a kubelet-9 suffix, and you create another KubeletConfig CR, a new machine config is not created, even if there are fewer than 10 kubelet machine configs. Example KubeletConfig CR USD oc get kubeletconfig NAME AGE set-kubelet-config 15m Example showing a KubeletConfig machine config USD oc get mc | grep kubelet ... 99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m ... The following procedure is an example to show how to configure the maximum number of pods per node, the maximum PIDs per node, and the maximum container log size size on the worker nodes. Prerequisites Obtain the label associated with the static MachineConfigPool CR for the type of node you want to configure. Perform one of the following steps: View the machine config pool: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-kubelet-config 1 1 If a label has been added it appears under labels . If the label is not present, add a key/value pair: USD oc label machineconfigpool worker custom-kubelet=set-kubelet-config Procedure View the available machine configuration objects that you can select: USD oc get machineconfig By default, the two kubelet-related configs are 01-master-kubelet and 01-worker-kubelet . Check the current value for the maximum pods per node: USD oc describe node <node_name> For example: USD oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94 Look for value: pods: <value> in the Allocatable stanza: Example output Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250 Configure the worker nodes as needed: Create a YAML file similar to the following that contains the kubelet configuration: Important Kubelet configurations that target a specific machine config pool also affect any dependent pools. For example, creating a kubelet configuration for the pool containing worker nodes will also apply to any subset pools, including the pool containing infrastructure nodes. To avoid this, you must create a new machine config pool with a selection expression that only includes worker nodes, and have your kubelet configuration target this new pool. apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config 1 kubeletConfig: 2 podPidsLimit: 8192 containerLogMaxSize: 50Mi maxPods: 500 1 Enter the label from the machine config pool. 2 Add the kubelet configuration. For example: Use podPidsLimit to set the maximum number of PIDs in any pod. Use containerLogMaxSize to set the maximum size of the container log file before it is rotated. Use maxPods to set the maximum pods per node. 
Note The rate at which the kubelet talks to the API server depends on queries per second (QPS) and burst values. The default values, 50 for kubeAPIQPS and 100 for kubeAPIBurst , are sufficient if there are limited pods running on each node. It is recommended to update the kubelet QPS and burst rates if there are enough CPU and memory resources on the node. apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS> Update the machine config pool for workers with the label: USD oc label machineconfigpool worker custom-kubelet=set-kubelet-config Create the KubeletConfig object: USD oc create -f change-maxPods-cr.yaml Verification Verify that the KubeletConfig object is created: USD oc get kubeletconfig Example output NAME AGE set-kubelet-config 15m Depending on the number of worker nodes in the cluster, wait for the worker nodes to be rebooted one by one. For a cluster with 3 worker nodes, this could take about 10 to 15 minutes. Verify that the changes are applied to the node: Check on a worker node that the maxPods value changed: USD oc describe node <node_name> Locate the Allocatable stanza: ... Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1 ... 1 In this example, the pods parameter should report the value you set in the KubeletConfig object. Verify the change in the KubeletConfig object: USD oc get kubeletconfigs set-kubelet-config -o yaml This should show a status of True and type:Success , as shown in the following example: spec: kubeletConfig: containerLogMaxSize: 50Mi maxPods: 500 podPidsLimit: 8192 machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config status: conditions: - lastTransitionTime: "2021-06-30T17:04:07Z" message: Success status: "True" type: Success 5.3.2. Creating a ContainerRuntimeConfig CR to edit CRI-O parameters You can change some of the settings associated with the OpenShift Container Platform CRI-O runtime for the nodes associated with a specific machine config pool (MCP). Using a ContainerRuntimeConfig custom resource (CR), you set the configuration values and add a label to match the MCP. The MCO then rebuilds the crio.conf and storage.conf configuration files on the associated nodes with the updated values. Note To revert the changes implemented by using a ContainerRuntimeConfig CR, you must delete the CR. Removing the label from the machine config pool does not revert the changes. You can modify the following settings by using a ContainerRuntimeConfig CR: Log level : The logLevel parameter sets the CRI-O log_level parameter, which is the level of verbosity for log messages. The default is info ( log_level = info ). Other options include fatal , panic , error , warn , debug , and trace . Overlay size : The overlaySize parameter sets the CRI-O Overlay storage driver size parameter, which is the maximum size of a container image. Container runtime : The defaultRuntime parameter sets the container runtime to either runc or crun . The default is runc . You should have one ContainerRuntimeConfig CR for each machine config pool with all the config changes you want for that pool. If you are applying the same content to all the pools, you only need one ContainerRuntimeConfig CR for all the pools. 
You should edit an existing ContainerRuntimeConfig CR to modify existing settings or add new settings instead of creating a new CR for each change. It is recommended to create a new ContainerRuntimeConfig CR only to modify a different machine config pool, or for changes that are intended to be temporary so that you can revert the changes. You can create multiple ContainerRuntimeConfig CRs, as needed, with a limit of 10 per cluster. For the first ContainerRuntimeConfig CR, the MCO creates a machine config appended with containerruntime . With each subsequent CR, the controller creates a new containerruntime machine config with a numeric suffix. For example, if you have a containerruntime machine config with a -2 suffix, the containerruntime machine config is appended with -3 . If you want to delete the machine configs, you should delete them in reverse order to avoid exceeding the limit. For example, you should delete the containerruntime-3 machine config before deleting the containerruntime-2 machine config. Note If you have a machine config with a containerruntime-9 suffix, and you create another ContainerRuntimeConfig CR, a new machine config is not created, even if there are fewer than 10 containerruntime machine configs. Example showing multiple ContainerRuntimeConfig CRs USD oc get ctrcfg Example output NAME AGE ctr-overlay 15m ctr-level 5m45s Example showing multiple containerruntime machine configs USD oc get mc | grep container Example output ... 01-master-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m ... 01-worker-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m ... 99-worker-generated-containerruntime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m 99-worker-generated-containerruntime-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 17m 99-worker-generated-containerruntime-2 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 7m26s ... The following example sets the log_level field to debug and sets the overlay size to 8 GB: Example ContainerRuntimeConfig CR apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' 1 containerRuntimeConfig: logLevel: debug 2 overlaySize: 8G 3 defaultRuntime: "crun" 4 1 Specifies the machine config pool label. For a container runtime config, the role must match the name of the associated machine config pool. 2 Optional: Specifies the level of verbosity for log messages. 3 Optional: Specifies the maximum size of a container image. 4 Optional: Specifies the container runtime to deploy to new containers. The default value is runc . Procedure To change CRI-O settings using the ContainerRuntimeConfig CR: Create a YAML file for the ContainerRuntimeConfig CR: apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' 1 containerRuntimeConfig: 2 logLevel: debug overlaySize: 8G 1 Specify a label for the machine config pool that you want you want to modify. 2 Set the parameters as needed. 
Create the ContainerRuntimeConfig CR: USD oc create -f <file_name>.yaml Verify that the CR is created: USD oc get ContainerRuntimeConfig Example output NAME AGE overlay-size 3m19s Check that a new containerruntime machine config is created: USD oc get machineconfigs | grep containerrun Example output 99-worker-generated-containerruntime 2c9371fbb673b97a6fe8b1c52691999ed3a1bfc2 3.2.0 31s Monitor the machine config pool until all are shown as ready: USD oc get mcp worker Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-169 False True False 3 1 1 0 9h Verify that the settings were applied in CRI-O: Open an oc debug session to a node in the machine config pool and run chroot /host . USD oc debug node/<node_name> sh-4.4# chroot /host Verify the changes in the crio.conf file: sh-4.4# crio config | grep 'log_level' Example output log_level = "debug" Verify the changes in the `storage.conf`file: sh-4.4# head -n 7 /etc/containers/storage.conf Example output 5.3.3. Setting the default maximum container root partition size for Overlay with CRI-O The root partition of each container shows all of the available disk space of the underlying host. Follow this guidance to set a maximum partition size for the root disk of all containers. To configure the maximum Overlay size, as well as other CRI-O options like the log level, you can create the following ContainerRuntimeConfig custom resource definition (CRD): apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: custom-crio: overlay-size containerRuntimeConfig: logLevel: debug overlaySize: 8G Procedure Create the configuration object: USD oc apply -f overlaysize.yml To apply the new CRI-O configuration to your worker nodes, edit the worker machine config pool: USD oc edit machineconfigpool worker Add the custom-crio label based on the matchLabels name you set in the ContainerRuntimeConfig CRD: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2020-07-09T15:46:34Z" generation: 3 labels: custom-crio: overlay-size machineconfiguration.openshift.io/mco-built-in: "" Save the changes, then view the machine configs: USD oc get machineconfigs New 99-worker-generated-containerruntime and rendered-worker-xyz objects are created: Example output 99-worker-generated-containerruntime 4173030d89fbf4a7a0976d1665491a4d9a6e54f1 3.2.0 7m42s rendered-worker-xyz 4173030d89fbf4a7a0976d1665491a4d9a6e54f1 3.2.0 7m36s After those objects are created, monitor the machine config pool for the changes to be applied: USD oc get mcp worker The worker nodes show UPDATING as True , as well as the number of machines, the number updated, and other details: Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-xyz False True False 3 2 2 0 20h When complete, the worker nodes transition back to UPDATING as False , and the UPDATEDMACHINECOUNT number matches the MACHINECOUNT : Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-xyz True False False 3 3 3 0 20h Looking at a worker machine, you see that the new 8 GB max size configuration is applied to all of the workers: Example output head -n 7 /etc/containers/storage.conf [storage] driver = "overlay" 
runroot = "/var/run/containers/storage" graphroot = "/var/lib/containers/storage" [storage.options] additionalimagestores = [] size = "8G" Looking inside a container, you see that the root partition is now 8 GB: Example output ~ USD df -h Filesystem Size Used Available Use% Mounted on overlay 8.0G 8.0K 8.0G 0% / 5.3.4. Creating a drop-in file for the default capabilities of CRI-O You can change some of the settings associated with the OpenShift Container Platform CRI-O runtime for the nodes associated with a specific machine config pool (MCP). By using a controller custom resource (CR), you set the configuration values and add a label to match the MCP. The Machine Config Operator (MCO) then rebuilds the crio.conf and default.conf configuration files on the associated nodes with the updated values. Earlier versions of OpenShift Container Platform included specific machine configs by default. If you updated to a later version of OpenShift Container Platform, those machine configs were retained to ensure that clusters running on the same OpenShift Container Platform version have the same machine configs. You can create multiple ContainerRuntimeConfig CRs, as needed, with a limit of 10 per cluster. For the first ContainerRuntimeConfig CR, the MCO creates a machine config appended with containerruntime . With each subsequent CR, the controller creates a containerruntime machine config with a numeric suffix. For example, if you have a containerruntime machine config with a -2 suffix, the containerruntime machine config is appended with -3 . If you want to delete the machine configs, delete them in reverse order to avoid exceeding the limit. For example, delete the containerruntime-3 machine config before you delete the containerruntime-2 machine config. Note If you have a machine config with a containerruntime-9 suffix and you create another ContainerRuntimeConfig CR, a new machine config is not created, even if there are fewer than 10 containerruntime machine configs. Example of multiple ContainerRuntimeConfig CRs USD oc get ctrcfg Example output NAME AGE ctr-overlay 15m ctr-level 5m45s Example of multiple containerruntime related system configs USD cat /proc/1/status | grep Cap USD capsh --decode=<decode_CapBnd_value> 1 1 Replace <decode_CapBnd_value> with the specific value you want to decode. | [
"oc get mcp worker",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-404caf3180818d8ac1f50c32f14b57c3 False True True 2 1 1 1 5h51m",
"oc describe mcp worker",
"Last Transition Time: 2021-12-20T18:54:00Z Message: Node ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4 is reporting: \"content mismatch for file \\\"/etc/mco-test-file\\\"\" 1 Reason: 1 nodes are reporting degraded status on sync Status: True Type: NodeDegraded 2",
"oc describe node/ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4",
"Annotations: cloud.network.openshift.io/egress-ipconfig: [{\"interface\":\"nic0\",\"ifaddr\":{\"ipv4\":\"10.0.128.0/17\"},\"capacity\":{\"ip\":10}}] csi.volume.kubernetes.io/nodeid: {\"pd.csi.storage.gke.io\":\"projects/openshift-gce-devel-ci/zones/us-central1-a/instances/ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4\"} machine.openshift.io/machine: openshift-machine-api/ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4 machineconfiguration.openshift.io/controlPlaneTopology: HighlyAvailable machineconfiguration.openshift.io/currentConfig: rendered-worker-67bd55d0b02b0f659aef33680693a9f9 machineconfiguration.openshift.io/desiredConfig: rendered-worker-67bd55d0b02b0f659aef33680693a9f9 machineconfiguration.openshift.io/reason: content mismatch for file \"/etc/mco-test-file\" 1 machineconfiguration.openshift.io/state: Degraded 2",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4... True False False 3 3 3 0 4h42m worker rendered-worker-f4b64... False True False 3 2 2 0 4h42m",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4... True False False 3 3 3 0 4h42m worker rendered-worker-c1b41a... False True False 3 2 3 0 4h42m",
"oc describe mcp worker",
"Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 2 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3 Events: <none>",
"Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 2 Ready Machine Count: 2 Unavailable Machine Count: 1 Updated Machine Count: 3",
"oc get machineconfigs",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 00-worker 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 01-master-container-runtime 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 01-master-kubelet 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m rendered-master-dde... 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m rendered-worker-fde... 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m",
"oc describe machineconfigs 01-master-kubelet",
"Name: 01-master-kubelet Spec: Config: Ignition: Version: 3.2.0 Storage: Files: Contents: Source: data:, Mode: 420 Overwrite: true Path: /etc/kubernetes/cloud.conf Contents: Source: data:,kind%3A%20KubeletConfiguration%0AapiVersion%3A%20kubelet.config.k8s.io%2Fv1beta1%0Aauthentication%3A%0A%20%20x509%3A%0A%20%20%20%20clientCAFile%3A%20%2Fetc%2Fkubernetes%2Fkubelet-ca.crt%0A%20%20anonymous Mode: 420 Overwrite: true Path: /etc/kubernetes/kubelet.conf Systemd: Units: Contents: [Unit] Description=Kubernetes Kubelet Wants=rpc-statd.service network-online.target crio.service After=network-online.target crio.service ExecStart=/usr/bin/hyperkube kubelet --config=/etc/kubernetes/kubelet.conf \\",
"oc delete -f ./myconfig.yaml",
"variant: openshift version: 4.13.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-chrony.bu -o 99-worker-chrony.yaml",
"oc apply -f ./99-worker-chrony.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: <node_role> 1 name: disable-chronyd spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=NTP client/server Documentation=man:chronyd(8) man:chrony.conf(5) After=ntpdate.service sntp.service ntpd.service Conflicts=ntpd.service systemd-timesyncd.service ConditionCapability=CAP_SYS_TIME [Service] Type=forking PIDFile=/run/chrony/chronyd.pid EnvironmentFile=-/etc/sysconfig/chronyd ExecStart=/usr/sbin/chronyd USDOPTIONS ExecStartPost=/usr/libexec/chrony-helper update-daemon PrivateTmp=yes ProtectHome=yes ProtectSystem=full [Install] WantedBy=multi-user.target enabled: false name: \"chronyd.service\"",
"oc create -f disable-chronyd.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: kernelArguments: - enforcing=0 3",
"oc create -f 05-worker-kernelarg-selinuxpermissive.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.26.0 ip-10-0-136-243.ec2.internal Ready master 34m v1.26.0 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.26.0 ip-10-0-142-249.ec2.internal Ready master 34m v1.26.0 ip-10-0-153-11.ec2.internal Ready worker 28m v1.26.0 ip-10-0-153-150.ec2.internal Ready master 34m v1.26.0",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16 coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"oc create -f ./99-worker-kargs-mpath.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-kargs-mpath 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 105s 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.26.0 ip-10-0-136-243.ec2.internal Ready master 34m v1.26.0 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.26.0 ip-10-0-142-249.ec2.internal Ready master 34m v1.26.0 ip-10-0-153-11.ec2.internal Ready worker 28m v1.26.0 ip-10-0-153-150.ec2.internal Ready master 34m v1.26.0",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit",
"cat << EOF > 99-worker-realtime.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-realtime spec: kernelType: realtime EOF",
"oc create -f 99-worker-realtime.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.26.0 ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.26.0 ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.26.0",
"oc debug node/ip-10-0-143-147.us-east-2.compute.internal",
"Starting pod/ip-10-0-143-147us-east-2computeinternal-debug To use host binaries, run `chroot /host` sh-4.4# uname -a Linux <worker_node> 4.18.0-147.3.1.rt24.96.el8_1.x86_64 #1 SMP PREEMPT RT Wed Nov 27 18:29:55 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux",
"oc delete -f 99-worker-realtime.yaml",
"variant: openshift version: 4.13.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/systemd/journald.conf mode: 0644 overwrite: true contents: inline: | # Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s",
"butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml",
"oc apply -f 40-worker-custom-journald.yaml",
"oc get machineconfigpool NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m",
"oc get node | grep worker ip-10-0-0-1.us-east-2.compute.internal Ready worker 39m v0.0.0-master+USDFormat:%hUSD oc debug node/ip-10-0-0-1.us-east-2.compute.internal Starting pod/ip-10-0-141-142us-east-2computeinternal-debug sh-4.2# chroot /host sh-4.4# cat /etc/systemd/journald.conf Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s sh-4.4# exit",
"cat << EOF > 80-extensions.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 80-worker-extensions spec: config: ignition: version: 3.2.0 extensions: - usbguard EOF",
"oc create -f 80-extensions.yaml",
"oc get machineconfig 80-worker-extensions",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 80-worker-extensions 3.2.0 57s",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m",
"oc get node | grep worker",
"NAME STATUS ROLES AGE VERSION ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.26.0",
"oc debug node/ip-10-0-169-2.us-east-2.compute.internal",
"To use host binaries, run `chroot /host` sh-4.4# chroot /host sh-4.4# rpm -q usbguard usbguard-0.7.4-4.el8.x86_64.rpm",
"variant: openshift version: 4.13.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-worker-firmware-blob storage: files: - path: /var/lib/firmware/<package_name> 1 contents: local: <package_name> 2 mode: 0644 3 openshift: kernel_arguments: - 'firmware_class.path=/var/lib/firmware' 4",
"butane 98-worker-firmware-blob.bu -o 98-worker-firmware-blob.yaml --files-dir <directory_including_package_name>",
"oc apply -f 98-worker-firmware-blob.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]}",
"oc get kubeletconfig",
"NAME AGE set-kubelet-config 15m",
"oc get mc | grep kubelet",
"99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-kubelet-config 1",
"oc label machineconfigpool worker custom-kubelet=set-kubelet-config",
"oc get machineconfig",
"oc describe node <node_name>",
"oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94",
"Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config 1 kubeletConfig: 2 podPidsLimit: 8192 containerLogMaxSize: 50Mi maxPods: 500",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS>",
"oc label machineconfigpool worker custom-kubelet=set-kubelet-config",
"oc create -f change-maxPods-cr.yaml",
"oc get kubeletconfig",
"NAME AGE set-kubelet-config 15m",
"oc describe node <node_name>",
"Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1",
"oc get kubeletconfigs set-kubelet-config -o yaml",
"spec: kubeletConfig: containerLogMaxSize: 50Mi maxPods: 500 podPidsLimit: 8192 machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config status: conditions: - lastTransitionTime: \"2021-06-30T17:04:07Z\" message: Success status: \"True\" type: Success",
"oc get ctrcfg",
"NAME AGE ctr-overlay 15m ctr-level 5m45s",
"oc get mc | grep container",
"01-master-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m 01-worker-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m 99-worker-generated-containerruntime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m 99-worker-generated-containerruntime-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 17m 99-worker-generated-containerruntime-2 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 7m26s",
"apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' 1 containerRuntimeConfig: logLevel: debug 2 overlaySize: 8G 3 defaultRuntime: \"crun\" 4",
"apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' 1 containerRuntimeConfig: 2 logLevel: debug overlaySize: 8G",
"oc create -f <file_name>.yaml",
"oc get ContainerRuntimeConfig",
"NAME AGE overlay-size 3m19s",
"oc get machineconfigs | grep containerrun",
"99-worker-generated-containerruntime 2c9371fbb673b97a6fe8b1c52691999ed3a1bfc2 3.2.0 31s",
"oc get mcp worker",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-169 False True False 3 1 1 0 9h",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-4.4# crio config | grep 'log_level'",
"log_level = \"debug\"",
"sh-4.4# head -n 7 /etc/containers/storage.conf",
"[storage] driver = \"overlay\" runroot = \"/var/run/containers/storage\" graphroot = \"/var/lib/containers/storage\" [storage.options] additionalimagestores = [] size = \"8G\"",
"apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: custom-crio: overlay-size containerRuntimeConfig: logLevel: debug overlaySize: 8G",
"oc apply -f overlaysize.yml",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2020-07-09T15:46:34Z\" generation: 3 labels: custom-crio: overlay-size machineconfiguration.openshift.io/mco-built-in: \"\"",
"oc get machineconfigs",
"99-worker-generated-containerruntime 4173030d89fbf4a7a0976d1665491a4d9a6e54f1 3.2.0 7m42s rendered-worker-xyz 4173030d89fbf4a7a0976d1665491a4d9a6e54f1 3.2.0 7m36s",
"oc get mcp worker",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-xyz False True False 3 2 2 0 20h",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-xyz True False False 3 3 3 0 20h",
"head -n 7 /etc/containers/storage.conf [storage] driver = \"overlay\" runroot = \"/var/run/containers/storage\" graphroot = \"/var/lib/containers/storage\" [storage.options] additionalimagestores = [] size = \"8G\"",
"~ USD df -h Filesystem Size Used Available Use% Mounted on overlay 8.0G 8.0K 8.0G 0% /",
"oc get ctrcfg",
"NAME AGE ctr-overlay 15m ctr-level 5m45s",
"cat /proc/1/status | grep Cap",
"capsh --decode=<decode_CapBnd_value> 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/post-installation_configuration/post-install-machine-configuration-tasks |
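As an illustration of the capability-decoding step at the end of section 5.3.4 above, the following is a minimal sketch; the CapBnd value shown is an assumed example from a typical container host, and the decoded list will differ on your system.

# Read the bounding capability set of PID 1 (the value below is an assumption for illustration)
grep CapBnd /proc/1/status
CapBnd: 00000000a80425fb

# Decode the hexadecimal value into human-readable capability names
capsh --decode=00000000a80425fb
0x00000000a80425fb=cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_net_raw,cap_sys_chroot,cap_mknod,cap_audit_write,cap_setfcap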
function::text_str | function::text_str Name function::text_str - Escape any non-printable chars in a string Synopsis Arguments input the string to escape Description This function accepts a string argument, and any ASCII characters that are not printable are replaced by the corresponding escape sequence in the returned string. | [
"text_str:string(input:string)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-text-str |
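As a usage illustration for the text_str function documented above, here is a minimal SystemTap sketch; the probe point and the sample input string are assumptions added for the example and are not part of the original reference.

probe begin {
  # The tab in the input is replaced by its escape sequence in the returned string
  printf("%s\n", text_str("col1\tcol2"))
  exit()
}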
Chapter 5. Fixed issues | Chapter 5. Fixed issues For a complete list of issues that have been fixed in the release, see AMQ Broker 7.9.0 Fixed Issues and AMQ Broker - 7.9.x Resolved Issues . | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/release_notes_for_red_hat_amq_broker_7.9/resolved |
Chapter 2. APIcast operator-based upgrade guide: from 2.14 to 2.15 | Chapter 2. APIcast operator-based upgrade guide: from 2.14 to 2.15 Upgrading APIcast from 2.14 to 2.15 in an operator-based installation helps you use the APIcast API gateway to integrate your internal and external application programming interfaces (APIs) services with 3scale. Important In order to understand the required conditions and procedure, read the entire upgrade guide before applying the listed steps. The upgrade process disrupts the provision of the service until the procedure finishes. Due to this disruption, make sure to have a maintenance window. 2.1. Prerequisites to perform the upgrade To perform the upgrade of APIcast from 2.14 to 2.15 in an operator-based installation, the following required prerequisites must already be in place: An OpenShift Container Platform (OCP) 4.12, 4.13, 4.14, 4.15, 4.16, or 4.17 cluster with administrator access. Ensure that your OCP environment is upgraded to at least version 4.12, which is the minimal requirement for proceeding with an APIcast update. APIcast 2.14 previously deployed via the APIcast operator. Make sure the latest CSV of the threescale-2.14 channel is in use. To check it: If the approval setting for the subscription is automatic , you should already be in the latest CSV version of the channel. If the approval setting for the subscription is manual , make sure you approve all pending InstallPlans and have the latest CSV version. Keep in mind if there is a pending install plan, there might be more pending install plans, which will only be shown after the existing pending plan has been installed. 2.2. Upgrading APIcast from 2.14 to 2.15 in an operator-based installation Upgrade APIcast from 2.14 to 2.15 in an operator-based installation so that APIcast can function as the API gateway in your 3scale installation. Procedure Log in to the OCP console using the account with administrator privileges. Select the project where the APIcast operator has been deployed. Click Operators > Installed Operators . In Subscription > Channel , select Red Hat Integration - 3scale APIcast gateway . Edit the channel of the subscription by selecting the threescale-2.15 channel and save the changes. This will start the upgrade process. Query the pods status on the project until you see all the new versions are running and ready without errors: USD oc get pods -n <apicast_namespace> Note The pods might have temporary errors during the upgrade process. The time required to upgrade pods can vary from 5-10 minutes. Check the status of the APIcast objects and get the YAML content by running the following command: USD oc get apicast <myapicast> -n <apicast_namespace> -o yaml The new annotations with the values should be as follows: After you have performed all steps, the APIcast upgrade from 2.14 to 2.15 in an operator-based deployment is complete. | [
"oc get pods -n <apicast_namespace>",
"oc get apicast <myapicast> -n <apicast_namespace> -o yaml",
"apicast.apps.3scale.net/operator-version: \"0.12.x\""
] | https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/migrating_red_hat_3scale_api_management/upgrading-apicast |
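As a supplement to the prerequisite above about approving pending InstallPlans when the subscription approval setting is manual, the following is one possible way to inspect and approve them from the CLI; the namespace and InstallPlan name are placeholders, not values taken from the original guide.

# List InstallPlans in the APIcast operator namespace
oc get installplan -n <apicast_namespace>

# Approve a pending InstallPlan by setting spec.approved to true
oc patch installplan <install_plan_name> -n <apicast_namespace> --type merge --patch '{"spec":{"approved":true}}'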
7.2. Starting and Stopping a Cluster | 7.2. Starting and Stopping a Cluster To stop a cluster, use the following ccs command, which stops the cluster services on all nodes in the cluster: To start a cluster that is not running, use the following ccs command, which starts the cluster services on all nodes in the cluster: When you use the --startall option of the ccs command to start a cluster, the command automatically enables the cluster resources. For some configurations, such as when services have been intentionally disabled on one node to prevent fence loops, you may not want to enable the services on that node. As of the Red Hat Enterprise Linux 6.6 release, you can use the --noenable option of the ccs --startall command to prevent the services from being enabled: | [
"ccs -h host --stopall",
"ccs -h host --startall",
"ccs -h host --startall --noenable"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-admin-start-ccs-CA |
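As a follow-up to the start and stop commands above, one way to confirm that the cluster services came up on the nodes is to run the cluster status utility on any cluster node; this verification step is a suggestion and is not part of the original section.

# Report cluster membership and service states
clustat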
3.5. Configuring Fence Devices | 3.5. Configuring Fence Devices Configuring fence devices consists of creating, modifying, and deleting fence devices. Creating a fence device consists of selecting a fence device type and entering parameters for that fence device (for example, name, IP address, login, and password). Modifying a fence device consists of selecting an existing fence device and changing parameters for that fence device. Deleting a fence device consists of selecting an existing fence device and deleting it. Note If you are creating a new cluster, you can create fence devices when you configure cluster nodes. Refer to Section 3.6, "Configuring Cluster Members" . With Conga you can create shared and non-shared fence devices. The following shared fence devices are available: APC Power Switch Brocade Fabric Switch Bull PAP Egenera SAN Controller GNBD IBM Blade Center McData SAN Switch QLogic SANbox2 SCSI Fencing Virtual Machine Fencing Vixel SAN Switch WTI Power Switch The following non-shared fence devices are available: Dell DRAC HP iLO IBM RSA II IPMI LAN RPS10 Serial Switch This section provides procedures for the following tasks: Creating shared fence devices - Refer to Section 3.5.1, "Creating a Shared Fence Device" . The procedures apply only to creating shared fence devices. You can create non-shared (and shared) fence devices while configuring nodes (refer to Section 3.6, "Configuring Cluster Members" ). Modifying or deleting fence devices - Refer to Section 3.5.2, "Modifying or Deleting a Fence Device" . The procedures apply to both shared and non-shared fence devices. The starting point of each procedure is at the cluster-specific page that you navigate to from Choose a cluster to administer displayed on the cluster tab. 3.5.1. Creating a Shared Fence Device To create a shared fence device, follow these steps: At the detailed menu for the cluster (below the clusters menu), click Shared Fence Devices . Clicking Shared Fence Devices causes the display of the fence devices for a cluster and causes the display of menu items for fence device configuration: Add a Fence Device and Configure a Fence Device . Note If this is an initial cluster configuration, no fence devices have been created, and therefore none are displayed. Click Add a Fence Device . Clicking Add a Fence Device causes the Add a Sharable Fence Device page to be displayed (refer to Figure 3.1, "Fence Device Configuration" ). Figure 3.1. Fence Device Configuration At the Add a Sharable Fence Device page, click the drop-down box under Fencing Type and select the type of fence device to configure. Specify the information in the Fencing Type dialog box according to the type of fence device. Refer to Appendix B, Fence Device Parameters for more information about fence device parameters. Click Add this shared fence device . Clicking Add this shared fence device causes a progress page to be displayed temporarily. After the fence device has been added, the detailed cluster properties menu is updated with the fence device under Configure a Fence Device . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-config-fence-devices-conga-CA |
Chapter 307. Simple JMS Component | Chapter 307. Simple JMS Component Available as of Camel version 2.11 The Simple JMS Component, or SJMS, is a JMS client for use with Camel that uses well known best practices when it comes to JMS client creation and configuration. SJMS contains a brand new JMS client API written explicitly for Camel eliminating third party messaging implementations keeping it light and resilient. The following features is included: Standard Queue and Topic Support (Durable & Non-Durable) InOnly & InOut MEP Support Asynchronous Producer and Consumer Processing Internal JMS Transaction Support Additional key features include: Plugable Connection Resource Management Session, Consumer, & Producer Pooling & Caching Management Batch Consumers and Producers Transacted Batch Consumers & Producers Support for Customizable Transaction Commit Strategies (Local JMS Transactions only) Note Why the S in SJMS S stands for Simple and Standard and Springless. Also camel-jms was already taken. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-sjms</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 307.1. URI format Where destinationName is a JMS queue or topic name. By default, the destinationName is interpreted as a queue name. For example, to connect to the queue, FOO.BAR use: You can include the optional queue: prefix, if you prefer: To connect to a topic, you must include the topic: prefix. For example, to connect to the topic, Stocks.Prices , use: You append query options to the URI using the following format, ?option=value&option=value&... 307.2. Component Options and Configurations The Simple JMS component supports 15 options, which are listed below. Name Description Default Type connectionFactory (advanced) A ConnectionFactory is required to enable the SjmsComponent. It can be set directly or set set as part of a ConnectionResource. ConnectionFactory connectionResource (advanced) A ConnectionResource is an interface that allows for customization and container control of the ConnectionFactory. See Plugable Connection Resource Management for further details. ConnectionResource connectionCount (common) The maximum number of connections available to endpoints started under this component 1 Integer jmsKeyFormatStrategy (advanced) Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides one implementation out of the box: default. The default strategy will safely marshal dots and hyphens (. and -). Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. JmsKeyFormatStrategy transactionCommit Strategy (transaction) To configure which kind of commit strategy to use. Camel provides two implementations out of the box, default and batch. TransactionCommit Strategy destinationCreation Strategy (advanced) To use a custom DestinationCreationStrategy. DestinationCreation Strategy timedTaskManager (advanced) To use a custom TimedTaskManager TimedTaskManager messageCreatedStrategy (advanced) To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. 
MessageCreatedStrategy connectionTestOnBorrow (advanced) When using the default org.apache.camel.component.sjms.jms.ConnectionFactoryResource then should each javax.jms.Connection be tested (calling start) before returned from the pool. true boolean connectionUsername (security) The username to use when creating javax.jms.Connection when using the default org.apache.camel.component.sjms.jms.ConnectionFactoryResource. String connectionPassword (security) The password to use when creating javax.jms.Connection when using the default org.apache.camel.component.sjms.jms.ConnectionFactoryResource. String connectionClientId (advanced) The client ID to use when creating javax.jms.Connection when using the default org.apache.camel.component.sjms.jms.ConnectionFactoryResource. String connectionMaxWait (advanced) The max wait time in millis to block and wait on free connection when the pool is exhausted when using the default org.apache.camel.component.sjms.jms.ConnectionFactoryResource. 5000 long headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Simple JMS endpoint is configured using URI syntax: with the following path and query parameters: 307.2.1. Path Parameters (2 parameters): Name Description Default Type destinationType The kind of destination to use queue String destinationName Required DestinationName is a JMS queue or topic name. By default, the destinationName is interpreted as a queue name. String 307.2.2. Query Parameters (34 parameters): Name Description Default Type acknowledgementMode (common) The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE AUTO_ ACKNOWLEDGE SessionAcknowledgement Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean consumerCount (consumer) Sets the number of consumer listeners used for this endpoint. 1 int durableSubscriptionId (consumer) Sets the durable subscription Id required for durable topics. String synchronous (consumer) Sets whether synchronous processing should be strictly used or Camel is allowed to use asynchronous processing (if supported). true boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern messageSelector (consumer) Sets the JMS Message selector syntax. String namedReplyTo (producer) Sets the reply to destination name used for InOut producer endpoints. The type of the reply to destination can be determined by the starting prefix (topic: or queue:) in its name. 
String persistent (producer) Flag used to enable/disable message persistence. true boolean producerCount (producer) Sets the number of producers used for this endpoint. 1 int ttl (producer) Flag used to adjust the Time To Live value of produced messages. -1 long allowNullBody (producer) Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown. true boolean prefillPool (producer) Whether to prefill the producer connection pool on startup, or create connections lazy when needed. true boolean responseTimeOut (producer) Sets the amount of time we should wait before timing out a InOut response. 5000 long asyncStartListener (advanced) Whether to startup the consumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. false boolean asyncStopListener (advanced) Whether to stop the consumer message listener asynchronously, when stopping a route. false boolean connectionCount (advanced) The maximum number of connections available to this endpoint Integer connectionFactory (advanced) Initializes the connectionFactory for the endpoint, which takes precedence over the component's connectionFactory, if any ConnectionFactory connectionResource (advanced) Initializes the connectionResource for the endpoint, which takes precedence over the component's connectionResource, if any ConnectionResource destinationCreationStrategy (advanced) To use a custom DestinationCreationStrategy. DestinationCreation Strategy exceptionListener (advanced) Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. ExceptionListener headerFilterStrategy (advanced) To use a custom HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy includeAllJMSXProperties (advanced) Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. false boolean jmsKeyFormatStrategy (advanced) Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. JmsKeyFormatStrategy mapJmsMessage (advanced) Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. See section about how mapping works below for more details. 
true boolean messageCreatedStrategy (advanced) To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. MessageCreatedStrategy errorHandlerLoggingLevel (logging) Allows to configure the default errorHandler logging level for logging uncaught exceptions. WARN LoggingLevel errorHandlerLogStackTrace (logging) Allows to control whether stacktraces should be logged or not, by the default errorHandler. true boolean transacted (transaction) Specifies whether to use transacted mode false boolean transactionBatchCount (transaction) If transacted sets the number of messages to process before committing a transaction. -1 int transactionBatchTimeout (transaction) Sets timeout (in millis) for batch transactions, the value should be 1000 or higher. 5000 long transactionCommitStrategy (transaction) Sets the commit strategy. TransactionCommit Strategy sharedJMSSession (transaction) Specifies whether to share JMS session with other SJMS endpoints. Turn this off if your route is accessing to multiple JMS providers. If you need transaction against multiple JMS providers, use jms component to leverage XA transaction. true boolean 307.3. Spring Boot Auto-Configuration The component supports 15 options, which are listed below. Name Description Default Type camel.component.sjms.connection-client-id The client ID to use when creating javax.jms.Connection when using the default org.apache.camel.component.sjms.jms.ConnectionFactoryResource. String camel.component.sjms.connection-count The maximum number of connections available to endpoints started under this component 1 Integer camel.component.sjms.connection-factory A ConnectionFactory is required to enable the SjmsComponent. It can be set directly or set set as part of a ConnectionResource. The option is a javax.jms.ConnectionFactory type. String camel.component.sjms.connection-max-wait The max wait time in millis to block and wait on free connection when the pool is exhausted when using the default org.apache.camel.component.sjms.jms.ConnectionFactoryResource. 5000 Long camel.component.sjms.connection-password The password to use when creating javax.jms.Connection when using the default org.apache.camel.component.sjms.jms.ConnectionFactoryResource. String camel.component.sjms.connection-resource A ConnectionResource is an interface that allows for customization and container control of the ConnectionFactory. See Plugable Connection Resource Management for further details. The option is a org.apache.camel.component.sjms.jms.ConnectionResource type. String camel.component.sjms.connection-test-on-borrow When using the default org.apache.camel.component.sjms.jms.ConnectionFactoryResource then should each javax.jms.Connection be tested (calling start) before returned from the pool. true Boolean camel.component.sjms.connection-username The username to use when creating javax.jms.Connection when using the default org.apache.camel.component.sjms.jms.ConnectionFactoryResource. String camel.component.sjms.destination-creation-strategy To use a custom DestinationCreationStrategy. The option is a org.apache.camel.component.sjms.jms.DestinationCreationStrategy type. String camel.component.sjms.enabled Enable sjms component true Boolean camel.component.sjms.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. 
String camel.component.sjms.jms-key-format-strategy Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides one implementation out of the box: default. The default strategy will safely marshal dots and hyphens (. and -). It can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. The option is a org.apache.camel.component.sjms.jms.JmsKeyFormatStrategy type. String camel.component.sjms.message-created-strategy To use the given MessageCreatedStrategy, which is invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. The option is a org.apache.camel.component.sjms.jms.MessageCreatedStrategy type. String camel.component.sjms.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.sjms.transaction-commit-strategy To configure which kind of commit strategy to use. Camel provides two implementations out of the box, default and batch. The option is a org.apache.camel.component.sjms.TransactionCommitStrategy type. String Below is an example of how to configure the SjmsComponent with its required ConnectionFactory provider. It will create a single connection by default and store it using the component's internal pooling APIs to ensure that it is able to service Session creation requests in a thread-safe manner. SjmsComponent component = new SjmsComponent(); component.setConnectionFactory(new ActiveMQConnectionFactory("tcp://localhost:61616")); getContext().addComponent("sjms", component); For an SJMS component that is required to support a durable subscription, you can override the default ConnectionFactoryResource instance and set the clientId property. ConnectionFactoryResource connectionResource = new ConnectionFactoryResource(); connectionResource.setConnectionFactory(new ActiveMQConnectionFactory("tcp://localhost:61616")); connectionResource.setClientId("myclient-id"); SjmsComponent component = new SjmsComponent(); component.setConnectionResource(connectionResource); component.setMaxConnections(1); 307.4. Producer Usage 307.4.1. InOnly Producer - (Default) The InOnly producer is the default behavior of the SJMS Producer Endpoint. from("direct:start") .to("sjms:queue:bar"); 307.4.2. InOut Producer To enable InOut behavior, append the exchangePattern attribute to the URI. By default it will use a dedicated TemporaryQueue for each consumer. from("direct:start") .to("sjms:queue:bar?exchangePattern=InOut"); You can also specify a namedReplyTo, which provides a better monitoring point. from("direct:start") .to("sjms:queue:bar?exchangePattern=InOut&namedReplyTo=my.reply.to.queue"); 307.5. Consumer Usage 307.5.1. InOnly Consumer - (Default) The InOnly consumer is the default Exchange behavior of the SJMS Consumer Endpoint. from("sjms:queue:bar") .to("mock:result"); 307.5.2. InOut Consumer To enable InOut behavior, append the exchangePattern attribute to the URI. from("sjms:queue:in.out.test?exchangePattern=InOut") .transform(constant("Bye Camel"));
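If the reply requires more than a constant transform, it can be built in a processor. The following is a minimal illustrative sketch; the route URI and reply logic are invented for this example and are not taken from the component documentation.

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;

public class InOutConsumerRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("sjms:queue:in.out.test?exchangePattern=InOut")
            .process(new Processor() {
                @Override
                public void process(Exchange exchange) throws Exception {
                    // Read the request body and build the reply; the endpoint sends the
                    // out message back to the JMSReplyTo destination of the request.
                    String request = exchange.getIn().getBody(String.class);
                    exchange.getOut().setHeaders(exchange.getIn().getHeaders());
                    exchange.getOut().setBody("Re: " + request);
                }
            });
    }
}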
307.6. Advanced Usage Notes 307.6.1. Plugable Connection Resource Management SJMS provides JMS Connection resource management through built-in connection pooling. This eliminates the need to depend on third-party API pooling logic. However, there may be times when you are required to use an external Connection resource manager, such as those provided by J2EE or OSGi containers. For this, SJMS provides an interface that can be used to override the internal SJMS Connection pooling capabilities. This is accomplished through the ConnectionResource interface. The ConnectionResource interface provides methods for borrowing and returning Connections as needed, and is the contract used to provide Connection pools to the SJMS component. Use it when it is necessary to integrate SJMS with an external connection pooling manager. It is recommended, though, that for standard ConnectionFactory providers you use the ConnectionFactoryResource implementation that is provided with SJMS, either as-is or extended, as it is optimized for this component. Below is an example of using the plugable ConnectionResource with the ActiveMQ PooledConnectionFactory : public class AMQConnectionResource implements ConnectionResource { private PooledConnectionFactory pcf; public AMQConnectionResource(String connectString, int maxConnections) { super(); pcf = new PooledConnectionFactory(connectString); pcf.setMaxConnections(maxConnections); pcf.start(); } public void stop() { pcf.stop(); } @Override public Connection borrowConnection() throws Exception { Connection answer = pcf.createConnection(); answer.start(); return answer; } @Override public Connection borrowConnection(long timeout) throws Exception { // SNIPPED... } @Override public void returnConnection(Connection connection) throws Exception { // Do nothing since there isn't a way to return a Connection // to the instance of PooledConnectionFactory log.info("Connection returned"); } } Then pass in the ConnectionResource to the SjmsComponent : CamelContext camelContext = new DefaultCamelContext(); AMQConnectionResource pool = new AMQConnectionResource("tcp://localhost:33333", 1); SjmsComponent component = new SjmsComponent(); component.setConnectionResource(pool); camelContext.addComponent("sjms", component); To see the full example of its usage, please refer to the ConnectionResourceIT . 307.6.2. Batch Message Support The SjmsProducer supports publishing a collection of messages by creating an Exchange that encapsulates a List . The SjmsProducer will then iterate through the contents of the List and publish each message individually. If, when producing a batch of messages, you need to set headers that are unique to each message, you can use the SJMS BatchMessage class. When the SjmsProducer encounters a BatchMessage list, it will iterate over each BatchMessage and publish the included payload and headers. Below is an example of using the BatchMessage class. First we create a list of BatchMessage : List<BatchMessage<String>> messages = new ArrayList<BatchMessage<String>>(); for (int i = 1; i <= messageCount; i++) { String body = "Hello World " + i; BatchMessage<String> message = new BatchMessage<String>(body, null); messages.add(message); } Then publish the list: template.sendBody("sjms:queue:batch.queue", messages);
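To attach per-message headers, pass a header map as the second constructor argument instead of the null used above. The sketch below is illustrative only: the header names are invented, and the Map<String, Object> parameter type and the org.apache.camel.component.sjms.BatchMessage import are assumptions to verify against the Camel version you are using; template is a ProducerTemplate, as in the surrounding examples.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.camel.component.sjms.BatchMessage;

// Build a batch where each message carries its own header values.
List<BatchMessage<String>> messages = new ArrayList<BatchMessage<String>>();
for (int i = 1; i <= 10; i++) {
    Map<String, Object> headers = new HashMap<String, Object>();
    headers.put("orderNumber", i);          // illustrative header name
    headers.put("source", "batch-demo");    // illustrative header name
    messages.add(new BatchMessage<String>("Hello World " + i, headers));
}

// Publish exactly as in the example above.
template.sendBody("sjms:queue:batch.queue", messages);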
307.6.3. Customizable Transaction Commit Strategies (Local JMS Transactions only) SJMS provides developers with the means to create a custom and plugable transaction strategy through the use of the TransactionCommitStrategy interface. This allows a user to define a unique set of circumstances that the SessionTransactionSynchronization will use to determine when to commit the Session. An example of its use is the BatchTransactionCommitStrategy, which is detailed further in the next section. 307.6.4. Transacted Batch Consumers & Producers The SJMS component has been designed to support the batching of local JMS transactions on both the Producer and Consumer endpoints. How they are handled on each is very different though. The SJMS consumer endpoint is a straightforward implementation that will process X messages before committing them with the associated Session. To enable batched transactions on the consumer, first enable transactions by setting the transacted parameter to true, then add the transactionBatchCount parameter and set it to any value greater than 0. For example, the following configuration will commit the Session every 10 messages: sjms:queue:transacted.batch.consumer?transacted=true&transactionBatchCount=10 If an exception occurs during the processing of a batch on the consumer endpoint, the Session rollback is invoked, causing the messages to be redelivered to the available consumer. The counter is also reset to 0 for the BatchTransactionCommitStrategy for the associated Session. It is the responsibility of the user to ensure they put hooks in their processors of batch messages to watch for messages with the JMSRedelivered header set to true. This is the indicator that messages were rolled back at some point and that a verification of successful processing should occur. A transacted batch consumer also carries with it an instance of an internal timer that waits a default amount of time (5000ms) between messages before committing the open transactions on the Session. The default value of 5000ms (minimum of 1000ms) should be adequate for most use cases, but if further tuning is necessary, simply set the transactionBatchTimeout parameter, for example: sjms:queue:transacted.batch.consumer?transacted=true&transactionBatchCount=10&transactionBatchTimeout=2000 The minimal value that will be accepted is 1000ms, as the amount of context switching may cause unnecessary performance impact without gaining benefit. The producer endpoint is handled much differently though. With the producer, after each message is delivered to its destination, the Exchange is closed and there is no longer a reference to that message. To make all the messages available for redelivery, simply enable transactions on a Producer Endpoint that is publishing BatchMessages. The transaction will commit at the conclusion of the exchange, which includes all messages in the batch list. Nothing additional needs to be configured. For example: List<BatchMessage<String>> messages = new ArrayList<BatchMessage<String>>(); for (int i = 1; i <= messageCount; i++) { String body = "Hello World " + i; BatchMessage<String> message = new BatchMessage<String>(body, null); messages.add(message); } Now publish the List with transactions enabled: template.sendBody("sjms:queue:batch.queue?transacted=true", messages); 307.7. Additional Notes 307.7.1. Message Header Format The SJMS Component uses the same header format strategy that is used in the Camel JMS Component. This plugable strategy ensures that messages sent over the wire conform to the JMS Message spec. For the exchange.in.header the following rules apply for the header keys: Keys starting with JMS or JMSX are reserved. exchange.in.headers keys must be literals and all be valid Java identifiers (do not use dots in the key name). Camel replaces dots and hyphens when a message is sent, and performs the reverse replacement when consuming JMS messages: . is replaced by _DOT_ and the reverse replacement is applied when Camel consumes the message. - is replaced by _HYPHEN_ and the reverse replacement is applied when Camel consumes the message. See also the option jmsKeyFormatStrategy , which allows use of your own custom strategy for formatting keys.
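If neither built-in strategy fits, you can plug in your own key format strategy. The sketch below is illustrative only: it assumes the interface exposes encodeKey and decodeKey methods, mirroring the default strategy's dot and hyphen marshalling described above, and the import uses the org.apache.camel.component.sjms.jms package named in the Spring Boot table (the endpoint option text refers to org.apache.camel.component.jms); verify the package and method signatures against the release you are using.

import org.apache.camel.component.sjms.jms.JmsKeyFormatStrategy;

// Hypothetical strategy that marshals dots with a custom marker so header keys stay JMS-compliant.
public class CustomJmsKeyFormatStrategy implements JmsKeyFormatStrategy {

    @Override
    public String encodeKey(String key) {
        // Applied to the Camel header key before it is written to the JMS message.
        return key.replace(".", "_MYDOT_");
    }

    @Override
    public String decodeKey(String key) {
        // Applied when the JMS message is mapped back to a Camel message.
        return key.replace("_MYDOT_", ".");
    }
}

Once the strategy is bound in the registry (for example under the name keyFormat), it can be referenced with the # notation mentioned in the options table: sjms:queue:bar?jmsKeyFormatStrategy=#keyFormat.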
For the exchange.in.header , the following rules apply for the header values: 307.7.2. Message Content To deliver content over the wire, we must ensure that the body of the message that is being delivered adheres to the JMS Message Specification. Therefore, all message bodies that are produced must either be primitives or their counterpart objects (such as Integer , Long , Character ). The types String , CharSequence , Date , BigDecimal and BigInteger are all converted to their toString() representation. All other types are dropped. 307.7.3. Clustering When using InOut with SJMS in a clustered environment, you must either use TemporaryQueue destinations or use a unique named reply-to destination per InOut producer endpoint. Message correlation is handled by the endpoint, not with message selectors at the broker. The InOut Producer Endpoint uses Java Concurrency Exchangers cached by the Message JMSCorrelationID . This provides a nice performance increase while reducing the overhead on the broker, since all the messages are consumed from the destination in the order they are produced by the interested consumer. Currently the only correlation strategy is to use the JMSCorrelationId . The InOut Consumer uses this strategy as well, ensuring that all response messages sent to the included JMSReplyTo destination also have the JMSCorrelationId copied from the request. 307.8. Transaction Support SJMS currently only supports the use of internal JMS Transactions. There is no support for the Camel Transaction Processor or the Java Transaction API (JTA). 307.8.1. Does Springless Mean I Can't Use Spring? Not at all. Below is an example of the SJMS component using the Spring DSL: <route id="inout.named.reply.to.producer.route"> <from uri="direct:invoke.named.reply.to.queue" /> <to uri="sjms:queue:named.reply.to.queue?namedReplyTo=my.response.queue&amp;exchangePattern=InOut" /> </route> Springless refers to moving away from the dependency on the Spring JMS API. A new JMS client API is being developed from the ground up to power SJMS. | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-sjms</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"sjms:[queue:|topic:]destinationName[?options]",
"sjms:FOO.BAR",
"sjms:queue:FOO.BAR",
"sjms:topic:Stocks.Prices",
"sjms:destinationType:destinationName",
"SjmsComponent component = new SjmsComponent(); component.setConnectionFactory(new ActiveMQConnectionFactory(\"tcp://localhost:61616\")); getContext().addComponent(\"sjms\", component);",
"ConnectionFactoryResource connectionResource = new ConnectionFactoryResource(); connectionResource.setConnectionFactory(new ActiveMQConnectionFactory(\"tcp://localhost:61616\")); connectionResource.setClientId(\"myclient-id\"); SjmsComponent component = new SjmsComponent(); component.setConnectionResource(connectionResource); component.setMaxConnections(1);",
"from(\"direct:start\") .to(\"sjms:queue:bar\");",
"from(\"direct:start\") .to(\"sjms:queue:bar?exchangePattern=InOut\");",
"from(\"direct:start\") .to(\"sjms:queue:bar?exchangePattern=InOut&namedReplyTo=my.reply.to.queue\");",
"from(\"sjms:queue:bar\") .to(\"mock:result\");",
"from(\"sjms:queue:in.out.test?exchangePattern=InOut\") .transform(constant(\"Bye Camel\"));",
"public class AMQConnectionResource implements ConnectionResource { private PooledConnectionFactory pcf; public AMQConnectionResource(String connectString, int maxConnections) { super(); pcf = new PooledConnectionFactory(connectString); pcf.setMaxConnections(maxConnections); pcf.start(); } public void stop() { pcf.stop(); } @Override public Connection borrowConnection() throws Exception { Connection answer = pcf.createConnection(); answer.start(); return answer; } @Override public Connection borrowConnection(long timeout) throws Exception { // SNIPPED } @Override public void returnConnection(Connection connection) throws Exception { // Do nothing since there isn't a way to return a Connection // to the instance of PooledConnectionFactory log.info(\"Connection returned\"); } }",
"CamelContext camelContext = new DefaultCamelContext(); AMQConnectionResource pool = new AMQConnectionResource(\"tcp://localhost:33333\", 1); SjmsComponent component = new SjmsComponent(); component.setConnectionResource(pool); camelContext.addComponent(\"sjms\", component);",
"List<BatchMessage<String>> messages = new ArrayList<BatchMessage<String>>(); for (int i = 1; i <= messageCount; i++) { String body = \"Hello World \" + i; BatchMessage<String> message = new BatchMessage<String>(body, null); messages.add(message); }",
"template.sendBody(\"sjms:queue:batch.queue\", messages);",
"sjms:queue:transacted.batch.consumer?transacted=true&transactionBatchCount=10",
"sjms:queue:transacted.batch.consumer?transacted=true&transactionBatchCount=10&transactionBatchTimeout=2000",
"List<BatchMessage<String>> messages = new ArrayList<BatchMessage<String>>(); for (int i = 1; i <= messageCount; i++) { String body = \"Hello World \" + i; BatchMessage<String> message = new BatchMessage<String>(body, null); messages.add(message); }",
"template.sendBody(\"sjms:queue:batch.queue?transacted=true\", messages);",
"<route id=\"inout.named.reply.to.producer.route\"> <from uri=\"direct:invoke.named.reply.to.queue\" /> <to uri=\"sjms:queue:named.reply.to.queue?namedReplyTo=my.response.queue&exchangePattern=InOut\" /> </route>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/sjms-component |
function::user_ushort | function::user_ushort Name function::user_ushort - Retrieves an unsigned short value stored in user space Synopsis Arguments addr the user space address to retrieve the unsigned short from Description Returns the unsigned short value from a given user space address. Returns zero when user space data is not accessible. | [
"user_ushort:long(addr:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-user-ushort |
Chapter 3. ClusterRole [authorization.openshift.io/v1] | Chapter 3. ClusterRole [authorization.openshift.io/v1] Description ClusterRole is a logical grouping of PolicyRules that can be referenced as a unit by ClusterRoleBindings. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required rules 3.1. Specification Property Type Description aggregationRule AggregationRule_v2 AggregationRule is an optional field that describes how to build the Rules for this ClusterRole. If AggregationRule is set, then the Rules are controller managed and direct changes to Rules will be stomped by the controller. apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata rules array Rules holds all the PolicyRules for this ClusterRole rules[] object PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. 3.1.1. .rules Description Rules holds all the PolicyRules for this ClusterRole Type array 3.1.2. .rules[] Description PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. Type object Required verbs resources Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If this field is empty, then both kubernetes and origin API groups are assumed. That means that if an action is requested against one of the enumerated resources in either the kubernetes or the origin API group, the request will be allowed attributeRestrictions RawExtension AttributeRestrictions will vary depending on what the Authorizer/AuthorizationAttributeBuilder pair supports. If the Authorizer does not recognize how to handle the AttributeRestrictions, the Authorizer should report an error. nonResourceURLs array (string) NonResourceURLsSlice is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path This name is intentionally different than the internal type so that the DefaultConvert works nicely and because the ordering may be different. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. ResourceAll represents all resources. verbs array (string) Verbs is a list of Verbs that apply to ALL the ResourceKinds and AttributeRestrictions contained in this rule. VerbAll represents all kinds. 3.2. 
API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/clusterroles GET : list objects of kind ClusterRole POST : create a ClusterRole /apis/authorization.openshift.io/v1/clusterroles/{name} DELETE : delete a ClusterRole GET : read the specified ClusterRole PATCH : partially update the specified ClusterRole PUT : replace the specified ClusterRole 3.2.1. /apis/authorization.openshift.io/v1/clusterroles HTTP method GET Description list objects of kind ClusterRole Table 3.1. HTTP responses HTTP code Reponse body 200 - OK ClusterRoleList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterRole Table 3.2. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.3. Body parameters Parameter Type Description body ClusterRole schema Table 3.4. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 202 - Accepted ClusterRole schema 401 - Unauthorized Empty 3.2.2. /apis/authorization.openshift.io/v1/clusterroles/{name} Table 3.5. Global path parameters Parameter Type Description name string name of the ClusterRole HTTP method DELETE Description delete a ClusterRole Table 3.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.7. HTTP responses HTTP code Reponse body 200 - OK Status_v3 schema 202 - Accepted Status_v3 schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterRole Table 3.8. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterRole Table 3.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.10. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterRole Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.12. Body parameters Parameter Type Description body ClusterRole schema Table 3.13. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/role_apis/clusterrole-authorization-openshift-io-v1 |
Chapter 49. Managing public SSH keys for users and hosts | Chapter 49. Managing public SSH keys for users and hosts SSH (Secure Shell) is a protocol which provides secure communications between two systems using a client-server architecture. SSH allows users to log in to server host systems remotely and also allows one host machine to access another machine. 49.1. About the SSH key format IdM accepts the following two SSH key formats: OpenSSH-style key Raw RFC 4253-style key Note that IdM automatically converts RFC 4253-style keys into OpenSSH-style keys before saving them into the IdM LDAP server. The IdM server can identify the type of key, such as an RSA or DSA key, from the uploaded key blob. In a key file such as ~/.ssh/known_hosts , a key entry is identified by the hostname and IP address of the server, its type, and the key. For example: This is different from a user public key entry, which has the elements in the order type key== comment : A key file, such as id_rsa.pub , consists of three parts: the key type, the key, and an additional comment or identifier. When uploading a key to IdM, you can upload all three key parts or only the key. If you only upload the key, IdM automatically identifies the key type, such as RSA or DSA, from the uploaded key. If you use the host public key entry from the ~/.ssh/known_hosts file, you must reorder it to match the format of a user key, type key== comment : IdM can determine the key type automatically from the content of the public key. The comment is optional, to make identifying individual keys easier. The only required element is the public key blob. IdM uses public keys stored in the following OpenSSH-style files: Host public keys are in the known_hosts file. User public keys are in the authorized_keys file. Additional resources See RFC 4716 See RFC 4253 49.2. About IdM and OpenSSH During an IdM server or client installation, as part of the install script: An OpenSSH server and client is configured on the IdM client machine. SSSD is configured to store and retrieve user and host SSH keys in cache. This allows IdM to serve as a universal and centralized repository of SSH keys. If you enable the SSH service during the client installation, an RSA key is created when the SSH service is started for the first time. Note When you run the ipa-client-install install script to add the machine as an IdM client, the client is created with two SSH keys, RSA and DSA. As part of the installation, you can configure the following: Configure OpenSSH to automatically trust the IdM DNS records where the key fingerprints are stored using the --ssh-trust-dns option. Disable OpenSSH and prevent the install script from configuring the OpenSSH server using the --no-sshd option. Prevent the host from creating DNS SSHFP records with its own DNS entries using the --no-dns-sshfp option. If you do not configure the server or client during installation, you can manually configure SSSD later. For information on how to manually configure SSSD, see Configuring SSSD to Provide a Cache for the OpenSSH Services . Note that caching SSH keys by SSSD requires administrative privileges on the local machines. 49.3. Generating SSH keys You can generate an SSH key by using the OpenSSH ssh-keygen utility. Procedure To generate an RSA SSH key, run the following command: Note if generating a host key, replace [email protected] with the required hostname, such as server.example.com,1.2.3.4 . 
Specify the file where you are saving the key or press enter to accept the displayed default location. Note if generating a host key, save the key to a different location than the user's ~/.ssh/ directory so you do not overwrite any existing keys. for example, /home/user/.ssh/host_keys . Specify a passphrase for your private key or press enter to leave the passphrase blank. To upload this SSH key, use the public key string stored in the displayed file. 49.4. Managing public SSH keys for hosts OpenSSH uses public keys to authenticate hosts. One machine attempts to access another machine and presents its key pair. The first time the host authenticates, the administrator on the target machine has to approve the request manually. The machine then stores the host's public key in a known_hosts file. Any time that the remote machine attempts to access the target machine again, the target machine checks its known_hosts file and then grants access automatically to approved hosts. 49.4.1. Uploading SSH keys for a host using the IdM Web UI Identity Management allows you to upload a public SSH key to a host entry. OpenSSH uses public keys to authenticate hosts. Prerequisites Administrator privileges for managing the IdM Web UI or User Administrator role. Procedure You can retrieve the key for your host from a ~/.ssh/known_hosts file. For example: You can also generate a host key. See Generating SSH keys . Copy the public key from the key file. The full key entry has the form host name,IP type key== . Only the key== is required, but you can store the entire entry. To use all elements in the entry, rearrange the entry so it has the order type key== [host name,IP] . Log into the IdM Web UI. Go to the Identity>Hosts tab. Click the name of the host to edit. In the Host Settings section, click the SSH public keys Add button. Paste the public key for the host into the SSH public key field. Click Set . Click Save at the top of the IdM Web UI window. Verification Under the Hosts Settings section, verify the key is listed under SSH public keys . 49.4.2. Uploading SSH keys for a host using the IdM CLI Identity Management allows you to upload a public SSH key to a host entry. OpenSSH uses public keys to authenticate hosts. Host SSH keys are added to host entries in IdM, when the host is created using host-add or by modifying the entry later. Note RSA and DSA host keys are created by the ipa-client-install command, unless the SSH service is explicitly disabled in the installation script. Prerequisites Administrator privileges for managing IdM or User Administrator role. Procedure Run the host-mod command with the --sshpubkey option to upload the base64-encoded public key to the host entry. Because adding a host key changes the DNS Secure Shell fingerprint (SSHFP) record for the host, use the --updatedns option to update the host's DNS entry. For example: A real key also usually ends with an equal sign (=) but is longer. To upload more than one key, enter multiple --sshpubkey command-line parameters: Note A host can have multiple public keys. After uploading the host keys, configure SSSD to use Identity Management as one of its identity domains and set up OpenSSH to use the SSSD tools for managing host keys, covered in Configuring SSSD to Provide a Cache for the OpenSSH Services . Verification Run the ipa host-show command to verify that the SSH public key is associated with the specified host: 49.4.3. 
Deleting SSH keys for a host using the IdM Web UI You can remove the host keys once they expire or are no longer valid. Follow the steps below to remove an individual host key by using the IdM Web UI. Prerequisites Administrator privileges for managing the IdM Web UI or Host Administrator role. Procedure Log into the IdM Web UI. Go to the Identity>Hosts tab. Click the name of the host to edit. Under the Host Settings section, click Delete to the SSH public key you want to remove. Click Save at the top of the page. Verification Under the Host Settings section, verify the key is no longer listed under SSH public keys . 49.4.4. Deleting SSH keys for a host using the IdM CLI You can remove the host keys once they expire or are no longer valid. Follow the steps below to remove an individual host key by using the IdM CLI. Prerequisites Administrator privileges for managing the IdM CLI or Host Administrator role. Procedure To delete all SSH keys assigned to a host account, add the --sshpubkey option to the ipa host-mod command without specifying any key: Note that it is good practice to use the --updatedns option to update the host's DNS entry. IdM determines the key type automatically from the key, if the type is not included in the uploaded key. Verification Run the ipa host-show command to verify that the SSH public key is no longer associated with the specified host: 49.5. Managing public SSH keys for users Identity Management allows you to upload a public SSH key to a user entry. The user who has access to the corresponding private SSH key can use SSH to log into an IdM machine without using Kerberos credentials. Note that users can still authenticate by providing their Kerberos credentials if they are logging in from a machine where their private SSH key file is not available. 49.5.1. Uploading SSH keys for a user using the IdM Web UI Identity Management allows you to upload a public SSH key to a user entry. The user who has access to the corresponding private SSH key can use SSH to log into an IdM machine without using Kerberos credentials. Prerequisites Administrator privileges for managing the IdM Web UI or User Administrator role. Procedure Log into the IdM Web UI. Go to the Identity>Users tab. Click the name of the user to edit. In the Account Settings section, click the SSH public keys Add button. Paste the Base 64-encoded public key string into the SSH public key field. Click Set . Click Save at the top of the IdM Web UI window. Verification Under the Accounts Settings section, verify the key is listed under SSH public keys . 49.5.2. Uploading SSH keys for a user using the IdM CLI Identity Management allows you to upload a public SSH key to a user entry. The user who has access to the corresponding private SSH key can use SSH to log into an IdM machine without using Kerberos credentials. Prerequisites Administrator privileges for managing the IdM CLI or User Administrator role. Procedure Run the ipa user-mod command with the --sshpubkey option to upload the base64-encoded public key to the user entry. Note in this example you upload the key type, the key, and the hostname identifier to the user entry. To upload multiple keys, use --sshpubkey multiple times. For example, to upload two SSH keys: To use command redirection and point to a file that contains the key instead of pasting the key string manually, use the following command: Verification Run the ipa user-show command to verify that the SSH public key is associated with the specified user: 49.5.3. 
Deleting SSH keys for a user using the IdM Web UI Follow this procedure to delete an SSH key from a user profile in the IdM Web UI. Prerequisites Administrator privileges for managing the IdM Web UI or User Administrator role. Procedure Log into the IdM Web UI. Go to the Identity>Users tab. Click the name of the user to edit. Under the Account Settings section, under SSH public key , click Delete to the key you want to remove. Click Save at the top of the page. Verification Under the Account Settings section, verify the key is no longer listed under SSH public keys . 49.5.4. Deleting SSH keys for a user using the IdM CLI Follow this procedure to delete an SSH key from a user profile by using the IdM CLI. Prerequisites Administrator privileges for managing the IdM CLI or User Administrator role. Procedure To delete all SSH keys assigned to a user account, add the --sshpubkey option to the ipa user-mod command without specifying any key: To only delete a specific SSH key or keys, use the --sshpubkey option to specify the keys you want to keep, omitting the key you are deleting. Verification Run the ipa user-show command to verify that the SSH public key is no longer associated with the specified user: | [
"host.example.com,1.2.3.4 ssh-rsa AAA...ZZZ==",
"\"ssh-rsa ABCD1234...== ipaclient.example.com\"",
"ssh-rsa AAA...ZZZ== host.example.com,1.2.3.4",
"ssh-keygen -t rsa -C [email protected] Generating public/private rsa key pair.",
"Enter file in which to save the key (/home/user/.ssh/id_rsa):",
"Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/user/.ssh/id_rsa. Your public key has been saved in /home/user/.ssh/id_rsa.pub. The key fingerprint is: SHA256:ONxjcMX7hJ5zly8F8ID9fpbqcuxQK+ylVLKDMsJPxGA [email protected] The key's randomart image is: +---[RSA 3072]----+ | ..o | | .o + | | E. . o = | | ..o= o . + | | +oS. = + o.| | . .o .* B =.+| | o + . X.+.= | | + o o.*+. .| | . o=o . | +----[SHA256]-----+",
"server.example.com,1.2.3.4 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEApvjBvSFSkTU0WQW4eOweeo0DZZ08F9Ud21xlLy6FOhzwpXFGIyxvXZ52+siHBHbbqGL5+14N7UvElruyslIHx9LYUR/pPKSMXCGyboLy5aTNl5OQ5EHwrhVnFDIKXkvp45945R7SKYCUtRumm0Iw6wq0XD4o+ILeVbV3wmcB1bXs36ZvC/M6riefn9PcJmh6vNCvIsbMY6S+FhkWUTTiOXJjUDYRLlwM273FfWhzHK+SSQXeBp/zIn1gFvJhSZMRi9HZpDoqxLbBB9QIdIw6U4MIjNmKsSI/ASpkFm2GuQ7ZK9KuMItY2AoCuIRmRAdF8iYNHBTXNfFurGogXwRDjQ==",
"cat /home/user/.ssh/host_keys.pub ssh-rsa AAAAB3NzaC1yc2E...tJG1PK2Mq++wQ== server.example.com,1.2.3.4",
"ipa host-mod --sshpubkey=\"ssh-rsa RjlzYQo==\" --updatedns host1.example.com",
"--sshpubkey=\"RjlzYQo==\" --sshpubkey=\"ZEt0TAo==\"",
"ipa host-show client.ipa.test SSH public key fingerprint: SHA256:qGaqTZM60YPFTngFX0PtNPCKbIuudwf1D2LqmDeOcuA [email protected] (ssh-rsa)",
"kinit admin ipa host-mod --sshpubkey= --updatedns host1.example.com",
"ipa host-show client.ipa.test Host name: client.ipa.test Platform: x86_64 Operating system: 4.18.0-240.el8.x86_64 Principal name: host/[email protected] Principal alias: host/[email protected] Password: False Member of host-groups: ipaservers Roles: helpdesk Member of netgroups: test Member of Sudo rule: test2 Member of HBAC rule: test Keytab: True Managed by: client.ipa.test, server.ipa.test Users allowed to retrieve keytab: user1, user2, user3",
"ipa user-mod user --sshpubkey=\"ssh-rsa AAAAB3Nza...SNc5dv== client.example.com\"",
"--sshpubkey=\"AAAAB3Nza...SNc5dv==\" --sshpubkey=\"RjlzYQo...ZEt0TAo=\"",
"ipa user-mod user --sshpubkey=\"USD(cat ~/.ssh/id_rsa.pub)\" --sshpubkey=\"USD(cat ~/.ssh/id_rsa2.pub)\"",
"ipa user-show user User login: user First name: user Last name: user Home directory: /home/user Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 1118800019 GID: 1118800019 SSH public key fingerprint: SHA256:qGaqTZM60YPFTngFX0PtNPCKbIuudwf1D2LqmDeOcuA [email protected] (ssh-rsa) Account disabled: False Password: False Member of groups: ipausers Subordinate ids: 3167b7cc-8497-4ff2-ab4b-6fcb3cb1b047 Kerberos keys available: False",
"ipa user-mod user --sshpubkey=",
"ipa user-show user User login: user First name: user Last name: user Home directory: /home/user Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 1118800019 GID: 1118800019 Account disabled: False Password: False Member of groups: ipausers Subordinate ids: 3167b7cc-8497-4ff2-ab4b-6fcb3cb1b047 Kerberos keys available: False"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/managing-public-ssh-keys_managing-users-groups-hosts |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following: Create issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in any other fields at their default values. Add a reporter name. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_3scale_api_management_with_the_streams_for_apache_kafka_bridge/proc-providing-feedback-on-redhat-documentation |
A.13. numad | A.13. numad numad is an automatic NUMA affinity management daemon. It monitors NUMA topology and resource usage within a system in order to dynamically improve NUMA resource allocation and management. Note that when numad is enabled, its behavior overrides the default behavior of automatic NUMA balancing. A.13.1. Using numad from the Command Line To use numad as an executable, just run: While numad runs, its activities are logged in /var/log/numad.log . It will run until stopped with the following command: Stopping numad does not remove the changes it has made to improve NUMA affinity. If system use changes significantly, running numad again will adjust affinity to improve performance under the new conditions. To restrict numad management to a specific process, start it with the following options. -p pid This option adds the specified pid to an explicit inclusion list. The process specified will not be managed until it meets the numad process significance threshold. -S 0 This sets the type of process scanning to 0 , which limits numad management to explicitly included processes. For further information about available numad options, refer to the numad man page: A.13.2. Using numad as a Service While numad runs as a service, it attempts to tune the system dynamically based on the current system workload. Its activities are logged in /var/log/numad.log . To start the service, run: To make the service persist across reboots, run: For further information about available numad options, refer to the numad man page: A.13.3. Pre-Placement Advice numad provides a pre-placement advice service that can be queried by various job management systems to provide assistance with the initial binding of CPU and memory resources for their processes. This pre-placement advice is available regardless of whether numad is running as an executable or a service. A.13.4. Using numad with KSM If KSM is in use on a NUMA system, change the value of the /sys/kernel/mm/ksm/merge_nodes parameter to 0 to avoid merging pages across NUMA nodes. Otherwise, KSM increases remote memory accesses as it merges pages across nodes. Furthermore, kernel memory accounting statistics can eventually contradict each other after large amounts of cross-node merging. As such, numad can become confused about the correct amounts and locations of available memory, after the KSM daemon merges many memory pages. KSM is beneficial only if you are overcommitting the memory on your system. If your system has sufficient free memory, you may achieve higher performance by turning off and disabling the KSM daemon. | [
"numad",
"numad -i 0",
"numad -S 0 -p pid",
"man numad",
"systemctl start numad.service",
"chkconfig numad on",
"man numad"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Tool_Reference-numad |
Preface | Preface Red Hat build of OpenJDK (Open Java Development Kit) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). The Red Hat build of OpenJDK is available in four versions: 8u, 11u, 17u, and 21u. Packages for the Red Hat build of OpenJDK are made available on Red Hat Enterprise Linux and Microsoft Windows and shipped as a JDK and JRE in the Red Hat Ecosystem Catalog. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.1/pr01 |
Chapter 3. Getting Started | Chapter 3. Getting Started 3.1. Using the helloworld-mdb Quickstart The helloworld-mdb quickstart uses a simple message-driven bean to demonstrate basic Jakarta EE messaging features. Having the quickstart up and running as you review the basic configuration is an excellent way to introduce yourself to the features included with the JBoss EAP messaging server. Build and Deploy the helloworld-mdb Quickstart See the instructions in the README.md file provided with the quickstart for instructions on building and deploying the helloworld-mdb quickstart. You will need to start the JBoss EAP server specifying the full configuration, which contains the messaging-activemq subsystem. See the README.md file or the JBoss EAP Configuration Guide for details on starting JBoss EAP with a different configuration file. 3.2. Overview of the Messaging Subsystem Configuration Default configuration for the messaging-activemq subsystem is included when starting the JBoss EAP server with the full or full-ha configuration. The full-ha option includes advanced configuration for features like clustering and high availability . Although not necessary, it is recommended that you use the helloworld-mdb quickstart as a working example to have running alongside this overview of the configuration. For information on all settings available in the messaging-activemq subsystem, see the schema definitions located in the EAP_HOME /docs/schema/ directory, or run the read-resource-description operation on the subsystem from the management CLI, as shown below. The following extension in the server configuration file tells JBoss EAP to include the messaging-activemq subsystem as part of its runtime. <extensions> ... <extension module="org.wildfly.extension.messaging-activemq"/> ... </extensions> The configuration for the messaging-activemq subsystem is contained within the <subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0"> element. 
<subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0"> <server name="default"> <cluster password="USD{jboss.messaging.cluster.password:CHANGE ME!!}"/> <security-setting name="#"> <role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/> </security-setting> <address-setting name="#" dead-letter-address="jms.queue.DLQ" expiry-address="jms.queue.ExpiryQueue" max-size-bytes="10485760" page-size-bytes="2097152" message-counter-history-day-limit="10" redistribution-delay="1000"/> <http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/> <http-connector name="http-connector-throughput" socket-binding="http" endpoint="http-acceptor-throughput"> <param name="batch-delay" value="50"/> </http-connector> <in-vm-connector name="in-vm" server-id="0"/> <http-acceptor name="http-acceptor" http-listener="default"/> <http-acceptor name="http-acceptor-throughput" http-listener="default"> <param name="batch-delay" value="50"/> <param name="direct-deliver" value="false"/> </http-acceptor> <in-vm-acceptor name="in-vm" server-id="0"/> <broadcast-group name="bg-group1" connectors="http-connector" jgroups-cluster="activemq-cluster"/> <discovery-group name="dg-group1" jgroups-cluster="activemq-cluster"/> <cluster-connection name="my-cluster" address="jms" connector-name="http-connector" discovery-group="dg-group1"/> <jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/> <jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/> <connection-factory name="InVmConnectionFactory" connectors="in-vm" entries="java:/ConnectionFactory"/> <connection-factory name="RemoteConnectionFactory" ha="true" block-on-acknowledge="true" reconnect-attempts="-1" connectors="http-connector" entries="java:jboss/exported/jms/RemoteConnectionFactory"/> <pooled-connection-factory name="activemq-ra" transaction="xa" connectors="in-vm" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory"/> </server> </subsystem> Connection Factories Messaging clients use a Jakarta Messaging ConnectionFactory object to make connections to the server. The default JBoss EAP configuration defines several connection factories. Note that there is a <connection-factory> for in-vm, http, and pooled connections. <connection-factory name="InVmConnectionFactory" connectors="in-vm" entries="java:/ConnectionFactory"/> <connection-factory name="RemoteConnectionFactory" ha="true" block-on-acknowledge="true" reconnect-attempts="-1" connectors="http-connector" entries="java:jboss/exported/jms/RemoteConnectionFactory"/> <pooled-connection-factory name="activemq-ra" transaction="xa" connectors="in-vm" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory"/> See the Configuring Connection Factories section for more details. Connectors and Acceptors Each Jakarta Messaging connection factory uses connectors to enable Jakarta Messaging-enabled communication from a client producer or consumer to a messaging server. The connector object defines the transport and parameters used to connect to the messaging server. Its counterpart is the acceptor object, which identifies the type of connections accepted by the messaging server. The default JBoss EAP configuration defines several connectors and acceptors. 
Example: Default Connectors <http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/> <http-connector name="http-connector-throughput" socket-binding="http" endpoint="http-acceptor-throughput"> <param name="batch-delay" value="50"/> </http-connector> <in-vm-connector name="in-vm" server-id="0"/> Example: Default Acceptors <http-acceptor name="http-acceptor" http-listener="default"/> <http-acceptor name="http-acceptor-throughput" http-listener="default"> <param name="batch-delay" value="50"/> <param name="direct-deliver" value="false"/> </http-acceptor> See the Acceptors and Connectors section for more details. Socket Binding Groups The socket-binding attribute for the default connectors reference a socket binding named http . The http connector is used because JBoss EAP can multiplex inbound requests over standard web ports. You can find this socket-binding as part of the <socket-binding-group> section elsewhere in the configuration file. Note how the configuration for the http and https socket bindings appear within the <socket-binding-groups> element: <socket-binding-group name="standard-sockets" default-interface="public" port-offset="USD{jboss.socket.binding.port-offset:0}"> ... <socket-binding name="http" port="USD{jboss.http.port:8080}"/> <socket-binding name="https" port="USD{jboss.https.port:8443}"/> ... </socket-binding-group> For information on socket bindings, see Configuring Socket Bindings in the JBoss EAP Configuration Guide . Messaging Security The messaging-activemq subsystem includes a single security-setting element when JBoss EAP is first installed: <security-setting name="#"> <role name="guest" delete-non-durable-queue="true" create-non-durable-queue="true" consume="true" send="true"/> </security-setting> This declares that any user with the role guest can access any address on the server, as noted by the wildcard # . See Configuring Address Settings for more information on the wildcard syntax . For more information on securing destinations and remote connections see Configuring Messaging Security . Messaging Destinations The full and full-ha configurations include two helpful queues that JBoss EAP can use to hold messages that have expired or that cannot be routed to their proper destination. <jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/> <jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/> You can add your own messaging destinations in JBoss EAP using one of the following methods. Using the management CLI Use the following management CLI command to add a queue. Use the following management CLI command to add a topic. Using the management console Messaging destinations can be configured from the management console by navigating to Configuration Subsystems Messaging (ActiveMQ) Server , selecting the server, selecting Destinations , and clicking View . Select the JMS Queue tab to configure queues and select the JMS Topic to configure topics. Defining your destinations using a Jakarta EE deployment descriptor or annotation. In Jakarta EE 8, deployment descriptors can include configuration for queues and topics. Below is a snippet from a Jakarta EE descriptor file that defines a Jakarta Messaging queue. ... <jms-destination> <name>java:global/jms/MyQueue</name> <interfaceName>javax.jms.Queue</interfaceName> <destinationName>myQueue</destinationName> </jms-destination> ... 
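Destinations can also be defined with annotations placed directly on a message-driven bean. The following is a minimal sketch: the class name, queue name, and JNDI binding are illustrative and are not copied from the quickstart source.

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSDestinationDefinition;
import javax.jms.Message;
import javax.jms.MessageListener;

// Hypothetical MDB that both defines and listens on a queue.
@JMSDestinationDefinition(
        name = "java:/queue/ExampleQueue",
        interfaceName = "javax.jms.Queue",
        destinationName = "ExampleQueue")
@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationLookup",
                                  propertyValue = "java:/queue/ExampleQueue"),
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue")
})
public class ExampleQueueMDB implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // Process the received message here.
    }
}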
For example, the message-driven beans in the helloworld-mdb quickstart contain annotations that define the queue and topic needed to run the application. Destinations created in this way will appear in the list of runtime queues. Use the management CLI to display the list of runtime queues. After deploying the quickstart the runtime queues it created will appear as below: See Configuring Messaging Destinations for more detailed information. | [
"/subsystem=messaging-activemq:read-resource-description(recursive=true)",
"<extensions> <extension module=\"org.wildfly.extension.messaging-activemq\"/> </extensions>",
"<subsystem xmlns=\"urn:jboss:domain:messaging-activemq:4.0\"> <server name=\"default\"> <cluster password=\"USD{jboss.messaging.cluster.password:CHANGE ME!!}\"/> <security-setting name=\"#\"> <role name=\"guest\" send=\"true\" consume=\"true\" create-non-durable-queue=\"true\" delete-non-durable-queue=\"true\"/> </security-setting> <address-setting name=\"#\" dead-letter-address=\"jms.queue.DLQ\" expiry-address=\"jms.queue.ExpiryQueue\" max-size-bytes=\"10485760\" page-size-bytes=\"2097152\" message-counter-history-day-limit=\"10\" redistribution-delay=\"1000\"/> <http-connector name=\"http-connector\" socket-binding=\"http\" endpoint=\"http-acceptor\"/> <http-connector name=\"http-connector-throughput\" socket-binding=\"http\" endpoint=\"http-acceptor-throughput\"> <param name=\"batch-delay\" value=\"50\"/> </http-connector> <in-vm-connector name=\"in-vm\" server-id=\"0\"/> <http-acceptor name=\"http-acceptor\" http-listener=\"default\"/> <http-acceptor name=\"http-acceptor-throughput\" http-listener=\"default\"> <param name=\"batch-delay\" value=\"50\"/> <param name=\"direct-deliver\" value=\"false\"/> </http-acceptor> <in-vm-acceptor name=\"in-vm\" server-id=\"0\"/> <broadcast-group name=\"bg-group1\" connectors=\"http-connector\" jgroups-cluster=\"activemq-cluster\"/> <discovery-group name=\"dg-group1\" jgroups-cluster=\"activemq-cluster\"/> <cluster-connection name=\"my-cluster\" address=\"jms\" connector-name=\"http-connector\" discovery-group=\"dg-group1\"/> <jms-queue name=\"ExpiryQueue\" entries=\"java:/jms/queue/ExpiryQueue\"/> <jms-queue name=\"DLQ\" entries=\"java:/jms/queue/DLQ\"/> <connection-factory name=\"InVmConnectionFactory\" connectors=\"in-vm\" entries=\"java:/ConnectionFactory\"/> <connection-factory name=\"RemoteConnectionFactory\" ha=\"true\" block-on-acknowledge=\"true\" reconnect-attempts=\"-1\" connectors=\"http-connector\" entries=\"java:jboss/exported/jms/RemoteConnectionFactory\"/> <pooled-connection-factory name=\"activemq-ra\" transaction=\"xa\" connectors=\"in-vm\" entries=\"java:/JmsXA java:jboss/DefaultJMSConnectionFactory\"/> </server> </subsystem>",
"<connection-factory name=\"InVmConnectionFactory\" connectors=\"in-vm\" entries=\"java:/ConnectionFactory\"/> <connection-factory name=\"RemoteConnectionFactory\" ha=\"true\" block-on-acknowledge=\"true\" reconnect-attempts=\"-1\" connectors=\"http-connector\" entries=\"java:jboss/exported/jms/RemoteConnectionFactory\"/> <pooled-connection-factory name=\"activemq-ra\" transaction=\"xa\" connectors=\"in-vm\" entries=\"java:/JmsXA java:jboss/DefaultJMSConnectionFactory\"/>",
"<http-connector name=\"http-connector\" socket-binding=\"http\" endpoint=\"http-acceptor\"/> <http-connector name=\"http-connector-throughput\" socket-binding=\"http\" endpoint=\"http-acceptor-throughput\"> <param name=\"batch-delay\" value=\"50\"/> </http-connector> <in-vm-connector name=\"in-vm\" server-id=\"0\"/>",
"<http-acceptor name=\"http-acceptor\" http-listener=\"default\"/> <http-acceptor name=\"http-acceptor-throughput\" http-listener=\"default\"> <param name=\"batch-delay\" value=\"50\"/> <param name=\"direct-deliver\" value=\"false\"/> </http-acceptor>",
"<socket-binding-group name=\"standard-sockets\" default-interface=\"public\" port-offset=\"USD{jboss.socket.binding.port-offset:0}\"> <socket-binding name=\"http\" port=\"USD{jboss.http.port:8080}\"/> <socket-binding name=\"https\" port=\"USD{jboss.https.port:8443}\"/> </socket-binding-group>",
"<security-setting name=\"#\"> <role name=\"guest\" delete-non-durable-queue=\"true\" create-non-durable-queue=\"true\" consume=\"true\" send=\"true\"/> </security-setting>",
"<jms-queue name=\"ExpiryQueue\" entries=\"java:/jms/queue/ExpiryQueue\"/> <jms-queue name=\"DLQ\" entries=\"java:/jms/queue/DLQ\"/>",
"jms-queue add --queue-address=testQueue --entries=queue/test,java:jboss/exported/jms/queue/test",
"jms-topic add --topic-address=testTopic --entries=topic/test,java:jboss/exported/jms/topic/test",
"<jms-destination> <name>java:global/jms/MyQueue</name> <interfaceName>javax.jms.Queue</interfaceName> <destinationName>myQueue</destinationName> </jms-destination>",
"/subsystem=messaging-activemq/server=default/runtime-queue=*:read-resource { \"outcome\" => \"success\", \"result\" => [ { \"address\" => [ (\"subsystem\" => \"messaging-activemq\"), (\"server\" => \"default\"), (\"runtime-queue\" => \"jms.queue.HelloWorldMDBQueue\") ], \"outcome\" => \"success\", \"result\" => {\"durable\" => undefined} }, { \"address\" => [ (\"subsystem\" => \"messaging-activemq\"), (\"server\" => \"default\"), (\"runtime-queue\" => \"jms.topic.HelloWorldMDBTopic\") ], \"outcome\" => \"success\", \"result\" => {\"durable\" => undefined} }, ] }"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuring_messaging/getting_started |
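The interactive management CLI commands above can also be run non-interactively from a shell, which is convenient for scripted provisioning. The following is a minimal sketch rather than part of the original example set; it assumes a standalone server started with the full profile, the default management port, and EAP_HOME pointing at the installation directory.

```
# Add the queue and topic from the examples above, then read the queue back
# from the messaging-activemq subsystem to confirm it was created.
$EAP_HOME/bin/jboss-cli.sh --connect \
  --command="jms-queue add --queue-address=testQueue --entries=queue/test,java:jboss/exported/jms/queue/test"

$EAP_HOME/bin/jboss-cli.sh --connect \
  --command="jms-topic add --topic-address=testTopic --entries=topic/test,java:jboss/exported/jms/topic/test"

# Verify the queue resource now exists under the default server.
$EAP_HOME/bin/jboss-cli.sh --connect \
  --command="/subsystem=messaging-activemq/server=default/jms-queue=testQueue:read-resource"
```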
Chapter 3. Using the OpenShift Container Platform dashboard to get cluster information | Chapter 3. Using the OpenShift Container Platform dashboard to get cluster information The OpenShift Container Platform web console captures high-level information about the cluster. 3.1. About the OpenShift Container Platform dashboards page Access the OpenShift Container Platform dashboard, which captures high-level information about the cluster, by navigating to Home Overview from the OpenShift Container Platform web console. The OpenShift Container Platform dashboard provides various cluster information, captured in individual dashboard cards. The OpenShift Container Platform dashboard consists of the following cards: Details provides a brief overview of informational cluster details. Statuses include ok , error , warning , in progress , and unknown . Resources can add custom status names. Cluster ID Provider Version Cluster Inventory details the number of resources and associated statuses. It is helpful when intervention is required to resolve problems, including information about: Number of nodes Number of pods Persistent storage volume claims Bare metal hosts in the cluster, listed according to their state (only available in metal3 environment) Status helps administrators understand how cluster resources are consumed. Click on a resource to jump to a detailed page listing pods and nodes that consume the largest amount of the specified cluster resource (CPU, memory, or storage). Cluster Utilization shows the capacity of various resources over a specified period of time, to help administrators understand the scale and frequency of high resource consumption, including information about: CPU time Memory allocation Storage consumed Network resources consumed Pod count Activity lists messages related to recent activity in the cluster, such as pod creation or virtual machine migration to another host. 3.2. Recognizing resource and project limits and quotas You can view a graphical representation of available resources in the Topology view of the web console Developer perspective. If a resource has a message about resource limitations or quotas being reached, a yellow border appears around the resource name. Click the resource to open a side panel to see the message. If the Topology view has been zoomed out, a yellow dot indicates that a message is available. If you are using List View from the View Shortcuts menu, resources appear as a list. The Alerts column indicates if a message is available. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/web_console/using-dashboard-to-get-cluster-info
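The same high-level information and quota checks are also available from the command line. This is a minimal CLI sketch, not covered in the chapter itself; it assumes you are logged in with oc and that <project> is replaced with a real namespace.

```
# Roughly the Details, Cluster Inventory, and Cluster Utilization cards.
oc get clusterversion
oc get nodes
oc adm top nodes

# Check whether a project is hitting its quotas or limit ranges,
# which is what the yellow border in the Topology view indicates.
oc describe quota -n <project>
oc describe limitrange -n <project>
```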
Chapter 7. Bucket policies in the Multicloud Object Gateway | Chapter 7. Bucket policies in the Multicloud Object Gateway OpenShift Data Foundation supports AWS S3 bucket policies. Bucket policies allow you to grant users access permissions for buckets and the objects in them. 7.1. Introduction to bucket policies Bucket policies are an access policy option available for you to grant permission to your AWS S3 buckets and objects. Bucket policies use JSON-based access policy language. For more information about access policy language, see AWS Access Policy Language Overview . 7.2. Using bucket policies in Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications A valid Multicloud Object Gateway user account. See Creating a user in the Multicloud Object Gateway for instructions to create a user account. Procedure To use bucket policies in the MCG: Create the bucket policy in JSON format. For example: Replace [email protected] with a valid Multicloud Object Gateway user account. Using the AWS S3 client, use the put-bucket-policy command to apply the bucket policy to your S3 bucket: Replace ENDPOINT with the S3 endpoint. Replace MyBucket with the bucket to set the policy on. Replace BucketPolicy with the bucket policy JSON file. Add --no-verify-ssl if you are using the default self-signed certificates. For example: For more information on the put-bucket-policy command, see the AWS CLI Command Reference for put-bucket-policy . Note The principal element specifies the user that is allowed or denied access to a resource, such as a bucket. Currently, only NooBaa accounts can be used as principals. In the case of object bucket claims, NooBaa automatically creates an account obc-account.<generated bucket name>@noobaa.io . Note Bucket policy conditions are not supported. Additional resources There are many available elements for bucket policies with regard to access permissions. For details on these elements and examples of how they can be used to control the access permissions, see AWS Access Policy Language Overview . For more examples of bucket policies, see AWS Bucket Policy Examples . OpenShift Data Foundation version 4.16 introduces the bucket policy elements NotPrincipal , NotAction , and NotResource . For more information on these elements, see IAM JSON policy elements reference . 7.3. Creating a user in the Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Execute the following command to create an MCG user account: <noobaa-account-name> Specify the name of the new MCG user account. --allow_bucket_create Allows the user to create new buckets. --allowed_buckets Sets the user's allowed bucket list (use commas or multiple flags). --default_resource Sets the default resource. The new buckets are created on this default resource (including the future ones). --full_permission Allows this account to access all existing and future buckets. Important You need to provide permission to access at least one bucket or full permission to access all the buckets. | [
"{ \"Version\": \"NewVersion\", \"Statement\": [ { \"Sid\": \"Example\", \"Effect\": \"Allow\", \"Principal\": [ \"[email protected]\" ], \"Action\": [ \"s3:GetObject\" ], \"Resource\": [ \"arn:aws:s3:::john_bucket\" ] } ] }",
"aws --endpoint ENDPOINT --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy file:// BucketPolicy",
"aws --endpoint https://s3-openshift-storage.apps.gogo44.noobaa.org --no-verify-ssl s3api put-bucket-policy -bucket MyBucket --policy file://BucketPolicy",
"noobaa account create <noobaa-account-name> [--allow_bucket_create=true] [--allowed_buckets=[]] [--default_resource=''] [--full_permission=false]"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/managing_hybrid_and_multicloud_resources/bucket-policies-in-the-multicloud-object-gateway |
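After applying a policy with put-bucket-policy, it is often useful to read it back and to confirm that the principal can actually perform the granted action. The following is a minimal sketch along the lines of the examples above; the endpoint, bucket, and object key are placeholders, and --no-verify-ssl again assumes the default self-signed certificates.

```
# Read back the policy stored on the bucket.
aws --endpoint https://s3-openshift-storage.apps.example.com --no-verify-ssl \
    s3api get-bucket-policy --bucket MyBucket

# Using the S3 credentials of the account named in Principal, verify that the
# granted s3:GetObject action works against an object in the bucket.
aws --endpoint https://s3-openshift-storage.apps.example.com --no-verify-ssl \
    s3api get-object --bucket MyBucket --key test-object /tmp/test-object
```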
Chapter 3. Resolved issues | Chapter 3. Resolved issues The following issues are resolved in the latest release of the JBoss Web Server collection: Issue Description AMW-203 Typo in jws_conf_loggging variable AMW-200 jws_selinux_enabled requires jws_native | null | https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_ansible_certified_content_collection_for_red_hat_jboss_web_server_release_notes/resolved_issues |
Chapter 4. Deploy standalone Multicloud Object Gateway in internal mode | Chapter 4. Deploy standalone Multicloud Object Gateway in internal mode Deploying only the Multicloud Object Gateway component with the OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component in internal mode, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway Note Deploying standalone Multicloud Object Gateway component is not supported in external mode deployments. 4.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.9 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Note We recommend using all default settings. Changing it may result in unexpected behavior. Alter only if you are aware of its result. Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Operators and verify if OpenShift Data Foundation is available. Important In case the console plugin option was not automatically enabled after you installed the OpenShift Data Foundation Operator, you need to enable it. 
For more information on how to enable the console plugin, see Enabling the Red Hat OpenShift Data Foundation console plugin . 4.2. Creating standalone Multicloud Object Gateway Use this section to create only the Multicloud Object Gateway component with OpenShift Data Foundation. Prerequisites Ensure that OpenShift Data Foundation Operator is installed. (For deploying using local storage devices only) Ensure that Local Storage Operator is installed. Ensure that you have a storage class and that it is set as the default. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, expand Advanced . Select Multicloud Object Gateway for Deployment type . Click Next . Optional: In the Security page, select Connect to an external key management service . Key Management Service Provider is set to Vault by default. Enter Vault Service Name , host Address of Vault server ('https:// <hostname or ip> '), Port number , and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate , and Client Private Key . Click Save . Click Next . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage OpenShift Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verify the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any worker node) noobaa-db-pg-* (1 pod on any worker node) noobaa-endpoint-* (1 pod on any worker node) | [
"oc annotate namespace openshift-storage openshift.io/node-selector="
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/deploy-standalone-multicloud-object-gateway |
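The verification steps above use the web console; the same checks can be made from the command line. A minimal sketch, assuming the default openshift-storage namespace used throughout this guide:

```
# Confirm the Multicloud Object Gateway pods listed in the table are Running.
oc get pods -n openshift-storage | grep noobaa

# Check the overall NooBaa system phase; it should report Ready.
oc get noobaa -n openshift-storage noobaa -o jsonpath='{.status.phase}{"\n"}'
```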
Chapter 4. Regional-DR solution for OpenShift Data Foundation | Chapter 4. Regional-DR solution for OpenShift Data Foundation 4.1. Components of Regional-DR solution Regional-DR is composed of Red Hat Advanced Cluster Management for Kubernetes and OpenShift Data Foundation components to provide application and data mobility across Red Hat OpenShift Container Platform clusters. Red Hat Advanced Cluster Management for Kubernetes Red Hat Advanced Cluster Management (RHACM) provides the ability to manage multiple clusters and application lifecycles. Hence, it serves as a control plane in a multi-cluster environment. RHACM is split into two parts: RHACM Hub: components that run on the multi-cluster control plane. Managed clusters: components that run on the clusters that are managed. For more information about this product, see RHACM documentation and the RHACM "Manage Applications" documentation . OpenShift Data Foundation OpenShift Data Foundation provides the ability to provision and manage storage for stateful applications in an OpenShift Container Platform cluster. OpenShift Data Foundation is backed by Ceph as the storage provider, whose lifecycle is managed by Rook in the OpenShift Data Foundation component stack. Ceph-CSI provides the provisioning and management of Persistent Volumes for stateful applications. OpenShift Data Foundation stack is now enhanced with the following abilities for disaster recovery: Enable RBD block pools for mirroring across OpenShift Data Foundation instances (clusters) Ability to mirror specific images within an RBD block pool Provides csi-addons to manage per Persistent Volume Claim (PVC) mirroring OpenShift DR OpenShift DR is a set of orchestrators to configure and manage stateful applications across a set of peer OpenShift clusters which are managed using RHACM and provides cloud-native interfaces to orchestrate the life-cycle of an application's state on Persistent Volumes. These include: Protecting an application and its state relationship across OpenShift clusters Failing over an application and its state to a peer cluster Relocate an application and its state to the previously deployed cluster OpenShift DR is split into three components: ODF Multicluster Orchestrator : Installed on the multi-cluster control plane (RHACM Hub), it orchestrates configuration and peering of OpenShift Data Foundation clusters for Metro and Regional DR relationships OpenShift DR Hub Operator : Automatically installed as part of ODF Multicluster Orchestrator installation on the hub cluster to orchestrate failover or relocation of DR enabled applications. OpenShift DR Cluster Operator : Automatically installed on each managed cluster that is part of a Metro and Regional DR relationship to manage the lifecycle of all PVCs of an application. 4.2. Regional-DR deployment workflow This section provides an overview of the steps required to configure and deploy Regional-DR capabilities using the latest version of Red Hat OpenShift Data Foundation across two distinct OpenShift Container Platform clusters. In addition to two managed clusters, a third OpenShift Container Platform cluster will be required to deploy the Red Hat Advanced Cluster Management (RHACM). To configure your infrastructure, perform the below steps in the order given: Ensure requirements across the three: Hub, Primary and Secondary Openshift Container Platform clusters that are part of the DR solution are met. See Requirements for enabling Regional-DR . 
Install OpenShift Data Foundation operator and create a storage system on Primary and Secondary managed clusters. See Creating OpenShift Data Foundation cluster on managed clusters . Install the ODF Multicluster Orchestrator on the Hub cluster. See Installing ODF Multicluster Orchestrator on Hub cluster . Configure SSL access between the Hub, Primary and Secondary clusters. See Configuring SSL access across clusters . Create a DRPolicy resource for use with applications requiring DR protection across the Primary and Secondary clusters. See Creating Disaster Recovery Policy on Hub cluster . Note There can be more than a single policy. Testing your disaster recovery solution with: Subscription-based application: Create Subscription-based applications. See Creating sample application . Test failover and relocate operations using the sample subscription-based application between managed clusters. See Subscription-based application failover and relocating subscription-based application . ApplicationSet-based application: Create sample applications. See Creating ApplicationSet-based applications . Test failover and relocate operations using the sample application between managed clusters. See ApplicationSet-based application failover and relocating ApplicationSet-based application . 4.3. Requirements for enabling Regional-DR The prerequisites to installing a disaster recovery solution supported by Red Hat OpenShift Data Foundation are as follows: You must have three OpenShift clusters that have network reachability between them: Hub cluster where Red Hat Advanced Cluster Management (RHACM) for Kubernetes operator is installed. Primary managed cluster where OpenShift Data Foundation is running. Secondary managed cluster where OpenShift Data Foundation is running. Note For configuring hub recovery setup, you need a 4th cluster which acts as the passive hub. The primary managed cluster (Site-1) can be co-situated with the active RHACM hub cluster while the passive hub cluster is situated along with the secondary managed cluster (Site-2). Alternatively, the active RHACM hub cluster can be placed in a neutral site (Site-3) that is not impacted by the failures of either of the primary managed cluster at Site-1 or the secondary cluster at Site-2. In this situation, if a passive hub cluster is used it can be placed with the secondary cluster at Site-2. For more information, see Configuring passive hub cluster for hub recovery . Hub recovery is a Technology Preview feature and is subject to Technology Preview support limitations. Ensure that RHACM operator and MultiClusterHub is installed on the Hub cluster. See RHACM installation guide for instructions. After the operator is successfully installed, a popover with a message that the Web console update is available appears on the user interface. Click Refresh web console from this popover for the console changes to reflect. Important Ensure that application traffic routing and redirection are configured appropriately. On the Hub cluster Navigate to All Clusters Infrastructure Clusters . Import or create the Primary managed cluster and the Secondary managed cluster using the RHACM console. Choose the appropriate options for your environment. For instructions, see Creating a cluster and Importing a target managed cluster to the hub cluster . Connect the private OpenShift cluster and service networks using the RHACM Submariner add-ons. Verify that the two clusters have non-overlapping service and cluster private networks. 
Otherwise, ensure that the Globalnet is enabled during the Submariner add-ons installation. Run the following command for each of the managed clusters to determine if Globalnet needs to be enabled. The example shown here is for non-overlapping cluster and service networks, so Globalnet would not be enabled. Example output for Primary cluster: Example output for Secondary cluster: For more information, see Submariner documentation . 4.4. Creating an OpenShift Data Foundation cluster on managed clusters In order to configure storage replication between the two OpenShift Container Platform clusters, create an OpenShift Data Foundation storage system after you install the OpenShift Data Foundation operator. Note Refer to OpenShift Data Foundation deployment guides and instructions that are specific to your infrastructure (AWS, VMware, BM, Azure, etc.). Procedure Install and configure the latest OpenShift Data Foundation cluster on each of the managed clusters. For information about the OpenShift Data Foundation deployment, refer to your infrastructure-specific deployment guides (for example, AWS, VMware, Bare metal, Azure). Note While creating the storage cluster, in the Data Protection step, you must select the Prepare cluster for disaster recovery (Regional-DR only) checkbox. Validate the successful deployment of OpenShift Data Foundation on each managed cluster with the following command: For the Multicloud Gateway (MCG): If the status result is Ready for both queries on the Primary managed cluster and the Secondary managed cluster , then continue with the next step. In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources and verify that the Status of StorageCluster is Ready and has a green tick mark next to it. [Optional] If Globalnet was enabled when Submariner was installed, then edit the StorageCluster after the OpenShift Data Foundation install finishes. For Globalnet networks, manually edit the StorageCluster yaml to add the clusterID and set enabled to true . Replace <clustername> with your RHACM imported or newly created managed cluster name. Edit the StorageCluster on both the Primary managed cluster and the Secondary managed cluster. Warning Do not make this change in the StorageCluster unless you enabled Globalnet when Submariner was installed. After the above changes are made, wait for the OSD pods to restart and the OSD services to be created. Wait for all MONS to fail over. Ensure that the MONS and OSD services are exported. Ensure that the cluster is in a Ready state and the cluster health has a green tick indicating Health ok . Verify using step 3. 4.5. Installing OpenShift Data Foundation Multicluster Orchestrator operator OpenShift Data Foundation Multicluster Orchestrator is a controller that is installed from OpenShift Container Platform's OperatorHub on the Hub cluster. Procedure On the Hub cluster , navigate to OperatorHub and use the keyword filter to search for ODF Multicluster Orchestrator . Click the ODF Multicluster Orchestrator tile. Keep all default settings and click Install . Ensure that the operator resources are installed in the openshift-operators project and available to all namespaces. Note The ODF Multicluster Orchestrator also installs the OpenShift DR Hub Operator on the RHACM hub cluster as a dependency. Verify that the operator Pods are in a Running state. The OpenShift DR Hub operator is also installed at the same time in the openshift-operators namespace. Example output: 4.6.
Configuring SSL access across clusters Configure network (SSL) access between the primary and secondary clusters so that metadata can be stored on the alternate cluster in a Multicloud Gateway (MCG) object bucket using a secure transport protocol and in the Hub cluster for verifying access to the object buckets. Note If all of your OpenShift clusters are deployed using a signed and valid set of certificates for your environment, then this section can be skipped. Procedure Extract the ingress certificate for the Primary managed cluster and save the output to primary.crt . Extract the ingress certificate for the Secondary managed cluster and save the output to secondary.crt . Create a new ConfigMap file to hold the remote cluster's certificate bundle with filename cm-clusters-crt.yaml . Note There could be more or fewer than three certificates for each cluster, as shown in this example file. Also, ensure that the certificate contents are correctly indented after you copy and paste from the primary.crt and secondary.crt files that were created before. Create the ConfigMap on the Primary managed cluster , Secondary managed cluster , and the Hub cluster . Example output: Patch the default proxy resource on the Primary managed cluster , Secondary managed cluster , and the Hub cluster . Example output: 4.7. Creating Disaster Recovery Policy on Hub cluster OpenShift Disaster Recovery Policy (DRPolicy) resource specifies OpenShift Container Platform clusters participating in the disaster recovery solution and the desired replication interval. DRPolicy is a cluster-scoped resource that users can apply to applications that require a Disaster Recovery solution. The ODF MultiCluster Orchestrator Operator facilitates the creation of each DRPolicy and the corresponding DRClusters through the Multicluster Web console . Prerequisites Ensure that there is a minimum set of two managed clusters. Procedure On the OpenShift console , navigate to All Clusters Data Services Data policies . Click Create DRPolicy . Enter Policy name . Ensure that each DRPolicy has a unique name (for example: ocp4bos1-ocp4bos2-5m ). Select two clusters from the list of managed clusters to which this new policy will be associated. Note If you get an error message "OSDs not migrated" after selecting the clusters, then follow the instructions from the knowledgebase article on Migration of existing OSD to the optimized OSD in OpenShift Data Foundation for Regional-DR cluster before proceeding with the next step. Replication policy is automatically set to Asynchronous (async) based on the OpenShift clusters selected, and a Sync schedule option will become available. Set Sync schedule . Important For every desired replication interval, a new DRPolicy must be created with a unique name (such as: ocp4bos1-ocp4bos2-10m ). The same clusters can be selected, but the Sync schedule can be configured with a different replication interval in minutes/hours/days. The minimum is one minute. Click Create . Verify that the DRPolicy is created successfully. Run this command on the Hub cluster for each of the DRPolicy resources created, where <drpolicy_name> is replaced with your unique name. Example output: When a DRPolicy is created, along with it, two DRCluster resources are also created. It could take up to 10 minutes for all three resources to be validated and for the status to show as Succeeded . Note Editing of SchedulingInterval , ReplicationClassSelector , VolumeSnapshotClassSelector and DRClusters field values is not supported in the DRPolicy.
Verify the object bucket access from the Hub cluster to both the Primary managed cluster and the Secondary managed cluster . Get the names of the DRClusters on the Hub cluster. Example output: Check S3 access to each bucket created on each managed cluster. Use the DRCluster validation command, where <drcluster_name> is replaced with your unique name. Note Editing of Region and S3ProfileName field values is not supported in DRClusters. Example output: Note Make sure to run commands for both DRClusters on the Hub cluster . Verify that the OpenShift DR Cluster operator installation was successful on the Primary managed cluster and the Secondary managed cluster . Example output: You can also verify that OpenShift DR Cluster Operator is installed successfully on the OperatorHub of each managed cluster. Note On the initial run, the VolSync operator is installed automatically. VolSync is used to set up volume replication between two clusters to protect CephFS-based PVCs. The replication feature is enabled by default. Verify the status of the OpenShift Data Foundation mirroring daemon health on the Primary managed cluster and the Secondary managed cluster . Example output: Caution It could take up to 10 minutes for the daemon_health and health to go from Warning to OK . If the status does not become OK eventually, then use the RHACM console to verify that the Submariner connection between managed clusters is still in a healthy state. Do not proceed until all values are OK . 4.8. Create sample application for testing disaster recovery solution OpenShift Data Foundation disaster recovery (DR) solution supports disaster recovery for Subscription-based and ApplicationSet-based applications that are managed by RHACM. For more details, see Subscriptions and ApplicationSet documentation. The following sections detail how to create an application and apply a DRPolicy to an application. Subscription-based applications For OpenShift users that do not have cluster-admin permissions, see the knowledge article on how to assign necessary permissions to an application user for executing disaster recovery actions. ApplicationSet-based applications OpenShift users that do not have cluster-admin permissions cannot create ApplicationSet-based applications. 4.8.1. Subscription-based applications 4.8.1.1. Creating a sample Subscription-based application In order to test failover from the Primary managed cluster to the Secondary managed cluster and relocate , we need a sample application. Prerequisites When creating an application for general consumption, ensure that the application is deployed to ONLY one cluster. Use the sample application called busybox as an example. Ensure all external routes of the application are configured using either Global Traffic Manager (GTM) or Global Server Load Balancing (GLSB) service for traffic redirection when the application fails over or is relocated. As a best practice, group Red Hat Advanced Cluster Management (RHACM) subscriptions that belong together and refer to a single Placement Rule to DR protect them as a group. Further, create them as a single application for a logical grouping of the subscriptions for future DR actions like failover and relocate. Note If unrelated subscriptions refer to the same Placement Rule for placement actions, they are also DR protected as the DR workflow controls all subscriptions that reference the Placement Rule. Procedure On the Hub cluster, navigate to Applications and click Create application . Select type as Subscription .
Enter your application Name (for example, busybox ) and Namespace (for example, busybox-sample ). In the Repository location for resources section, select Repository type Git . Enter the Git repository URL for the sample application, the github Branch and Path where the resources busybox Pod and PVC will be created. Use the sample application repository as https://github.com/red-hat-storage/ocm-ramen-samples . Select Branch as release-4.15 . Choose one of the following Path : busybox-odr to use RBD Regional-DR. busybox-odr-cephfs to use CephFS Regional-DR. Scroll down in the form until you see Deploy application resources on clusters with all specified labels . Select the global Cluster sets or the one that includes the correct managed clusters for your environment. Add a label <name> with its value set to the managed cluster name. Click Create which is at the top right hand corner. On the follow-on screen go to the Topology tab. You should see that there are all Green checkmarks on the application topology. Note To get more information, click on any of the topology elements and a window will appear on the right of the topology view. Validating the sample application deployment. Now that the busybox application has been deployed to your preferred Cluster, the deployment can be validated. Log in to your managed cluster where busybox was deployed by RHACM. Example output: 4.8.1.2. Apply Data policy to sample application Prerequisites Ensure that both managed clusters referenced in the Data policy are reachable. If not, the application will not be protected for disaster recovery until both clusters are online. Procedure On the Hub cluster, navigate to All Clusters Applications . Click the Actions menu at the end of application to view the list of available actions. Click Manage data policy Assign data policy . Select Policy and click . Select an Application resource and then use PVC label selector to select PVC label for the selected application resource. Note You can select more than one PVC label for the selected application resources. You can also use the Add application resource option to add multiple resources. After adding all the application resources, click . Review the Policy configuration details and click Assign . The newly assigned Data policy is displayed on the Manage data policy modal list view. Verify that you can view the assigned policy details on the Applications page. On the Applications page, navigate to the Data policy column and click the policy link to expand the view. Verify that you can see the number of policies assigned along with failover and relocate status. Click View more details to view the status of ongoing activities with the policy in use with the application. Optional: Verify RADOS block device (RBD) volumereplication and volumereplicationgroup on the primary cluster. Example output: Example output: Optional: Verify CephFS volsync replication source has been set up successfully in the primary cluster and VolSync ReplicationDestination has been set up in the failover cluster. Example output: Example output: 4.8.2. ApplicationSet-based applications 4.8.2.1. Creating ApplicationSet-based applications Prerequisite Ensure that the Red Hat OpenShift GitOps operator is installed on the Hub cluster. For instructions, see RHACM documentation . Ensure that both Primary and Secondary managed clusters are registered to GitOps. For registration instructions, see Registering managed clusters to GitOps . 
Then check if the Placement used by GitOpsCluster resource to register both managed clusters, has the tolerations to deal with cluster unavailability. You can verify if the following tolerations are added to the Placement using the command oc get placement <placement-name> -n openshift-gitops -o yaml . In case the tolerations are not added, see Configuring application placement tolerations for Red Hat Advanced Cluster Management and OpenShift GitOps . Procedure On the Hub cluster, navigate to All Clusters Applications and click Create application . Choose application type as Argo CD ApplicationSet - Push model In General step 1, enter your Application set name . Select Argo server openshift-gitops and Requeue time as 180 seconds. Click . In the Repository location for resources section, select Repository type Git . Enter the Git repository URL for the sample application, the github Branch and Path where the resources busybox Pod and PVC will be created. Use the sample application repository as https://github.com/red-hat-storage/ocm-ramen-samples Select Revision as release-4.15 Choose one of the following Path: busybox-odr to use RBD Regional-DR. busybox-odr-cephfs to use CephFS Regional-DR. Enter Remote namespace value. (example, busybox-sample) and click . Select Sync policy settings and click . You can choose one or more options. Add a label <name> with its value set to the managed cluster name. Click . Review the setting details and click Submit . 4.8.2.2. Apply Data policy to sample ApplicationSet-based application Prerequisites Ensure that both managed clusters referenced in the Data policy are reachable. If not, the application will not be protected for disaster recovery until both clusters are online. Procedure On the Hub cluster, navigate to All Clusters Applications . Click the Actions menu at the end of application to view the list of available actions. Click Manage data policy Assign data policy . Select Policy and click . Select an Application resource and then use PVC label selector to select PVC label for the selected application resource. Note You can select more than one PVC label for the selected application resources. After adding all the application resources, click . Review the Policy configuration details and click Assign . The newly assigned Data policy is displayed on the Manage data policy modal list view. Verify that you can view the assigned policy details on the Applications page. On the Applications page, navigate to the Data policy column and click the policy link to expand the view. Verify that you can see the number of policies assigned along with failover and relocate status. Optional: Verify Rados block device (RBD) volumereplication and volumereplicationgroup on the primary cluster. Example output: Example output: Optional: Verify CephFS volsync replication source has been setup successfully in the primary cluster and VolSync ReplicationDestination has been setup in the failover cluster. Example output: Example output: 4.8.3. Deleting sample application This section provides instructions for deleting the sample application busybox using the RHACM console. Important When deleting a DR protected application, access to both clusters that belong to the DRPolicy is required. This is to ensure that all protected API resources and resources in the respective S3 stores are cleaned up as part of removing the DR protection. 
If access to one of the clusters is not healthy, deleting the DRPlacementControl resource for the application, on the hub, would remain in the Deleting state. Prerequisites These instructions to delete the sample application should not be executed until the failover and relocate testing is completed and the application is ready to be removed from RHACM and the managed clusters. Procedure On the RHACM console, navigate to Applications . Search for the sample application to be deleted (for example, busybox ). Click the Action Menu (...) to the application you want to delete. Click Delete application . When the Delete application is selected a new screen will appear asking if the application related resources should also be deleted. Select Remove application related resources checkbox to delete the Subscription and PlacementRule. Click Delete . This will delete the busybox application on the Primary managed cluster (or whatever cluster the application was running on). In addition to the resources deleted using the RHACM console, delete the DRPlacementControl if it is not auto-deleted after deleting the busybox application. Log in to the OpenShift Web console for the Hub cluster and navigate to Installed Operators for the project busybox-sample . For ApplicationSet applications, select the project as openshift-gitops . Click OpenShift DR Hub Operator and then click the DRPlacementControl tab. Click the Action Menu (...) to the busybox application DRPlacementControl that you want to delete. Click Delete DRPlacementControl . Click Delete . Note This process can be used to delete any application with a DRPlacementControl resource. 4.9. Subscription-based application failover between managed clusters Failover is a process that transitions an application from a primary cluster to a secondary cluster in the event of a primary cluster failure. While failover provides the ability for the application to run on the secondary cluster with minimal interruption, making an uninformed failover decision can have adverse consequences, such as complete data loss in the event of unnoticed replication failure from primary to secondary cluster. If a significant amount of time has gone by since the last successful replication, it's best to wait until the failed primary is recovered. LastGroupSyncTime is a critical metric that reflects the time since the last successful replication occurred for all PVCs associated with an application. In essence, it measures the synchronization health between the primary and secondary clusters. So, prior to initiating a failover from one cluster to another, check for this metric and only initiate the failover if the LastGroupSyncTime is within a reasonable time in the past. Note During the course of failover the Ceph-RBD mirror deployment on the failover cluster is scaled down to ensure a clean failover for volumes that are backed by Ceph-RBD as the storage provisioner. Prerequisites If your setup has active and passive RHACM hub clusters, see Hub recovery using Red Hat Advanced Cluster Management . When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Navigate to the RHACM console Infrastructure Clusters Cluster list tab. Check the status of both the managed clusters individually before performing failover operation. However, failover operation can still be performed when the cluster you are failing over to is in a Ready state. 
Run the following command on the Hub Cluster to check if lastGroupSyncTime is within an acceptable data loss window, when compared to current time. Example output: Procedure On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Failover application . After the Failover application modal is shown, select policy and target cluster to which the associated application will failover in case of a disaster. Click the Select subscription group dropdown to verify the default selection or modify this setting. By default, the subscription group that replicates for the application resources is selected. Check the status of the Failover readiness . If the status is Ready with a green tick, it indicates that the target cluster is ready for failover to start. Proceed to step 7. If the status is Unknown or Not ready , then wait until the status changes to Ready . Click Initiate . The busybox application is now failing over to the Secondary-managed cluster . Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as FailedOver for the application. Navigate to the Applications Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, click the View more details link. Verify that you can see one or more policy names and the ongoing activities (Last sync time and Activity status) associated with the policy in use with the application. 4.10. ApplicationSet-based application failover between managed clusters Failover is a process that transitions an application from a primary cluster to a secondary cluster in the event of a primary cluster failure. While failover provides the ability for the application to run on the secondary cluster with minimal interruption, making an uninformed failover decision can have adverse consequences, such as complete data loss in the event of unnoticed replication failure from primary to secondary cluster. If a significant amount of time has gone by since the last successful replication, it's best to wait until the failed primary is recovered. LastGroupSyncTime is a critical metric that reflects the time since the last successful replication occurred for all PVCs associated with an application. In essence, it measures the synchronization health between the primary and secondary clusters. So, prior to initiating a failover from one cluster to another, check for this metric and only initiate the failover if the LastGroupSyncTime is within a reasonable time in the past. Note During the course of failover the Ceph-RBD mirror deployment on the failover cluster is scaled down to ensure a clean failover for volumes that are backed by Ceph-RBD as the storage provisioner. Prerequisites If your setup has active and passive RHACM hub clusters, see Hub recovery using Red Hat Advanced Cluster Management . When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Navigate to the RHACM console Infrastructure Clusters Cluster list tab. Check the status of both the managed clusters individually before performing failover operation. However, failover operation can still be performed when the cluster you are failing over to is in a Ready state. 
Run the following command on the Hub Cluster to check if lastGroupSyncTime is within an acceptable data loss window, when compared to current time. Example output: Procedure On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Failover application . When the Failover application modal is shown, verify the details presented are correct and check the status of the Failover readiness . If the status is Ready with a green tick, it indicates that the target cluster is ready for failover to start. Click Initiate . The busybox resources are now created on the target cluster. Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as FailedOver for the application. Navigate to the Applications Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, verify that you can see one or more policy names and the ongoing activities associated with the policy in use with the application. 4.11. Relocating Subscription-based application between managed clusters Relocate an application to its preferred location when all managed clusters are available. Prerequisite If your setup has active and passive RHACM hub clusters, see Hub recovery using Red Hat Advanced Cluster Management . When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Relocate can only be performed when both primary and preferred clusters are up and running. Navigate to RHACM console Infrastructure Clusters Cluster list tab. Check the status of both the managed clusters individually before performing relocate operation. Perform relocate when lastGroupSyncTime is within the replication interval (for example, 5 minutes) when compared to current time. This is recommended to minimize the Recovery Time Objective (RTO) for any single application. Run this command on the Hub Cluster: Example output: Compare the output time (UTC) to current time to validate that all lastGroupSyncTime values are within their application replication interval. If not, wait to Relocate until this is true for all lastGroupSyncTime values. Procedure On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Relocate application . When the Relocate application modal is shown, select policy and target cluster to which the associated application will relocate to in case of a disaster. By default, the subscription group that will deploy the application resources is selected. Click the Select subscription group dropdown to verify the default selection or modify this setting. Check the status of the Relocation readiness . If the status is Ready with a green tick, it indicates that the target cluster is ready for relocation to start. Proceed to step 7. If the status is Unknown or Not ready , then wait until the status changes to Ready . Click Initiate . The busybox resources are now created on the target cluster. Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as Relocated for the application. Navigate to the Applications Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. 
On the Data policy popover, click the View more details link. Verify that you can see one or more policy names and the ongoing activities (Last sync time and Activity status) associated with the policy in use with the application. 4.12. Relocating an ApplicationSet-based application between managed clusters Relocate an application to its preferred location when all managed clusters are available. Prerequisite If your setup has active and passive RHACM hub clusters, see Hub recovery using Red Hat Advanced Cluster Management . When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Relocate can only be performed when both primary and preferred clusters are up and running. Navigate to RHACM console Infrastructure Clusters Cluster list tab. Check the status of both the managed clusters individually before performing relocate operation. Perform relocate when lastGroupSyncTime is within the replication interval (for example, 5 minutes) when compared to current time. This is recommended to minimize the Recovery Time Objective (RTO) for any single application. Run this command on the Hub Cluster: Example output: Compare the output time (UTC) to current time to validate that all lastGroupSyncTime values are within their application replication interval. If not, wait to Relocate until this is true for all lastGroupSyncTime values. Procedure On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Relocate application . When the Relocate application modal is shown, select policy and target cluster to which the associated application will relocate to in case of a disaster. Click Initiate . The busybox resources are now created on the target cluster. Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as Relocated for the application. Navigate to the Applications Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, verify that you can see one or more policy names and the relocation status associated with the policy in use with the application. 4.13. Viewing Recovery Point Objective values for disaster recovery enabled applications Recovery Point Objective (RPO) value is the most recent sync time of persistent data from the cluster where the application is currently active to its peer. This sync time helps determine duration of data lost during failover. Note This RPO value is applicable only for Regional-DR during failover. Relocation ensures there is no data loss during the operation, as all peer clusters are available. You can view the Recovery Point Objective (RPO) value of all the protected volumes for their workload on the Hub cluster. Procedure On the Hub cluster, navigate to Applications Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. A Data Policies modal page appears with the number of disaster recovery policies applied to each application along with failover and relocation status. On the Data Policies modal page, click the View more details link. A detailed Data Policies modal page is displayed that shows the policy names and the ongoing activities (Last sync, Activity status) associated with the policy that is applied to the application. 
The Last sync time reported in the modal page represents the most recent sync time of all volumes that are DR protected for the application. 4.14. Hub recovery using Red Hat Advanced Cluster Management [Technology preview] When your setup has active and passive Red Hat Advanced Cluster Management for Kubernetes (RHACM) hub clusters, and the active hub is down, you can use the passive hub to fail over or relocate the disaster recovery protected workloads. Important Hub recovery is a Technology Preview feature and is subject to Technology Preview support limitations. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . 4.14.1. Configuring passive hub cluster To perform hub recovery in case the active hub is down or unreachable, follow the procedure in this section to configure the passive hub cluster and then fail over or relocate the disaster recovery protected workloads. Procedure Ensure that the RHACM operator and MultiClusterHub are installed on the passive hub cluster. See RHACM installation guide for instructions. After the operator is successfully installed, a popover with a message that the Web console update is available appears on the user interface. Click Refresh web console from this popover for the console changes to reflect. Before hub recovery, configure backup and restore. See Backup and restore topics of RHACM Business continuity guide. Install the multicluster orchestrator (MCO) operator along with Red Hat OpenShift GitOps operator on the passive RHACM hub prior to the restore. For instructions, see Installing OpenShift Data Foundation Multicluster Orchestrator operator . Ensure that .spec.cleanupBeforeRestore is set to None for the Restore.cluster.open-cluster-management.io resource. For details, see Restoring passive resources while checking for backups chapter of RHACM documentation. If SSL access across clusters was configured manually during setup, then re-configure SSL access across clusters. For instructions, see Configuring SSL access across clusters chapter. On the passive hub, add a label to the openshift-operators namespace to enable basic monitoring of the VolumeSyncronizationDelay alert using this command. For alert details, see Disaster recovery alerts chapter. 4.14.2. Switching to passive hub cluster Use this procedure when the active hub is down or unreachable. Procedure Restore the backups on the passive hub cluster. For information, see Restoring a hub cluster from backup. Important Recovering a failed hub to its passive instance will only restore applications and their DR protected state to its last scheduled backup. Any application that was DR protected after the last scheduled backup would need to be protected again on the new hub. Submariner is automatically installed once the managed clusters are imported on the passive hub. Verify that the Primary and Secondary managed clusters are successfully imported into the RHACM console and they are accessible. If any of the managed clusters are down or unreachable, then they will not be successfully imported. Wait until DRPolicy validation succeeds. Verify that the DRPolicy is created successfully.
Run this command on the Hub cluster for each of the DRPolicy resources created, where <drpolicy_name> is replaced with a unique name (a looped version of this check is sketched after the command listing below). Example output: Refresh the RHACM console to make the DR monitoring dashboard tab accessible if it was enabled on the Active hub cluster. If only the active hub cluster is down, restore the hub by performing hub recovery and restoring the backups on the passive hub. If the managed clusters are still accessible, no further action is required. If the primary managed cluster is down along with the active hub cluster, you need to fail over the workloads from the primary managed cluster to the secondary managed cluster. For failover instructions, based on your workload type, see Subscription-based applications or ApplicationSet-based applications . Verify that the failover is successful. When the Primary managed cluster is down, the PROGRESSION status for the workload remains in the Cleaning Up phase until the down managed cluster is back online and successfully imported into the RHACM console. On the passive hub cluster, run the following command to check the PROGRESSION status. Example output: | [
"oc get networks.config.openshift.io cluster -o json | jq .spec",
"{ \"clusterNetwork\": [ { \"cidr\": \"10.5.0.0/16\", \"hostPrefix\": 23 } ], \"externalIP\": { \"policy\": {} }, \"networkType\": \"OVNKubernetes\", \"serviceNetwork\": [ \"10.15.0.0/16\" ] }",
"{ \"clusterNetwork\": [ { \"cidr\": \"10.6.0.0/16\", \"hostPrefix\": 23 } ], \"externalIP\": { \"policy\": {} }, \"networkType\": \"OVNKubernetes\", \"serviceNetwork\": [ \"10.16.0.0/16\" ] }",
"oc get storagecluster -n openshift-storage ocs-storagecluster -o jsonpath='{.status.phase}{\"\\n\"}'",
"oc get noobaa -n openshift-storage noobaa -o jsonpath='{.status.phase}{\"\\n\"}'",
"oc edit storagecluster -o yaml -n openshift-storage",
"spec: network: multiClusterService: clusterID: <clustername> enabled: true",
"oc get serviceexport -n openshift-storage",
"NAME AGE rook-ceph-mon-d 4d14h rook-ceph-mon-e 4d14h rook-ceph-mon-f 4d14h rook-ceph-osd-0 4d14h rook-ceph-osd-1 4d14h rook-ceph-osd-2 4d14h",
"oc get pods -n openshift-operators",
"NAME READY STATUS RESTARTS AGE odf-multicluster-console-6845b795b9-blxrn 1/1 Running 0 4d20h odfmo-controller-manager-f9d9dfb59-jbrsd 1/1 Running 0 4d20h ramen-hub-operator-6fb887f885-fss4w 2/2 Running 0 4d20h",
"oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath=\"{['data']['ca-bundle\\.crt']}\" > primary.crt",
"oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath=\"{['data']['ca-bundle\\.crt']}\" > secondary.crt",
"apiVersion: v1 data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- <copy contents of cert1 from primary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert2 from primary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert3 primary.crt here> -----END CERTIFICATE---- -----BEGIN CERTIFICATE----- <copy contents of cert1 from secondary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert2 from secondary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert3 from secondary.crt here> -----END CERTIFICATE----- kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config",
"oc create -f cm-clusters-crt.yaml",
"configmap/user-ca-bundle created",
"oc patch proxy cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"user-ca-bundle\"}}}'",
"proxy.config.openshift.io/cluster patched",
"oc get drpolicy <drpolicy_name> -o jsonpath='{.status.conditions[].reason}{\"\\n\"}'",
"Succeeded",
"oc get drclusters",
"NAME AGE ocp4bos1 4m42s ocp4bos2 4m42s",
"oc get drcluster <drcluster_name> -o jsonpath='{.status.conditions[2].reason}{\"\\n\"}'",
"Succeeded",
"oc get csv,pod -n openshift-dr-system",
"NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/odr-cluster-operator.v4.15.0 Openshift DR Cluster Operator 4.15.0 Succeeded clusterserviceversion.operators.coreos.com/volsync-product.v0.8.0 VolSync 0.8.0 Succeeded NAME READY STATUS RESTARTS AGE pod/ramen-dr-cluster-operator-6467cf5d4c-cc8kz 2/2 Running 0 3d12h",
"oc get cephblockpool ocs-storagecluster-cephblockpool -n openshift-storage -o jsonpath='{.status.mirroringStatus.summary}{\"\\n\"}'",
"{\"daemon_health\":\"OK\",\"health\":\"OK\",\"image_health\":\"OK\",\"states\":{}}",
"oc get pods,pvc -n busybox-sample",
"NAME READY STATUS RESTARTS AGE pod/busybox-67bf494b9-zl5tr 1/1 Running 0 77s NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/busybox-pvc Bound pvc-c732e5fe-daaf-4c4d-99dd-462e04c18412 5Gi RWO ocs-storagecluster-ceph-rbd 77s",
"oc get volumereplications.replication.storage.openshift.io -A",
"NAME AGE VOLUMEREPLICATIONCLASS PVCNAME DESIREDSTATE CURRENTSTATE busybox-pvc 2d16h rbd-volumereplicationclass-1625360775 busybox-pvc primary Primary",
"oc get volumereplicationgroups.ramendr.openshift.io -A",
"NAME DESIREDSTATE CURRENTSTATE busybox-drpc primary Primary",
"oc get replicationsource -n busybox-sample",
"NAME SOURCE LAST SYNC DURATION NEXT SYNC busybox-pvc busybox-pvc 2022-12-20T08:46:07Z 1m7.794661104s 2022-12-20T08:50:00Z",
"oc get replicationdestination -n busybox-sample",
"NAME LAST SYNC DURATION NEXT SYNC busybox-pvc 2022-12-20T08:46:32Z 4m39.52261108s",
"tolerations: - key: cluster.open-cluster-management.io/unreachable operator: Exists - key: cluster.open-cluster-management.io/unavailable operator: Exists",
"oc get volumereplications.replication.storage.openshift.io -A",
"NAME AGE VOLUMEREPLICATIONCLASS PVCNAME DESIREDSTATE CURRENTSTATE busybox-pvc 2d16h rbd-volumereplicationclass-1625360775 busybox-pvc primary Primary",
"oc get volumereplicationgroups.ramendr.openshift.io -A",
"NAME DESIREDSTATE CURRENTSTATE busybox-drpc primary Primary",
"oc get replicationsource -n busybox-sample",
"NAME SOURCE LAST SYNC DURATION NEXT SYNC busybox-pvc busybox-pvc 2022-12-20T08:46:07Z 1m7.794661104s 2022-12-20T08:50:00Z",
"oc get replicationdestination -n busybox-sample",
"NAME LAST SYNC DURATION NEXT SYNC busybox-pvc 2022-12-20T08:46:32Z 4m39.52261108s",
"oc get drpc -o yaml -A | grep lastGroupSyncTime",
"[...] lastGroupSyncTime: \"2023-07-10T12:40:10Z\"",
"oc get drpc -o yaml -A | grep lastGroupSyncTime",
"[...] lastGroupSyncTime: \"2023-07-10T12:40:10Z\"",
"oc get drpc -o yaml -A | grep lastGroupSyncTime",
"[...] lastGroupSyncTime: \"2023-07-10T12:40:10Z\"",
"oc get drpc -o yaml -A | grep lastGroupSyncTime",
"[...] lastGroupSyncTime: \"2023-07-10T12:40:10Z\"",
"oc label namespace openshift-operators openshift.io/cluster-monitoring='true'",
"oc get drpolicy <drpolicy_name> -o jsonpath='{.status.conditions[].reason}{\"\\n\"}'",
"Succeeded",
"oc get drpc -o wide -A",
"NAMESPACE NAME AGE PREFERREDCLUSTER FAILOVERCLUSTER DESIREDSTATE CURRENTSTATE PROGRESSION START TIME DURATION PEER READY [...] busybox cephfs-busybox-placement-1-drpc 103m cluster-1 cluster-2 Failover FailedOver Cleaning Up 2024-04-15T09:12:23Z False busybox cephfs-busybox-placement-1-drpc 102m cluster-1 Deployed Completed 2024-04-15T07:40:09Z 37.200569819s True [...]"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/rdr-solution |
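Two of the hub-recovery steps above lend themselves to small, self-contained examples. First, a Restore resource with .spec.cleanupBeforeRestore set to None might look like the following sketch; the resource name, namespace, API version, and backup names shown here are illustrative assumptions, so confirm them against the RHACM Backup and restore documentation for your release.

apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Restore
metadata:
  name: restore-acm-passive               # assumed name
  namespace: open-cluster-management-backup
spec:
  cleanupBeforeRestore: None              # required for this procedure
  veleroManagedClustersBackupName: latest
  veleroCredentialsBackupName: latest
  veleroResourcesBackupName: latest

Second, rather than running the DRPolicy check once per policy by hand, the documented commands can be wrapped in a small loop on the passive hub; this is a sketch that assumes the oc context points at the hub cluster.

# Confirm that every DRPolicy reports Succeeded, then review DRPlacementControl progression.
for policy in $(oc get drpolicy -o name); do
  printf '%s: ' "${policy}"
  oc get "${policy}" -o jsonpath='{.status.conditions[].reason}{"\n"}'
done
oc get drpc -o wide -A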
Chapter 1. Red Hat Decision Manager project packaging | Chapter 1. Red Hat Decision Manager project packaging Red Hat Decision Manager projects contain the business assets that you develop in Red Hat Decision Manager. Each project in Red Hat Decision Manager is packaged as a Knowledge JAR (KJAR) file with configuration files such as a Maven project object model file ( pom.xml ), which contains build, environment, and other information about the project, and a KIE module descriptor file ( kmodule.xml ), which contains the KIE base and KIE session configurations for the assets in the project. You deploy the packaged KJAR file to a KIE Server that runs the decision services and other deployable assets (collectively referred to as services ) from that KJAR file. These services are consumed at run time through an instantiated KIE container, or deployment unit . Project KJAR files are stored in a Maven repository and identified by three values: GroupId , ArtifactId , and Version (GAV). The Version value must be unique for every new version that might need to be deployed. To identify an artifact (including a KJAR file), you need all three GAV values. Projects in Business Central are packaged automatically when you build and deploy the projects. For projects outside of Business Central, such as independent Maven projects or projects within a Java application, you must configure the KIE module descriptor settings in an appended kmodule.xml file or directly in your Java application in order to build and deploy the projects. | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/project-packaging-con_packaging-deploying |
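To make the GAV-based flow concrete, the sketch below builds a project into a KJAR with Maven and then creates a KIE container (deployment unit) for it on a KIE Server through the server's REST API. The host, credentials, container ID, and GAV values are placeholders, and the exact endpoint and authentication depend on how your KIE Server is configured, so treat this as an illustration rather than a copy-paste recipe; it also assumes the project uses the kjar packaging type with the kie-maven-plugin.

# Package the project as a KJAR and install it into the local Maven repository;
# use `mvn deploy` instead to push it to a shared repository that the KIE Server can reach.
mvn clean install

# Create a KIE container on the KIE Server from the project's GroupId, ArtifactId, and Version (placeholders).
curl -u '<user>:<password>' -X PUT \
  -H 'Content-Type: application/json' \
  -d '{"container-id":"mortgages_1.0.0","release-id":{"group-id":"com.example","artifact-id":"mortgages","version":"1.0.0"}}' \
  'http://<kie-server-host>:8080/kie-server/services/rest/server/containers/mortgages_1.0.0'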
Chapter 16. Installing a three-node cluster on AWS | Chapter 16. Installing a three-node cluster on AWS In OpenShift Container Platform version 4.14, you can install a three-node cluster on Amazon Web Services (AWS). A three-node cluster consists of three control plane machines, which also act as compute machines. This type of cluster provides a smaller, more resource-efficient cluster for cluster administrators and developers to use for testing, development, and production. You can install a three-node cluster using either installer-provisioned or user-provisioned infrastructure. Note Deploying a three-node cluster using an AWS Marketplace image is not supported. 16.1. Configuring a three-node cluster You configure a three-node cluster by setting the number of worker nodes to 0 in the install-config.yaml file before deploying the cluster. Setting the number of worker nodes to 0 ensures that the control plane machines are schedulable. This allows application workloads to be scheduled to run from the control plane nodes. Note Because application workloads run from control plane nodes, additional subscriptions are required, as the control plane nodes are considered to be compute nodes. Prerequisites You have an existing install-config.yaml file. Procedure Set the number of compute replicas to 0 in your install-config.yaml file, as shown in the following compute stanza: Example install-config.yaml file for a three-node cluster apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0 # ... If you are deploying a cluster with user-provisioned infrastructure: After you create the Kubernetes manifest files, make sure that the spec.mastersSchedulable parameter is set to true in the cluster-scheduler-02-config.yml file. You can locate this file in <installation_directory>/manifests . For more information, see "Creating the Kubernetes manifest and Ignition config files" in "Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates". Do not create additional worker nodes. Example cluster-scheduler-02-config.yml file for a three-node cluster apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: "" status: {} 16.2. Next steps Installing a cluster on AWS with customizations Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates | [
"apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_aws/installing-aws-three-node |
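A quick, hedged way to double-check the three-node configuration described above, before and after installation; the commands assume you run them from the directory that contains install-config.yaml and that <installation_directory> is your installation directory.

# Before installing: the compute entry should show replicas: 0, and (for user-provisioned
# infrastructure) mastersSchedulable must be true in the generated scheduler manifest.
grep -n 'replicas' install-config.yaml
grep mastersSchedulable <installation_directory>/manifests/cluster-scheduler-02-config.yml

# After installing: all three nodes should carry both control plane and worker roles.
oc get nodes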