Chapter 17. Monitoring resources
Chapter 17. Monitoring resources The following chapter details how to configure monitoring and reporting for managed systems. This includes host configuration, content views, compliance, subscriptions, registered hosts, promotions, and synchronization. 17.1. Using the Red Hat Satellite content dashboard The Red Hat Satellite content dashboard contains various widgets which provide an overview of the host configuration, content views, compliance reports, subscriptions and hosts currently registered, promotions and synchronization, and a list of the latest notifications. In the Satellite web UI, navigate to Monitor > Dashboard to access the content dashboard. The dashboard can be rearranged by clicking on a widget and dragging it to a different position. The following widgets are available: Host Configuration Status An overview of the configuration states and the number of hosts associated with it during the last reporting interval. The following table shows the descriptions of the possible configuration states. Table 17.1. Host configuration states Icon State Description Hosts that had performed modifications without error Host that successfully performed modifications during the last reporting interval. Hosts in error state Hosts on which an error was detected during the last reporting interval. Good host reports in the last 35 minutes Hosts without error that did not perform any modifications in the last 35 minutes. Hosts that had pending changes Hosts on which some resources would be applied but Puppet was configured to run in the noop mode. Out of sync hosts Hosts that were not synchronized and the report was not received during the last reporting interval. Hosts with no reports Hosts for which no reports were collected during the last reporting interval. Hosts with alerts disabled Hosts which are not being monitored. Click the particular configuration status to view hosts associated with it. Host Configuration Chart A pie chart shows the proportion of the configuration status and the percentage of all hosts associated with it. Latest Events A list of messages produced by hosts including administration information, product and subscription changes, and any errors. Monitor this section for global notifications sent to all users and to detect any unusual activity or errors. Run Distribution (last 30 minutes) A graph shows the distribution of the running Puppet agents during the last puppet interval which is 30 minutes by default. In this case, each column represents a number of reports received from clients during 3 minutes. New Hosts A list of the recently created hosts. Click the host for more details. Task Status A summary of all current tasks, grouped by their state and result. Click the number to see the list of corresponding tasks. Latest Warning/Error Tasks A list of the latest tasks that have been stopped due to a warning or error. Click a task to see more details. Discovered Hosts A list of all bare-metal hosts detected on the provisioning network by the Discovery plugin. Latest Errata A list of all errata available for hosts registered to Satellite. Content Views A list of all content views in Satellite and their publish status. Sync Overview An overview of all products or repositories enabled in Satellite and their synchronization status. All products that are in the queue for synchronization, are unsynchronized or have been previously synchronized are listed in this section. 
Host Subscription Status An overview of the subscriptions currently consumed by the hosts registered to Satellite. A subscription is a purchased certificate that unlocks access to software, upgrades, and security fixes for hosts. The following table shows the possible states of subscriptions. Table 17.2. Host subscription states Icon State Description Invalid Hosts that have products installed, but are not correctly subscribed. These hosts need attention immediately. Partial Hosts that have a subscription and a valid entitlement, but are not using their full entitlements. These hosts should be monitored to ensure they are configured as expected. Valid Hosts that have a valid entitlement and are using their full entitlements. Click the subscription type to view hosts associated with subscriptions of the selected type. Subscription Status An overview of the current subscription totals that shows the number of active subscriptions, the number of subscriptions that expire within the next 120 days, and the number of subscriptions that have recently expired. Host Collections A list of all host collections in Satellite and their status, including the number of content hosts in each host collection. Virt-who Configuration Status An overview of the status of reports received from the virt-who daemon running on hosts in the environment. The following table shows the possible states. Table 17.3. virt-who configuration states State Description No Reports No report has been received because either an error occurred during the virt-who configuration deployment, or the configuration has not been deployed yet, or virt-who cannot connect to Satellite during the scheduled interval. No Change No report has been received because the hypervisor did not detect any changes on the virtual machines, or virt-who failed to upload the reports during the scheduled interval. If you added a virtual machine but the configuration is in the No Change state, check that virt-who is running. OK The report has been received without any errors during the scheduled interval. Total Configurations The total number of virt-who configurations. Click the configuration status to see all configurations in this state. The widget also lists the three latest configurations in the No Change state under Latest Configurations Without Change . Latest Compliance Reports A list of the latest compliance reports. Each compliance report shows the number of rules passed (P), failed (F), or othered (O). Click the host for the detailed compliance report. Click the policy for more details on that policy. Compliance Reports Breakdown A pie chart shows the distribution of compliance reports according to their status. Red Hat Insights Actions Red Hat Insights is a tool embedded in Satellite that checks the environment and suggests actions you can take. The actions are divided into four categories: Availability, Stability, Performance, and Security. Red Hat Insights Risk Summary A table shows the distribution of the actions according to the risk levels. The risk level represents how critical the action is and how likely it is to cause an actual issue. The possible risk levels are: Low, Medium, High, and Critical. Note It is not possible to change the date format displayed in the Satellite web UI. 17.1.1. Managing tasks Red Hat Satellite keeps a complete log of all planned or performed tasks, such as repositories synchronized, errata applied, and content views published. To review the log, navigate to Monitor > Satellite Tasks > Tasks . 
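The task log can also be queried from the command line with the hammer CLI, which the foreman-tasks plugin provides. The following is a hedged sketch; the task UUID is a placeholder and the available options can differ between Satellite versions:
# List tasks together with their state and result
hammer task list
# Follow the progress of a single task (the UUID is a placeholder)
hammer task progress --id <task_UUID>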
In the Task window, you can search for specific tasks, view their status, details, and elapsed time since they started. You can also cancel and resume one or more tasks. The tasks are managed using the Dynflow engine. Remote tasks have a timeout, which can be adjusted as needed. To adjust timeout settings In the Satellite web UI, navigate to Administer > Settings . Enter %_timeout in the search box and click Search . The search should return four settings, including a description. In the Value column, click the edit icon next to a number to edit it. Enter the desired value in seconds, and click Save . Note Adjusting the %_finish_timeout values might help in case of low bandwidth. Adjusting the %_accept_timeout values might help in case of high latency. When a task is initialized, any back-end service that will be used in the task, such as Candlepin or Pulp, is checked for correct functioning. If the check fails, you receive an error similar to the following one: If the back-end service check causes any problems, it can be disabled as follows. To disable checking for services In the Satellite web UI, navigate to Administer > Settings . Enter check_services_before_actions in the search box and click Search . In the Value column, click the edit icon to edit the value. From the drop-down menu, select false . Click Save . 17.2. Configuring RSS notifications To view Satellite event notification alerts, click the Notifications icon in the upper right of the screen. By default, the Notifications area displays RSS feed events published in the Red Hat Satellite Blog . The feed is refreshed every 12 hours and the Notifications area is updated whenever new events become available. You can configure the RSS feed notifications by changing the feed URL. The supported feed formats are RSS 2.0 and Atom. For an example of the RSS 2.0 feed structure, see the Red Hat Satellite Blog feed . For an example of the Atom feed structure, see the Foreman blog feed . To configure RSS feed notifications In the Satellite web UI, navigate to Administer > Settings and select the Notifications tab. In the RSS URL row, click the edit icon in the Value column and type the required URL. In the RSS enable row, click the edit icon in the Value column to enable or disable this feature. 17.3. Monitoring Satellite Server Audit records list the changes made by all users on Satellite. This information can be used for maintenance and troubleshooting. Procedure In the Satellite web UI, navigate to Monitor > Audits to view the audit records. To obtain a list of all the audit attributes, use the following command: 17.4. Monitoring Capsule Server The following section shows how to use the Satellite web UI to find Capsule information valuable for maintenance and troubleshooting. 17.4.1. Viewing general Capsule information In the Satellite web UI, navigate to Infrastructure > Capsules to view a table of Capsule Servers registered to Satellite Server. The information contained in the table answers the following questions: Is Capsule Server running? This is indicated by a green icon in the Status column. A red icon indicates an inactive Capsule; use the service foreman-proxy restart command on Capsule Server to activate it. What services are enabled on Capsule Server? In the Features column, you can verify whether the Capsule, for example, provides a DHCP service or acts as a Pulp mirror. Capsule features can be enabled during installation or added later. For more information, see Installing Capsule Server . 
What organizations and locations is Capsule Server assigned to? A Capsule Server can be assigned to multiple organizations and locations, but only Capsules belonging to the currently selected organization are displayed. To list all Capsules, select Any Organization from the context menu in the top left corner. After changing the Capsule configuration, select Refresh from the drop-down menu in the Actions column to ensure the Capsule table is up to date. Click the Capsule name to view further details. At the Overview tab, you can find the same information as in the Capsule table. In addition, you can answer the following questions: Which hosts are managed by Capsule Server? The number of associated hosts is displayed next to the Hosts managed label. Click the number to view the details of associated hosts. How much storage space is available on Capsule Server? The amount of storage space occupied by the Pulp content in /var/lib/pulp is displayed, as well as the remaining storage space available on the Capsule. 17.4.2. Monitoring services In the Satellite web UI, navigate to Infrastructure > Capsules and click the name of the selected Capsule. At the Services tab, you can find basic information on Capsule services, such as the list of DNS domains, or the number of Pulp workers. The appearance of the page depends on what services are enabled on Capsule Server. Services providing more detailed status information can have dedicated tabs at the Capsule page. For more information, see Section 17.4.3, "Monitoring Puppet" . 17.4.3. Monitoring Puppet In the Satellite web UI, navigate to Infrastructure > Capsules and click the name of the selected Capsule. At the Puppet tab you can find the following: A summary of Puppet events, an overview of the latest Puppet runs, and the synchronization status of associated hosts at the General sub-tab. A list of Puppet environments at the Environments sub-tab. At the Puppet CA tab you can find the following: A certificate status overview and the number of autosign entries at the General sub-tab. A table of CA certificates associated with the Capsule at the Certificates sub-tab. Here you can inspect the certificate expiry date, or revoke a certificate by clicking Revoke . A list of autosign entries at the Autosign entries sub-tab. Here you can create an entry by clicking New or delete one by clicking Delete . Note The Puppet and Puppet CA tabs are available only if you have Puppet enabled in your Satellite. Additional resources For more information, see Enabling Puppet Integration with Satellite in Managing configurations using Puppet integration .
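The Capsule health and storage information described above can also be checked directly on the Capsule Server from a shell. A hedged sketch; the commands assume a standard Capsule installation and output can vary by version:
# Check the status of all Capsule-related services (run on the Capsule Server itself)
satellite-maintain service status
# Check how much space the Pulp content in /var/lib/pulp occupies and how much remains free
df -h /var/lib/pulp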
[ "There was an issue with the backend service candlepin: Connection refused - connect(2).", "foreman-rake audits:list_attributes" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/administering_red_hat_satellite/monitoring_resources_admin
Chapter 5. Installing the Migration Toolkit for Containers in a restricted network environment
Chapter 5. Installing the Migration Toolkit for Containers in a restricted network environment You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4 in a restricted network environment by performing the following procedures: Create a mirrored Operator catalog . This process creates a mapping.txt file, which contains the mapping between the registry.redhat.io image and your mirror registry image. The mapping.txt file is required for installing the legacy Migration Toolkit for Containers Operator on an OpenShift Container Platform 4.2 to 4.5 source cluster. Install the Migration Toolkit for Containers Operator on the OpenShift Container Platform 4.7 target cluster by using Operator Lifecycle Manager. By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a remote cluster . Install the Migration Toolkit for Containers Operator on the source cluster: OpenShift Container Platform 4.6 or later: Install the Migration Toolkit for Containers Operator by using Operator Lifecycle Manager. OpenShift Container Platform 4.2 to 4.5: Install the legacy Migration Toolkit for Containers Operator from the command line interface. Configure object storage to use as a replication repository. Note To install MTC on OpenShift Container Platform 3, see Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 . To uninstall MTC, see Uninstalling MTC and deleting resources . 5.1. Compatibility guidelines You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version. Definitions legacy platform OpenShift Container Platform 4.5 and earlier. modern platform OpenShift Container Platform 4.6 and later. legacy operator The MTC Operator designed for legacy platforms. modern operator The MTC Operator designed for modern platforms. control cluster The cluster that runs the MTC controller and GUI. remote cluster A source or destination cluster for a migration that runs Velero. The Control Cluster communicates with Remote clusters via the Velero API to drive migrations. Table 5.1. MTC compatibility: Migrating from a legacy platform OpenShift Container Platform 4.5 or earlier OpenShift Container Platform 4.6 or later Stable MTC version MTC 1.7. z Legacy 1.7 operator: Install manually with the operator.yml file. Important This cluster cannot be the control cluster. MTC 1.7. z Install with OLM, release channel release-v1.7 Note Edge cases exist in which network restrictions prevent modern clusters from connecting to other clusters involved in the migration. For example, when migrating from an OpenShift Container Platform 3.11 cluster on premises to a modern OpenShift Container Platform cluster in the cloud, where the modern cluster cannot connect to the OpenShift Container Platform 3.11 cluster. With MTC 1.7, if one of the remote clusters is unable to communicate with the control cluster because of network restrictions, use the crane tunnel-api command. With the stable MTC release, although you should always designate the most modern cluster as the control cluster, in this specific case it is possible to designate the legacy cluster as the control cluster and push workloads to the remote cluster. 5.2. 
Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.7 You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.7 by using the Operator Lifecycle Manager. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must create an Operator catalog from a mirror image in a local registry. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the Migration Toolkit for Containers Operator . Select the Migration Toolkit for Containers Operator and click Install . Click Install . On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded . Click Migration Toolkit for Containers Operator . Under Provided APIs , locate the Migration Controller tile, and click Create Instance . Click Create . Click Workloads Pods to verify that the MTC pods are running. 5.3. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 4.2 to 4.5 You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform versions 4.2 to 4.5. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must have access to registry.redhat.io . You must have podman installed. You must have a Linux workstation with network access in order to download files from registry.redhat.io . You must create a mirror image of the Operator catalog. You must install the Migration Toolkit for Containers Operator from the mirrored Operator catalog on OpenShift Container Platform 4.7. Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials: USD sudo podman login registry.redhat.io Download the operator.yml file by entering the following command: USD sudo podman cp USD(sudo podman create \ registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Download the controller.yml file by entering the following command: USD sudo podman cp USD(sudo podman create \ registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Obtain the Operator image mapping by running the following command: USD grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc The mapping.txt file was created when you mirrored the Operator catalog. The output shows the mapping between the registry.redhat.io image and your mirror registry image. Example output registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator Update the image values for the ansible and operator containers and the REGISTRY value in the operator.yml file: containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 ... - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 ... env: - name: REGISTRY value: <registry.apps.example.com> 3 1 2 Specify your mirror registry and the sha256 value of the Operator image. 3 Specify your mirror registry. Log in to your source cluster. 
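The remaining steps run against the source cluster, so make sure your oc session is logged in to it. A hedged example; the API URL and credentials are placeholders for your own environment:
# Log in to the source cluster before creating the Operator objects
oc login https://api.<source_cluster_domain>:6443 -u <username> -p <password>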
Create the Migration Toolkit for Containers Operator object: USD oc create -f operator.yml Example output namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1 Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists 1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases. Create the MigrationController object: USD oc create -f controller.yml Verify that the MTC pods are running: USD oc get pods -n openshift-migration 5.4. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.7, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 5.4.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 5.4.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 5.4.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. 
Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 5.4.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 5.4.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 5.4.2.1. NetworkPolicy configuration 5.4.2.1.1. Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 5.4.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 5.4.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. 
Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 5.4.2.3. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 5.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 5.4.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration For more information, see Configuring the cluster-wide proxy . 5.5. Configuring a replication repository The Multicloud Object Gateway is the only supported option for a restricted network environment. MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. 5.5.1. Prerequisites All clusters must have uninterrupted network access to the replication repository. If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository. 5.5.2. Retrieving Multicloud Object Gateway credentials You must retrieve the Multicloud Object Gateway (MCG) credentials in order to create a Secret custom resource (CR) for the OpenShift API for Data Protection (OADP). MCG is a component of OpenShift Container Storage. Prerequisites You must deploy OpenShift Container Storage by using the appropriate OpenShift Container Storage deployment guide . Procedure Obtain the S3 endpoint, AWS_ACCESS_KEY_ID , and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource. 
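A hedged sketch of that retrieval follows; the NooBaa resource typically lives in the openshift-storage namespace and the credentials in the noobaa-admin secret, but both names are assumptions that can differ in your deployment:
# Show the NooBaa custom resource, including its S3 endpoint
oc describe noobaa -n openshift-storage
# Decode the access key and secret key from the admin secret
oc get secret noobaa-admin -n openshift-storage -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
oc get secret noobaa-admin -n openshift-storage -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d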
5.5.3. Additional resources Disconnected environment in the Red Hat OpenShift Container Storage documentation. MTC workflow About data copy methods Adding a replication repository to the MTC web console 5.6. Uninstalling MTC and deleting resources You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster. Note Deleting the velero CRDs removes Velero from the cluster. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Delete the MigrationController custom resource (CR) on all clusters: USD oc delete migrationcontroller <migration_controller> Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager. Delete cluster-scoped resources on all clusters by running the following commands: migration custom resource definitions (CRDs): USD oc delete USD(oc get crds -o name | grep 'migration.openshift.io') velero CRDs: USD oc delete USD(oc get crds -o name | grep 'velero') migration cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io') migration-operator cluster role: USD oc delete clusterrole migration-operator velero cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'velero') migration cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io') migration-operator cluster role bindings: USD oc delete clusterrolebindings migration-operator velero cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'velero')
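As an optional, hedged verification after the cleanup, you can confirm that no MTC or Velero cluster-scoped definitions remain:
# List any leftover migration or velero CRDs; an empty result means the cleanup succeeded
oc get crds -o name | grep -E 'migration.openshift.io|velero' || echo "no matching CRDs found"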
[ "sudo podman login registry.redhat.io", "sudo podman cp USD(sudo podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "sudo podman cp USD(sudo podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./", "grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc", "registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator", "containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 env: - name: REGISTRY value: <registry.apps.example.com> 3", "oc create -f operator.yml", "namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists", "oc create -f controller.yml", "oc get pods -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] 
stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "oc delete migrationcontroller <migration_controller>", "oc delete USD(oc get crds -o name | grep 'migration.openshift.io')", "oc delete USD(oc get crds -o name | grep 'velero')", "oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')", "oc delete clusterrole migration-operator", "oc delete USD(oc get clusterroles -o name | grep 'velero')", "oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')", "oc delete clusterrolebindings migration-operator", "oc delete USD(oc get clusterrolebindings -o name | grep 'velero')" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/migration_toolkit_for_containers/installing-mtc-restricted
Chapter 64. Managing externally signed certificates for IdM users, hosts, and services
Chapter 64. Managing externally signed certificates for IdM users, hosts, and services This chapter describes how to use the Identity Management (IdM) command-line interface (CLI) and the IdM Web UI to add or remove user, host, or service certificates that were issued by an external certificate authority (CA). 64.1. Adding a certificate issued by an external CA to an IdM user, host, or service by using the IdM CLI As an Identity Management (IdM) administrator, you can add an externally signed certificate to the account of an IdM user, host, or service by using the Identity Management (IdM) CLI. Prerequisites You have obtained the ticket-granting ticket of an administrative user. Procedure To add a certificate to an IdM user, enter: The command requires you to specify the following information: The name of the user The Base64-encoded DER certificate Note Instead of copying and pasting the certificate contents into the command line, you can convert the certificate to the DER format and then re-encode it to Base64. For example, to add the user_cert.pem certificate to user , enter: You can run the ipa user-add-cert command interactively by executing it without adding any options. To add a certificate to an IdM host, enter: ipa host-add-cert To add a certificate to an IdM service, enter: ipa service-add-cert Additional resources Managing certificates for users, hosts, and services using the integrated IdM CA 64.2. Adding a certificate issued by an external CA to an IdM user, host, or service by using the IdM Web UI As an Identity Management (IdM) administrator, you can add an externally signed certificate to the account of an IdM user, host, or service by using the Identity Management (IdM) Web UI. Prerequisites You are logged in to the Identity Management (IdM) Web UI as an administrative user. Procedure Open the Identity tab, and select the Users , Hosts , or Services subtab. Click the name of the user, host, or service to open its configuration page. Click Add to the Certificates entry. Figure 64.1. Adding a certificate to a user account Paste the certificate in Base64 or PEM encoded format into the text field, and click Add . Click Save to store the changes. 64.3. Removing a certificate issued by an external CA from an IdM user, host, or service account by using the IdM CLI As an Identity Management (IdM) administrator, you can remove an externally signed certificate from the account of an IdM user, host, or service by using the Identity Management (IdM) CLI . Prerequisites You have obtained the ticket-granting ticket of an administrative user. Procedure To remove a certificate from an IdM user, enter: The command requires you to specify the following information: The name of the user The Base64-encoded DER certificate Note Instead of copying and pasting the certificate contents into the command line, you can convert the certificate to the DER format and then re-encode it to Base64. For example, to remove the user_cert.pem certificate from user , enter: You can run the ipa user-remove-cert command interactively by executing it without adding any options. To remove a certificate from an IdM host, enter: ipa host-remove-cert To remove a certificate from an IdM service, enter: ipa service-remove-cert Additional resources Managing certificates for users, hosts, and services using the integrated IdM CA 64.4. 
Removing a certificate issued by an external CA from an IdM user, host, or service account by using the IdM Web UI As an Identity Management (IdM) administrator, you can remove an externally signed certificate from the account of an IdM user, host, or service by using the Identity Management (IdM) Web UI. Prerequisites You are logged in to the Identity Management (IdM) Web UI as an administrative user. Procedure Open the Identity tab, and select the Users , Hosts , or Services subtab. Click the name of the user, host, or service to open its configuration page. Click the Actions to the certificate to delete, and select Delete . Click Save to store the changes. 64.5. Additional resources Ensuring the presence of an externally signed certificate in an IdM service entry using an Ansible playbook
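For reference, the same openssl conversion pattern used above for user certificates also applies to host and service entries on the CLI, and you can confirm the result afterwards. A hedged sketch; the host name, service principal, and certificate file names are placeholders:
# Add an externally signed certificate to a host entry
ipa host-add-cert client.example.com --certificate="$(openssl x509 -outform der -in host_cert.pem | base64 -w 0)"
# Add an externally signed certificate to a service entry
ipa service-add-cert HTTP/client.example.com --certificate="$(openssl x509 -outform der -in service_cert.pem | base64 -w 0)"
# Confirm which certificates are currently attached to a user entry
ipa user-show user --all | grep -i certificate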
[ "ipa user-add-cert user --certificate= MIQTPrajQAwg", "ipa user-add-cert user --certificate=\"USD(openssl x509 -outform der -in user_cert.pem | base64 -w 0)\"", "ipa user-remove-cert user --certificate= MIQTPrajQAwg", "ipa user-remove-cert user --certificate=\"USD(openssl x509 -outform der -in user_cert.pem | base64 -w 0)\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/managing-externally-signed-certificates-for-idm-users-hosts-and-services_configuring-and-managing-idm
17.6. Managing the Certificate Database
17.6. Managing the Certificate Database Each Certificate System instance has a certificate database, which is maintained in its internal token. This database contains certificates belonging to the subsystem installed in the Certificate System instance and various CA certificates the subsystems use for validating the certificates they receive. Even if an external token is used to generate and store key pairs, Certificate System always maintains its list of trusted and untrusted CA certificates in its internal token. This section explains how to view the contents of the certificate database, delete unwanted certificates, and change the trust settings of CA certificates installed in the database using the Certificate System window. For information on adding certificates to the database, see Section 17.6.1, "Installing Certificates in the Certificate System Database" . Note The Certificate System command-line utility certutil can be used to manage the certificate database by editing trust settings and adding and deleting certificates. For details about this tool, see http://www.mozilla.org/projects/security/pki/nss/tools/ . Administrators should periodically check the contents of the certificate database to make sure that it does not include any unwanted CA certificates. For example, if the database includes CA certificates that should not ever be trusted within the PKI setup, delete them. 17.6.1. Installing Certificates in the Certificate System Database If new server certificates are issued for a subsystem, they must be installed in that subsystem database. Additionally, user and agent certificates must be installed in the subsystem databases. If the certificates are issued by an external CA, then usually the corresponding CA certificate or certificate chain needs to be installed. Certificates can be installed in the subsystem certificate database through the Console's Certificate Setup Wizard or using the certutil utility. Section 17.6.1.1, "Installing Certificates through the Console" Section 17.6.1.2, "Installing Certificates Using certutil" Section 17.6.1.3, "About CA Certificate Chains" 17.6.1.1. Installing Certificates through the Console Note pkiconsole is being deprecated. The Certificate Setup Wizard can install or import the following certificates into either an internal or external token used by the Certificate System instance: Any of the certificates used by a Certificate System subsystem Any trusted CA certificates from external CAs or other Certificate System CAs Certificate chains A certificate chain includes a collection of certificates: the subject certificate, the trusted root CA certificate, and any intermediate CA certificates needed to link the subject certificate to the trusted root. However, the certificate chain the wizard imports must include only CA certificates; none of the certificates can be a user certificate. In a certificate chain, each certificate in the chain is encoded as a separate DER-encoded object. When the wizard imports a certificate chain, it imports these objects one after the other, all the way up the chain to the last certificate, which may or may not be the root CA certificate. If any of the certificates in the chain are already installed in the local certificate database, the wizard replaces the existing certificates with the ones in the chain. If the chain includes intermediate CA certificates, the wizard adds them to the certificate database as untrusted CA certificates. 
The subsystem console uses the same wizard to install certificates and certificate chains. To install certificates in the local security database, do the following: Open the console. In the Configuration tab, select System Keys and Certificates from the left navigation tree. There are two tabs where certificates can be installed, depending on the subsystem type and the type of certificate. The CA Certificates tab is for installing CA certificates and certificate chains. For Certificate Managers, this tab is used for third-party CA certificates or other Certificate System CA certificates; all of the local CA certificates are installed in the Local Certificates tab. For all other subsystems, all CA certificates and chains are installed through this tab. The Local Certificates tab is where all server certificates, subsystem certificates, and local certificates such as OCSP signing or KRA transport are installed. Select the appropriate tab. To install a certificate in the Local Certificates tab, click Add/Renew . To install a certificate in the CA Certificates tab, click Add . Both will open the Certificate Setup Wizard. When the wizard opens, select the Install a certificate radio button, and click . Select the type of certificate to install. The options for the drop-down menu are the same options available for creating a certificate, depending on the type of subsystem, with the additional option to install a cross-pair certificate. Paste in the certificate body, including the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- , into the text area, or specify the absolute file location; this must be a local file. The certificate will look like the following: The wizard displays the certificate details. Review the fingerprint to make sure this is the correct certificate, or use the Back button to go back and submit a different one. Give a nickname for the certificate. The wizard installs the certificate. Any CA that signed the certificate must be trusted by the subsystem. Make sure that this CA's certificate exists in the subsystem's certificate database (internal or external) and that it is trusted. If the CA certificate is not listed, add the certificate to the certificate database as a trusted CA. If the CA's certificate is listed but untrusted, change the trust setting to trusted, as shown in Section 17.7, "Changing the Trust Settings of a CA Certificate" . When installing a certificate issued by a CA that is not stored in the Certificate System certificate database, add that CA's certificate chain to the database. To add the CA chain to the database, copy the CA chain to a text file, start the wizard again, and install the CA chain. 17.6.1.2. Installing Certificates Using certutil To install subsystem certificates in the Certificate System instance's security databases using certutil , do the following: Open the subsystem's security database directory. Run the certutil command with the -A to add the certificate and -i pointing to the file containing the certificate issued by the CA. Note If the Certificate System instance's certificates and keys are stored on an HSM, then specify the token name using the -h option. For example: For information about using the certutil command, see http://www.mozilla.org/projects/security/pki/nss/tools/certutil.html . 17.6.1.3. About CA Certificate Chains Any client or server software that supports certificates maintains a collection of trusted CA certificates in its certificate database. 
These CA certificates determine which other certificates the software can validate. In the simplest case, the software can validate only certificates issued by one of the CAs for which it has a certificate. It is also possible for a trusted CA certificate to be part of a chain of CA certificates, each issued by the CA above it in a certificate hierarchy. The first certificate in the chain is processed in a context-specific manner, which varies according to how it is being imported. For Mozilla Firefox, this handling depends upon the MIME content type used on the object being downloaded. For Red Hat servers, it depends upon the options selected in the server administration interface. Subsequent certificates are all treated the same. If the certificates contain the SSL-CA bit in the Netscape Certificate Type certificate extension and do not already exist in the local certificate database, they are added as untrusted CAs. They can be used for certificate chain validation as long as there is a trusted CA somewhere in the chain. 17.6.2. Viewing Database Content The certificates stored in the subsystem certificate database, cert9.db , can be viewed through the subsystem administrative console. Alternatively, the certificates can be listed using the certutil utility. certutil must be used to view the TPS certificates because the TPS subsystem does not use an administrative console. Section 17.6.2.1, "Viewing Database Content through the Console" Section 17.6.2.2, "Viewing Database Content Using certutil" Note The certificates listed in the cert9.db database are the subsystem certificates used for subsystem operations. User certificates are stored with the user entries in the LDAP internal database. 17.6.2.1. Viewing Database Content through the Console Note pkiconsole is being deprecated. To view the contents of the database through the administrative console, do the following: Open the subsystem console. In the Configuration tab, select System Keys and Certificates from the left navigation tree. There are two tabs, CA Certificates and Local Certificates , which list different kinds of certificates. CA Certificates lists CA certificates for which the corresponding private key material is not available, such as certificates issued by third-party CAs, for example Entrust or Verisign, or by external Certificate System Certificate Managers. Local Certificates lists certificates kept by the Certificate System subsystem instance, such as the KRA transport certificate or OCSP signing certificate. Figure 17.2. Certificate Database Tab The Certificate Database Management table lists all of the certificates installed on the subsystem. The following information is supplied for each certificate: Certificate Name Serial Number Issuer Names , the common name ( cn ) of the issuer of this certificate. Token Name , the name of the cryptographic token holding the certificate; for certificates stored in the database, this is internal . To view more detailed information about the certificate, select the certificate, and click View . This opens a window which shows the serial number, validity period, subject name, issuer name, and certificate fingerprint of the certificate. 17.6.2.2. Viewing Database Content Using certutil To view the certificates in the subsystem database using certutil , open the instance's certificate database directory, and run certutil with the -L option. For example: To view the keys stored in the subsystem databases using certutil , run certutil with the -K option. 
For example: For information about using the certutil command, see http://www.mozilla.org/projects/security/pki/nss/tools/certutil.html . 17.6.3. Deleting Certificates from the Database Removing unwanted certificates reduces the size of the certificate database. Note When deleting CA certificates from the certificate database, be careful not to delete the intermediate CA certificates , which help a subsystem chain up to the trusted CA certificate. If in doubt, leave the certificates in the database as untrusted CA certificates; see Section 17.7, "Changing the Trust Settings of a CA Certificate" . Section 17.6.3.1, "Deleting Certificates through the Console" Section 17.6.3.2, "Deleting Certificates Using certutil" 17.6.3.1. Deleting Certificates through the Console Note pkiconsole is being deprecated. To delete a certificate through the Console, do the following: Open the subsystem console. In the Configuration tab, select System Keys and Certificates from the left navigation tree. Select the certificate to delete, and click Delete . When prompted, confirm the delete. 17.6.3.2. Deleting Certificates Using certutil To delete a certificate from the database using certutil : Open the instance's certificate databases directory. List the certificates in the database by running the certutil with the -L option. For example: Delete the certificate by running the certutil with the -D option. For example: List the certificates again to confirm that the certificate was removed. For information about using the certutil command, see http://www.mozilla.org/projects/security/pki/nss/tools/certutil.html .
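Changing the trust settings referenced in Section 17.7 can also be done with certutil rather than the console. A hedged sketch; the nickname and trust flags shown are examples only, and the instance path follows the pattern used above:
# Mark a CA certificate in the instance database as a trusted CA
cd /var/lib/pki/instance_name/alias
certutil -M -d . -n "Certificate Authority - Example Domain" -t "CT,C,C"
# List the certificates again to confirm the new trust flags
certutil -L -d .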
[ "pkiconsole https://server.example.com: secure_port / subsystem_type", "-----BEGIN CERTIFICATE----- MIICKzCCAZSgAwIBAgIBAzANgkqkiG9w0BAQQFADA3MQswCQYDVQQGEw JVUzERMA8GA1UEChMITmV0c2NhcGUxFTATBgNVBAsTDFN1cHJpeWEncy BDQTAeFw05NzEwMTgwMTM2MjVaFw05OTEwMTgwMTM2MjVaMEgxCzAJBg NVBAYTAlVTMREwDwYDVQQKEwhOZXRzY2FwZTENMAsGA1UECxMEUHawcz EXMBUGA1UEAxMOU3Vwcml5YSBTaGV0dHkwgZ8wDQYJKoZIhdfNAQEBBQ ADgY0AMIGJAoGBAMr6eZiPGfjX3uRJgEjmKiqG7SdATYzBcABu1AVyd7 chRFOGD3wNktbf6hRo6EAmM5R1Askzf8AW7LiQZBcrXpc0k4du+2j6xJ u2MPm8WKuMOTuvzpo+SGXelmHVChEqooCwfdiZywyZNmgaMa2MS6pUkf QVAgMBAAGjNjA0MBEGCWCGSAGG+EIBAQQEAwIAgD -----END CERTIFICATE-----", "cd /var/lib/pki/ instance_name /alias", "certutil -A -n cert-name -t trustargs -d . -a -i certificate_file", "certutil -A -n \"ServerCert cert- instance_name \" -t u,u,u -d . -a -i /tmp/example.cert", "pkiconsole https://server.example.com: secure_port / subsystem_type", "cd /var/lib/pki/ instance_name /alias certutil -L -d . Certificate Authority - Example Domain CT,c, subsystemCert cert- instance name u,u,u Server-Cert cert- instance_name u,u,u", "cd /var/lib/pki/ instance_name /alias certutil -K -d . Enter Password or Pin for \"NSS Certificate DB\": <0> subsystemCert cert- instance_name <1> <2> Server-Cert cert- instance_name", "pkiconsole https://server.example.com: secure_port / subsystem_type", "/var/lib/pki/ instance_name /alias", "certutil -L -d . Certificate Authority - Example Domain CT,c, subsystemCert cert- instance_name u,u,u Server-Cert cert- instance_name u,u,u", "certutil -D -d . -n certificate_nickname", "certutil -D -d . -n \"ServerCert cert- instance_name \"", "certutil -L -d . Certificate Authority - Example Domain CT,c, subsystemCert cert- instance_name u,u,u" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/Managing_the_Certificate_Database
Chapter 7. Configuring the Guardrails Orchestrator service
Chapter 7. Configuring the Guardrails Orchestrator service Important The Guardrails Orchestrator service is currently available in Red Hat OpenShift AI as a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The TrustyAI Guardrails Orchestrator service is a tool to invoke detections on text generation inputs and outputs, as well as standalone detections. It is underpinned by the open-source project FMS-Guardrails Orchestrator from IBM. You can deploy the Guardrails Orchestrator service through a Custom Resource Definition (CRD) that is managed by the TrustyAI Operator. The following sections describe how to complete these tasks: Set up the Guardrails Orchestrator service Create a custom resource (CR) Deploy a Guardrails Orchestrator instance Monitor user inputs to your LLM using this service 7.1. Deploying the Guardrails Orchestrator service You can deploy a Guardrails Orchestrator instance in your namespace to monitor elements, such as user inputs to your Large Language Model (LLM). Prerequisites You have cluster administrator privileges for your OpenShift cluster. You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI . You are familiar with creating a config map for monitoring a user-defined workflow. You perform similar steps in this procedure. You have KServe set to RawDeployment . See Deploying models on single-node OpenShift using KServe Raw Deployment mode . You have the TrustyAI component in your OpenShift AI DataScienceCluster set to Managed . You have an LLM for chat generation deployed in your namespace. You have an LLM for text classification deployed in your namespace. Procedure Define a ConfigMap object in a YAML file to specify the chat_generation and detectors services. For example, create a file named orchestrator_cm.yaml with the following content: Example orchestrator_cm.yaml --- kind: ConfigMap apiVersion: v1 metadata: name: fms-orchestr8-config-nlp data: config.yaml: | chat_generation: 1 service: hostname: <CHAT_GENERATION_HOSTNAME> port: 8080 detectors: 2 <DETECTOR_NAME>: type: text_contents service: hostname: <DETECTOR_HOSTNAME> port: 8000 chunker_id: whole_doc_chunker default_threshold: 0.5 --- 1 A service for chat generation referring to a deployed LLM in your namespace where you are adding guardrails. 2 A list of services responsible for running detection of a certain class of content on text spans. Each of these services refers to a deployed LLM for text classification in your namespace. Deploy the orchestrator_cm.yaml config map: --- $ oc apply -f orchestrator_cm.yaml -n <TEST_NAMESPACE> --- Specify the previously created ConfigMap object in the GuardrailsOrchestrator custom resource (CR). 
For example, create a file named orchestrator_cr.yaml with the following content: Example orchestrator_cr.yaml CR --- apiVersion: trustyai.opendatahub.io/v1alpha1 kind: GuardrailsOrchestrator metadata: name: gorch-sample spec: orchestratorConfig: "fms-orchestr8-config-nlp" replicas: 1 --- Deploy the orchestrator CR, which creates a service account, deployment, service, and route object in your namespace. --- oc apply -f orchestrator_cr.yaml -n <TEST_NAMESPACE> --- Verification Confirm that the orchestrator and LLM pods are running: --- USD oc get pods -n <TEST_NAMESPACE> --- Example response --- NAME READY STATUS RESTARTS AGE gorch-test-55bf5f84d9-dd4vm 3/3 Running 0 3h53m ibm-container-deployment-bd4d9d898-52r5j 1/1 Running 0 3h53m ibm-hap-predictor-5d54c877d5-rbdms 1/1 Running 0 3h53m llm-container-deployment-bd4d9d898-52r5j 1/1 Running 0 3h53m llm-predictor-5d54c877d5-rbdms 1/1 Running 0 57m --- Query the /health endpoint of the orchestrator route to check the current status of the detector and generator services. If a 200 OK response is returned, the services are functioning normally: --- USD GORCH_ROUTE_HEALTH=USD(oc get routes gorch-test-health -o jsonpath='{.spec.host}') --- --- USD curl -v https://USDGORCH_ROUTE_HEALTH/health --- Example response --- * Trying ::1:8034... * connect to ::1 port 8034 failed: Connection refused * Trying 127.0.0.1:8034... * Connected to localhost (127.0.0.1) port 8034 (#0) > GET /health HTTP/1.1 > Host: localhost:8034 > User-Agent: curl/7.76.1 > Accept: */* > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < content-type: application/json < content-length: 36 < date: Fri, 31 Jan 2025 14:04:25 GMT < * Connection #0 to host localhost left intact {"fms-guardrails-orchestr8":"0.1.0"} --- 7.2. Guardrails Orchestrator parameters A GuardrailsOrchestrator object represents an orchestration service that invokes detectors on text generation input and output and standalone detections. You can modify the following parameters for the GuardrailsOrchestrator object you created previously: Parameter Description replicas The number of orchestrator pods to activate orchestratorConfig The name of the ConfigMap object that contains generator, detector, and chunker arguments. otelExporter **(optional)** A list of paired name and value arguments for configuring OpenTelemetry traces or metrics, or both: protocol - Sets the protocol for all the OpenTelemetry protocol (OTLP) endpoints. Valid values are grpc or http tracesProtocol - Sets the protocol for traces. Acceptable values are grpc or http metricsProtocol - Sets the protocol for metrics. Acceptable values are grpc or http otlpEndpoint - Sets the OTLP endpoint. Default values are gRPC localhost:4317 and HTTP localhost:4318 metricsEndpoint - Sets the OTLP endpoint for metrics tracesEndpoint - Sets the OTLP endpoint for traces 7.3. Configuring the OpenTelemetry Exporter for metrics and tracing Enable traces and metrics that are provided for the observability of the GuardrailsOrchestrator service with the OpenTelemetry Operator. Prerequisites You have installed the Red Hat OpenShift AI distributed tracing platform from the OperatorHub and created a Jaeger instance using the default settings. You have installed the Red Hat build of OpenTelemetry from the OperatorHub and created an OpenTelemetry instance. 
Procedure Define a GuardrailsOrchestrator custom resource object to specify the otelExporter configurations in a YAML file named orchestrator_otel_cr.yaml : Example of an orchestrator_otel_cr.yaml object that has OpenTelemetry configured: --- apiVersion: trustyai.opendatahub.io/v1alpha1 kind: GuardrailsOrchestrator metadata: name: gorch-test spec: orchestratorConfig: "fms-orchestr8-config-nlp" 1 vllmGatewayConfig: "fms-orchestr8-config-gateway" 2 replicas: 1 otelExporter: protocol: "http" otlpEndpoint: "localhost:4318" otlpExport: "metrics" --- <1> These specifications are the same as Step 7 from "Configuring the regex detector and vLLM gateway". This example CR adds `otelExporter` configurations. Deploy the orchestrator custom resource. --- USD oc apply -f orchestrator_otel_cr.yaml ---
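After applying a GuardrailsOrchestrator CR (either the basic one or the OpenTelemetry-enabled one above), you can confirm that the TrustyAI Operator reconciled it and created the expected objects. The following commands are a minimal sketch and assume the CR name gorch-test and the namespace placeholder used in the examples above:

oc get guardrailsorchestrator gorch-test -n <TEST_NAMESPACE> -o yaml    # inspect the reconciled CR, including its status conditions
oc get deployment,service,route -n <TEST_NAMESPACE>                     # list the objects created for the orchestrator

If the deployment or route is missing, review the TrustyAI Operator logs before changing the CR.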
[ "--- kind: ConfigMap apiVersion: v1 metadata: name: fms-orchestr8-config-nlp data: config.yaml: | chat_generation: 1 service: hostname: <CHAT_GENERATION_HOSTNAME> port: 8080 detectors: 2 <DETECTOR_NAME>: type: text_contents service: hostname: <DETECTOR_HOSTNAME> port: 8000 chunker_id: whole_doc_chunker default_threshold: 0.5 --- <1> A service for chat generation referring to a deployed LLM in your namespace where you are adding guardrails. <2> A list of services responsible for running detection of a certain class of content on text spans. Each of these services refer to a deployed LLM for text classification in your namespace.", "--- oc apply -f orchestrator_cm.yaml -n <TEST_NAMESPACE> ---", "--- apiVersion: trustyai.opendatahub.io/v1alpha1 kind: GuardrailsOrchestrator metadata: name: gorch-sample spec: orchestratorConfig: \"fms-orchestr8-config-nlp\" replicas: 1 ---", "--- apply -f orchestrator_cr.yaml -n <TEST_NAMESPACE> ---", "--- oc get pods -n <TEST_NAMESPACE> ---", "--- NAME READY STATUS RESTARTS AGE gorch-test-55bf5f84d9-dd4vm 3/3 Running 0 3h53m ibm-container-deployment-bd4d9d898-52r5j 1/1 Running 0 3h53m ibm-hap-predictor-5d54c877d5-rbdms 1/1 Running 0 3h53m llm-container-deployment-bd4d9d898-52r5j 1/1 Running 0 3h53m llm-predictor-5d54c877d5-rbdms 1/1 Running 0 57m ---", "--- GORCH_ROUTE_HEALTH=USD(oc get routes gorch-test-health -o jsonpath='{.spec.host}') ---", "--- curl -v https://USDGORCH_ROUTE_HEALTH/health ---", "--- * Trying ::1:8034 * connect to ::1 port 8034 failed: Connection refused * Trying 127.0.0.1:8034 * Connected to localhost (127.0.0.1) port 8034 (#0) > GET /health HTTP/1.1 > Host: localhost:8034 > User-Agent: curl/7.76.1 > Accept: */* > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < content-type: application/json < content-length: 36 < date: Fri, 31 Jan 2025 14:04:25 GMT < * Connection #0 to host localhost left intact {\"fms-guardrails-orchestr8\":\"0.1.0\"} ---", "--- apiVersion: trustyai.opendatahub.io/v1alpha1 kind: GuardrailsOrchestrator metadata: name: gorch-test spec: orchestratorConfig: \"fms-orchestr8-config-nlp\" 1 vllmGatewayConfig: \"fms-orchestr8-config-gateway\" 2 replicas: 1 otelExporter: protocol: \"http\" otlpEndpoint: \"localhost:4318\" otlpExport: \"metrics\" --- <1> These speficications are the same as Step 7 from \"Configuring the regex detector and vLLM gateway\". This example CR adds `otelExporter` configurations.", "--- oc apply -f orchestrator_otel_cr.yaml ---" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/monitoring_data_science_models/configuring-the-guardrails-orchestrator-service_monitor
10.4. Configuration Examples
10.4. Configuration Examples 10.4.1. PostgreSQL Changing Database Location When using Red Hat Enterprise Linux 6, the default location for PostgreSQL to store its database is /var/lib/pgsql/data . This is where SELinux expects it to be by default, and hence this area is already labeled appropriately for you, using the postgresql_db_t type. The area where the database is located can be changed depending on individual environment requirements or preferences, however it is important that SELinux is aware of this new location; that it is labeled accordingly. This example explains how to change the location of a PostgreSQL database and then how to label the new location so that SELinux can still provide its protection mechanisms to the new area based on its contents. Note that this is an example only and demonstrates how SELinux can affect PostgreSQL. Comprehensive documentation of PostgreSQL is beyond the scope of this document. Refer to the official PostgreSQL documentation for further details. This example assumes that the postgresql-server package is installed. Run the ls -lZ /var/lib/pgsql command to view the SELinux context of the default database location for postgresql : This shows postgresql_db_t which is the default context element for the location of database files. This context will have to be manually applied to the new database location that will be used in this example in order for it to function properly. Create a new directory for the new location of the database(s). In this example, /opt/postgresql/data/ is used. If you use a different location, replace the text in the following steps with your location: Perform a directory listing of the new location. Note that the initial context of the new directory is usr_t . This context is not sufficient for SELinux to offer its protection mechanisms to PostgreSQL. Once the context has been changed, it will be able to function properly in the new area. Change the ownership of the new location to allow access by the postgres user and group. This sets the traditional Unix permissions which SELinux will still observe. Open the PostgreSQL init file /etc/rc.d/init.d/postgresql with a text editor and modify the PGDATA and PGLOG variables to point to the new location: Save this file and exit the text editor. Initialize the database in the new location. Having changed the database location, starting the service will fail at this point: SELinux has caused the service to not start. This is because the new location is not properly labelled. The following steps explain how to label the new location ( /opt/postgresql/ ) and start the postgresql service properly: Run the semanage command to add a context mapping for /opt/postgresql/ and any other directories/files within it: This mapping is written to the /etc/selinux/targeted/contexts/files/file_contexts.local file: Now use the restorecon command to apply this context mapping to the running system: Now that the /opt/postgresql/ location has been labeled with the correct context for PostgreSQL, the postgresql service will start successfully: Confirm the context is correct for /opt/postgresql/ : Check with the ps command that the postgresql process displays the new location: The location has been changed and labeled, and the postgresql daemon has started successfully. At this point all running services should be tested to confirm normal operation.
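If the postgresql service still fails to start after relabeling, the following is a minimal troubleshooting sketch. It assumes the libselinux-utils and audit packages are installed, which is typical on Red Hat Enterprise Linux 6:

matchpathcon /opt/postgresql/data     # print the context the loaded policy expects for the new location
ausearch -m AVC -ts recent            # list recent SELinux denials recorded by the audit daemon

After the semanage mapping has been added, matchpathcon should report the postgresql_db_t type for the new location.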
[ "~]# ls -lZ /var/lib/pgsql drwx------. postgres postgres system_u:object_r: postgresql_db_t :s0 data", "~]# mkdir -p /opt/postgresql/data", "~]# ls -lZ /opt/postgresql/ drwxr-xr-x. root root unconfined_u:object_r: usr_t :s0 data", "~]# chown -R postgres:postgres /opt/postgresql", "~]# vi /etc/rc.d/init.d/postgresql PGDATA=/opt/postgresql/data PGLOG=/opt/postgresql/data/pgstartup.log", "~]USD su - postgres -c \"initdb -D /opt/postgresql/data\"", "~]# service postgresql start Starting postgresql service: [FAILED]", "~]# semanage fcontext -a -t postgresql_db_t \"/opt/postgresql(/.*)?\"", "~]# grep -i postgresql /etc/selinux/targeted/contexts/files/file_contexts.local /opt/postgresql(/.*)? system_u:object_r:postgresql_db_t:s0", "~]# restorecon -R -v /opt/postgresql", "~]# service postgresql start Starting postgreSQL service: [ OK ]", "~]USD ls -lZ /opt drwxr-xr-x. root root system_u:object_r: postgresql_db_t :s0 postgresql", "~]# ps aux | grep -i postmaster postgres 21564 0.3 0.3 42308 4032 ? S 10:13 0:00 /usr/bin/postmaster -p 5432 -D /opt/postgresql/data/" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_confined_services/sect-managing_confined_services-postgresql-configuration_examples
Chapter 7. Fixed Common Vulnerabilities and Exposures
Chapter 7. Fixed Common Vulnerabilities and Exposures This section details Common Vulnerabilities and Exposures (CVEs) fixed in the AMQ Broker 7.11 release. ENTMQBR-6630 - CVE-2022-1278 WildFly: possible information disclosure ENTMQBR-7397 - CVE-2022-22970 springframework: DoS via data binding to multipartFile or servlet part ENTMQBR-7398 - CVE-2022-22971 springframework: DoS with STOMP over WebSocket ENTMQBR-7005 - CVE-2022-2047 jetty-http: improper hostname input handling ENTMQBR-7640 - CVE-2022-3782 keycloak: path traversal via double URL encoding
null
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.11/html/release_notes_for_red_hat_amq_broker_7.11/resolved_cves
Chapter 1. Machine APIs
Chapter 1. Machine APIs 1.1. ContainerRuntimeConfig [machineconfiguration.openshift.io/v1] Description ContainerRuntimeConfig describes a customized Container Runtime configuration. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. ControllerConfig [machineconfiguration.openshift.io/v1] Description ControllerConfig describes configuration for MachineConfigController. This is currently only used to drive the MachineConfig objects generated by the TemplateController. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.3. ControlPlaneMachineSet [machine.openshift.io/v1] Description ControlPlaneMachineSet ensures that a specified number of control plane machine replicas are running at any given time. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. KubeletConfig [machineconfiguration.openshift.io/v1] Description KubeletConfig describes a customized Kubelet configuration. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.5. MachineConfig [machineconfiguration.openshift.io/v1] Description MachineConfig defines the configuration for a machine Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.6. MachineConfigPool [machineconfiguration.openshift.io/v1] Description MachineConfigPool describes a pool of MachineConfigs. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.7. MachineHealthCheck [machine.openshift.io/v1beta1] Description MachineHealthCheck is the Schema for the machinehealthchecks API Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.8. Machine [machine.openshift.io/v1beta1] Description Machine is the Schema for the machines API Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.9. MachineSet [machine.openshift.io/v1beta1] Description MachineSet ensures that a specified number of machines replicas are running at any given time. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object
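As a hedged illustration of working with these APIs from the command line, the following oc commands list the resources described above. The openshift-machine-api namespace is the usual default for machine.openshift.io resources; verify it on your cluster:

oc get machineconfigs,machineconfigpools,kubeletconfigs,containerruntimeconfigs    # cluster-scoped machineconfiguration.openshift.io resources
oc get machines,machinesets,machinehealthchecks -n openshift-machine-api           # namespaced machine.openshift.io resources
oc explain machineset.spec --api-version=machine.openshift.io/v1beta1              # field-level documentation for a specific API version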
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/machine_apis/machine-apis
Chapter 3. Configuring OVN gateways for a Red Hat OpenStack Services on OpenShift deployment
Chapter 3. Configuring OVN gateways for a Red Hat OpenStack Services on OpenShift deployment An OVN gateway connects the logical OpenStack tenant network to a physical external network. Many RHOSO environments have at least one OVN gateway and might have more than one physical external network and more than one OVN gateway. Some environments do not include an OVN gateway. For example, an environment might not have an OVN gateway because connectivity is not required, because the environment does not use centralized floating IPs or routers and workloads directly connected to provider networks, or because some other connection method is used. You can choose where OVN gateways are configured. OVN gateway location choices include the following: Control plane OVN gateways on RHOCP worker nodes that host the OpenStack controller services. You can choose one of the following control plane gateway configurations: Dedicated NIC: Place the OVN gateway on a NIC whose sole purpose is to provide an interface to the OVN gateway. Shared NIC: Place the OVN gateway on a shared NIC. Use the bridge CNI plugin to share a NIC between the OVN gateway and other OCP and OpenStack traffic. Not supported for production use in the current version of RHOSO. Data plane OVN gateways on dedicated networker nodes on the data plane. Not yet documented. Control and data plane OVN gateway on a combination of the control plane nodes and dedicated networker nodes. Not yet documented. Control plane OVN gateways may be subject to more disruption than data plane OVN gateways. Note The optional load-balancing service (octavia) requires the use of at least one control plane OVN gateway. The optional BGP service is not presently supported in environments with control plane OVN gateways. As a result, deployments that use both the load-balancing service and the BGP service are not presently supported. For more information, see link:https://issues.redhat.com/browse/OSPRH-10768. 3.1. Configuring a control plane OVN gateway with a dedicated NIC You can place OVN gateways on dedicated NICs on the control plane nodes. This reduces the potential for interruption but requires an additional NIC. Prerequisites You have the oc command line tool installed on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. Each RHOCP worker node that hosts the RHOSO control plane has a NIC dedicated to an OVN gateway. Use the same NIC name for the dedicated NIC on each node. In addition, each worker node has at least the two NICs described in Red Hat OpenShift Container Platform cluster requirements . Your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, exists on your workstation. Procedure Open the OpenStackControlPlane CR definition file, openstack_control_plane.yaml . Add the following ovnController configuration, including nicMappings , to the ovn service configuration: Replace <network_name> with the name of the physical provider network your gateway is on. This should match the value of the --provider-physical-network argument to the openstack network create command used to create the network. For example, datacentre . Replace <nic_name> with the name of the NIC connecting to the gateway network, such as enp6s0 . Optional: Add additional <network_name>:<nic_name> pairs under nicMappings as required. 
Update the control plane: The ovn-operator creates the network attachment definitions, adds them to the pods, creates an external bridge, and configures external-ids:ovn-bridge-mappings . The setting external-ids:ovn-cms-options=enable-chassis-as-gw is configured by default. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. Confirm that the control plane is deployed by reviewing the pods in the openstack namespace: The control plane is deployed when all the pods are either completed or running. Verify that ovn-controller and ovn-controller-ovs pods are running, and that the number of running pods is equal to the number of OCP control plane nodes where OpenStack control plane services are running. Verification Run a remote shell command on the OpenStackClient pod to confirm that the OVN Controller Gateway Agents are running on the control plane nodes: Example output 3.2. Configuring RHOSO with no control plane OVN gateways You can configure a deployment with no control plane OVN gateways. For example, you configure data plane OVN gateways only, or you do not configure any OVN gateways. Configuring a deployment with no control plane OVN gateways requires omitting the ovnController configuration from the control plane custom resource (CR). Prerequisites RHOSO 18.0.3 (Feature Release 1) or later. You have the oc command line tool installed on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. Procedure Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation. If there is an ovnController section: Remove the ovnController section. Update the control plane:
[ "ovnController: spec: ovn: template: ovnController: networkAttachment: tenant nicMappings: <network_name: nic_name>", "oc apply -f openstack_control_plane.yaml -n openstack", "oc get openstackcontrolplane -n openstack NAME STATUS MESSAGE openstack-control-plane Unknown Setup started", "oc get pods -n openstack", "oc rsh -n openstack openstackclient openstack network agent list", "+--------------------------------------+------------------------------+---------+ | ID | agent_type | host | +--------------------------------------+----------------------------------------+ | 5335c34d-9233-47bd-92f1-fc7503270783 | OVN Controller Gateway agent | ctrl0 | | ff66288c-5a7c-41fb-ba54-6c781f95a81e | OVN Controller Gateway agent | ctrl1 | | 5335c34d-9233-47bd-92f1-fc7503270783 | OVN Controller Gateway agent | ctrl2 | +--------------------------------------+----------------------------------------+", "oc apply -f openstack_control_plane.yaml -n openstack" ]
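To confirm that nicMappings produced the expected bridge mapping, you can inspect the network attachment definitions and the Open vSwitch configuration on a control plane node. This is a hedged sketch: the pod name is a placeholder taken from the oc get pods output, and you might need the -c option to select the Open vSwitch container inside the ovn-controller-ovs pod:

oc get network-attachment-definitions -n openstack                                           # one attachment per mapped physical network
oc rsh -n openstack <ovn-controller-ovs_pod> ovs-vsctl get Open_vSwitch . external_ids:ovn-bridge-mappings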
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_networking_services/configuring-ovn-gateways_rhoso-cfgnet
Chapter 7. opm CLI
Chapter 7. opm CLI 7.1. Installing the opm CLI 7.1.1. About the opm CLI The opm CLI tool is provided by the Operator Framework for use with the Operator bundle format. This tool allows you to create and maintain catalogs of Operators from a list of Operator bundles that are similar to software repositories. The result is a container image which can be stored in a container registry and then installed on a cluster. A catalog contains a database of pointers to Operator manifest content that can be queried through an included API that is served when the container image is run. On OpenShift Container Platform, Operator Lifecycle Manager (OLM) can reference the image in a catalog source, defined by a CatalogSource object, which polls the image at regular intervals to enable frequent updates to installed Operators on the cluster. Additional resources See Operator Framework packaging format for more information about the bundle format. To create a bundle image using the Operator SDK, see Working with bundle images . 7.1.2. Installing the opm CLI You can install the opm CLI tool on your Linux, macOS, or Windows workstation. Prerequisites For Linux, you must provide the following packages. RHEL 8 meets these requirements: podman version 1.9.3+ (version 2.0+ recommended) glibc version 2.28+ Procedure Navigate to the OpenShift mirror site and download the latest version of the tarball that matches your operating system. Unpack the archive. For Linux or macOS: USD tar xvf <file> For Windows, unzip the archive with a ZIP program. Place the file anywhere in your PATH . For Linux or macOS: Check your PATH : USD echo USDPATH Move the file. For example: USD sudo mv ./opm /usr/local/bin/ For Windows: Check your PATH : C:\> path Move the file: C:\> move opm.exe <directory> Verification After you install the opm CLI, verify that it is available: USD opm version 7.1.3. Additional resources See Managing custom catalogs for opm procedures including creating, updating, and pruning catalogs. 7.2. opm CLI reference The opm command-line interface (CLI) is a tool for creating and maintaining Operator catalogs. opm CLI syntax USD opm <command> [<subcommand>] [<argument>] [<flags>] Table 7.1. Global flags Flag Description -skip-tls-verify Skip TLS certificate verification for container image registries while pulling bundles or indexes. --use-http When you pull bundles, use plain HTTP for container image registries. Important The SQLite-based catalog format, including the related CLI commands, is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. 7.2.1. generate Generate various artifacts for declarative config indexes. Command syntax USD opm generate <subcommand> [<flags>] Table 7.2. generate subcommands Subcommand Description dockerfile Generate a Dockerfile for a declarative config index. Table 7.3. generate flags Flags Description -h , --help Help for generate. 7.2.1.1. dockerfile Generate a Dockerfile for a declarative config index. Important This command creates a Dockerfile in the same directory as the <dcRootDir> (named <dcDirName>.Dockerfile ) that is used to build the index. 
If a Dockerfile with the same name already exists, this command fails. When specifying extra labels, if duplicate keys exist, only the last value of each duplicate key gets added to the generated Dockerfile. Command syntax USD opm generate dockerfile <dcRootDir> [<flags>] Table 7.4. generate dockerfile flags Flag Description -i, --binary-image (string) Image in which to build catalog. The default value is quay.io/operator-framework/opm:latest . -l , --extra-labels (string) Extra labels to include in the generated Dockerfile. Labels have the form key=value . -h , --help Help for Dockerfile. Note To build with the official Red Hat image, use the registry.redhat.io/openshift4/ose-operator-registry:v4.13 value with the -i flag. 7.2.2. index Generate Operator index for SQLite database format container images from pre-existing Operator bundles. Important As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. The default Red Hat-provided Operator catalogs for OpenShift Container Platform 4.6 through 4.10 released in the deprecated SQLite database format. The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format. Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune , do not work with the file-based catalog format. For more information about working with file-based catalogs, see "Additional resources". Command syntax USD opm index <subcommand> [<flags>] Table 7.5. index subcommands Subcommand Description add Add Operator bundles to an index. prune Prune an index of all but specified packages. prune-stranded Prune an index of stranded bundles, which are bundles that are not associated with a particular image. rm Delete an entire Operator from an index. 7.2.2.1. add Add Operator bundles to an index. Command syntax USD opm index add [<flags>] Table 7.6. index add flags Flag Description -i , --binary-image Container image for on-image opm command -u , --build-tool (string) Tool to build container images: podman (the default value) or docker . Overrides part of the --container-tool flag. -b , --bundles (strings) Comma-separated list of bundles to add. -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) index to add to. --generate If enabled, only creates the Dockerfile and saves it to local disk. --mode (string) Graph update mode that defines how channel graphs are updated: replaces (the default value), semver , or semver-skippatch . -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. --permissive Allow registry load errors. -p , --pull-tool (string) Tool to pull container images: none (the default value), docker , or podman . Overrides part of the --container-tool flag. -t , --tag (string) Custom tag for container image being built. 7.2.2.2. prune Prune an index of all but specified packages. Command syntax USD opm index prune [<flags>] Table 7.7. index prune flags Flag Description -i , --binary-image Container image for on-image opm command -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) Index to prune. 
--generate If enabled, only creates the Dockerfile and saves it to local disk. -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. -p , --packages (strings) Comma-separated list of packages to keep. --permissive Allow registry load errors. -t , --tag (string) Custom tag for container image being built. 7.2.2.3. prune-stranded Prune an index of stranded bundles, which are bundles that are not associated with a particular image. Command syntax USD opm index prune-stranded [<flags>] Table 7.8. index prune-stranded flags Flag Description -i , --binary-image Container image for on-image opm command -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) Index to prune. --generate If enabled, only creates the Dockerfile and saves it to local disk. -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. -p , --packages (strings) Comma-separated list of packages to keep. --permissive Allow registry load errors. -t , --tag (string) Custom tag for container image being built. 7.2.2.4. rm Delete an entire Operator from an index. Command syntax USD opm index rm [<flags>] Table 7.9. index rm flags Flag Description -i , --binary-image Container image for on-image opm command -u , --build-tool (string) Tool to build container images: podman (the default value) or docker . Overrides part of the --container-tool flag. -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) index to delete from. --generate If enabled, only creates the Dockerfile and saves it to local disk. -o , --operators (strings) Comma-separated list of Operators to delete. -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. -p , --packages (strings) Comma-separated list of packages to keep. --permissive Allow registry load errors. -p , --pull-tool (string) Tool to pull container images: none (the default value), docker , or podman . Overrides part of the --container-tool flag. -t , --tag (string) Custom tag for container image being built. Additional resources Operator Framework packaging format Managing custom catalogs Mirroring images for a disconnected installation using the oc-mirror plugin 7.2.3. init Generate an olm.package declarative config blob. Command syntax USD opm init <package_name> [<flags>] Table 7.10. init flags Flag Description -c , --default-channel (string) The channel that subscriptions will default to if unspecified. -d , --description (string) Path to the Operator's README.md or other documentation. -i , --icon (string) Path to package's icon. -o , --output (string) Output format: json (the default value) or yaml . 7.2.4. migrate Migrate a SQLite database format index image or database file to a file-based catalog. Important The SQLite-based catalog format, including the related CLI commands, is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. 
Command syntax USD opm migrate <index_ref> <output_dir> [<flags>] Table 7.11. migrate flags Flag Description -o , --output (string) Output format: json (the default value) or yaml . 7.2.5. render Generate a declarative config blob from the provided index images, bundle images, and SQLite database files. Command syntax USD opm render <index_image | bundle_image | sqlite_file> [<flags>] Table 7.12. render flags Flag Description -o , --output (string) Output format: json (the default value) or yaml . 7.2.6. serve Serve declarative configs via a GRPC server. Note The declarative config directory is loaded by the serve command at startup. Changes made to the declarative config after this command starts are not reflected in the served content. Command syntax USD opm serve <source_path> [<flags>] Table 7.13. serve flags Flag Description --cache-dir (string) If this flag is set, it syncs and persists the server cache directory. --cache-enforce-integrity Exits with an error if the cache is not present or is invalidated. The default value is true when the --cache-dir flag is set and the --cache-only flag is false . Otherwise, the default is false . --cache-only Syncs the serve cache and exits without serving. --debug Enables debug logging. h , --help Help for serve. -p , --port (string) The port number for the service. The default value is 50051 . --pprof-addr (string) The address of the startup profiling endpoint. The format is Addr:Port . -t , --termination-log (string) The path to a container termination log file. The default value is /dev/termination-log . 7.2.7. validate Validate the declarative config JSON file(s) in a given directory. Command syntax USD opm validate <directory> [<flags>]
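The following is a hedged end-to-end sketch that combines these subcommands to build a file-based catalog. The image references, package name, and bundle CSV name are placeholders; substitute your own values:

mkdir my-catalog
opm init my-operator --default-channel=stable --output=yaml > my-catalog/index.yaml
opm render quay.io/example/my-operator-bundle:v1.0.0 --output=yaml >> my-catalog/index.yaml
cat << EOF >> my-catalog/index.yaml
---
schema: olm.channel
package: my-operator
name: stable
entries:
  - name: my-operator.v1.0.0
EOF
opm validate my-catalog
opm generate dockerfile my-catalog
podman build -t quay.io/example/my-catalog:latest -f my-catalog.Dockerfile .

Push the resulting image to a registry and reference it from a CatalogSource object to make its Operators available to the cluster.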
[ "tar xvf <file>", "echo USDPATH", "sudo mv ./opm /usr/local/bin/", "C:\\> path", "C:\\> move opm.exe <directory>", "opm version", "opm <command> [<subcommand>] [<argument>] [<flags>]", "opm generate <subcommand> [<flags>]", "opm generate dockerfile <dcRootDir> [<flags>]", "opm index <subcommand> [<flags>]", "opm index add [<flags>]", "opm index prune [<flags>]", "opm index prune-stranded [<flags>]", "opm index rm [<flags>]", "opm init <package_name> [<flags>]", "opm migrate <index_ref> <output_dir> [<flags>]", "opm render <index_image | bundle_image | sqlite_file> [<flags>]", "opm serve <source_path> [<flags>]", "opm validate <directory> [<flags>]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/cli_tools/opm-cli
7.96. jss
7.96. jss 7.96.1. RHBA-2013:0424 - jss bug fix and enhancement update Updated jss packages that fix one bug and add one enhancement are now available for Red Hat Enterprise Linux 6. Java Security Services (JSS) provides an interface between the Java Virtual Machine and Network Security Services (NSS). It supports most of the security standards and encryption technologies supported by NSS, including communication through SSL/TLS network protocols. JSS is primarily utilized by the Certificate Server. Bug Fix BZ#797352 Previously, some JSS calls to certain NSS functions were to be replaced with calls to the JCA interface. The original JSS calls were therefore deprecated and as such caused warnings to be reported during refactoring. However, the deprecated calls have not been fully replaced with their JCA-based implementation in JSS 4.2. With this update, the calls are no longer deprecated and the warnings no longer occur. Enhancement BZ#804838 This update adds support for Elliptic Curve Cryptography (ECC) key archival in JSS. It provides new methods, such as getCurve(), Java_org_mozilla_jss_asn1_ASN1Util_getTagDescriptionByOid() and getECCurveBytesByX509PublicKeyBytes(). All users of jss are advised to upgrade to these updated packages, which fix this bug and add this enhancement.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/jss
Chapter 80. tld
Chapter 80. tld This chapter describes the commands under the tld command. 80.1. tld create Create new tld Usage: Table 80.1. Command arguments Value Summary -h, --help Show this help message and exit --name NAME Tld name --description DESCRIPTION Description --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.2. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 80.3. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.4. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.2. tld delete Delete tld Usage: Table 80.6. Positional arguments Value Summary id Tld name or id Table 80.7. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None 80.3. tld list List tlds Usage: Table 80.8. Command arguments Value Summary -h, --help Show this help message and exit --name NAME Tld name --description DESCRIPTION Tld description --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.9. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 80.10. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 80.11. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.12. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.4. tld set Set tld properties Usage: Table 80.13. Positional arguments Value Summary id Tld name or id Table 80.14. Command arguments Value Summary -h, --help Show this help message and exit --name NAME Tld name --description DESCRIPTION Description --no-description- all-projects Show results from all projects. 
default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.15. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 80.16. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.17. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.18. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.5. tld show Show tld details Usage: Table 80.19. Positional arguments Value Summary id Tld name or id Table 80.20. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.21. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 80.22. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.23. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.24. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
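A minimal lifecycle sketch using these subcommands; the TLD name com is only an example:

openstack tld create --name com --description "Allow zones under .com"
openstack tld list -f value -c id -c name
openstack tld set com --description "Updated description"
openstack tld show com
openstack tld delete com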
[ "openstack tld create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --name NAME [--description DESCRIPTION] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID]", "openstack tld delete [-h] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] id", "openstack tld list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name NAME] [--description DESCRIPTION] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID]", "openstack tld set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name NAME] [--description DESCRIPTION | --no-description] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] id", "openstack tld show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] id" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/tld
6.10 Technical Notes
6.10 Technical Notes Red Hat Enterprise Linux 6.10 Technical Notes for Red Hat Enterprise Linux 6.10 Edition 10 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.10_technical_notes/index
Chapter 6. References
Chapter 6. References This chapter enumerates other references for more information about SystemTap. It is advisable that you refer to these sources in the course of writing advanced probes and tapsets. SystemTap Wiki The SystemTap Wiki is a collection of links and articles related to the deployment, usage, and development of SystemTap. You can find it at http://sourceware.org/systemtap/wiki/HomePage . SystemTap Tutorial Much of the content in this book comes from the SystemTap Tutorial . The SystemTap Tutorial is a more appropriate reference for users with intermediate to advanced knowledge of C++ and kernel development, and can be found at http://sourceware.org/systemtap/tutorial/ . man stapprobes The stapprobes man page enumerates a variety of probe points supported by SystemTap, along with additional aliases defined by the SystemTap tapset library. The bottom of the man page includes a list of other man pages enumerating similar probe points for specific system components, such as stapprobes.scsi , stapprobes.kprocess , stapprobes.signal and so on. man stapfuncs The stapfuncs man page enumerates numerous functions supported by the SystemTap tapset library, along with the prescribed syntax for each one. Note, however, that this is not a complete list of all supported functions; there are more undocumented functions available. SystemTap Language Reference This document is a comprehensive reference of SystemTap's language constructs and syntax. It is recommended for users with a rudimentary to intermediate knowledge of C++ and other similar programming languages. The SystemTap Language Reference is available to all users at http://sourceware.org/systemtap/langref/ Tapset Developers Guide Once you have sufficient proficiency in writing SystemTap scripts, you can then try your hand out on writing your own tapsets. The Tapset Developers Guide describes how to add functions to your tapset library. Test Suite The systemtap-testsuite package allows you to test the entire SystemTap toolchain without having to build from source. In addition, it also contains numerous examples of SystemTap scripts you can study and test; some of these scripts are also documented in Chapter 4, Useful SystemTap Scripts . By default, the example scripts included in systemtap-testsuite are located in /usr/share/systemtap/testsuite/systemtap.examples .
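Before working through these references, you can confirm that the SystemTap toolchain is functional with a hedged one-line check, and then experiment with the bundled examples. Script names can vary between versions, and the example scripts may require the matching kernel debuginfo packages:

stap -v -e 'probe begin { printf("systemtap is working\n"); exit() }'
stap /usr/share/systemtap/testsuite/systemtap.examples/process/syscalls_by_proc.stp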
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_beginners_guide/references
Chapter 6. Supported components
Chapter 6. Supported components For a list of component versions that are supported in this release of Red Hat JBoss Core Services, see the Core Services Apache HTTP Server Component Details page. Before you attempt to access the Component Details page, you must ensure that you have an active Red Hat subscription and you are logged in to the Red Hat Customer Portal.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_release_notes/supported_components
25.11.2. Related Books
25.11.2. Related Books Apache: The Definitive Guide , 3rd edition, by Ben Laurie and Peter Laurie, O'Reilly & Associates, Inc.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/additional_resources-related_books
Chapter 11. Troubleshooting CephFS PVC creation in external mode
Chapter 11. Troubleshooting CephFS PVC creation in external mode If you have updated the Red Hat Ceph Storage cluster from a version lower than 4.1.1 to the latest release, and the cluster is not a fresh deployment, you must manually set the application type for the CephFS pool on the Red Hat Ceph Storage cluster to enable CephFS Persistent Volume Claim (PVC) creation in external mode. Check for CephFS PVCs stuck in Pending status. Example output: Check the output of the oc describe command to see the events for the respective PVC. The expected error message is cephfs_metadata/csi.volumes.default/csi.volume.pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx: (1) Operation not permitted) Example output: Check the settings for the <cephfs metadata pool name> (here cephfs_metadata ) and <cephfs data pool name> (here cephfs_data ). To run the command, you need jq preinstalled on the Red Hat Ceph Storage client node. Set the application type for the CephFS pool. Run the following commands on the Red Hat Ceph Storage client node: Verify that the settings are applied. Check the CephFS PVC status again. The PVC should now be in Bound state. Example output:
[ "oc get pvc -n <namespace>", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ngx-fs-pxknkcix20-pod Pending ocs-external-storagecluster-cephfs 28h [...]", "oc describe pvc ngx-fs-pxknkcix20-pod -n nginx-file", "Name: ngx-fs-pxknkcix20-pod Namespace: nginx-file StorageClass: ocs-external-storagecluster-cephfs Status: Pending Volume: Labels: <none> Annotations: volume.beta.kubernetes.io/storage-provisioner: openshift-storage.cephfs.csi.ceph.com Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: VolumeMode: Filesystem Mounted By: ngx-fs-oyoe047v2bn2ka42jfgg-pod-hqhzf Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning ProvisioningFailed 107m (x245 over 22h) openshift-storage.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-5f8b66cc96-hvcqp_6b7044af-c904-4795-9ce5-bf0cf63cc4a4 (combined from similar events): failed to provision volume with StorageClass \"ocs-external-storagecluster-cephfs\": rpc error: code = Internal desc = error (an error (exit status 1) occurred while running rados args: [-m 192.168.13.212:6789,192.168.13.211:6789,192.168.13.213:6789 --id csi-cephfs-provisioner --keyfile= stripped -c /etc/ceph/ceph.conf -p cephfs_metadata getomapval csi.volumes.default csi.volume.pvc-1ac0c6e6-9428-445d-bbd6-1284d54ddb47 /tmp/omap-get-186436239 --namespace=csi]) occurred, command output streams is ( error getting omap value cephfs_metadata/csi.volumes.default/csi.volume.pvc-1ac0c6e6-9428-445d-bbd6-1284d54ddb47: (1) Operation not permitted)", "ceph osd pool ls detail --format=json | jq '.[] | select(.pool_name| startswith(\"cephfs\")) | .pool_name, .application_metadata' \"cephfs_data\" { \"cephfs\": {} } \"cephfs_metadata\" { \"cephfs\": {} }", "ceph osd pool application set <cephfs metadata pool name> cephfs metadata cephfs", "ceph osd pool application set <cephfs data pool name> cephfs data cephfs", "ceph osd pool ls detail --format=json | jq '.[] | select(.pool_name| startswith(\"cephfs\")) | .pool_name, .application_metadata' \"cephfs_data\" { \"cephfs\": { \"data\": \"cephfs\" } } \"cephfs_metadata\" { \"cephfs\": { \"metadata\": \"cephfs\" } }", "oc get pvc -n <namespace>", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ngx-fs-pxknkcix20-pod Bound pvc-1ac0c6e6-9428-445d-bbd6-1284d54ddb47 1Mi RWO ocs-external-storagecluster-cephfs 29h [...]" ]
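After the application type is set, the CSI provisioner retries the pending claim automatically, so there is normally no need to recreate the PVC. To watch the transition and review the related events, a brief sketch:

oc get pvc <pvc_name> -n <namespace> -w
oc get events -n <namespace> --field-selector involvedObject.name=<pvc_name>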
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/troubleshooting_openshift_data_foundation/troubleshooting-cephfs-pvc-creation-in-external-mode_rhodf
Operator Guide
Operator Guide Red Hat build of Keycloak 22.0 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/operator_guide/index
Chapter 5. Configuring resources for managed components on OpenShift Container Platform
Chapter 5. Configuring resources for managed components on OpenShift Container Platform You can manually adjust the resources on Red Hat Quay on OpenShift Container Platform for the following components that have running pods: quay clair mirroring clairpostgres postgres This feature allows users to run smaller test clusters, or to request more resources upfront in order to avoid partially degraded Quay pods. Limitations and requests can be set in accordance with Kubernetes resource units . The following components should not be set lower than their minimum requirements. This can cause issues with your deployment and, in some cases, result in failure of the pod's deployment. quay : Minimum of 6 GB, 2vCPUs clair : Recommended of 2 GB memory, 2 vCPUs clairpostgres : Minimum of 200 MB You can configure resource requests on the OpenShift Container Platform UI, or by directly by updating the QuayRegistry YAML. Important The default values set for these components are the suggested values. Setting resource requests too high or too low might lead to inefficient resource utilization, or performance degradation, respectively. 5.1. Configuring resource requests by using the OpenShift Container Platform UI Use the following procedure to configure resources by using the OpenShift Container Platform UI. Procedure On the OpenShift Container Platform developer console, click Operators Installed Operators Red Hat Quay . Click QuayRegistry . Click the name of your registry, for example, example-registry . Click YAML . In the spec.components field, you can override the resource of the quay , clair , mirroring clairpostgres , and postgres resources by setting values for the .overrides.resources.limits and the overrides.resources.requests fields. For example: spec: components: - kind: clair managed: true overrides: resources: limits: cpu: "5" # Limiting to 5 CPU (equivalent to 5000m or 5000 millicpu) memory: "18Gi" # Limiting to 18 Gibibytes of memory requests: cpu: "4" # Requesting 4 CPU memory: "4Gi" # Requesting 4 Gibibytes of memory - kind: postgres managed: true overrides: resources: limits: {} 1 requests: cpu: "700m" # Requesting 700 millicpu or 0.7 CPU memory: "4Gi" # Requesting 4 Gibibytes of memory - kind: mirror managed: true overrides: resources: limits: 2 requests: cpu: "800m" # Requesting 800 millicpu or 0.8 CPU memory: "1Gi" # Requesting 1 Gibibyte of memory - kind: quay managed: true overrides: resources: limits: cpu: "4" # Limiting to 4 CPU memory: "10Gi" # Limiting to 10 Gibibytes of memory requests: cpu: "4" # Requesting 4 CPU memory: "10Gi" # Requesting 10 Gibi of memory - kind: clairpostgres managed: true overrides: resources: limits: cpu: "800m" # Limiting to 800 millicpu or 0.8 CPU memory: "3Gi" # Limiting to 3 Gibibytes of memory requests: {} 1 Setting the limits or requests fields to {} uses the default values for these resources. 2 Leaving the limits or requests field empty puts no limitations on these resources. 5.2. Configuring resource requests by editing the QuayRegistry YAML You can re-configure Red Hat Quay to configure resource requests after you have already deployed a registry. This can be done by editing the QuayRegistry YAML file directly and then re-deploying the registry. Procedure Optional: If you do not have a local copy of the QuayRegistry YAML file, enter the following command to obtain it: USD oc get quayregistry <registry_name> -n <namespace> -o yaml > quayregistry.yaml Open the quayregistry.yaml created from Step 1 of this procedure and make the desired changes. 
For example: - kind: quay managed: true overrides: resources: limits: {} requests: cpu: "0.7" # Requesting 0.7 CPU (equivalent to 700m or 700 millicpu) memory: "512Mi" # Requesting 512 Mebibytes of memory Save the changes. Apply the Red Hat Quay registry using the updated configurations by running the following command: USD oc replace -f quayregistry.yaml Example output quayregistry.quay.redhat.com/example-registry replaced
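To confirm that the overrides took effect after the registry redeploys, you can inspect the resources section of the running pods. A brief sketch; it assumes the component container is the first container in each pod:

oc get pods -n <namespace>                                                                     # find the quay, clair, and mirror pod names
oc get pod <quay_pod_name> -n <namespace> -o jsonpath='{.spec.containers[0].resources}{"\n"}'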
[ "spec: components: - kind: clair managed: true overrides: resources: limits: cpu: \"5\" # Limiting to 5 CPU (equivalent to 5000m or 5000 millicpu) memory: \"18Gi\" # Limiting to 18 Gibibytes of memory requests: cpu: \"4\" # Requesting 4 CPU memory: \"4Gi\" # Requesting 4 Gibibytes of memory - kind: postgres managed: true overrides: resources: limits: {} 1 requests: cpu: \"700m\" # Requesting 700 millicpu or 0.7 CPU memory: \"4Gi\" # Requesting 4 Gibibytes of memory - kind: mirror managed: true overrides: resources: limits: 2 requests: cpu: \"800m\" # Requesting 800 millicpu or 0.8 CPU memory: \"1Gi\" # Requesting 1 Gibibyte of memory - kind: quay managed: true overrides: resources: limits: cpu: \"4\" # Limiting to 4 CPU memory: \"10Gi\" # Limiting to 10 Gibibytes of memory requests: cpu: \"4\" # Requesting 4 CPU memory: \"10Gi\" # Requesting 10 Gibi of memory - kind: clairpostgres managed: true overrides: resources: limits: cpu: \"800m\" # Limiting to 800 millicpu or 0.8 CPU memory: \"3Gi\" # Limiting to 3 Gibibytes of memory requests: {}", "oc get quayregistry <registry_name> -n <namespace> -o yaml > quayregistry.yaml", "- kind: quay managed: true overrides: resources: limits: {} requests: cpu: \"0.7\" # Requesting 0.7 CPU (equivalent to 500m or 500 millicpu) memory: \"512Mi\" # Requesting 512 Mebibytes of memory", "oc replace -f quayregistry.yaml", "quayregistry.quay.redhat.com/example-registry replaced" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/configuring-resources-managed-components
Chapter 11. CertSecretSource schema reference
Chapter 11. CertSecretSource schema reference Used in: ClientTls , KafkaAuthorizationKeycloak , KafkaAuthorizationOpa , KafkaClientAuthenticationOAuth , KafkaListenerAuthenticationOAuth Property Description certificate The name of the file certificate in the Secret. string secretName The name of the Secret containing the certificate. string
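The referenced Secret is an ordinary Kubernetes Secret whose key matches the certificate property. A hedged sketch of creating one from the command line; the Secret name, file path, and namespace are examples only:

oc create secret generic my-oauth-server-cert --from-file=ca.crt=/path/to/ca.crt -n my-kafka-namespace

In the custom resource you would then set secretName: my-oauth-server-cert and certificate: ca.crt.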
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-certsecretsource-reference
Chapter 4. Configuration information for Red Hat Quay
Chapter 4. Configuration information for Red Hat Quay Checking a configuration YAML can help identify and resolve various issues related to the configuration of Red Hat Quay. Checking the configuration YAML can help you address the following issues: Incorrect Configuration Parameters : If the database is not functioning as expected or is experiencing performance issues, your configuration parameters could be at fault. By checking the configuration YAML, administrators can ensure that all the required parameters are set correctly and match the intended settings for the database. Resource Limitations : The configuration YAML might specify resource limits for the database, such as memory and CPU limits. If the database is running into resource constraints or experiencing contention with other services, adjusting these limits can help optimize resource allocation and improve overall performance. Connectivity Issues : Incorrect network settings in the configuration YAML can lead to connectivity problems between the application and the database. Ensuring that the correct network configurations are in place can resolve issues related to connectivity and communication. Data Storage and Paths : The configuration YAML may include paths for storing data and logs. If the paths are misconfigured or inaccessible, the database may encounter errors while reading or writing data, leading to operational issues. Authentication and Security : The configuration YAML may contain authentication settings, including usernames, passwords, and access controls. Verifying these settings is crucial for maintaining the security of the database and ensuring only authorized users have access. Plugin and Extension Settings : Some databases support extensions or plugins that enhance functionality. Issues may arise if these plugins are misconfigured or not loaded correctly. Checking the configuration YAML can help identify any problems with plugin settings. Replication and High Availability Settings : In clustered or replicated database setups, the configuration YAML may define replication settings and high availability configurations. Incorrect settings can lead to data inconsistency and system instability. Backup and Recovery Options : The configuration YAML might include backup and recovery options, specifying how data backups are performed and how data can be recovered in case of failures. Validating these settings can ensure data safety and successful recovery processes. By checking your configuration YAML, Red Hat Quay administrators can detect and resolve these issues before they cause significant disruptions to the application or service relying on the database. 4.1. Obtaining configuration information for Red Hat Quay Configuration information can be obtained for all types of Red Hat Quay deployments, include standalone, Operator, and geo-replication deployments. Obtaining configuration information can help you resolve issues with authentication and authorization, your database, object storage, and repository mirroring. After you have obtained the necessary configuration information, you can update your config.yaml file, search the Red Hat Knowledgebase for a solution, or file a support ticket with the Red Hat Support team. Procedure To obtain configuration information on Red Hat Quay Operator deployments, you can use oc exec , oc cp , or oc rsync . 
To use the oc exec command, enter the following command: USD oc exec -it <quay_pod_name> -- cat /conf/stack/config.yaml This command returns your config.yaml file directly to your terminal. To use the oc cp command, enter the following commands: USD oc cp <quay_pod_name>:/conf/stack/config.yaml /tmp/config.yaml To display this information in your terminal, enter the following command: USD cat /tmp/config.yaml To use the oc rsync command, enter the following commands: USD oc rsync <quay_pod_name>:/conf/stack/ /tmp/local_directory/ To display this information in your terminal, enter the following command: USD cat /tmp/local_directory/config.yaml Example output DISTRIBUTED_STORAGE_CONFIG: local_us: - RHOCSStorage - access_key: redacted bucket_name: lht-quay-datastore-68fff7b8-1b5e-46aa-8110-c4b7ead781f5 hostname: s3.openshift-storage.svc.cluster.local is_secure: true port: 443 secret_key: redacted storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - local_us DISTRIBUTED_STORAGE_PREFERENCE: - local_us To obtain configuration information on standalone Red Hat Quay deployments, you can use podman cp or podman exec . To use the podman cp command, enter the following commands: USD podman cp <quay_container_id>:/conf/stack/config.yaml /tmp/local_directory/ To display this information in your terminal, enter the following command: USD cat /tmp/local_directory/config.yaml To use podman exec , enter the following commands: USD podman exec -it <quay_container_id> cat /conf/stack/config.yaml Example output BROWSER_API_CALLS_XHR_ONLY: false ALLOWED_OCI_ARTIFACT_TYPES: application/vnd.oci.image.config.v1+json: - application/vnd.oci.image.layer.v1.tar+zstd application/vnd.sylabs.sif.config.v1+json: - application/vnd.sylabs.sif.layer.v1+tar AUTHENTICATION_TYPE: Database AVATAR_KIND: local BUILDLOGS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 DATABASE_SECRET_KEY: 05ee6382-24a6-43c0-b30f-849c8a0f7260 DB_CONNECTION_ARGS: {} --- 4.2. Obtaining database configuration information You can obtain configuration information about your database by using the following procedure. Warning Interacting with the PostgreSQL database is potentially destructive. It is highly recommended that you perform the following procedure with the help of a Red Hat Quay Support Specialist. Procedure If you are using the Red Hat Quay Operator on OpenShift Container Platform, enter the following command: USD oc exec -it <database_pod> -- cat /var/lib/pgsql/data/userdata/postgresql.conf If you are using a standalone deployment of Red Hat Quay, enter the following command: USD podman exec -it <database_container> cat /var/lib/pgsql/data/userdata/postgresql.conf
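After you retrieve a local copy of the config.yaml file with any of the preceding methods, a quick sanity check can catch obvious problems before you edit the file or attach it to a support case. The following commands are a minimal sketch: the keys checked are only examples taken from the sample output above, and the second command assumes that Python 3 and the PyYAML library are available on your workstation.
USD grep -E '^(AUTHENTICATION_TYPE|BUILDLOGS_REDIS|DISTRIBUTED_STORAGE_CONFIG):' /tmp/config.yaml
USD python3 -c 'import yaml; yaml.safe_load(open("/tmp/config.yaml")); print("config.yaml parses as valid YAML")'
If the second command reports a parser error, correct the reported line before making any further configuration changes.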
[ "oc exec -it <quay_pod_name> -- cat /conf/stack/config.yaml", "oc cp <quay_pod_name>:/conf/stack/config.yaml /tmp/config.yaml", "cat /tmp/config.yaml", "rsync <quay_pod_name>:/conf/stack/ /tmp/local_directory/", "cat /tmp/local_directory/config.yaml", "DISTRIBUTED_STORAGE_CONFIG: local_us: - RHOCSStorage - access_key: redacted bucket_name: lht-quay-datastore-68fff7b8-1b5e-46aa-8110-c4b7ead781f5 hostname: s3.openshift-storage.svc.cluster.local is_secure: true port: 443 secret_key: redacted storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - local_us DISTRIBUTED_STORAGE_PREFERENCE: - local_us", "podman cp <quay_container_id>:/conf/stack/config.yaml /tmp/local_directory/", "cat /tmp/local_directory/config.yaml", "podman exec -it <quay_container_id> cat /conf/stack/config.yaml", "BROWSER_API_CALLS_XHR_ONLY: false ALLOWED_OCI_ARTIFACT_TYPES: application/vnd.oci.image.config.v1+json: - application/vnd.oci.image.layer.v1.tar+zstd application/vnd.sylabs.sif.config.v1+json: - application/vnd.sylabs.sif.layer.v1+tar AUTHENTICATION_TYPE: Database AVATAR_KIND: local BUILDLOGS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 DATABASE_SECRET_KEY: 05ee6382-24a6-43c0-b30f-849c8a0f7260 DB_CONNECTION_ARGS: {} ---", "oc exec -it <database_pod> -- cat /var/lib/pgsql/data/userdata/postgresql.conf", "podman exec -it <database_container> cat /var/lib/pgsql/data/userdata/postgresql.conf" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/troubleshooting_red_hat_quay/obtaining-quay-config-information
Chapter 5. LVM Administration with CLI Commands
Chapter 5. LVM Administration with CLI Commands This chapter summarizes the individual administrative tasks you can perform with the LVM Command Line Interface (CLI) commands to create and maintain logical volumes. Note If you are creating or modifying an LVM volume for a clustered environment, you must ensure that you are running the clvmd daemon. For information, see Section 4.1, "Creating LVM Volumes in a Cluster" . 5.1. Using CLI Commands There are several general features of all LVM CLI commands. When sizes are required in a command line argument, units can always be specified explicitly. If you do not specify a unit, then a default is assumed, usually KB or MB. LVM CLI commands do not accept fractions. When specifying units in a command line argument, LVM is case-insensitive; specifying M or m is equivalent, for example, and powers of 2 (multiples of 1024) are used. However, when specifying the --units argument in a command, lower-case indicates that units are in multiples of 1024 while upper-case indicates that units are in multiples of 1000. Where commands take volume group or logical volume names as arguments, the full path name is optional. A logical volume called lvol0 in a volume group called vg0 can be specified as vg0/lvol0 . Where a list of volume groups is required but is left empty, a list of all volume groups will be substituted. Where a list of logical volumes is required but a volume group is given, a list of all the logical volumes in that volume group will be substituted. For example, the lvdisplay vg0 command will display all the logical volumes in volume group vg0 . All LVM commands accept a -v argument, which can be entered multiple times to increase the output verbosity. For example, the following example shows the default output of the lvcreate command. The following command shows the output of the lvcreate command with the -v argument. You could also have used the -vv , -vvv , or -vvvv argument to display increasingly more details about the command execution. The -vvvv argument provides the maximum amount of information at this time. The following example shows only the first few lines of output for the lvcreate command with the -vvvv argument specified. You can display help for any of the LVM CLI commands with the --help argument of the command. To display the man page for a command, execute the man command: The man lvm command provides general online information about LVM. All LVM objects are referenced internally by a UUID, which is assigned when you create the object. This can be useful in a situation where you remove a physical volume called /dev/sdf which is part of a volume group and, when you plug it back in, you find that it is now /dev/sdk . LVM will still find the physical volume because it identifies the physical volume by its UUID and not its device name. For information on specifying the UUID of a physical volume when creating a physical volume, see Section 7.4, "Recovering Physical Volume Metadata" .
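For example, assuming that a volume group named vg0 exists on your system, the following commands (shown for illustration only) contrast the two unit styles of the --units argument:
vgs --units g vg0   # sizes reported in multiples of 1024 (gibibytes)
vgs --units G vg0   # sizes reported in multiples of 1000 (gigabytes)
The vgs command itself is unchanged in both cases; only the reporting units differ.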
[ "lvcreate -L 50MB new_vg Rounding up size to full physical extent 52.00 MB Logical volume \"lvol0\" created", "lvcreate -v -L 50MB new_vg Finding volume group \"new_vg\" Rounding up size to full physical extent 52.00 MB Archiving volume group \"new_vg\" metadata (seqno 4). Creating logical volume lvol0 Creating volume group backup \"/etc/lvm/backup/new_vg\" (seqno 5). Found volume group \"new_vg\" Creating new_vg-lvol0 Loading new_vg-lvol0 table Resuming new_vg-lvol0 (253:2) Clearing start of logical volume \"lvol0\" Creating volume group backup \"/etc/lvm/backup/new_vg\" (seqno 5). Logical volume \"lvol0\" created", "lvcreate -vvvv -L 50MB new_vg #lvmcmdline.c:913 Processing: lvcreate -vvvv -L 50MB new_vg #lvmcmdline.c:916 O_DIRECT will be used #config/config.c:864 Setting global/locking_type to 1 #locking/locking.c:138 File-based locking selected. #config/config.c:841 Setting global/locking_dir to /var/lock/lvm #activate/activate.c:358 Getting target version for linear #ioctl/libdm-iface.c:1569 dm version OF [16384] #ioctl/libdm-iface.c:1569 dm versions OF [16384] #activate/activate.c:358 Getting target version for striped #ioctl/libdm-iface.c:1569 dm versions OF [16384] #config/config.c:864 Setting activation/mirror_region_size to 512", "commandname --help", "man commandname" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/lvm_cli
Chapter 15. Network flows format reference
Chapter 15. Network flows format reference These are the specifications for network flows format, used both internally and when exporting flows to Kafka. 15.1. Network Flows format reference This is the specification of the network flows format. That format is used when a Kafka exporter is configured, for Prometheus metrics labels as well as internally for the Loki store. The "Filter ID" column shows which related name to use when defining Quick Filters (see spec.consolePlugin.quickFilters in the FlowCollector specification). The "Loki label" column is useful when querying Loki directly: label fields need to be selected using stream selectors . The "Cardinality" column gives information about the implied metric cardinality if this field was to be used as a Prometheus label with the FlowMetrics API. Refer to the FlowMetrics documentation for more information on using this API. Name Type Description Filter ID Loki label Cardinality OpenTelemetry Bytes number Number of bytes n/a no avoid bytes DnsErrno number Error number returned from DNS tracker ebpf hook function dns_errno no fine dns.errno DnsFlags number DNS flags for DNS record n/a no fine dns.flags DnsFlagsResponseCode string Parsed DNS header RCODEs name dns_flag_response_code no fine dns.responsecode DnsId number DNS record id dns_id no avoid dns.id DnsLatencyMs number Time between a DNS request and response, in milliseconds dns_latency no avoid dns.latency Dscp number Differentiated Services Code Point (DSCP) value dscp no fine dscp DstAddr string Destination IP address (ipv4 or ipv6) dst_address no avoid destination.address DstK8S_HostIP string Destination node IP dst_host_address no fine destination.k8s.host.address DstK8S_HostName string Destination node name dst_host_name no fine destination.k8s.host.name DstK8S_Name string Name of the destination Kubernetes object, such as Pod name, Service name or Node name. dst_name no careful destination.k8s.name DstK8S_Namespace string Destination namespace dst_namespace yes fine destination.k8s.namespace.name DstK8S_NetworkName string Destination network name dst_network no fine n/a DstK8S_OwnerName string Name of the destination owner, such as Deployment name, StatefulSet name, etc. dst_owner_name yes fine destination.k8s.owner.name DstK8S_OwnerType string Kind of the destination owner, such as Deployment, StatefulSet, etc. dst_kind no fine destination.k8s.owner.kind DstK8S_Type string Kind of the destination Kubernetes object, such as Pod, Service or Node. dst_kind yes fine destination.k8s.kind DstK8S_Zone string Destination availability zone dst_zone yes fine destination.zone DstMac string Destination MAC address dst_mac no avoid destination.mac DstPort number Destination port dst_port no careful destination.port DstSubnetLabel string Destination subnet label dst_subnet_label no fine n/a Duplicate boolean Indicates if this flow was also captured from another interface on the same host n/a no fine n/a Flags string[] List of TCP flags comprised in the flow, according to RFC-9293, with additional custom flags to represent the following per-packet combinations: - SYN_ACK - FIN_ACK - RST_ACK tcp_flags no careful tcp.flags FlowDirection number Flow interpreted direction from the node observation point. 
Can be one of: - 0: Ingress (incoming traffic, from the node observation point) - 1: Egress (outgoing traffic, from the node observation point) - 2: Inner (with the same source and destination node) node_direction yes fine host.direction IcmpCode number ICMP code icmp_code no fine icmp.code IcmpType number ICMP type icmp_type no fine icmp.type IfDirections number[] Flow directions from the network interface observation point. Can be one of: - 0: Ingress (interface incoming traffic) - 1: Egress (interface outgoing traffic) ifdirections no fine interface.directions Interfaces string[] Network interfaces interfaces no careful interface.names K8S_ClusterName string Cluster name or identifier cluster_name yes fine k8s.cluster.name K8S_FlowLayer string Flow layer: 'app' or 'infra' flow_layer yes fine k8s.layer NetworkEvents object[] Network events, such as network policy actions, composed of nested fields: - Feature (such as "acl" for network policies) - Type (such as an "AdminNetworkPolicy") - Namespace (namespace where the event applies, if any) - Name (name of the resource that triggered the event) - Action (such as "allow" or "drop") - Direction (Ingress or Egress) network_events no avoid n/a Packets number Number of packets pkt_drop_cause no avoid packets PktDropBytes number Number of bytes dropped by the kernel n/a no avoid drops.bytes PktDropLatestDropCause string Latest drop cause pkt_drop_cause no fine drops.latestcause PktDropLatestFlags number TCP flags on last dropped packet n/a no fine drops.latestflags PktDropLatestState string TCP state on last dropped packet pkt_drop_state no fine drops.lateststate PktDropPackets number Number of packets dropped by the kernel n/a no avoid drops.packets Proto number L4 protocol protocol no fine protocol Sampling number Sampling rate used for this flow n/a no fine n/a SrcAddr string Source IP address (ipv4 or ipv6) src_address no avoid source.address SrcK8S_HostIP string Source node IP src_host_address no fine source.k8s.host.address SrcK8S_HostName string Source node name src_host_name no fine source.k8s.host.name SrcK8S_Name string Name of the source Kubernetes object, such as Pod name, Service name or Node name. src_name no careful source.k8s.name SrcK8S_Namespace string Source namespace src_namespace yes fine source.k8s.namespace.name SrcK8S_NetworkName string Source network name src_network no fine n/a SrcK8S_OwnerName string Name of the source owner, such as Deployment name, StatefulSet name, etc. src_owner_name yes fine source.k8s.owner.name SrcK8S_OwnerType string Kind of the source owner, such as Deployment, StatefulSet, etc. src_kind no fine source.k8s.owner.kind SrcK8S_Type string Kind of the source Kubernetes object, such as Pod, Service or Node. 
src_kind yes fine source.k8s.kind SrcK8S_Zone string Source availability zone src_zone yes fine source.zone SrcMac string Source MAC address src_mac no avoid source.mac SrcPort number Source port src_port no careful source.port SrcSubnetLabel string Source subnet label src_subnet_label no fine n/a TimeFlowEndMs number End timestamp of this flow, in milliseconds n/a no avoid timeflowend TimeFlowRttNs number TCP Smoothed Round Trip Time (SRTT), in nanoseconds time_flow_rtt no avoid tcp.rtt TimeFlowStartMs number Start timestamp of this flow, in milliseconds n/a no avoid timeflowstart TimeReceived number Timestamp when this flow was received and processed by the flow collector, in seconds n/a no avoid timereceived Udns string[] List of User Defined Networks udns no careful n/a XlatDstAddr string Packet translation destination address xlat_dst_address no avoid n/a XlatDstPort number Packet translation destination port xlat_dst_port no careful n/a XlatSrcAddr string Packet translation source address xlat_src_address no avoid n/a XlatSrcPort number Packet translation source port xlat_src_port no careful n/a ZoneId number Packet translation zone id xlat_zone_id no avoid n/a _HashId string In conversation tracking, the conversation identifier id no avoid n/a _RecordType string Type of record: flowLog for regular flow logs, or newConnection , heartbeat , endConnection for conversation tracking type yes fine n/a
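For example, when querying the Loki store directly, a field marked as a Loki label, such as SrcK8S_Namespace , can be used in the stream selector, while other fields must be filtered after parsing the JSON line. A hypothetical LogQL query along these lines, which assumes the app=netobserv-flowcollector stream label written by the flow collector in a default installation, could look like:
{app="netobserv-flowcollector", SrcK8S_Namespace="my-namespace"} | json | DstPort=443
The exact label set depends on your FlowCollector configuration, so adjust the stream selector to match your deployment.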
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/network_observability/json-flows-format-reference
Chapter 5. Post-installation cluster tasks
Chapter 5. Post-installation cluster tasks After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements. 5.1. Available cluster customizations You complete most of the cluster configuration and customization after you deploy your OpenShift Container Platform cluster. A number of configuration resources are available. Note If you install your cluster on IBM Z, not all features and functions are available. You modify the configuration resources to configure the major features of the cluster, such as the image registry, networking configuration, image build behavior, and the identity provider. For current documentation of the settings that you control by using these resources, use the oc explain command, for example oc explain builds --api-version=config.openshift.io/v1 5.1.1. Cluster configuration resources All cluster configuration resources are globally scoped (not namespaced) and named cluster . Resource name Description apiserver.config.openshift.io Provides API server configuration such as certificates and certificate authorities . authentication.config.openshift.io Controls the identity provider and authentication configuration for the cluster. build.config.openshift.io Controls default and enforced configuration for all builds on the cluster. console.config.openshift.io Configures the behavior of the web console interface, including the logout behavior . featuregate.config.openshift.io Enables FeatureGates so that you can use Tech Preview features. image.config.openshift.io Configures how specific image registries should be treated (allowed, disallowed, insecure, CA details). ingress.config.openshift.io Configuration details related to routing such as the default domain for routes. oauth.config.openshift.io Configures identity providers and other behavior related to internal OAuth server flows. project.config.openshift.io Configures how projects are created including the project template. proxy.config.openshift.io Defines proxies to be used by components needing external network access. Note: not all components currently consume this value. scheduler.config.openshift.io Configures scheduler behavior such as profiles and default node selectors. 5.1.2. Operator configuration resources These configuration resources are cluster-scoped instances, named cluster , which control the behavior of a specific component as owned by a particular Operator. Resource name Description consoles.operator.openshift.io Controls console appearance such as branding customizations config.imageregistry.operator.openshift.io Configures OpenShift image registry settings such as public routing, log levels, proxy settings, resource constraints, replica counts, and storage type. config.samples.operator.openshift.io Configures the Samples Operator to control which example image streams and templates are installed on the cluster. 5.1.3. Additional configuration resources These configuration resources represent a single instance of a particular component. In some cases, you can request multiple instances by creating multiple instances of the resource. In other cases, the Operator can use only a specific resource instance name in a specific namespace. Reference the component-specific documentation for details on how and when you can create additional resource instances. Resource name Instance name Namespace Description alertmanager.monitoring.coreos.com main openshift-monitoring Controls the Alertmanager deployment parameters. 
ingresscontroller.operator.openshift.io default openshift-ingress-operator Configures Ingress Operator behavior such as domain, number of replicas, certificates, and controller placement. 5.1.4. Informational resources You use these resources to retrieve information about the cluster. Some configurations might require you to edit these resources directly. Resource name Instance name Description clusterversion.config.openshift.io version In OpenShift Container Platform 4.10, you must not customize the ClusterVersion resource for production clusters. Instead, follow the process to update a cluster . dns.config.openshift.io cluster You cannot modify the DNS settings for your cluster. You can view the DNS Operator status . infrastructure.config.openshift.io cluster Configuration details allowing the cluster to interact with its cloud provider. network.config.openshift.io cluster You cannot modify your cluster networking after installation. To customize your network, follow the process to customize networking during installation . 5.2. Updating the global cluster pull secret You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret. This procedure is required when you use a separate registry to store images, rather than the registry that was used during installation. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Optional: To append a new pull secret to the existing pull secret, complete the following steps: Enter the following command to download the pull secret: USD oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' ><pull_secret_location> 1 1 Provide the path to the pull secret file. Enter the following command to add the new pull secret: USD oc registry login --registry="<registry>" \ 1 --auth-basic="<username>:<password>" \ 2 --to=<pull_secret_location> 3 1 Provide the new registry. You can include multiple repositories within the same registry, for example: --registry="<registry/my-namespace/my-repository>" . 2 Provide the credentials of the new registry. 3 Provide the path to the pull secret file. Alternatively, you can perform a manual update to the pull secret file. Enter the following command to update the global pull secret for your cluster: USD oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1 1 Provide the path to the new pull secret file. This update is rolled out to all nodes, which can take some time depending on the size of your cluster. Note As of OpenShift Container Platform 4.7.4, changes to the global pull secret no longer trigger a node drain or reboot. 5.3. Adjust worker nodes If you incorrectly sized the worker nodes during deployment, adjust them by creating one or more new machine sets, scaling them up, and then scaling the original machine set down before removing it. 5.3.1. Understanding the difference between machine sets and the machine config pool MachineSet objects describe OpenShift Container Platform nodes with respect to the cloud or machine provider. The MachineConfigPool object allows MachineConfigController components to define and provide the status of machines in the context of upgrades. The MachineConfigPool object allows users to configure how upgrades are rolled out to the OpenShift Container Platform nodes in the machine config pool. The NodeSelector object can be replaced with a reference to the MachineSet object. 5.3.2.
Scaling a machine set manually To add or remove an instance of a machine in a machine set, you can manually scale the machine set. This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have machine sets. Prerequisites Install an OpenShift Container Platform cluster and the oc command line. Log in to oc as a user with cluster-admin permission. Procedure View the machine sets that are in the cluster: USD oc get machinesets -n openshift-machine-api The machine sets are listed in the form of <clusterid>-worker-<aws-region-az> . View the machines that are in the cluster: USD oc get machine -n openshift-machine-api Set the annotation on the machine that you want to delete: USD oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine="true" Scale the compute machine set by running one of the following commands: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2 You can scale the compute machine set up or down. It takes several minutes for the new machines to be available. Important By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not be able to succeed. If the drain operation fails, the machine controller cannot proceed removing the machine. You can skip draining the node by annotating machine.openshift.io/exclude-node-draining in a specific machine. Verification Verify the deletion of the intended machine: USD oc get machines 5.3.3. The machine set deletion policy Random , Newest , and Oldest are the three supported deletion options. The default is Random , meaning that random machines are chosen and deleted when scaling machine sets down. The deletion policy can be set according to the use case by modifying the particular machine set: spec: deletePolicy: <delete_policy> replicas: <desired_replica_count> Specific machines can also be prioritized for deletion by adding the annotation machine.openshift.io/cluster-api-delete-machine=true to the machine of interest, regardless of the deletion policy. Important By default, the OpenShift Container Platform router pods are deployed on workers. Because the router is required to access some cluster resources, including the web console, do not scale the worker machine set to 0 unless you first relocate the router pods. Note Custom machine sets can be used for use cases requiring that services run on specific nodes and that those services are ignored by the controller when the worker machine sets are scaling down. This prevents service disruption. 5.3.4. Creating default cluster-wide node selectors You can use default cluster-wide node selectors on pods together with labels on nodes to constrain all pods created in a cluster to specific nodes. With cluster-wide node selectors, when you create a pod in that cluster, OpenShift Container Platform adds the default node selectors to the pod and schedules the pod on nodes with matching labels. You configure cluster-wide node selectors by editing the Scheduler Operator custom resource (CR). 
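Before you set a default cluster-wide node selector, you can check whether one is already defined. A minimal check, assuming you are logged in with cluster-admin permissions: USD oc get scheduler cluster -o jsonpath='{.spec.defaultNodeSelector}' If the command prints nothing, no default node selector is currently set.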
You add labels to a node, a machine set, or a machine config. Adding the label to the machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down. Note You can add additional key/value pairs to a pod, but you cannot add a different value for a default key. Procedure To add a default cluster-wide node selector: Edit the Scheduler Operator CR to add the default cluster-wide node selectors: USD oc edit scheduler cluster Example Scheduler Operator CR with a node selector apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster ... spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false 1 Add a node selector with the appropriate <key>:<value> pairs. After making this change, wait for the pods in the openshift-kube-apiserver project to redeploy. This can take several minutes. The default cluster-wide node selector does not take effect until the pods redeploy. Add labels to a node by using a machine set or editing the node directly: Use a machine set to add labels to nodes managed by the machine set when a node is created: Run the following command to add labels to a MachineSet object: USD oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>":"<value>","<key>":"<value>"}}]' -n openshift-machine-api 1 1 Add a <key>/<value> pair for each label. For example: USD oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api Tip You can alternatively apply the following YAML to add labels to a machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: "east" type: "user-node" Verify that the labels are added to the MachineSet object by using the oc edit command: For example: USD oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api Example MachineSet object apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: ... template: metadata: ... spec: metadata: labels: region: east type: user-node ...
Redeploy the nodes associated with that machine set by scaling down to 0 and scaling up the nodes: For example: USD oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api USD oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api When the nodes are ready and available, verify that the label is added to the nodes by using the oc get command: USD oc get nodes -l <key>=<value> For example: USD oc get nodes -l type=user-node Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.23.0 Add labels directly to a node: Edit the Node object for the node: USD oc label nodes <name> <key>=<value> For example, to label a node: USD oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east Tip You can alternatively apply the following YAML to add labels to a node: kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: "user-node" region: "east" Verify that the labels are added to the node using the oc get command: USD oc get nodes -l <key>=<value>,<key>=<value> For example: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.23.0 5.4. Creating infrastructure machine sets for production environments You can create a machine set to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment. In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. For information on infrastructure nodes and which components can run on infrastructure nodes, see Creating infrastructure machine sets . To create an infrastructure node, you can use a machine set, assign a label to the nodes, or use a machine config pool. For sample machine sets that you can use with these procedures, see Creating machine sets for different clouds . Applying a specific node selector to all infrastructure components causes OpenShift Container Platform to schedule those workloads on nodes with that label . 5.4.1. Creating a machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values.
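If you need the <clusterID> value, you can usually read the cluster infrastructure ID directly from the cluster. A quick check, assuming you are logged in with cluster-admin permissions: USD oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}{"\n"}' Use the returned value wherever <clusterID> or <infrastructure_id> appears in the sample machine set.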
Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new machine set is available, the DESIRED and CURRENT values match. If the machine set is not available, wait a few minutes and run the command again. 5.4.2. Creating an infrastructure node Important See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API. Requirements of the cluster dictate that infrastructure nodes, also called infra nodes, be provisioned. The installation program provisions only control plane and worker nodes. Worker nodes can be designated as infrastructure nodes or application nodes, also called app nodes, through labeling. Procedure Add a label to the worker node that you want to act as an application node: USD oc label node <node-name> node-role.kubernetes.io/app="" Add a label to the worker nodes that you want to act as infrastructure nodes: USD oc label node <node-name> node-role.kubernetes.io/infra="" Check to see if applicable nodes now have the infra and app roles: USD oc get nodes Create a default cluster-wide node selector.
The default node selector is applied to pods created in all namespaces. This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod's selector. Important If the default node selector key conflicts with the key of a pod's label, then the default node selector is not applied. However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="" , when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="" , can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles. You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts. Edit the Scheduler object: USD oc edit scheduler cluster Add the defaultNodeSelector field with the appropriate node selector: apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: topology.kubernetes.io/region=us-east-1 1 # ... 1 This example node selector deploys pods on nodes in the us-east-1 region by default. Save the file to apply the changes. You can now move infrastructure resources to the newly labeled infra nodes. Additional resources For information on how to configure project node selectors to avoid cluster-wide node selector key conflicts, see Project node selectors . 5.4.3. Creating a machine config pool for infrastructure machines If you need infrastructure machines to have dedicated configurations, you must create an infra pool. Procedure Add a label to the node you want to assign as the infra node with a specific label: USD oc label node <node_name> <label> USD oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra= Create a machine config pool that contains both the worker role and your custom role as machine config selector: USD cat infra.mcp.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: "" 2 1 Add the worker role and your custom role. 2 Add the label you added to the node as a nodeSelector . Note Custom machine config pools inherit machine configs from the worker pool. Custom pools use any machine config targeted for the worker pool, but add the ability to also deploy changes that are targeted at only the custom pool. Because a custom pool inherits resources from the worker pool, any change to the worker pool also affects the custom pool. 
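Before you create the new pool, you can list the existing machine config pools to confirm that the name you plan to use, such as infra , is not already taken. A quick check, assuming cluster-admin permissions: USD oc get machineconfigpool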
After you have the YAML file, you can create the machine config pool: USD oc create -f infra.mcp.yaml Check the machine configs to ensure that the infrastructure configuration rendered successfully: USD oc get machineconfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d You should see a new machine config, with the rendered-infra-* prefix. Optional: To deploy changes to a custom pool, create a machine config that uses the custom pool name as the label, such as infra . Note that this is not required and only shown for instructional purposes. In this manner, you can apply any custom configurations specific to only your infra nodes. Note After you create the new machine config pool, the MCO generates a new rendered config for that pool, and associated nodes of that pool reboot to apply the new configuration. Create a machine config: USD cat infra.mc.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra 1 Add the label you added to the node as a nodeSelector . 
Apply the machine config to the infra-labeled nodes: USD oc create -f infra.mc.yaml Confirm that your new machine config pool is available: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m In this example, a worker node was changed to an infra node. Additional resources See Node configuration management with machine config pools for more information on grouping infra machines in a custom pool. 5.5. Assigning machine set resources to infrastructure nodes After creating an infrastructure machine set, the worker and infra roles are applied to new infra nodes. Nodes with the infra role are not counted toward the total number of subscriptions that are required to run the environment, even when the worker role is also applied. However, when an infra node is assigned the worker role, there is a chance that user workloads can get assigned inadvertently to the infra node. To avoid this, you can apply a taint to the infra node and tolerations for the pods that you want to control. 5.5.1. Binding infrastructure node workloads using taints and tolerations If you have an infra node that has the infra and worker roles assigned, you must configure the node so that user workloads are not assigned to it. Important It is recommended that you preserve the dual infra,worker label that is created for infra nodes and use taints and tolerations to manage nodes that user workloads are scheduled on. If you remove the worker label from the node, you must create a custom pool to manage it. A node with a label other than master or worker is not recognized by the MCO without a custom pool. Maintaining the worker label allows the node to be managed by the default worker machine config pool, if no custom pools that select the custom label exist. The infra label communicates to the cluster that it does not count toward the total number of subscriptions. Prerequisites Configure additional MachineSet objects in your OpenShift Container Platform cluster. Procedure Add a taint to the infra node to prevent scheduling user workloads on it: Determine if the node has the taint: USD oc describe nodes <node_name> Sample output oc describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker ... Taints: node-role.kubernetes.io/infra:NoSchedule ... This example shows that the node has a taint. You can proceed with adding a toleration to your pod in the next step. If you have not configured a taint to prevent scheduling user workloads on it: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: <node_name> labels: ... spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved ... This example places a taint on node1 that has key node-role.kubernetes.io/infra and taint effect NoExecute . Nodes with the NoExecute effect schedule only pods that tolerate the taint and evict existing pods that do not tolerate it. Note If a descheduler is used, pods violating node taints could be evicted from the cluster.
Add tolerations for the pod configurations you want to schedule on the infra node, like router, registry, and monitoring workloads. Add the following code to the Pod object specification: tolerations: - effect: NoExecute 1 key: node-role.kubernetes.io/infra 2 operator: Exists 3 value: reserved 4 1 Specify the effect that you added to the node. 2 Specify the key that you added to the node. 3 Specify the Exists Operator to require a taint with the key node-role.kubernetes.io/infra to be present on the node. 4 Specify the value of the key-value pair taint that you added to the node. This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto the infra node. Note Moving pods for an Operator installed via OLM to an infra node is not always possible. The capability to move Operator pods depends on the configuration of each Operator. Schedule the pod to the infra node using a scheduler. See the documentation for Controlling pod placement onto nodes for details. Additional resources See Controlling pod placement using the scheduler for general information on scheduling a pod to a node. 5.6. Moving resources to infrastructure machine sets Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created. 5.6.1. Moving the router You can deploy the router pod to a different machine set. By default, the pod is deployed to a worker node. Prerequisites Configure additional machine sets in your OpenShift Container Platform cluster. Procedure View the IngressController custom resource for the router Operator: USD oc get ingresscontroller default -n openshift-ingress-operator -o yaml The command output resembles the following text: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: "11341" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: "True" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default Edit the ingresscontroller resource and change the nodeSelector to use the infra label: USD oc edit ingresscontroller default -n openshift-ingress-operator spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Confirm that the router pod is running on the infra node. 
View the list of router pods and note the node name of the running pod: USD oc get pod -n openshift-ingress -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none> In this example, the running pod is on the ip-10-0-217-226.ec2.internal node. View the node status of the running pod: USD oc get node <node_name> 1 1 Specify the <node_name> that you obtained from the pod list. Example output NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.23.0 Because the role list includes infra , the pod is running on the correct node. 5.6.2. Moving the default registry You configure the registry Operator to deploy its pods to different nodes. Prerequisites Configure additional machine sets in your OpenShift Container Platform cluster. Procedure View the config/instance object: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: "56174" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status: ... Edit the config/instance object: USD oc edit configs.imageregistry.operator.openshift.io/cluster spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Verify that the registry pod has been moved to the infrastructure node. Run the following command to identify the node where the registry pod is located: USD oc get pods -o wide -n openshift-image-registry Confirm the node has the label you specified: USD oc describe node <node_name> Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list. 5.6.3. Moving the monitoring solution The monitoring stack includes multiple components, including Prometheus, Grafana, and Alertmanager. The Cluster Monitoring Operator manages this stack. To redeploy the monitoring stack to infrastructure nodes, you can create and apply a custom config map.
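The cluster-monitoring-config config map is not created by default. Before editing it, you can check whether it is already present; a quick check, assuming cluster-admin permissions: USD oc -n openshift-monitoring get configmap cluster-monitoring-config If it does not exist, create it with an empty config.yaml key before running the edit command in the following procedure.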
Procedure Edit the cluster-monitoring-config config map and change the nodeSelector to use the infra label: USD oc edit configmap cluster-monitoring-config -n openshift-monitoring apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute grafana: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Watch the monitoring pods move to the new machines: USD watch 'oc get pod -n openshift-monitoring -o wide' If a component has not moved to the infra node, delete the pod with this component: USD oc delete pod -n openshift-monitoring <pod> The component from the deleted pod is re-created on the infra node. 5.6.4. Moving OpenShift Logging resources You can configure the Cluster Logging Operator to deploy the pods for logging subsystem components, such as Elasticsearch and Kibana, to different nodes. You cannot move the Cluster Logging Operator pod from its installed location. For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. These features are not installed by default.
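A quick way to confirm that the Operators are installed is to list the installed ClusterServiceVersions in the logging namespace; this check assumes the default openshift-logging namespace: USD oc get csv -n openshift-logging The output should include entries for the Red Hat OpenShift Logging and OpenShift Elasticsearch Operators.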
Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance apiVersion: logging.openshift.io/v1 kind: ClusterLogging ... spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved proxy: resources: null replicas: 1 resources: null type: kibana ... 1 2 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Verification To verify that a component has moved, you can use the oc get pod -o wide command. For example: You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node: USD oc get pod kibana-5b8bdf44f9-ccpq9 -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none> You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.23.0 Note that the node has a node-role.kubernetes.io/infra: '' label: USD oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml Example output kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: '' ... To move the Kibana pod, edit the ClusterLogging CR to add a node selector: apiVersion: logging.openshift.io/v1 kind: ClusterLogging ... spec: ... visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana 1 Add a node selector to match the label in the node specification.
After you save the CR, the current Kibana pod is terminated and new pod is deployed: USD oc get pods Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node: USD oc get pod kibana-7d85dcffc8-bfpfp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none> After a few moments, the original Kibana pod is removed. USD oc get pods Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s 5.7. About the cluster autoscaler The cluster autoscaler adjusts the size of an OpenShift Container Platform cluster to meet its current deployment needs. It uses declarative, Kubernetes-style arguments to provide infrastructure management that does not rely on objects of a specific cloud provider. The cluster autoscaler has a cluster scope, and is not associated with a particular namespace. The cluster autoscaler increases the size of the cluster when there are pods that fail to schedule on any of the current worker nodes due to insufficient resources or when another node is necessary to meet deployment needs. The cluster autoscaler does not increase the cluster resources beyond the limits that you specify. The cluster autoscaler computes the total memory, CPU, and GPU on all nodes the cluster, even though it does not manage the control plane nodes. These values are not single-machine oriented. They are an aggregation of all the resources in the entire cluster. For example, if you set the maximum memory resource limit, the cluster autoscaler includes all the nodes in the cluster when calculating the current memory usage. That calculation is then used to determine if the cluster autoscaler has the capacity to add more worker resources. Important Ensure that the maxNodesTotal value in the ClusterAutoscaler resource definition that you create is large enough to account for the total possible number of machines in your cluster. This value must encompass the number of control plane machines and the possible number of compute machines that you might scale to. Every 10 seconds, the cluster autoscaler checks which nodes are unnecessary in the cluster and removes them. The cluster autoscaler considers a node for removal if the following conditions apply: The node utilization is less than the node utilization level threshold for the cluster. 
The node utilization level is the sum of the requested resources divided by the allocated resources for the node. If you do not specify a value in the ClusterAutoscaler custom resource, the cluster autoscaler uses a default value of 0.5 , which corresponds to 50% utilization. The cluster autoscaler can move all pods running on the node to the other nodes. The Kubernetes scheduler is responsible for scheduling pods on the nodes. The cluster autoscaler does not have scale down disabled annotation. If the following types of pods are present on a node, the cluster autoscaler will not remove the node: Pods with restrictive pod disruption budgets (PDBs). Kube-system pods that do not run on the node by default. Kube-system pods that do not have a PDB or have a PDB that is too restrictive. Pods that are not backed by a controller object such as a deployment, replica set, or stateful set. Pods with local storage. Pods that cannot be moved elsewhere because of a lack of resources, incompatible node selectors or affinity, matching anti-affinity, and so on. Unless they also have a "cluster-autoscaler.kubernetes.io/safe-to-evict": "true" annotation, pods that have a "cluster-autoscaler.kubernetes.io/safe-to-evict": "false" annotation. For example, you set the maximum CPU limit to 64 cores and configure the cluster autoscaler to only create machines that have 8 cores each. If your cluster starts with 30 cores, the cluster autoscaler can add up to 4 more nodes with 32 cores, for a total of 62. If you configure the cluster autoscaler, additional usage restrictions apply: Do not modify the nodes that are in autoscaled node groups directly. All nodes within the same node group have the same capacity and labels and run the same system pods. Specify requests for your pods. If you have to prevent pods from being deleted too quickly, configure appropriate PDBs. Confirm that your cloud provider quota is large enough to support the maximum node pools that you configure. Do not run additional node group autoscalers, especially the ones offered by your cloud provider. The horizontal pod autoscaler (HPA) and the cluster autoscaler modify cluster resources in different ways. The HPA changes the deployment's or replica set's number of replicas based on the current CPU load. If the load increases, the HPA creates new replicas, regardless of the amount of resources available to the cluster. If there are not enough resources, the cluster autoscaler adds resources so that the HPA-created pods can run. If the load decreases, the HPA stops some replicas. If this action causes some nodes to be underutilized or completely empty, the cluster autoscaler deletes the unnecessary nodes. The cluster autoscaler takes pod priorities into account. The Pod Priority and Preemption feature enables scheduling pods based on priorities if the cluster does not have enough resources, but the cluster autoscaler ensures that the cluster has resources to run all pods. To honor the intention of both features, the cluster autoscaler includes a priority cutoff function. You can use this cutoff to schedule "best-effort" pods, which do not cause the cluster autoscaler to increase resources but instead run only when spare resources are available. Pods with priority lower than the cutoff value do not cause the cluster to scale up or prevent the cluster from scaling down. No new nodes are added to run the pods, and nodes running these pods might be deleted to free resources. 
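For example, to treat a workload as best-effort with respect to autoscaling, you could give its pods a PriorityClass whose value is below the podPriorityThreshold of the ClusterAutoscaler (the sample definition in the next section uses -10). A hedged sketch with a hypothetical class name:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: best-effort-autoscaling   # hypothetical name
value: -20                        # below a podPriorityThreshold of -10
globalDefault: false
description: "Pods using this class do not cause the cluster autoscaler to add nodes."

Pods that reference this class in spec.priorityClassName are scheduled only when spare capacity already exists.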
Cluster autoscaling is supported for the platforms that have machine API available on it. 5.7.1. ClusterAutoscaler resource definition This ClusterAutoscaler resource definition shows the parameters and sample values for the cluster autoscaler. apiVersion: "autoscaling.openshift.io/v1" kind: "ClusterAutoscaler" metadata: name: "default" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: nvidia.com/gpu 7 min: 0 8 max: 16 9 - type: amd.com/gpu min: 0 max: 4 scaleDown: 10 enabled: true 11 delayAfterAdd: 10m 12 delayAfterDelete: 5m 13 delayAfterFailure: 30s 14 unneededTime: 5m 15 utilizationThreshold: "0.4" 16 1 Specify the priority that a pod must exceed to cause the cluster autoscaler to deploy additional nodes. Enter a 32-bit integer value. The podPriorityThreshold value is compared to the value of the PriorityClass that you assign to each pod. 2 Specify the maximum number of nodes to deploy. This value is the total number of machines that are deployed in your cluster, not just the ones that the autoscaler controls. Ensure that this value is large enough to account for all of your control plane and compute machines and the total number of replicas that you specify in your MachineAutoscaler resources. 3 Specify the minimum number of cores to deploy in the cluster. 4 Specify the maximum number of cores to deploy in the cluster. 5 Specify the minimum amount of memory, in GiB, in the cluster. 6 Specify the maximum amount of memory, in GiB, in the cluster. 7 Optional: Specify the type of GPU node to deploy. Only nvidia.com/gpu and amd.com/gpu are valid types. 8 Specify the minimum number of GPUs to deploy in the cluster. 9 Specify the maximum number of GPUs to deploy in the cluster. 10 In this section, you can specify the period to wait for each action by using any valid ParseDuration interval, including ns , us , ms , s , m , and h . 11 Specify whether the cluster autoscaler can remove unnecessary nodes. 12 Optional: Specify the period to wait before deleting a node after a node has recently been added . If you do not specify a value, the default value of 10m is used. 13 Optional: Specify the period to wait before deleting a node after a node has recently been deleted . If you do not specify a value, the default value of 0s is used. 14 Optional: Specify the period to wait before deleting a node after a scale down failure occurred. If you do not specify a value, the default value of 3m is used. 15 Optional: Specify the period before an unnecessary node is eligible for deletion. If you do not specify a value, the default value of 10m is used. 16 Optional: Specify the node utilization level below which an unnecessary node is eligible for deletion. The node utilization level is the sum of the requested resources divided by the allocated resources for the node, and must be a value greater than "0" but less than "1" . If you do not specify a value, the cluster autoscaler uses a default value of "0.5" , which corresponds to 50% utilization. This value must be expressed as a string. Note When performing a scaling operation, the cluster autoscaler remains within the ranges set in the ClusterAutoscaler resource definition, such as the minimum and maximum number of cores to deploy or the amount of memory in the cluster. However, the cluster autoscaler does not correct the current values in your cluster to be within those ranges. 
The minimum and maximum CPUs, memory, and GPU values are determined by calculating those resources on all nodes in the cluster, even if the cluster autoscaler does not manage the nodes. For example, the control plane nodes are considered in the total memory in the cluster, even though the cluster autoscaler does not manage the control plane nodes. 5.7.2. Deploying the cluster autoscaler To deploy the cluster autoscaler, you create an instance of the ClusterAutoscaler resource. Procedure Create a YAML file for the ClusterAutoscaler resource that contains the customized resource definition. Create the resource in the cluster: USD oc create -f <filename>.yaml 1 1 <filename> is the name of the resource file that you customized. 5.8. About the machine autoscaler The machine autoscaler adjusts the number of Machines in the machine sets that you deploy in an OpenShift Container Platform cluster. You can scale both the default worker machine set and any other machine sets that you create. The machine autoscaler makes more Machines when the cluster runs out of resources to support more deployments. Any changes to the values in MachineAutoscaler resources, such as the minimum or maximum number of instances, are immediately applied to the machine set they target. Important You must deploy a machine autoscaler for the cluster autoscaler to scale your machines. The cluster autoscaler uses the annotations on machine sets that the machine autoscaler sets to determine the resources that it can scale. If you define a cluster autoscaler without also defining machine autoscalers, the cluster autoscaler will never scale your cluster. 5.8.1. MachineAutoscaler resource definition This MachineAutoscaler resource definition shows the parameters and sample values for the machine autoscaler. apiVersion: "autoscaling.openshift.io/v1beta1" kind: "MachineAutoscaler" metadata: name: "worker-us-east-1a" 1 namespace: "openshift-machine-api" spec: minReplicas: 1 2 maxReplicas: 12 3 scaleTargetRef: 4 apiVersion: machine.openshift.io/v1beta1 kind: MachineSet 5 name: worker-us-east-1a 6 1 Specify the machine autoscaler name. To make it easier to identify which machine set this machine autoscaler scales, specify or include the name of the machine set to scale. The machine set name takes the following form: <clusterid>-<machineset>-<region> . 2 Specify the minimum number machines of the specified type that must remain in the specified zone after the cluster autoscaler initiates cluster scaling. If running in AWS, GCP, Azure, RHOSP, or vSphere, this value can be set to 0 . For other providers, do not set this value to 0 . You can save on costs by setting this value to 0 for use cases such as running expensive or limited-usage hardware that is used for specialized workloads, or by scaling a machine set with extra large machines. The cluster autoscaler scales the machine set down to zero if the machines are not in use. Important Do not set the spec.minReplicas value to 0 for the three compute machine sets that are created during the OpenShift Container Platform installation process for an installer provisioned infrastructure. 3 Specify the maximum number machines of the specified type that the cluster autoscaler can deploy in the specified zone after it initiates cluster scaling. Ensure that the maxNodesTotal value in the ClusterAutoscaler resource definition is large enough to allow the machine autoscaler to deploy this number of machines. 4 In this section, provide values that describe the existing machine set to scale. 
5 The kind parameter value is always MachineSet . 6 The name value must match the name of an existing machine set, as shown in the metadata.name parameter value. 5.8.2. Deploying the machine autoscaler To deploy the machine autoscaler, you create an instance of the MachineAutoscaler resource. Procedure Create a YAML file for the MachineAutoscaler resource that contains the customized resource definition. Create the resource in the cluster: USD oc create -f <filename>.yaml 1 1 <filename> is the name of the resource file that you customized. 5.9. Enabling Technology Preview features using FeatureGates You can turn on a subset of the current Technology Preview features on for all nodes in the cluster by editing the FeatureGate custom resource (CR). 5.9.1. Understanding feature gates You can use the FeatureGate custom resource (CR) to enable specific feature sets in your cluster. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. You can activate the following feature set by using the FeatureGate CR: TechPreviewNoUpgrade . This feature set is a subset of the current Technology Preview features. This feature set allows you to enable these tech preview features on test clusters, where you can fully test them, while leaving the features disabled on production clusters. Enabling this feature set cannot be undone and prevents minor version updates. This feature set is not recommended on production clusters. Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. The following Technology Preview features are enabled by this feature set: Microsoft Azure File CSI Driver Operator. Enables the provisioning of persistent volumes (PVs) by using the Container Storage Interface (CSI) driver for Microsoft Azure File Storage. CSI automatic migration. Enables automatic migration for supported in-tree volume plugins to their equivalent Container Storage Interface (CSI) drivers. Supported for: Amazon Web Services (AWS) Elastic Block Storage (EBS) OpenStack Cinder Azure Disk Azure File Google Cloud Platform Persistent Disk (CSI) VMware vSphere Cluster Cloud Controller Manager Operator. Enables the Cluster Cloud Controller Manager Operator rather than the in-tree cloud controller. Available as a Technology Preview for: Alibaba Cloud Amazon Web Services (AWS) Google Cloud Platform (GCP) IBM Cloud Microsoft Azure Red Hat OpenStack Platform (RHOSP) VMware vSphere Shared resource CSI driver CSI volume support for the OpenShift Container Platform build system Swap memory on nodes 5.9.2. Enabling feature sets using the web console You can use the OpenShift Container Platform web console to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR). Procedure To enable feature sets: In the OpenShift Container Platform web console, switch to the Administration Custom Resource Definitions page. On the Custom Resource Definitions page, click FeatureGate . On the Custom Resource Definition Details page, click the Instances tab. Click the cluster feature gate, then click the YAML tab. Edit the cluster instance to add specific feature sets: Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. 
Sample Feature Gate custom resource apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 # ... spec: featureSet: TechPreviewNoUpgrade 2 1 The name of the FeatureGate CR must be cluster . 2 Add the feature set that you want to enable: TechPreviewNoUpgrade enables specific Technology Preview features. After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied. Verification You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state. From the Administrator perspective in the web console, navigate to Compute Nodes . Select a node. In the Node details page, click Terminal . In the terminal window, change your root directory to /host : sh-4.2# chroot /host View the kubelet.conf file: sh-4.2# cat /etc/kubernetes/kubelet.conf Sample output # ... featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false # ... The features that are listed as true are enabled on your cluster. Note The features listed vary depending upon the OpenShift Container Platform version. 5.9.3. Enabling feature sets using the CLI You can use the OpenShift CLI ( oc ) to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR). Prerequisites You have installed the OpenShift CLI ( oc ). Procedure To enable feature sets: Edit the FeatureGate CR named cluster : USD oc edit featuregate cluster Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. Sample FeatureGate custom resource apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 # ... spec: featureSet: TechPreviewNoUpgrade 2 1 The name of the FeatureGate CR must be cluster . 2 Add the feature set that you want to enable: TechPreviewNoUpgrade enables specific Technology Preview features. After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied. Verification You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state. From the Administrator perspective in the web console, navigate to Compute Nodes . Select a node. In the Node details page, click Terminal . In the terminal window, change your root directory to /host : sh-4.2# chroot /host View the kubelet.conf file: sh-4.2# cat /etc/kubernetes/kubelet.conf Sample output # ... featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false # ... The features that are listed as true are enabled on your cluster. Note The features listed vary depending upon the OpenShift Container Platform version. 5.10. etcd tasks Back up etcd, enable or disable etcd encryption, or defragment etcd data. 5.10.1. About etcd encryption By default, etcd data is not encrypted in OpenShift Container Platform. You can enable etcd encryption for your cluster to provide an additional layer of data security. For example, it can help protect the loss of sensitive data if an etcd backup is exposed to the incorrect parties. 
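You can check whether etcd encryption is already configured by inspecting the APIServer resource; an empty result means no encryption type is set. A convenience sketch, not part of the documented procedure:

$ oc get apiserver cluster -o jsonpath='{.spec.encryption.type}'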
When you enable etcd encryption, the following OpenShift API server and Kubernetes API server resources are encrypted: Secrets Config maps Routes OAuth access tokens OAuth authorize tokens When you enable etcd encryption, encryption keys are created. These keys are rotated on a weekly basis. You must have these keys to restore from an etcd backup. Note Etcd encryption only encrypts values, not keys. Resource types, namespaces, and object names are unencrypted. If etcd encryption is enabled during a backup, the static_kuberesources_<datetimestamp>.tar.gz file contains the encryption keys for the etcd snapshot. For security reasons, store this file separately from the etcd snapshot. However, this file is required to restore a state of etcd from the respective etcd snapshot. 5.10.2. Enabling etcd encryption You can enable etcd encryption to encrypt sensitive resources in your cluster. Warning Do not back up etcd resources until the initial encryption process is completed. If the encryption process is not completed, the backup might be only partially encrypted. After you enable etcd encryption, several changes can occur: The etcd encryption might affect the memory consumption of a few resources. You might notice a transient affect on backup performance because the leader must serve the backup. A disk I/O can affect the node that receives the backup state. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Modify the APIServer object: USD oc edit apiserver Set the encryption field type to aescbc : spec: encryption: type: aescbc 1 1 The aescbc type means that AES-CBC with PKCS#7 padding and a 32 byte key is used to perform the encryption. Save the file to apply the changes. The encryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of your cluster. Verify that etcd encryption was successful. Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully encrypted: USD oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows EncryptionCompleted upon successful encryption: EncryptionCompleted All resources encrypted: routes.route.openshift.io If the output shows EncryptionInProgress , encryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully encrypted: USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows EncryptionCompleted upon successful encryption: EncryptionCompleted All resources encrypted: secrets, configmaps If the output shows EncryptionInProgress , encryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the OpenShift OAuth API server to verify that its resources were successfully encrypted: USD oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows EncryptionCompleted upon successful encryption: EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io If the output shows EncryptionInProgress , encryption is still in progress. Wait a few minutes and try again. 5.10.3. 
Disabling etcd encryption You can disable encryption of etcd data in your cluster. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Modify the APIServer object: USD oc edit apiserver Set the encryption field type to identity : spec: encryption: type: identity 1 1 The identity type is the default value and means that no encryption is performed. Save the file to apply the changes. The decryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of your cluster. Verify that etcd decryption was successful. Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully decrypted: USD oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows DecryptionCompleted upon successful decryption: DecryptionCompleted Encryption mode set to identity and everything is decrypted If the output shows DecryptionInProgress , decryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully decrypted: USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows DecryptionCompleted upon successful decryption: DecryptionCompleted Encryption mode set to identity and everything is decrypted If the output shows DecryptionInProgress , decryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the OpenShift OAuth API server to verify that its resources were successfully decrypted: USD oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows DecryptionCompleted upon successful decryption: DecryptionCompleted Encryption mode set to identity and everything is decrypted If the output shows DecryptionInProgress , decryption is still in progress. Wait a few minutes and try again. 5.10.4. Backing up etcd data Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static pods. This backup can be saved and used at a later time if you need to restore etcd. Important Only save a backup from a single control plane host. Do not take a backup from each control plane host in the cluster. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have checked whether the cluster-wide proxy is enabled. Tip You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml . The proxy is enabled if the httpProxy , httpsProxy , and noProxy fields have values set. Procedure Start a debug session for a control plane node: USD oc debug node/<node_name> Change your root directory to /host : sh-4.2# chroot /host If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY , HTTP_PROXY , and HTTPS_PROXY environment variables. Run the cluster-backup.sh script and pass in the location to save the backup to. Tip The cluster-backup.sh script is maintained as a component of the etcd Cluster Operator and is a wrapper around the etcdctl snapshot save command. 
sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup Example script output found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {"level":"info","ts":1624647639.0188997,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part"} {"level":"info","ts":"2021-06-25T19:00:39.030Z","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"} {"level":"info","ts":1624647639.0301006,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"https://10.0.0.5:2379"} {"level":"info","ts":"2021-06-25T19:00:40.215Z","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"} {"level":"info","ts":1624647640.6032252,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"https://10.0.0.5:2379","size":"114 MB","took":1.584090459} {"level":"info","ts":1624647640.6047094,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {"hash":3866667823,"revision":31407,"totalKey":12828,"totalSize":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup In this example, two files are created in the /home/core/assets/backup/ directory on the control plane host: snapshot_<datetimestamp>.db : This file is the etcd snapshot. The cluster-backup.sh script confirms its validity. static_kuberesources_<datetimestamp>.tar.gz : This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot. Note If etcd encryption is enabled, it is recommended to store this second file separately from the etcd snapshot for security reasons. However, this file is required to restore from the etcd snapshot. Keep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted. 5.10.5. Defragmenting etcd data For large and dense clusters, etcd can suffer from poor performance if the keyspace grows too large and exceeds the space quota. Periodically maintain and defragment etcd to free up space in the data store. Monitor Prometheus for etcd metrics and defragment it when required; otherwise, etcd can raise a cluster-wide alarm that puts the cluster into a maintenance mode that accepts only key reads and deletes. Monitor these key metrics: etcd_server_quota_backend_bytes , which is the current quota limit etcd_mvcc_db_total_size_in_use_in_bytes , which indicates the actual database usage after a history compaction etcd_mvcc_db_total_size_in_bytes , which shows the database size, including free space waiting for defragmentation Defragment etcd data to reclaim disk space after events that cause disk fragmentation, such as etcd history compaction. History compaction is performed automatically every five minutes and leaves gaps in the back-end database. This fragmented space is available for use by etcd, but is not available to the host file system. 
You must defragment etcd to make this space available to the host file system. Defragmentation occurs automatically, but you can also trigger it manually. Note Automatic defragmentation is good for most cases, because the etcd operator uses cluster information to determine the most efficient operation for the user. 5.10.5.1. Automatic defragmentation The etcd Operator automatically defragments disks. No manual intervention is needed. Verify that the defragmentation process is successful by viewing one of these logs: etcd logs cluster-etcd-operator pod operator status error log Warning Automatic defragmentation can cause leader election failure in various OpenShift core components, such as the Kubernetes controller manager, which triggers a restart of the failing component. The restart is harmless and either triggers failover to the running instance or the component resumes work again after the restart. Example log output for successful defragmentation etcd member has been defragmented: <member_name> , memberID: <member_id> Example log output for unsuccessful defragmentation failed defrag on member: <member_name> , memberID: <member_id> : <error_message> 5.10.5.2. Manual defragmentation A Prometheus alert indicates when you need to use manual defragmentation. The alert is displayed in two cases: When etcd uses more than 50% of its available space for more than 10 minutes When etcd is actively using less than 50% of its total database size for more than 10 minutes You can also determine whether defragmentation is needed by checking the etcd database size in MB that will be freed by defragmentation with the PromQL expression: (etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes)/1024/1024 Warning Defragmenting etcd is a blocking action. The etcd member will not respond until defragmentation is complete. For this reason, wait at least one minute between defragmentation actions on each of the pods to allow the cluster to recover. Follow this procedure to defragment etcd data on each etcd member. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Determine which etcd member is the leader, because the leader should be defragmented last. Get the list of etcd pods: USD oc -n openshift-etcd get pods -l k8s-app=etcd -o wide Example output etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none> Choose a pod and run the following command to determine which etcd member is the leader: USD oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table Example output Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. 
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ Based on the IS LEADER column of this output, the https://10.0.199.170:2379 endpoint is the leader. Matching this endpoint with the output of the step, the pod name of the leader is etcd-ip-10-0-199-170.example.redhat.com . Defragment an etcd member. Connect to the running etcd container, passing in the name of a pod that is not the leader: USD oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com Unset the ETCDCTL_ENDPOINTS environment variable: sh-4.4# unset ETCDCTL_ENDPOINTS Defragment the etcd member: sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag Example output Finished defragmenting etcd member[https://localhost:2379] If a timeout error occurs, increase the value for --command-timeout until the command succeeds. Verify that the database size was reduced: sh-4.4# etcdctl endpoint status -w table --cluster Example output +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ This example shows that the database size for this etcd member is now 41 MB as opposed to the starting size of 104 MB. Repeat these steps to connect to each of the other etcd members and defragment them. Always defragment the leader last. Wait at least one minute between defragmentation actions to allow the etcd pod to recover. Until the etcd pod recovers, the etcd member will not respond. If any NOSPACE alarms were triggered due to the space quota being exceeded, clear them. Check if there are any NOSPACE alarms: sh-4.4# etcdctl alarm list Example output memberID:12345678912345678912 alarm:NOSPACE Clear the alarms: sh-4.4# etcdctl alarm disarm 5.10.6. Restoring to a cluster state You can use a saved etcd backup to restore a cluster state or restore a cluster that has lost the majority of control plane hosts. 
Important When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.7.2 cluster must use an etcd backup that was taken from 4.7.2. Prerequisites Access to the cluster as a user with the cluster-admin role through a certificate-based kubeconfig file, like the one that was used during installation. A healthy control plane host to use as the recovery host. SSH access to control plane hosts. A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz . Important For non-recovery control plane nodes, it is not required to establish SSH connectivity or to stop the static pods. You can delete and recreate other non-recovery, control plane machines, one by one. Procedure Select a control plane host to use as the recovery host. This is the host that you will run the restore operation on. Establish SSH connectivity to each of the control plane nodes, including the recovery host. The Kubernetes API server becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended to establish SSH connectivity to each control plane host in a separate terminal. Important If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state. Copy the etcd backup directory to the recovery control plane host. This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/ directory of your recovery control plane host. Stop the static pods on any other control plane nodes. Note You do not need to stop the static pods on the recovery host. Access a control plane host that is not the recovery host. Move the existing etcd pod file out of the kubelet manifest directory: USD sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp Verify that the etcd pods are stopped. USD sudo crictl ps | grep etcd | grep -v operator The output of this command should be empty. If it is not empty, wait a few minutes and check again. Move the existing Kubernetes API server pod file out of the kubelet manifest directory: USD sudo mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp Verify that the Kubernetes API server pods are stopped. USD sudo crictl ps | grep kube-apiserver | grep -v operator The output of this command should be empty. If it is not empty, wait a few minutes and check again. Move the etcd data directory to a different location: USD sudo mv /var/lib/etcd/ /tmp Repeat this step on each of the other control plane hosts that is not the recovery host. Access the recovery control plane host. If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY , HTTP_PROXY , and HTTPS_PROXY environment variables. Tip You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml . The proxy is enabled if the httpProxy , httpsProxy , and noProxy fields have values set. 
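If the proxy is enabled, one way to export those variables on the recovery host is to read them from the cluster Proxy resource. A sketch, assuming the oc client can still reach the API from this host; otherwise set the values manually:

$ export HTTP_PROXY=$(oc get proxy cluster -o jsonpath='{.status.httpProxy}')
$ export HTTPS_PROXY=$(oc get proxy cluster -o jsonpath='{.status.httpsProxy}')
$ export NO_PROXY=$(oc get proxy cluster -o jsonpath='{.status.noProxy}')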
Run the restore script on the recovery control plane host and pass in the path to the etcd backup directory: USD sudo -E /usr/local/bin/cluster-restore.sh /home/core/backup Example script output ...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container kube-controller-manager to stop complete Waiting for container kube-apiserver to stop ..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml Note The restore process can cause nodes to enter the NotReady state if the node certificates were updated after the last etcd backup. Check the nodes to ensure they are in the Ready state. Run the following command: USD oc get nodes -w Sample output NAME STATUS ROLES AGE VERSION host-172-25-75-28 Ready master 3d20h v1.23.3+e419edf host-172-25-75-38 Ready infra,worker 3d20h v1.23.3+e419edf host-172-25-75-40 Ready master 3d20h v1.23.3+e419edf host-172-25-75-65 Ready master 3d20h v1.23.3+e419edf host-172-25-75-74 Ready infra,worker 3d20h v1.23.3+e419edf host-172-25-75-79 Ready worker 3d20h v1.23.3+e419edf host-172-25-75-86 Ready worker 3d20h v1.23.3+e419edf host-172-25-75-98 Ready infra,worker 3d20h v1.23.3+e419edf It can take several minutes for all nodes to report their state. If any nodes are in the NotReady state, log in to the nodes and remove all of the PEM files from the /var/lib/kubelet/pki directory on each node. You can SSH into the nodes or use the terminal window in the web console. USD ssh -i <ssh-key-path> core@<master-hostname> Sample pki directory Restart the kubelet service on all control plane hosts. From the recovery host, run the following command: USD sudo systemctl restart kubelet.service Repeat this step on all other control plane hosts. Approve the pending CSRs: Get the list of current CSRs: USD oc get csr Example output 1 1 2 A pending kubelet service CSR (for user-provisioned installations). 3 4 A pending node-bootstrapper CSR. Review the details of a CSR to verify that it is valid: USD oc describe csr <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. Approve each valid node-bootstrapper CSR: USD oc adm certificate approve <csr_name> For user-provisioned installations, approve each valid kubelet service CSR: USD oc adm certificate approve <csr_name> Verify that the single member control plane has started successfully. From the recovery host, verify that the etcd container is running. USD sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard" Example output 3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0 From the recovery host, verify that the etcd pod is running. 
USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output NAME READY STATUS RESTARTS AGE etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s If the status is Pending , or the output lists more than one running etcd pod, wait a few minutes and check again. Note Perform the following step only if you are using OVNKubernetes Container Network Interface (CNI) plugin. Restart the Open Virtual Network (OVN) Kubernetes pods on all the hosts. Remove the northbound database (nbdb) and southbound database (sbdb). Access the recovery host and the remaining control plane nodes by using Secure Shell (SSH) and run the following command: USD sudo rm -f /var/lib/ovn/etc/*.db Delete all OVN-Kubernetes control plane pods by running the following command: USD oc delete pods -l app=ovnkube-master -n openshift-ovn-kubernetes Ensure that any OVN-Kubernetes control plane pods are deployed again and are in a Running state by running the following command: USD oc get pods -l app=ovnkube-master -n openshift-ovn-kubernetes Example output NAME READY STATUS RESTARTS AGE ovnkube-master-nb24h 4/4 Running 0 48s Delete all ovnkube-node pods by running the following command: USD oc get pods -n openshift-ovn-kubernetes -o name | grep ovnkube-node | while read p ; do oc delete USDp -n openshift-ovn-kubernetes ; done Ensure that all the ovnkube-node pods are deployed again and are in a Running state by running the following command: USD oc get pods -n openshift-ovn-kubernetes | grep ovnkube-node Delete and re-create other non-recovery, control plane machines, one by one. After the machines are re-created, a new revision is forced and etcd automatically scales up. If you use a user-provisioned bare metal installation, you can re-create a control plane machine by using the same method that you used to originally create it. For more information, see "Installing a user-provisioned cluster on bare metal". Warning Do not delete and re-create the machine for the recovery host. If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps: Warning Do not delete and re-create the machine for the recovery host. For bare metal installations on installer-provisioned infrastructure, control plane machines are not re-created. For more information, see "Replacing a bare-metal control plane node". Obtain the machine for one of the lost control plane hosts. 
In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc get machines -n openshift-machine-api -o wide Example output: NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 This is the control plane machine for the lost control plane host, ip-10-0-131-183.ec2.internal . Save the machine configuration to a file on your file system: USD oc get machine clustername-8qw5l-master-0 \ 1 -n openshift-machine-api \ -o yaml \ > new-master-machine.yaml 1 Specify the name of the control plane machine for the lost control plane host. Edit the new-master-machine.yaml file that was created in the step to assign a new name and remove unnecessary fields. Remove the entire status section: status: addresses: - address: 10.0.131.183 type: InternalIP - address: ip-10-0-131-183.ec2.internal type: InternalDNS - address: ip-10-0-131-183.ec2.internal type: Hostname lastUpdated: "2020-04-20T17:44:29Z" nodeRef: kind: Node name: ip-10-0-131-183.ec2.internal uid: acca4411-af0d-4387-b73e-52b2484295ad phase: Running providerStatus: apiVersion: awsproviderconfig.openshift.io/v1beta1 conditions: - lastProbeTime: "2020-04-20T16:53:50Z" lastTransitionTime: "2020-04-20T16:53:50Z" message: machine successfully created reason: MachineCreationSucceeded status: "True" type: MachineCreation instanceId: i-0fdb85790d76d0c3f instanceState: stopped kind: AWSMachineProviderStatus Change the metadata.name field to a new name. It is recommended to keep the same base name as the old machine and change the ending number to the available number. In this example, clustername-8qw5l-master-0 is changed to clustername-8qw5l-master-3 : apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: ... name: clustername-8qw5l-master-3 ... Remove the spec.providerID field: providerID: aws:///us-east-1a/i-0fdb85790d76d0c3f Remove the metadata.annotations and metadata.generation fields: annotations: machine.openshift.io/instance-state: running ... generation: 2 Remove the metadata.resourceVersion and metadata.uid fields: resourceVersion: "13291" uid: a282eb70-40a2-4e89-8009-d05dd420d31a Delete the machine of the lost control plane host: USD oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1 1 Specify the name of the control plane machine for the lost control plane host. 
Verify that the machine was deleted: USD oc get machines -n openshift-machine-api -o wide Example output: NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running Create a machine by using the new-master-machine.yaml file: USD oc apply -f new-master-machine.yaml Verify that the new machine has been created: USD oc get machines -n openshift-machine-api -o wide Example output: NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 The new machine, clustername-8qw5l-master-3 is being created and is ready after the phase changes from Provisioning to Running . It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state. Repeat these steps for each lost control plane host that is not the recovery host. Turn off the quorum guard by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}' This command ensures that you can successfully re-create secrets and roll out the static pods. In a separate terminal window within the recovery host, export the recovery kubeconfig file by running the following command: USD export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig Force etcd redeployment. In the same terminal window where you exported the recovery kubeconfig file, run the following command: USD oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge 1 1 The forceRedeploymentReason value must be unique, which is why a timestamp is appended. 
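While the redeployment proceeds, you can watch the etcd pods roll out; this is a convenience sketch, not a required step:

$ oc get pods -n openshift-etcd -l k8s-app=etcd -w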
When the etcd cluster Operator performs a redeployment, the existing nodes are started with new pods similar to the initial bootstrap scale up. Verify all nodes are updated to the latest revision. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for etcd to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7 . If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7 , this means that the update is still in progress. Wait a few minutes and try again. After etcd is redeployed, force new rollouts for the control plane. The Kubernetes API server will reinstall itself on the other nodes because the kubelet is connected to API servers using an internal load balancer. In a terminal that has access to the cluster as a cluster-admin user, run the following commands. Force a new rollout for the Kubernetes API server: USD oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge Verify all nodes are updated to the latest revision. USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7 . If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7 , this means that the update is still in progress. Wait a few minutes and try again. Force a new rollout for the Kubernetes controller manager: USD oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge Verify all nodes are updated to the latest revision. USD oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7 . If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7 , this means that the update is still in progress. Wait a few minutes and try again. Force a new rollout for the Kubernetes scheduler: USD oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge Verify all nodes are updated to the latest revision. USD oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. 
The output shows AllNodesAtLatestRevision upon successful update:
AllNodesAtLatestRevision
3 nodes are at revision 7 1
1 In this example, the latest revision number is 7 . If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7 , this means that the update is still in progress. Wait a few minutes and try again.
Verify that all control plane hosts have started and joined the cluster. In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc -n openshift-etcd get pods -l k8s-app=etcd
Example output
etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h
etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h
etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h
To ensure that all workloads return to normal operation following a recovery procedure, restart each pod that stores Kubernetes API information. This includes OpenShift Container Platform components such as routers, Operators, and third-party components.
Note
On completion of the procedural steps, you might need to wait a few minutes for all services to return to their restored state. For example, authentication by using oc login might not immediately work until the OAuth server pods are restarted.
Consider using the system:admin kubeconfig file for immediate authentication. This method bases its authentication on SSL/TLS client certificates, as opposed to OAuth tokens. You can authenticate with this file by issuing the following command:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig
Issue the following command to display your authenticated user name:
$ oc whoami
Additional resources
Installing a user-provisioned cluster on bare metal
Replacing a bare-metal control plane node
5.10.7. Issues and workarounds for restoring a persistent storage state
If your OpenShift Container Platform cluster uses persistent storage of any form, part of the cluster state is typically stored outside etcd. It might be an Elasticsearch cluster running in a pod or a database running in a StatefulSet object. When you restore from an etcd backup, the status of the workloads in OpenShift Container Platform is also restored. However, if the etcd snapshot is old, the status might be invalid or outdated.
Important
The contents of persistent volumes (PVs) are never part of the etcd snapshot. When you restore an OpenShift Container Platform cluster from an etcd snapshot, non-critical workloads might gain access to critical data, or vice versa.
The following are some example scenarios that produce an out-of-date status:
A MySQL database is running in a pod backed by a PV object. Restoring OpenShift Container Platform from an etcd snapshot does not bring back the volume on the storage provider, and does not produce a running MySQL pod, despite the pod repeatedly attempting to start. You must manually restore this pod by restoring the volume on the storage provider, and then editing the PV to point to the new volume.
Pod P1 is using volume A, which is attached to node X. If the etcd snapshot is taken while another pod uses the same volume on node Y, then when the etcd restore is performed, pod P1 might not be able to start correctly due to the volume still being attached to node Y. OpenShift Container Platform is not aware of the attachment, and does not automatically detach it. When this occurs, the volume must be manually detached from node Y so that the volume can attach on node X, and then pod P1 can start.
Cloud provider or storage provider credentials were updated after the etcd snapshot was taken. This causes any CSI drivers or Operators that depend on those credentials to stop working. You might have to manually update the credentials required by those drivers or Operators.
A device is removed or renamed from OpenShift Container Platform nodes after the etcd snapshot is taken. The Local Storage Operator creates symlinks for each PV that it manages from the /dev/disk/by-id or /dev directories. This situation might cause the local PVs to refer to devices that no longer exist.
To fix this problem, an administrator must:
Manually remove the PVs with invalid devices.
Remove the symlinks from the respective nodes.
Delete the LocalVolume or LocalVolumeSet objects (see Storage → Configuring persistent storage → Persistent storage using local volumes → Deleting the Local Storage Operator Resources).
5.11. Pod disruption budgets
Understand and configure pod disruption budgets.
5.11.1. Understanding how to use pod disruption budgets to specify the number of pods that must be up
A pod disruption budget allows the specification of safety constraints on pods during operations, such as draining a node for maintenance.
PodDisruptionBudget is an API object that specifies the minimum number or percentage of replicas that must be up at a time. Setting these in projects can be helpful during node maintenance (such as scaling a cluster down or during a cluster upgrade), and they are honored only for voluntary evictions (not for node failures).
A PodDisruptionBudget object's configuration consists of the following key parts:
A label selector, which is a label query over a set of pods.
An availability level, which specifies the minimum number of pods that must be available simultaneously, either:
minAvailable is the number of pods that must always be available, even during a disruption.
maxUnavailable is the number of pods that can be unavailable during a disruption.
Note
Available refers to the number of pods that have the condition Ready=True . Ready=True means that the pod is able to serve requests and should be added to the load balancing pools of all matching services.
A maxUnavailable of 0% or 0 or a minAvailable of 100% or equal to the number of replicas is permitted but can block nodes from being drained.
You can check for pod disruption budgets across all projects with the following:
$ oc get poddisruptionbudget --all-namespaces
Example output
NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m
openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m
openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m
openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m
openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m
openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m
openshift-console console N/A 1 1 116m
#...
The PodDisruptionBudget is considered healthy when there are at least minAvailable pods running in the system. Every pod above that limit can be evicted.
Note
Depending on your pod priority and preemption settings, lower-priority pods might be removed despite their pod disruption budget requirements.
5.11.2. Specifying the number of pods that must be up with pod disruption budgets
You can use a PodDisruptionBudget object to specify the minimum number or percentage of replicas that must be up at a time.
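For example, with 5 replicas and minAvailable: 2 , up to 3 pods can be evicted voluntarily at a time; with 8 replicas and maxUnavailable: 25% , up to 2 pods can be unavailable during a disruption. To inspect the budgets in a single project rather than across all namespaces, a minimal check (the project name is a placeholder):
$ oc get poddisruptionbudget -n <project_name>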
Procedure
To configure a pod disruption budget:
Create a YAML file with an object definition similar to the following:
apiVersion: policy/v1 1
kind: PodDisruptionBudget
metadata:
  name: my-pdb
spec:
  minAvailable: 2 2
  selector: 3
    matchLabels:
      name: my-pod
1 PodDisruptionBudget is part of the policy/v1 API group.
2 The minimum number of pods that must be available simultaneously. This can be either an integer or a string specifying a percentage, for example, 20% .
3 A label query over a set of resources. The results of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example selector {} , to select all pods in the project.
Or:
apiVersion: policy/v1 1
kind: PodDisruptionBudget
metadata:
  name: my-pdb
spec:
  maxUnavailable: 25% 2
  selector: 3
    matchLabels:
      name: my-pod
1 PodDisruptionBudget is part of the policy/v1 API group.
2 The maximum number of pods that can be unavailable simultaneously. This can be either an integer or a string specifying a percentage, for example, 20% .
3 A label query over a set of resources. The results of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example selector {} , to select all pods in the project.
Run the following command to add the object to the project:
$ oc create -f </path/to/file> -n <project_name>
5.12. Rotating or removing cloud provider credentials
After installing OpenShift Container Platform, some organizations require the rotation or removal of the cloud provider credentials that were used during the initial installation.
To allow the cluster to use the new credentials, you must update the secrets that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials.
5.12.1. Rotating cloud provider credentials with the Cloud Credential Operator utility
The Cloud Credential Operator (CCO) utility ccoctl supports updating secrets for clusters installed on IBM Cloud.
5.12.1.1. Rotating API keys for IBM Cloud
You can rotate API keys for your existing service IDs and update the corresponding secrets.
Prerequisites
You have configured the ccoctl binary.
You have existing service IDs in a live OpenShift Container Platform cluster installed on IBM Cloud.
Procedure
Use the ccoctl utility to rotate your API keys for the service IDs and update the secrets:
$ ccoctl ibmcloud refresh-keys \
    --kubeconfig <openshift_kubeconfig_file> \ 1
    --credentials-requests-dir <path_to_credential_requests_directory> \ 2
    --name <name> 3
1 The kubeconfig file associated with the cluster. For example, <installation_directory>/auth/kubeconfig .
2 The directory where the credential requests are stored.
3 The name of the OpenShift Container Platform cluster.
Note
If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter.
5.12.2. Rotating cloud provider credentials manually
If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials.
The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential.
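Before you begin, it can help to confirm which credentials mode the CCO is using. A minimal check, assuming the default cluster-scoped resource name cluster ; an empty value means the CCO selects a mode based on the capabilities of the cloud platform:
$ oc get cloudcredential cluster -o jsonpath='{.spec.credentialsMode}'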
Prerequisites
Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using:
For mint mode, Amazon Web Services (AWS) and Google Cloud Platform (GCP) are supported.
For passthrough mode, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), Red Hat Virtualization (RHV), and VMware vSphere are supported.
You have changed the credentials that are used to interface with your cloud provider.
The new credentials have sufficient permissions for the mode the CCO is configured to use in your cluster.
Procedure
In the Administrator perspective of the web console, navigate to Workloads → Secrets .
In the table on the Secrets page, find the root secret for your cloud provider:
Platform - Secret name
AWS - aws-creds
Azure - azure-credentials
GCP - gcp-credentials
RHOSP - openstack-credentials
RHV - ovirt-credentials
VMware vSphere - vsphere-creds
Click the Options menu in the same row as the secret and select Edit Secret .
Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials.
Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save .
If you are updating the credentials for a vSphere cluster that does not have the vSphere CSI Driver Operator enabled, you must force a rollout of the Kubernetes controller manager to apply the updated credentials.
Note
If the vSphere CSI Driver Operator is enabled, this step is not required.
To apply the updated vSphere credentials, log in to the OpenShift Container Platform CLI as a user with the cluster-admin role and run the following command:
$ oc patch kubecontrollermanager cluster \
    -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date )"'"}}' \
    --type=merge
While the credentials are rolling out, the status of the Kubernetes Controller Manager Operator reports Progressing=true . To view the status, run the following command:
$ oc get co kube-controller-manager
If the CCO for your cluster is configured to use mint mode, delete each component secret that is referenced by the individual CredentialsRequest objects.
Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role.
Get the names and namespaces of all referenced component secrets:
$ oc -n openshift-cloud-credential-operator get CredentialsRequest \
    -o json | jq -r '.items[] | select (.spec.providerSpec.kind=="<provider_spec>") | .spec.secretRef'
where <provider_spec> is the corresponding value for your cloud provider:
AWS: AWSProviderSpec
GCP: GCPProviderSpec
Partial example output for AWS
{ "name": "ebs-cloud-credentials", "namespace": "openshift-cluster-csi-drivers" }
{ "name": "cloud-credential-operator-iam-ro-creds", "namespace": "openshift-cloud-credential-operator" }
Delete each of the referenced component secrets:
$ oc delete secret <secret_name> \ 1
    -n <secret_namespace> 2
1 Specify the name of a secret.
2 Specify the namespace that contains the secret.
Example deletion of an AWS secret
$ oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers
You do not need to manually delete the credentials from your provider console. Deleting the referenced component secrets will cause the CCO to delete the existing credentials from the platform and create new ones.
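If you want to confirm that the CCO has recreated a deleted component secret, you can query for it again after a few minutes. A minimal sketch reusing the AWS example secret from above:
$ oc get secret ebs-cloud-credentials -n openshift-cluster-csi-drivers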
Verification
To verify that the credentials have changed:
In the Administrator perspective of the web console, navigate to Workloads → Secrets .
Verify that the contents of the Value field or fields have changed.
Additional resources
vSphere CSI Driver Operator
5.12.3. Removing cloud provider credentials
After installing an OpenShift Container Platform cluster with the Cloud Credential Operator (CCO) in mint mode, you can remove the administrator-level credential secret from the kube-system namespace in the cluster. The administrator-level credential is required only during changes that require its elevated permissions, such as upgrades.
Note
Prior to a non-z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked.
Prerequisites
Your cluster is installed on a platform that supports removing cloud credentials from the CCO. Supported platforms are AWS and GCP.
Procedure
In the Administrator perspective of the web console, navigate to Workloads → Secrets .
In the table on the Secrets page, find the root secret for your cloud provider:
Platform - Secret name
AWS - aws-creds
GCP - gcp-credentials
Click the Options menu in the same row as the secret and select Delete Secret .
Additional resources
About the Cloud Credential Operator
Amazon Web Services (AWS) secret format
Microsoft Azure secret format
Google Cloud Platform (GCP) secret format
5.13. Configuring image streams for a disconnected cluster
After installing OpenShift Container Platform in a disconnected environment, configure the image streams for the Cluster Samples Operator and the must-gather image stream.
5.13.1. Cluster Samples Operator assistance for mirroring
During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag. The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name> .
During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed . If you choose to change it to Managed , it installs samples.
Note
The use of samples in a network-restricted or disconnected environment may require access to services external to your network. Some example services include: GitHub, Maven Central, npm, RubyGems, PyPI, and others. There might be additional steps to take that allow the Cluster Samples Operator's objects to reach the services they require.
You can use this config map as a reference for which images need to be mirrored for your image streams to import.
While the Cluster Samples Operator is set to Removed , you can create your mirrored registry, or determine which existing mirrored registry you want to use.
Mirror the samples you want to the mirrored registry using the new config map as your guide.
Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object.
Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry.
Then set the Cluster Samples Operator to Managed to install the image streams you have mirrored.
5.13.2. Using Cluster Samples Operator image streams with alternate or mirrored registries
Most image streams in the openshift namespace managed by the Cluster Samples Operator point to images located in the Red Hat registry at registry.redhat.io . Mirroring will not apply to these image streams.
Important
The jenkins , jenkins-agent-maven , and jenkins-agent-nodejs image streams come from the install payload and are managed by the Samples Operator, so no further mirroring procedures are needed for those image streams.
Setting the samplesRegistry field in the Samples Operator configuration file to registry.redhat.io is redundant because it is already directed to registry.redhat.io for everything but Jenkins images and image streams.
Note
The cli , installer , must-gather , and tests image streams, while part of the install payload, are not managed by the Cluster Samples Operator. These are not addressed in this procedure.
Important
The Cluster Samples Operator must be set to Managed in a disconnected environment. To install the image streams, you must have a mirrored registry.
Prerequisites
Access to the cluster as a user with the cluster-admin role.
Create a pull secret for your mirror registry.
Procedure
Access the images of a specific image stream to mirror, for example:
$ oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io
Mirror images from registry.redhat.io associated with any image streams you need in the restricted network environment into one of the defined mirrors, for example:
$ oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest ${MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest
Create the cluster's image configuration object:
$ oc create configmap registry-config --from-file=${MIRROR_ADDR_HOSTNAME}..5000=$path/ca.crt -n openshift-config
Add the required trusted CAs for the mirror in the cluster's image configuration object:
$ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge
Update the samplesRegistry field in the Cluster Samples Operator configuration object to contain the hostname portion of the mirror location defined in the mirror configuration:
$ oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator
Note
This is required because the image stream import process does not use the mirror or search mechanism at this time.
Add any image streams that are not mirrored into the skippedImagestreams field of the Cluster Samples Operator configuration object. Or if you do not want to support any of the sample image streams, set the Cluster Samples Operator to Removed in the Cluster Samples Operator configuration object.
Note
The Cluster Samples Operator issues alerts if image stream imports are failing, but the Cluster Samples Operator is either retrying periodically or does not appear to be retrying them.
Many of the templates in the openshift namespace reference the image streams. So using Removed to purge both the image streams and templates will eliminate the possibility of attempts to use them if they are not functional because of any missing image streams.
5.13.3. Preparing your cluster to gather support data
Clusters using a restricted network must import the default must-gather image to gather debugging data for Red Hat support. The must-gather image is not imported by default, and clusters on a restricted network do not have access to the internet to pull the latest image from a remote repository.
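If you want to see which must-gather image your cluster's release payload references before importing or mirroring it, you can query the release metadata; this is the same command that the example at the end of the following procedure builds on:
$ oc adm release info --image-for must-gather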
Procedure
If you have not added your mirror registry's trusted CA to your cluster's image configuration object as part of the Cluster Samples Operator configuration, perform the following steps:
Create the cluster's image configuration object:
$ oc create configmap registry-config --from-file=${MIRROR_ADDR_HOSTNAME}..5000=$path/ca.crt -n openshift-config
Add the required trusted CAs for the mirror in the cluster's image configuration object:
$ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge
Import the default must-gather image from your installation payload:
$ oc import-image is/must-gather -n openshift
When running the oc adm must-gather command, use the --image flag and point to the payload image, as in the following example:
$ oc adm must-gather --image=$(oc adm release info --image-for must-gather)
5.14. Configuring periodic importing of Cluster Samples Operator image stream tags
You can ensure that you always have access to the latest versions of the Cluster Samples Operator images by periodically importing the image stream tags when new versions become available.
Procedure
Fetch all the image streams in the openshift namespace by running the following command:
$ oc get imagestreams -nopenshift
Fetch the tags for every image stream in the openshift namespace by running the following command:
$ oc get is <image-stream-name> -o jsonpath="{range .spec.tags[*]}{.name}{'\t'}{.from.name}{'\n'}{end}" -nopenshift
For example:
$ oc get is ubi8-openjdk-17 -o jsonpath="{range .spec.tags[*]}{.name}{'\t'}{.from.name}{'\n'}{end}" -nopenshift
Example output
1.11 registry.access.redhat.com/ubi8/openjdk-17:1.11
1.12 registry.access.redhat.com/ubi8/openjdk-17:1.12
Schedule periodic importing of images for each tag present in the image stream by running the following command:
$ oc tag <repository/image> <image-stream-name:tag> --scheduled -nopenshift
For example:
$ oc tag registry.access.redhat.com/ubi8/openjdk-17:1.11 ubi8-openjdk-17:1.11 --scheduled -nopenshift
$ oc tag registry.access.redhat.com/ubi8/openjdk-17:1.12 ubi8-openjdk-17:1.12 --scheduled -nopenshift
This command causes OpenShift Container Platform to periodically update this particular image stream tag. This period is a cluster-wide setting set to 15 minutes by default.
Verify the scheduling status of the periodic import by running the following command:
$ oc get imagestream <image-stream-name> -o jsonpath="{range .spec.tags[*]}Tag: {.name}{'\t'}Scheduled: {.importPolicy.scheduled}{'\n'}{end}" -nopenshift
For example:
$ oc get imagestream ubi8-openjdk-17 -o jsonpath="{range .spec.tags[*]}Tag: {.name}{'\t'}Scheduled: {.importPolicy.scheduled}{'\n'}{end}" -nopenshift
Example output
Tag: 1.11 Scheduled: true
Tag: 1.12 Scheduled: true
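Optionally, to confirm that a scheduled tag imports correctly without waiting for the next interval, you can trigger an immediate import. A minimal example reusing the image stream tag from above:
$ oc import-image ubi8-openjdk-17:1.12 -n openshift
The command reports the image stream status after the import attempt, which also verifies that the source registry is reachable from the cluster.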
[ "oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1", "oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1", "oc get machinesets -n openshift-machine-api", "oc get machine -n openshift-machine-api", "oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine=\"true\"", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2", "oc get machines", "spec: deletePolicy: <delete_policy> replicas: <desired_replica_count>", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false", "oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api 1", "oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"", "oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node", "oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc get nodes -l <key>=<value>", "oc get nodes -l type=user-node", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.23.0", "oc label nodes <name> <key>=<value>", "oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"", "oc get nodes -l <key>=<value>,<key>=<value>", "oc get nodes -l type=user-node,region=east", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.23.0", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: 
<infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc label node <node-name> node-role.kubernetes.io/app=\"\"", "oc label node <node-name> node-role.kubernetes.io/infra=\"\"", "oc get nodes", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: topology.kubernetes.io/region=us-east-1 1", "oc label node <node_name> <label>", "oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=", "cat infra.mcp.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2", "oc create -f infra.mcp.yaml", "oc get machineconfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d 
rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d", "cat infra.mc.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra", "oc create -f infra.mc.yaml", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m", "oc describe nodes <node_name>", "describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved", "tolerations: - effect: NoExecute 1 key: node-role.kubernetes.io/infra 2 operator: Exists 3 value: reserved 4", "oc get ingresscontroller default -n openshift-ingress-operator -o yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default", "oc edit ingresscontroller default -n openshift-ingress-operator", "spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pod -n openshift-ingress -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>", "oc get node <node_name> 1", "NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.23.0", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} 
storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:", "oc edit configs.imageregistry.operator.openshift.io/cluster", "spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pods -o wide -n openshift-image-registry", "oc describe node <node_name>", "oc edit configmap cluster-monitoring-config -n openshift-monitoring", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute grafana: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute", "watch 'oc get pod -n openshift-monitoring -o wide'", "oc delete pod -n openshift-monitoring <pod>", "oc edit ClusterLogging instance", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: 
elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved proxy: resources: null replicas: 1 resources: null type: kibana", "oc get pod kibana-5b8bdf44f9-ccpq9 -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.23.0", "oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml", "kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: ''", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana", "oc get pods", "NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s", "oc get pod kibana-7d85dcffc8-bfpfp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>", "oc get pods", "NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s", "apiVersion: \"autoscaling.openshift.io/v1\" kind: \"ClusterAutoscaler\" metadata: name: \"default\" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: nvidia.com/gpu 7 min: 0 8 max: 16 9 - type: amd.com/gpu min: 0 max: 4 scaleDown: 10 enabled: true 11 delayAfterAdd: 10m 12 delayAfterDelete: 5m 13 delayAfterFailure: 30s 14 unneededTime: 5m 15 utilizationThreshold: \"0.4\" 16", "oc create -f <filename>.yaml 1", "apiVersion: \"autoscaling.openshift.io/v1beta1\" 
kind: \"MachineAutoscaler\" metadata: name: \"worker-us-east-1a\" 1 namespace: \"openshift-machine-api\" spec: minReplicas: 1 2 maxReplicas: 12 3 scaleTargetRef: 4 apiVersion: machine.openshift.io/v1beta1 kind: MachineSet 5 name: worker-us-east-1a 6", "oc create -f <filename>.yaml 1", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false", "oc edit featuregate cluster", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false", "oc edit apiserver", "spec: encryption: type: aescbc 1", "oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: routes.route.openshift.io", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: secrets, configmaps", "oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io", "oc edit apiserver", "spec: encryption: type: identity 1", "oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc debug node/<node_name>", "sh-4.2# chroot /host", "sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup", "found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {\"level\":\"info\",\"ts\":1624647639.0188997,\"caller\":\"snapshot/v3_snapshot.go:119\",\"msg\":\"created temporary db file\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:39.030Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"} {\"level\":\"info\",\"ts\":1624647639.0301006,\"caller\":\"snapshot/v3_snapshot.go:127\",\"msg\":\"fetching snapshot\",\"endpoint\":\"https://10.0.0.5:2379\"} 
{\"level\":\"info\",\"ts\":\"2021-06-25T19:00:40.215Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"} {\"level\":\"info\",\"ts\":1624647640.6032252,\"caller\":\"snapshot/v3_snapshot.go:142\",\"msg\":\"fetched snapshot\",\"endpoint\":\"https://10.0.0.5:2379\",\"size\":\"114 MB\",\"took\":1.584090459} {\"level\":\"info\",\"ts\":1624647640.6047094,\"caller\":\"snapshot/v3_snapshot.go:152\",\"msg\":\"saved\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db\"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {\"hash\":3866667823,\"revision\":31407,\"totalKey\":12828,\"totalSize\":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup", "etcd member has been defragmented: <member_name> , memberID: <member_id>", "failed defrag on member: <member_name> , memberID: <member_id> : <error_message>", "oc -n openshift-etcd get pods -l k8s-app=etcd -o wide", "etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none>", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table", "Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com", "sh-4.4# unset ETCDCTL_ENDPOINTS", "sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag", "Finished defragmenting etcd member[https://localhost:2379]", "sh-4.4# etcdctl endpoint status -w table --cluster", "+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | 
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", "sh-4.4# etcdctl alarm list", "memberID:12345678912345678912 alarm:NOSPACE", "sh-4.4# etcdctl alarm disarm", "sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp", "sudo crictl ps | grep etcd | grep -v operator", "sudo mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp", "sudo crictl ps | grep kube-apiserver | grep -v operator", "sudo mv /var/lib/etcd/ /tmp", "sudo -E /usr/local/bin/cluster-restore.sh /home/core/backup", "...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container kube-controller-manager to stop complete Waiting for container kube-apiserver to stop ..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml", "oc get nodes -w", "NAME STATUS ROLES AGE VERSION host-172-25-75-28 Ready master 3d20h v1.23.3+e419edf host-172-25-75-38 Ready infra,worker 3d20h v1.23.3+e419edf host-172-25-75-40 Ready master 3d20h v1.23.3+e419edf host-172-25-75-65 Ready master 3d20h v1.23.3+e419edf host-172-25-75-74 Ready infra,worker 3d20h v1.23.3+e419edf host-172-25-75-79 Ready worker 3d20h v1.23.3+e419edf host-172-25-75-86 Ready worker 3d20h v1.23.3+e419edf host-172-25-75-98 Ready infra,worker 3d20h v1.23.3+e419edf", "ssh -i <ssh-key-path> core@<master-hostname>", "sh-4.4# pwd /var/lib/kubelet/pki sh-4.4# ls kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem kubelet-client-current.pem kubelet-server-current.pem", "sudo systemctl restart kubelet.service", "oc get csr", "NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 2 csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 3 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 4", "oc describe csr <csr_name> 1", "oc adm certificate approve <csr_name>", "oc adm certificate approve <csr_name>", "sudo crictl ps | grep etcd | egrep -v \"operator|etcd-guard\"", "3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0", "oc -n openshift-etcd get pods -l k8s-app=etcd", "NAME READY STATUS RESTARTS AGE etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s", "sudo rm -f /var/lib/ovn/etc/*.db", "oc delete pods -l app=ovnkube-master -n openshift-ovn-kubernetes", "oc get pods -l app=ovnkube-master -n openshift-ovn-kubernetes", "NAME READY STATUS RESTARTS AGE ovnkube-master-nb24h 4/4 Running 0 48s", "oc get pods -n 
openshift-ovn-kubernetes -o name | grep ovnkube-node | while read p ; do oc delete USDp -n openshift-ovn-kubernetes ; done", "oc get pods -n openshift-ovn-kubernetes | grep ovnkube-node", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc get machine clustername-8qw5l-master-0 \\ 1 -n openshift-machine-api -o yaml > new-master-machine.yaml", "status: addresses: - address: 10.0.131.183 type: InternalIP - address: ip-10-0-131-183.ec2.internal type: InternalDNS - address: ip-10-0-131-183.ec2.internal type: Hostname lastUpdated: \"2020-04-20T17:44:29Z\" nodeRef: kind: Node name: ip-10-0-131-183.ec2.internal uid: acca4411-af0d-4387-b73e-52b2484295ad phase: Running providerStatus: apiVersion: awsproviderconfig.openshift.io/v1beta1 conditions: - lastProbeTime: \"2020-04-20T16:53:50Z\" lastTransitionTime: \"2020-04-20T16:53:50Z\" message: machine successfully created reason: MachineCreationSucceeded status: \"True\" type: MachineCreation instanceId: i-0fdb85790d76d0c3f instanceState: stopped kind: AWSMachineProviderStatus", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: name: clustername-8qw5l-master-3", "providerID: aws:///us-east-1a/i-0fdb85790d76d0c3f", "annotations: machine.openshift.io/instance-state: running generation: 2", "resourceVersion: \"13291\" uid: a282eb70-40a2-4e89-8009-d05dd420d31a", "oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc apply -f new-master-machine.yaml", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal 
aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'", "export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig", "oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1", "oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubeapiserver cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubescheduler cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc -n openshift-etcd get pods -l k8s-app=etcd", "etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h", "export KUBECONFIG=<installation_directory>/auth/kubeconfig", "oc whoami", "oc get poddisruptionbudget --all-namespaces", "NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #", "apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod", "apiVersion: policy/v1 1 kind: 
PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod", "oc create -f </path/to/file> -n <project_name>", "ccoctl ibmcloud refresh-keys --kubeconfig <openshift_kubeconfig_file> \\ 1 --credentials-requests-dir <path_to_credential_requests_directory> \\ 2 --name <name> 3", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date )\"'\"}}' --type=merge", "oc get co kube-controller-manager", "oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec.providerSpec.kind==\"<provider_spec>\") | .spec.secretRef'", "{ \"name\": \"ebs-cloud-credentials\", \"namespace\": \"openshift-cluster-csi-drivers\" } { \"name\": \"cloud-credential-operator-iam-ro-creds\", \"namespace\": \"openshift-cloud-credential-operator\" }", "oc delete secret <secret_name> \\ 1 -n <secret_namespace> 2", "oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers", "oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io", "oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest", "oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge", "oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator", "oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge", "oc import-image is/must-gather -n openshift", "oc adm must-gather --image=USD(oc adm release info --image-for must-gather)", "get imagestreams -nopenshift", "oc get is <image-stream-name> -o jsonpath=\"{range .spec.tags[*]}{.name}{'\\t'}{.from.name}{'\\n'}{end}\" -nopenshift", "oc get is ubi8-openjdk-17 -o jsonpath=\"{range .spec.tags[*]}{.name}{'\\t'}{.from.name}{'\\n'}{end}\" -nopenshift", "1.11 registry.access.redhat.com/ubi8/openjdk-17:1.11 1.12 registry.access.redhat.com/ubi8/openjdk-17:1.12", "oc tag <repository/image> <image-stream-name:tag> --scheduled -nopenshift", "oc tag registry.access.redhat.com/ubi8/openjdk-17:1.11 ubi8-openjdk-17:1.11 --scheduled -nopenshift oc tag registry.access.redhat.com/ubi8/openjdk-17:1.12 ubi8-openjdk-17:1.12 --scheduled -nopenshift", "get imagestream <image-stream-name> -o jsonpath=\"{range .spec.tags[*]}Tag: {.name}{'\\t'}Scheduled: {.importPolicy.scheduled}{'\\n'}{end}\" -nopenshift", "get imagestream ubi8-openjdk-17 -o jsonpath=\"{range .spec.tags[*]}Tag: {.name}{'\\t'}Scheduled: {.importPolicy.scheduled}{'\\n'}{end}\" -nopenshift", "Tag: 1.11 Scheduled: true Tag: 1.12 Scheduled: true" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/post-installation_configuration/post-install-cluster-tasks
Chapter 77. user
Chapter 77. user This chapter describes the commands under the user command. 77.1. user create Create new user Usage: Table 77.1. Positional arguments Value Summary <name> New user name Table 77.2. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Default domain (name or id) --project <project> Default project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --password <password> Set user password --password-prompt Prompt interactively for password --email <email-address> Set user email address --description <description> User description --ignore-lockout-failure-attempts Opt into ignoring the number of times a user has authenticated and locking out the user as a result --no-ignore-lockout-failure-attempts Opt out of ignoring the number of times a user has authenticated and locking out the user as a result --ignore-password-expiry Opt into allowing user to continue using passwords that may be expired --no-ignore-password-expiry Opt out of allowing user to continue using passwords that may be expired --ignore-change-password-upon-first-use Control if a user should be forced to change their password immediately after they log into keystone for the first time. Opt into ignoring the user to change their password during first time login in keystone --no-ignore-change-password-upon-first-use Control if a user should be forced to change their password immediately after they log into keystone for the first time. Opt out of ignoring the user to change their password during first time login in keystone --enable-lock-password Disables the ability for a user to change its password through self-service APIs --disable-lock-password Enables the ability for a user to change its password through self-service APIs --enable-multi-factor-auth Enables the mfa (multi factor auth) --disable-multi-factor-auth Disables the mfa (multi factor auth) --multi-factor-auth-rule <rule> Set multi-factor auth rules. for example, to set a rule requiring the "password" and "totp" auth methods to be provided, use: "--multi-factor-auth-rule password,totp". May be provided multiple times to set different rule combinations. --enable Enable user (default) --disable Disable user --or-show Return existing user Table 77.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 77.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 77.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.2. user delete Delete user(s) Usage: Table 77.7. Positional arguments Value Summary <user> User(s) to delete (name or id) Table 77.8. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain owning <user> (name or id) 77.3. user list List users Usage: Table 77.9. 
Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Filter users by <domain> (name or id) --group <group> Filter users by <group> membership (name or id) --project <project> Filter users by <project> (name or id) --long List additional fields in output Table 77.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 77.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 77.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.4. user password set Change current user password Usage: Table 77.14. Command arguments Value Summary -h, --help Show this help message and exit --password <new-password> New user password --original-password <original-password> Original user password 77.5. user set Set user properties Usage: Table 77.15. Positional arguments Value Summary <user> User to modify (name or id) Table 77.16. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set user name --domain <domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. --project <project> Set default project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --password <password> Set user password --password-prompt Prompt interactively for password --email <email-address> Set user email address --description <description> Set user description --ignore-lockout-failure-attempts Opt into ignoring the number of times a user has authenticated and locking out the user as a result --no-ignore-lockout-failure-attempts Opt out of ignoring the number of times a user has authenticated and locking out the user as a result --ignore-password-expiry Opt into allowing user to continue using passwords that may be expired --no-ignore-password-expiry Opt out of allowing user to continue using passwords that may be expired --ignore-change-password-upon-first-use Control if a user should be forced to change their password immediately after they log into keystone for the first time. Opt into ignoring the user to change their password during first time login in keystone --no-ignore-change-password-upon-first-use Control if a user should be forced to change their password immediately after they log into keystone for the first time. 
Opt out of ignoring the user to change their password during first time login in keystone --enable-lock-password Disables the ability for a user to change its password through self-service APIs --disable-lock-password Enables the ability for a user to change its password through self-service APIs --enable-multi-factor-auth Enables the mfa (multi factor auth) --disable-multi-factor-auth Disables the mfa (multi factor auth) --multi-factor-auth-rule <rule> Set multi-factor auth rules. for example, to set a rule requiring the "password" and "totp" auth methods to be provided, use: "--multi-factor-auth-rule password,totp". May be provided multiple times to set different rule combinations. --enable Enable user (default) --disable Disable user 77.6. user show Display user details Usage: Table 77.17. Positional arguments Value Summary <user> User to display (name or id) Table 77.18. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain owning <user> (name or id) Table 77.19. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 77.20. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.21. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 77.22. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
[ "openstack user create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--domain <domain>] [--project <project>] [--project-domain <project-domain>] [--password <password>] [--password-prompt] [--email <email-address>] [--description <description>] [--ignore-lockout-failure-attempts] [--no-ignore-lockout-failure-attempts] [--ignore-password-expiry] [--no-ignore-password-expiry] [--ignore-change-password-upon-first-use] [--no-ignore-change-password-upon-first-use] [--enable-lock-password] [--disable-lock-password] [--enable-multi-factor-auth] [--disable-multi-factor-auth] [--multi-factor-auth-rule <rule>] [--enable | --disable] [--or-show] <name>", "openstack user delete [-h] [--domain <domain>] <user> [<user> ...]", "openstack user list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--domain <domain>] [--group <group> | --project <project>] [--long]", "openstack user password set [-h] [--password <new-password>] [--original-password <original-password>]", "openstack user set [-h] [--name <name>] [--domain <domain>] [--project <project>] [--project-domain <project-domain>] [--password <password>] [--password-prompt] [--email <email-address>] [--description <description>] [--ignore-lockout-failure-attempts] [--no-ignore-lockout-failure-attempts] [--ignore-password-expiry] [--no-ignore-password-expiry] [--ignore-change-password-upon-first-use] [--no-ignore-change-password-upon-first-use] [--enable-lock-password] [--disable-lock-password] [--enable-multi-factor-auth] [--disable-multi-factor-auth] [--multi-factor-auth-rule <rule>] [--enable | --disable] <user>", "openstack user show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--domain <domain>] <user>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/user
B.3. LVM Profiles
B.3. LVM Profiles An LVM profile is a set of selected customizable configuration settings that can be used to achieve certain characteristics in various environments or uses. Normally, the name of the profile should reflect that environment or use. An LVM profile overrides existing configuration. There are two groups of LVM profiles that LVM recognizes: command profiles and metadata profiles. A command profile is used to override selected configuration settings at the global LVM command level. The profile is applied at the beginning of LVM command execution and remains in effect for the duration of the command. You apply a command profile by specifying the --commandprofile ProfileName option when executing an LVM command. A metadata profile is used to override selected configuration settings at the volume group/logical volume level. It is applied independently for each volume group/logical volume that is being processed. As such, each volume group/logical volume can store the profile name used in its metadata so that the next time the volume group/logical volume is processed, the profile is applied automatically. If the volume group and any of its logical volumes have different profiles defined, the profile defined for the logical volume is preferred. You can attach a metadata profile to a volume group or logical volume by specifying the --metadataprofile ProfileName option when you create the volume group or logical volume with the vgcreate or lvcreate command. You can attach a metadata profile to, or detach it from, an existing volume group or logical volume by specifying the --metadataprofile ProfileName or the --detachprofile option of the lvchange or vgchange command. You can specify the -o vg_profile and -o lv_profile output options of the vgs and lvs commands to display the metadata profile currently attached to a volume group or a logical volume. The set of options allowed for command profiles and the set of options allowed for metadata profiles are mutually exclusive. The settings that belong to either of these two sets cannot be mixed together and the LVM tools will reject such profiles. LVM provides a few predefined configuration profiles. The LVM profiles are stored in the /etc/lvm/profile directory by default. This location can be changed by using the profile_dir setting in the /etc/lvm/lvm.conf file. Each profile configuration is stored in a ProfileName.profile file in the profile directory. When referencing the profile in an LVM command, the .profile suffix is omitted. You can create additional profiles with different values. For this purpose, LVM provides the command_profile_template.profile file (for command profiles) and the metadata_profile_template.profile file (for metadata profiles) which contain all settings that are customizable by profiles of each type. You can copy these template profiles and edit them as needed. Alternatively, you can use the lvmconfig command to generate a new profile for a given section of the profile file for either profile type. The following command creates a new command profile named ProfileName.profile consisting of the settings in section. The following command creates a new metadata profile named ProfileName.profile consisting of the settings in section. If the section is not specified, all settings that can be customized by a profile are reported.
[ "lvmconfig --file ProfileName .profile --type profilable-command section", "lvmconfig --file ProfileName .profile --type profilable-metadata section" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/lvm_profiles
Chapter 17. Route [route.openshift.io/v1]
Chapter 17. Route [route.openshift.io/v1] Description A route allows developers to expose services through an HTTP(S) aware load balancing and proxy layer via a public DNS entry. The route may further specify TLS options and a certificate, or specify a public CNAME that the router should also accept for HTTP and HTTPS traffic. An administrator typically configures their router to be visible outside the cluster firewall, and may also add additional security, caching, or traffic controls on the service content. Routers usually talk directly to the service endpoints. Once a route is created, the host field may not be changed. Generally, routers use the oldest route with a given host when resolving conflicts. Routers are subject to additional customization and may support additional controls via the annotations field. Because administrators may configure multiple routers, the route status field is used to return information to clients about the names and states of the route under each router. If a client chooses a duplicate name, for instance, the route status conditions are used to indicate the route cannot be chosen. To enable HTTP/2 ALPN on a route it requires a custom (non-wildcard) certificate. This prevents connection coalescing by clients, notably web browsers. We do not support HTTP/2 ALPN on routes that use the default certificate because of the risk of connection re-use/coalescing. Routes that do not have their own custom certificate will not be HTTP/2 ALPN-enabled on either the frontend or the backend. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 17.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object RouteSpec describes the hostname or path the route exposes, any security information, and one to four backends (services) the route points to. Requests are distributed among the backends depending on the weights assigned to each backend. When using roundrobin scheduling the portion of requests that go to each backend is the backend weight divided by the sum of all of the backend weights. When the backend has more than one endpoint the requests that end up on the backend are roundrobin distributed among the endpoints. Weights are between 0 and 256 with default 100. Weight 0 causes no requests to the backend. If all weights are zero the route will be considered to have no backends and return a standard 503 response. The tls field is optional and allows specific certificates or behavior for the route. 
Routers typically configure a default certificate on a wildcard domain to terminate routes without explicit certificates, but custom hostnames usually must choose passthrough (send traffic directly to the backend via the TLS Server-Name- Indication field) or provide a certificate. status object RouteStatus provides relevant info about the status of a route, including which routers acknowledge it. 17.1.1. .spec Description RouteSpec describes the hostname or path the route exposes, any security information, and one to four backends (services) the route points to. Requests are distributed among the backends depending on the weights assigned to each backend. When using roundrobin scheduling the portion of requests that go to each backend is the backend weight divided by the sum of all of the backend weights. When the backend has more than one endpoint the requests that end up on the backend are roundrobin distributed among the endpoints. Weights are between 0 and 256 with default 100. Weight 0 causes no requests to the backend. If all weights are zero the route will be considered to have no backends and return a standard 503 response. The tls field is optional and allows specific certificates or behavior for the route. Routers typically configure a default certificate on a wildcard domain to terminate routes without explicit certificates, but custom hostnames usually must choose passthrough (send traffic directly to the backend via the TLS Server-Name- Indication field) or provide a certificate. Type object Required to Property Type Description alternateBackends array alternateBackends allows up to 3 additional backends to be assigned to the route. Only the Service kind is allowed, and it will be defaulted to Service. Use the weight field in RouteTargetReference object to specify relative preference. alternateBackends[] object RouteTargetReference specifies the target that resolve into endpoints. Only the 'Service' kind is allowed. Use 'weight' field to emphasize one over others. host string host is an alias/DNS that points to the service. Optional. If not specified a route name will typically be automatically chosen. Must follow DNS952 subdomain conventions. httpHeaders object RouteHTTPHeaders defines policy for HTTP headers. path string path that the router watches for, to route traffic for to the service. Optional port object RoutePort defines a port mapping from a router to an endpoint in the service endpoints. subdomain string subdomain is a DNS subdomain that is requested within the ingress controller's domain (as a subdomain). If host is set this field is ignored. An ingress controller may choose to ignore this suggested name, in which case the controller will report the assigned name in the status.ingress array or refuse to admit the route. If this value is set and the server does not support this field host will be populated automatically. Otherwise host is left empty. The field may have multiple parts separated by a dot, but not all ingress controllers may honor the request. This field may not be changed after creation except by a user with the update routes/custom-host permission. Example: subdomain frontend automatically receives the router subdomain apps.mycluster.com to have a full hostname frontend.apps.mycluster.com . tls object TLSConfig defines config used to secure a route and provide termination to object RouteTargetReference specifies the target that resolve into endpoints. Only the 'Service' kind is allowed. Use 'weight' field to emphasize one over others. 
wildcardPolicy string Wildcard policy if any for the route. Currently only 'Subdomain' or 'None' is allowed. 17.1.2. .spec.alternateBackends Description alternateBackends allows up to 3 additional backends to be assigned to the route. Only the Service kind is allowed, and it will be defaulted to Service. Use the weight field in RouteTargetReference object to specify relative preference. Type array 17.1.3. .spec.alternateBackends[] Description RouteTargetReference specifies the target that resolve into endpoints. Only the 'Service' kind is allowed. Use 'weight' field to emphasize one over others. Type object Required kind name Property Type Description kind string The kind of target that the route is referring to. Currently, only 'Service' is allowed name string name of the service/target that is being referred to. e.g. name of the service weight integer weight as an integer between 0 and 256, default 100, that specifies the target's relative weight against other target reference objects. 0 suppresses requests to this backend. 17.1.4. .spec.httpHeaders Description RouteHTTPHeaders defines policy for HTTP headers. Type object Property Type Description actions object RouteHTTPHeaderActions defines configuration for actions on HTTP request and response headers. 17.1.5. .spec.httpHeaders.actions Description RouteHTTPHeaderActions defines configuration for actions on HTTP request and response headers. Type object Property Type Description request array request is a list of HTTP request headers to modify. Currently, actions may define to either Set or Delete headers values. Actions defined here will modify the request headers of all requests made through a route. These actions are applied to a specific Route defined within a cluster i.e. connections made through a route. Currently, actions may define to either Set or Delete headers values. Route actions will be executed after IngressController actions for request headers. Actions are applied in sequence as defined in this list. A maximum of 20 request header actions may be configured. You can use this field to specify HTTP request headers that should be set or deleted when forwarding connections from the client to your application. Sample fetchers allowed are "req.hdr" and "ssl_c_der". Converters allowed are "lower" and "base64". Example header values: "%[req.hdr(X-target),lower]", "%{+Q}[ssl_c_der,base64]". Any request header configuration applied directly via a Route resource using this API will override header configuration for a header of the same name applied via spec.httpHeaders.actions on the IngressController or route annotation. Note: This field cannot be used if your route uses TLS passthrough. request[] object RouteHTTPHeader specifies configuration for setting or deleting an HTTP header. response array response is a list of HTTP response headers to modify. Currently, actions may define to either Set or Delete headers values. Actions defined here will modify the response headers of all requests made through a route. These actions are applied to a specific Route defined within a cluster i.e. connections made through a route. Route actions will be executed before IngressController actions for response headers. Actions are applied in sequence as defined in this list. A maximum of 20 response header actions may be configured. You can use this field to specify HTTP response headers that should be set or deleted when forwarding responses from your application to the client. Sample fetchers allowed are "res.hdr" and "ssl_c_der". 
Converters allowed are "lower" and "base64". Example header values: "%[res.hdr(X-target),lower]", "%{+Q}[ssl_c_der,base64]". Note: This field cannot be used if your route uses TLS passthrough. response[] object RouteHTTPHeader specifies configuration for setting or deleting an HTTP header. 17.1.6. .spec.httpHeaders.actions.request Description request is a list of HTTP request headers to modify. Currently, actions may define to either Set or Delete headers values. Actions defined here will modify the request headers of all requests made through a route. These actions are applied to a specific Route defined within a cluster i.e. connections made through a route. Currently, actions may define to either Set or Delete headers values. Route actions will be executed after IngressController actions for request headers. Actions are applied in sequence as defined in this list. A maximum of 20 request header actions may be configured. You can use this field to specify HTTP request headers that should be set or deleted when forwarding connections from the client to your application. Sample fetchers allowed are "req.hdr" and "ssl_c_der". Converters allowed are "lower" and "base64". Example header values: "%[req.hdr(X-target),lower]", "%{+Q}[ssl_c_der,base64]". Any request header configuration applied directly via a Route resource using this API will override header configuration for a header of the same name applied via spec.httpHeaders.actions on the IngressController or route annotation. Note: This field cannot be used if your route uses TLS passthrough. Type array 17.1.7. .spec.httpHeaders.actions.request[] Description RouteHTTPHeader specifies configuration for setting or deleting an HTTP header. Type object Required name action Property Type Description action object RouteHTTPHeaderActionUnion specifies an action to take on an HTTP header. name string name specifies the name of a header on which to perform an action. Its value must be a valid HTTP header name as defined in RFC 2616 section 4.2. The name must consist only of alphanumeric and the following special characters, "-!#USD%&'*+.^_`". The following header names are reserved and may not be modified via this API: Strict-Transport-Security, Proxy, Cookie, Set-Cookie. It must be no more than 255 characters in length. Header name must be unique. 17.1.8. .spec.httpHeaders.actions.request[].action Description RouteHTTPHeaderActionUnion specifies an action to take on an HTTP header. Type object Required type Property Type Description set object RouteSetHTTPHeader specifies what value needs to be set on an HTTP header. type string type defines the type of the action to be applied on the header. Possible values are Set or Delete. Set allows you to set HTTP request and response headers. Delete allows you to delete HTTP request and response headers. 17.1.9. .spec.httpHeaders.actions.request[].action.set Description RouteSetHTTPHeader specifies what value needs to be set on an HTTP header. Type object Required value Property Type Description value string value specifies a header value. Dynamic values can be added. The value will be interpreted as an HAProxy format string as defined in http://cbonte.github.io/haproxy-dconv/2.6/configuration.html#8.2.6 and may use HAProxy's %[] syntax and otherwise must be a valid HTTP header value as defined in https://datatracker.ietf.org/doc/html/rfc7230#section-3.2 . The value of this field must be no more than 16384 characters in length. 
Note that the total size of all net added headers after interpolating dynamic values must not exceed the value of spec.tuningOptions.headerBufferMaxRewriteBytes on the IngressController. 17.1.10. .spec.httpHeaders.actions.response Description response is a list of HTTP response headers to modify. Currently, actions may define to either Set or Delete headers values. Actions defined here will modify the response headers of all requests made through a route. These actions are applied to a specific Route defined within a cluster i.e. connections made through a route. Route actions will be executed before IngressController actions for response headers. Actions are applied in sequence as defined in this list. A maximum of 20 response header actions may be configured. You can use this field to specify HTTP response headers that should be set or deleted when forwarding responses from your application to the client. Sample fetchers allowed are "res.hdr" and "ssl_c_der". Converters allowed are "lower" and "base64". Example header values: "%[res.hdr(X-target),lower]", "%{+Q}[ssl_c_der,base64]". Note: This field cannot be used if your route uses TLS passthrough. Type array 17.1.11. .spec.httpHeaders.actions.response[] Description RouteHTTPHeader specifies configuration for setting or deleting an HTTP header. Type object Required name action Property Type Description action object RouteHTTPHeaderActionUnion specifies an action to take on an HTTP header. name string name specifies the name of a header on which to perform an action. Its value must be a valid HTTP header name as defined in RFC 2616 section 4.2. The name must consist only of alphanumeric and the following special characters, "-!#USD%&'*+.^_`". The following header names are reserved and may not be modified via this API: Strict-Transport-Security, Proxy, Cookie, Set-Cookie. It must be no more than 255 characters in length. Header name must be unique. 17.1.12. .spec.httpHeaders.actions.response[].action Description RouteHTTPHeaderActionUnion specifies an action to take on an HTTP header. Type object Required type Property Type Description set object RouteSetHTTPHeader specifies what value needs to be set on an HTTP header. type string type defines the type of the action to be applied on the header. Possible values are Set or Delete. Set allows you to set HTTP request and response headers. Delete allows you to delete HTTP request and response headers. 17.1.13. .spec.httpHeaders.actions.response[].action.set Description RouteSetHTTPHeader specifies what value needs to be set on an HTTP header. Type object Required value Property Type Description value string value specifies a header value. Dynamic values can be added. The value will be interpreted as an HAProxy format string as defined in http://cbonte.github.io/haproxy-dconv/2.6/configuration.html#8.2.6 and may use HAProxy's %[] syntax and otherwise must be a valid HTTP header value as defined in https://datatracker.ietf.org/doc/html/rfc7230#section-3.2 . The value of this field must be no more than 16384 characters in length. Note that the total size of all net added headers after interpolating dynamic values must not exceed the value of spec.tuningOptions.headerBufferMaxRewriteBytes on the IngressController. 17.1.14. .spec.port Description RoutePort defines a port mapping from a router to an endpoint in the service endpoints. Type object Required targetPort Property Type Description targetPort IntOrString The target port on pods selected by the service this route points to. 
If this is a string, it will be looked up as a named port in the target endpoints port list. Required 17.1.15. .spec.tls Description TLSConfig defines config used to secure a route and provide termination Type object Required termination Property Type Description caCertificate string caCertificate provides the cert authority certificate contents certificate string certificate provides certificate contents. This should be a single serving certificate, not a certificate chain. Do not include a CA certificate. destinationCACertificate string destinationCACertificate provides the contents of the ca certificate of the final destination. When using reencrypt termination this file should be provided in order to have routers use it for health checks on the secure connection. If this field is not specified, the router may provide its own destination CA and perform hostname validation using the short service name (service.namespace.svc), which allows infrastructure generated certificates to automatically verify. externalCertificate object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. insecureEdgeTerminationPolicy string insecureEdgeTerminationPolicy indicates the desired behavior for insecure connections to a route. While each router may make its own decisions on which ports to expose, this is normally port 80. * Allow - traffic is sent to the server on the insecure port (edge/reencrypt terminations only) (default). * None - no traffic is allowed on the insecure port. * Redirect - clients are redirected to the secure port. key string key provides key file contents termination string termination indicates termination type. * edge - TLS termination is done by the router and http is used to communicate with the backend (default) * passthrough - Traffic is sent straight to the destination without the router providing TLS termination * reencrypt - TLS termination is done by the router and https is used to communicate with the backend Note: passthrough termination is incompatible with httpHeader actions 17.1.16. .spec.tls.externalCertificate Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 17.1.17. .spec.to Description RouteTargetReference specifies the target that resolve into endpoints. Only the 'Service' kind is allowed. Use 'weight' field to emphasize one over others. Type object Required kind name Property Type Description kind string The kind of target that the route is referring to. Currently, only 'Service' is allowed name string name of the service/target that is being referred to. e.g. name of the service weight integer weight as an integer between 0 and 256, default 100, that specifies the target's relative weight against other target reference objects. 0 suppresses requests to this backend. 17.1.18. .status Description RouteStatus provides relevant info about the status of a route, including which routers acknowledge it. Type object Property Type Description ingress array ingress describes the places where the route may be exposed. The list of ingress points may contain duplicate Host or RouterName values. Routes are considered live once they are Ready ingress[] object RouteIngress holds information about the places where a route is exposed. 17.1.19. 
.status.ingress Description ingress describes the places where the route may be exposed. The list of ingress points may contain duplicate Host or RouterName values. Routes are considered live once they are Ready Type array 17.1.20. .status.ingress[] Description RouteIngress holds information about the places where a route is exposed. Type object Property Type Description conditions array Conditions is the state of the route, may be empty. conditions[] object RouteIngressCondition contains details for the current condition of this route on a particular router. host string Host is the host string under which the route is exposed; this value is required routerCanonicalHostname string CanonicalHostname is the external host name for the router that can be used as a CNAME for the host requested for this route. This value is optional and may not be set in all cases. routerName string Name is a name chosen by the router to identify itself; this value is required wildcardPolicy string Wildcard policy is the wildcard policy that was allowed where this route is exposed. 17.1.21. .status.ingress[].conditions Description Conditions is the state of the route, may be empty. Type array 17.1.22. .status.ingress[].conditions[] Description RouteIngressCondition contains details for the current condition of this route on a particular router. Type object Required type status Property Type Description lastTransitionTime Time RFC 3339 date and time when this condition last transitioned message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition, and is usually a machine and human readable constant status string Status is the status of the condition. Can be True, False, Unknown. type string Type is the type of the condition. Currently only Admitted. 17.2. API endpoints The following API endpoints are available: /apis/route.openshift.io/v1/routes GET : list or watch objects of kind Route /apis/route.openshift.io/v1/watch/routes GET : watch individual changes to a list of Route. deprecated: use the 'watch' parameter with a list operation instead. /apis/route.openshift.io/v1/namespaces/{namespace}/routes DELETE : delete collection of Route GET : list or watch objects of kind Route POST : create a Route /apis/route.openshift.io/v1/watch/namespaces/{namespace}/routes GET : watch individual changes to a list of Route. deprecated: use the 'watch' parameter with a list operation instead. /apis/route.openshift.io/v1/namespaces/{namespace}/routes/{name} DELETE : delete a Route GET : read the specified Route PATCH : partially update the specified Route PUT : replace the specified Route /apis/route.openshift.io/v1/watch/namespaces/{namespace}/routes/{name} GET : watch changes to an object of kind Route. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/route.openshift.io/v1/namespaces/{namespace}/routes/{name}/status GET : read status of the specified Route PATCH : partially update status of the specified Route PUT : replace status of the specified Route 17.2.1. /apis/route.openshift.io/v1/routes Table 17.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind Route Table 17.2. HTTP responses HTTP code Reponse body 200 - OK RouteList schema 401 - Unauthorized Empty 17.2.2. /apis/route.openshift.io/v1/watch/routes Table 17.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Route. deprecated: use the 'watch' parameter with a list operation instead. Table 17.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 17.2.3. /apis/route.openshift.io/v1/namespaces/{namespace}/routes Table 17.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 17.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Route Table 17.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. 
Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. 
When the sendInitialEvents option is set, we require the resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion", and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read", and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 17.8. Body parameters Parameter Type Description body DeleteOptions schema Table 17.9. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Route Table 17.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results.
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, we require the resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion", and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read", and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 17.11. HTTP responses HTTP code Response body 200 - OK RouteList schema 401 - Unauthorized Empty HTTP method POST Description create a Route Table 17.12.
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.13. Body parameters Parameter Type Description body Route schema Table 17.14. HTTP responses HTTP code Reponse body 200 - OK Route schema 201 - Created Route schema 202 - Accepted Route schema 401 - Unauthorized Empty 17.2.4. /apis/route.openshift.io/v1/watch/namespaces/{namespace}/routes Table 17.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 17.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, we require the resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion", and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read", and the bookmark event is sent when the state is synced at least to the moment when the request started being processed.
- resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Route. deprecated: use the 'watch' parameter with a list operation instead. Table 17.17. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 17.2.5. /apis/route.openshift.io/v1/namespaces/{namespace}/routes/{name} Table 17.18. Global path parameters Parameter Type Description name string name of the Route namespace string object name and auth scope, such as for teams and projects Table 17.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Route Table 17.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. Zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 17.21. Body parameters Parameter Type Description body DeleteOptions schema Table 17.22. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Route Table 17.23. HTTP responses HTTP code Response body 200 - OK Route schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Route Table 17.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes.
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 17.25. Body parameters Parameter Type Description body Patch schema Table 17.26. HTTP responses HTTP code Reponse body 200 - OK Route schema 201 - Created Route schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Route Table 17.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.28. Body parameters Parameter Type Description body Route schema Table 17.29. HTTP responses HTTP code Reponse body 200 - OK Route schema 201 - Created Route schema 401 - Unauthorized Empty 17.2.6. /apis/route.openshift.io/v1/watch/namespaces/{namespace}/routes/{name} Table 17.30. 
Global path parameters Parameter Type Description name string name of the Route namespace string object name and auth scope, such as for teams and projects Table 17.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, we require the resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion", and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read", and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Route. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 17.32. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 17.2.7. /apis/route.openshift.io/v1/namespaces/{namespace}/routes/{name}/status Table 17.33. Global path parameters Parameter Type Description name string name of the Route namespace string object name and auth scope, such as for teams and projects Table 17.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Route Table 17.35. HTTP responses HTTP code Response body 200 - OK Route schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Route Table 17.36. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes.
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 17.37. Body parameters Parameter Type Description body Patch schema Table 17.38. HTTP responses HTTP code Reponse body 200 - OK Route schema 201 - Created Route schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Route Table 17.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.40. Body parameters Parameter Type Description body Route schema Table 17.41. HTTP responses HTTP code Reponse body 200 - OK Route schema 201 - Created Route schema 401 - Unauthorized Empty
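The following curl sketch shows how the list, pagination, and watch parameters described above fit together for the routes endpoints. It is an illustration only: the API server URL, the demo namespace, the bearer token, and the <token> and <rv> placeholders are assumptions, not values taken from this reference.

# List the first two Routes in the namespace (limit/continue pagination).
curl -H "Authorization: Bearer $TOKEN" \
  "https://api.cluster.example.com:6443/apis/route.openshift.io/v1/namespaces/demo/routes?limit=2"

# Request the next chunk by passing back the continue token from metadata.continue of the previous response.
curl -H "Authorization: Bearer $TOKEN" \
  "https://api.cluster.example.com:6443/apis/route.openshift.io/v1/namespaces/demo/routes?limit=2&continue=<token>"

# Stream add, update, and remove notifications starting from a known resourceVersion.
curl -H "Authorization: Bearer $TOKEN" \
  "https://api.cluster.example.com:6443/apis/route.openshift.io/v1/namespaces/demo/routes?watch=true&resourceVersion=<rv>"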
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/network_apis/route-route-openshift-io-v1
Chapter 6. UserOAuthAccessToken [oauth.openshift.io/v1]
Chapter 6. UserOAuthAccessToken [oauth.openshift.io/v1] Description UserOAuthAccessToken is a virtual resource to mirror OAuthAccessTokens to the user the access token was issued for Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources authorizeToken string AuthorizeToken contains the token that authorized this token clientName string ClientName references the client that created this token. expiresIn integer ExpiresIn is the seconds from CreationTime before this token expires. inactivityTimeoutSeconds integer InactivityTimeoutSeconds is the value in seconds, from the CreationTimestamp, after which this token can no longer be used. The value is automatically incremented when the token is used. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata redirectURI string RedirectURI is the redirection associated with the token. refreshToken string RefreshToken is the value by which this token can be renewed. Can be blank. scopes array (string) Scopes is an array of the requested scopes. userName string UserName is the user name associated with this token userUID string UserUID is the unique UID associated with this token 6.2. API endpoints The following API endpoints are available: /apis/oauth.openshift.io/v1/useroauthaccesstokens GET : list or watch objects of kind UserOAuthAccessToken /apis/oauth.openshift.io/v1/watch/useroauthaccesstokens GET : watch individual changes to a list of UserOAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead. /apis/oauth.openshift.io/v1/useroauthaccesstokens/{name} DELETE : delete an UserOAuthAccessToken GET : read the specified UserOAuthAccessToken /apis/oauth.openshift.io/v1/watch/useroauthaccesstokens/{name} GET : watch changes to an object of kind UserOAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 6.2.1. /apis/oauth.openshift.io/v1/useroauthaccesstokens HTTP method GET Description list or watch objects of kind UserOAuthAccessToken Table 6.1. HTTP responses HTTP code Reponse body 200 - OK UserOAuthAccessTokenList schema 401 - Unauthorized Empty 6.2.2. /apis/oauth.openshift.io/v1/watch/useroauthaccesstokens HTTP method GET Description watch individual changes to a list of UserOAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead. Table 6.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.3. /apis/oauth.openshift.io/v1/useroauthaccesstokens/{name} Table 6.3. Global path parameters Parameter Type Description name string name of the UserOAuthAccessToken HTTP method DELETE Description delete an UserOAuthAccessToken Table 6.4. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.5. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified UserOAuthAccessToken Table 6.6. HTTP responses HTTP code Response body 200 - OK UserOAuthAccessToken schema 401 - Unauthorized Empty 6.2.4. /apis/oauth.openshift.io/v1/watch/useroauthaccesstokens/{name} Table 6.7. Global path parameters Parameter Type Description name string name of the UserOAuthAccessToken HTTP method GET Description watch changes to an object of kind UserOAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 6.8. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
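As a hedged illustration of the endpoints above, the following curl sketch lists the OAuth access tokens issued to the authenticated user and then revokes one by name. The API server URL, the bearer token, and the token name are placeholders.

# List the tokens issued to the user identified by the bearer token.
curl -H "Authorization: Bearer $TOKEN" \
  "https://api.cluster.example.com:6443/apis/oauth.openshift.io/v1/useroauthaccesstokens"

# Revoke one of those tokens by deleting it by name.
curl -X DELETE -H "Authorization: Bearer $TOKEN" \
  "https://api.cluster.example.com:6443/apis/oauth.openshift.io/v1/useroauthaccesstokens/<token-name>"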
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/oauth_apis/useroauthaccesstoken-oauth-openshift-io-v1
Chapter 3. Using realmd to Connect to an Active Directory Domain
Chapter 3. Using realmd to Connect to an Active Directory Domain The realmd system provides a clear and simple way to discover and join identity domains to achieve direct domain integration. It configures underlying Linux system services, such as SSSD or Winbind, to connect to the domain. Chapter 2, Using Active Directory as an Identity Provider for SSSD describes how to use the System Security Services Daemon (SSSD) on a local system and Active Directory as a back-end identity provider. Ensuring that the system is properly configured for this can be a complex task: there are a number of different configuration parameters for each possible identity provider and for SSSD itself. In addition, all domain information must be available in advance and then properly formatted in the SSSD configuration for SSSD to integrate the local system with AD. The realmd system simplifies that configuration. It can run a discovery search to identify available AD and Identity Management domains and then join the system to the domain, as well as set up the required client services used to connect to the given identity domain and manage user access. Additionally, because SSSD as an underlying service supports multiple domains, realmd can discover and support multiple domains as well. 3.1. Supported Domain Types and Clients The realmd system supports the following domain types: Microsoft Active Directory Red Hat Enterprise Linux Identity Management The following domain clients are supported by realmd : SSSD for both Red Hat Enterprise Linux Identity Management and Microsoft Active Directory Winbind for Microsoft Active Directory
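A minimal sketch of the discovery and join workflow with the realm client follows. The domain name ad.example.com and the Administrator account are placeholders for your environment.

# Discover the domain and the software required to join it.
realm discover ad.example.com

# Join the domain; you are prompted for the password of the specified account.
realm join ad.example.com -U Administrator

# Verify the join and list the configured domains.
realm list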
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/ch-configuring_authentication
Chapter 1. Introduction to advanced operation of 3scale API Management APIcast API gateway
Chapter 1. Introduction to advanced operation of 3scale API Management APIcast API gateway The introduction to advanced operation of 3scale APIcast helps you adjust the configuration of access to your application programming interface (API). 1.1. Public base URL for calls to 3scale API Management APIs The Public Base URL is the URL that your API consumers use to make requests to your API product, which is exposed publicly with 3scale. This will be the URL of your APIcast instance. If you are using one of the Self-managed deployment options, you can choose your own Public Base URL for each of the environments provided (staging and production) on a domain name you are managing. This URL should be different from the one for your API backend, and could be something like https://api.yourdomain.com:443 , where yourdomain.com is the domain that belongs to you. After setting the Public Base URL, make sure you save the changes and, if necessary, promote the changes in staging to production. Note The Public Base URL that you specify must use a port that is available in your OpenShift cluster. By default, the OpenShift router listens for connections only on the standard HTTP and HTTPS ports (80 and 443). If you want users to connect to your API over some other port, work with your OpenShift administrator to enable the port. APIcast accepts calls to only the hostname specified in the Public Base URL. For example, if you specify https://echo-api.3scale.net:443 as the Public Base URL, the correct call would be: curl "https://echo-api.3scale.net:443/hello?user_key=your_user_key" In case you do not have a public domain for your API, you can use the APIcast IP address in the requests, but you still need to specify a value in the Public Base URL field even if the domain is not real. In this case, make sure you provide the host in the Host header. For example: curl "http://192.0.2.12:80/hello?user_key=your_user_key" -H "Host: echo-api.3scale.net" If you are deploying on a local machine, you can specify "localhost" as the domain, so the Public Base URL would look like http://localhost:80 , and then you can make requests like this: curl "http://localhost:80/hello?user_key=your_user_key" If you have multiple API products, set the Public Base URL appropriately for each product. APIcast routes the requests based on the hostname. 1.2. How APIcast applies mapping rules for capturing usage of 3scale API Management APIs Based on the requests to your API, mapping rules define the metrics or designate the methods for which you want to capture API usage. An example of a mapping rule is a rule with the HTTP verb GET , the pattern / , and an increment of 1 on the hits metric. This rule means that any GET request that starts with / increments the metric hits by 1. This rule matches any request to your API. While this is a valid mapping rule, it is too generic and often leads to double counts if you add more specific mapping rules. The mapping rules for the Echo API in the example below show more specific patterns. Mapping rules work at the API product and API backend levels. Mapping rules at the product level. The mapping rule takes precedence. This means that the product mapping rule is the first one to be evaluated. The mapping rule is always evaluated, independent of which backend receives the redirected traffic. Mapping rules at the backend level. When you add mapping rules to a backend, these are added to all the products bundling said backend. The mapping rule is evaluated after the mapping rules defined at the product level.
The mapping rule is evaluated only if the traffic is redirected to the same backend the mapping rule belongs to. The path of the backend for a product is automatically prepended to each mapping rule of the backend bundled to said product. Example of mapping rules with products and backends The following example shows mapping rules for a product with one backend. The Echo API backend: Has the private endpoint: https://echo-api.3scale.net Contains 2 mapping rules with the following patterns: /hello and /bye The Cool API product: Has this public endpoint: https://cool.api Uses the Echo API backend with this routing path: /echo . Mapping rules with the following patterns are automatically part of the Cool API product: /echo/hello and /echo/bye This means that a request sent to the public URL https://cool.api/echo/hello is redirected to https://echo-api.3scale.net/hello . Similarly, a request sent to https://cool.api/echo/bye redirects to https://echo-api.3scale.net/bye . Now consider an additional product called Tools For Devs using the same Echo API backend. The Tools For Devs product: Has this public endpoint: https://dev-tools.api Uses the Echo API backend with the following routing path: /tellmeback . Mapping rules with the following patterns are automatically part of the Tools For Devs product: /tellmeback/hello and /tellmeback/bye Therefore, a request sent to the public URL https://dev-tools.api/tellmeback/hello is redirected to https://echo-api.3scale.net/hello . Similarly, a request sent to https://dev-tools.api/tellmeback/bye redirects to https://echo-api.3scale.net/bye . Example test calls for this routing are shown at the end of this chapter. If you add a mapping rule with the /ping pattern to the Echo API backend, both products - Cool API and Tools For Devs - are affected: Cool API has a mapping rule with this pattern: /echo/ping . Tools For Devs has a mapping rule with this pattern: /tellmeback/ping . Matching of mapping rules 3scale applies mapping rules based on prefixes. The notation follows the OpenAPI and ActiveDocs specifications: A mapping rule must start with a forward slash ( / ). Perform a match on the path over a literal string, which is a URL, for example, /hello . Once you have saved a mapping rule, requests that match the URL string you have set increment the metrics or methods you have defined for that mapping rule. Mapping rules can include parameters on the query string or in the body, for example, /{word}?value={value} . APIcast fetches the parameters in the following ways: GET method: From the query string. POST , DELETE , or PUT method: From the body. Mapping rules can contain named wildcards, for example, /{word} . This rule matches anything in the placeholder {word} , which makes requests such as /morning match the mapping rule. Wildcards can appear between slashes or between a slash and a dot. Parameters can also include wildcards. By default, all mapping rules are evaluated from first to last, according to the sort order you specified. If you add a rule /v1 , it matches requests whose paths start with /v1 , for example, /v1/word or /v1/sentence . You can add a dollar sign ( $ ) to the end of a pattern to specify exact matching. For example, the pattern /v1/word$ matches only /v1/word requests, and does not match /v1/word/hello requests. For exact matching, you must also ensure that the default mapping rule that matches everything ( / ) has been disabled. More than one mapping rule can match the request path, but if none matches, the request is discarded with an HTTP 404 status code. Mapping rules workflow Mapping rules have the following workflow: You can define a new mapping rule at any time.
See Defining mapping rules . Mapping rules are grayed out on reload to prevent accidental modifications. To edit an existing mapping rule, you must enable it first by clicking the pencil icon on the right. To delete a rule, click the trash icon. All modifications and deletions are saved when you promote the changes in Integration > Configuration . Stop other mapping rules To stop processing further mapping rules, select the option Last? when you create a new mapping rule, especially after processing one or more mapping rules. For example, if you have defined multiple mapping rules associated with different metrics in the API Integration Settings , such as (get) /path/to/example/search and (get) /path/to/example/{id} , the rule /path/to/example/search can be marked Last? . Then, when calling (get) /path/to/example/search , after matching this rule, APIcast stops processing and will not search for matches in the remaining rules, and the metric for the rule (get) /path/to/example/{id} will not be incremented. 1.3. How APIcast handles APIs that have custom requirements There are special cases that require custom APIcast configuration so that API consumers can successfully call the API. Host header This option is only needed for those API products that reject traffic unless the Host header matches the expected one. In these cases, having a gateway in front of your API product causes problems because the Host is that of the gateway, for example, xxx-yyy.staging.apicast.io . To avoid this issue, you can define the host your API product expects in the Host Header field in the Authentication Settings : [Your_product_name] > Integration > Settings . The result is that the hosted APIcast instance rewrites the host specification in the request call. Protecting your API backend After you have APIcast working in production, you might want to restrict direct access to your API product to only those calls that include a secret token that you define. Do this by setting the APIcast Secret Token . See Advanced APIcast configuration for information on how to set it up. Using APIcast with private APIs With APIcast, it is possible to protect the APIs that are not publicly accessible on the internet. The requirements that must be met are: Self-managed APIcast must be used as the deployment option. APIcast needs to be accessible from the public internet and be able to make outbound calls to the 3scale Service Management API. The API product should be accessible by APIcast. In this case, you can set your internal domain name or the IP address of your API in the Private Base URL field and follow the rest of the steps as usual. However, doing this means that you cannot take advantage of the staging environment. Test calls will not be successful because the staging APIcast instance is hosted by 3scale, which does not have access to your private API backend. After you deploy APIcast in your production environment, if the configuration is correct, APIcast works as expected.
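The following test calls are a sketch of the Cool API and Tools For Devs routing example from Section 1.2; the user_key value is a placeholder for a valid application key.

# Matches the /echo/hello mapping rule of the Cool API product; APIcast proxies the call to https://echo-api.3scale.net/hello
curl "https://cool.api/echo/hello?user_key=your_user_key"

# Matches the /tellmeback/bye mapping rule of the Tools For Devs product; APIcast proxies the call to https://echo-api.3scale.net/bye
curl "https://dev-tools.api/tellmeback/bye?user_key=your_user_key"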
[ "curl \"https://echo-api.3scale.net:443/hello?user_key=your_user_key\"", "curl \"http://192.0.2.12:80/hello?user_key=your_user_key\" -H \"Host: echo-api.3scale.net\"", "curl \"http://localhost:80/hello?user_key=your_user_key\"", "/hello /bye", "/echo/hello /echo/bye", "/tellmeback/hello /tellmeback/bye", "(get) /path/to/example/search (get) /path/to/example/{id}" ]
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/administering_the_api_gateway/introduction-to-advanced-operation-of-threescale-apicast-api-gateway_api-gateway-apicast
Chapter 3. Managing Resource Servers
Chapter 3. Managing Resource Servers According to the OAuth2 specification, a resource server is a server hosting the protected resources and capable of accepting and responding to protected resource requests. In Red Hat Single Sign-On, resource servers are provided with a rich platform for enabling fine-grained authorization for their protected resources, where authorization decisions can be made based on different access control mechanisms. Any client application can be configured to support fine-grained permissions. In doing so, you are conceptually turning the client application into a resource server. 3.1. Creating a Client Application The first step to enable Red Hat Single Sign-On Authorization Services is to create the client application that you want to turn into a resource server. To create a client application, complete the following steps: Click Clients . Clients On this page, click Create . Create Client Type the Client ID of the client. For example, my-resource-server . Type the Root URL for your application, for example, http://${host}:${port}/my-resource-server . Click Save . The client is created and the client Settings page opens. A page similar to the following is displayed: Client Settings 3.2. Enabling Authorization Services To turn your OIDC Client Application into a resource server and enable fine-grained authorization, select Access type confidential and click the Authorization Enabled switch to ON , then click Save . Enabling Authorization Services A new Authorization tab is displayed for this client. Click the Authorization tab and a page similar to the following is displayed: Resource Server Settings The Authorization tab contains additional sub-tabs covering the different steps that you must follow to actually protect your application's resources. Each tab is covered separately by a specific topic in this documentation. Here is a quick description of each one: Settings General settings for your resource server. For more details about this page, see the Resource Server Settings section. Resource From this page, you can manage your application's resources . Authorization Scopes From this page, you can manage scopes . Policies From this page, you can manage authorization policies and define the conditions that must be met to grant a permission. Permissions From this page, you can manage the permissions for your protected resources and scopes by linking them with the policies you created. Evaluate From this page, you can simulate authorization requests and view the result of the evaluation of the permissions and authorization policies you have defined. Export Settings From this page, you can export the authorization settings to a JSON file. 3.2.1. Resource Server Settings On the Resource Server Settings page, you can configure the policy enforcement mode, allow remote resource management, and export the authorization configuration settings. Policy Enforcement Mode Specifies how policies are enforced when processing authorization requests sent to the server. Enforcing (default mode) Requests are denied by default even when there is no policy associated with a given resource. Permissive Requests are allowed even when there is no policy associated with a given resource. Disabled Disables the evaluation of all policies and allows access to all resources. Decision Strategy This configuration changes how the policy evaluation engine decides whether or not a resource or scope should be granted based on the outcome from all evaluated permissions.
Affirmative means that at least one permission must evaluate to a positive decision in order to grant access to a resource and its scopes. Unanimous means that all permissions must evaluate to a positive decision in order for the final decision to be also positive. As an example, if two permissions for the same resource or scope are in conflict (one of them is granting access and the other is denying access), the permission to the resource or scope will be granted if the chosen strategy is Affirmative . Otherwise, a single deny from any permission will also deny access to the resource or scope. Remote Resource Management Specifies whether resources can be managed remotely by the resource server. If false, resources can be managed only from the administration console. 3.3. Default Configuration When you create a resource server, Red Hat Single Sign-On creates a default configuration for your newly created resource server. The default configuration consists of: A default protected resource representing all resources in your application. A policy that always grants access to the resources protected by this policy. A permission that governs access to all resources based on the default policy. The default protected resource is referred to as the default resource and you can view it if you navigate to the Resources tab. Default Resource This resource defines a Type , namely urn:my-resource-server:resources:default and a URI /* . Here, the URI field defines a wildcard pattern that indicates to Red Hat Single Sign-On that this resource represents all the paths in your application. In other words, when enabling policy enforcement for your application, all the permissions associated with the resource will be examined before granting access. The Type mentioned previously defines a value that can be used to create typed resource permissions that must be applied to the default resource or any other resource you create using the same type. The default policy is referred to as the only from realm policy and you can view it if you navigate to the Policies tab. Default Policy This policy is a JavaScript-based policy defining a condition that always grants access to the resources protected by this policy. If you click this policy you can see that it defines a rule as follows: // by default, grants any permission associated with this policy $evaluation.grant(); Lastly, the default permission is referred to as the default permission and you can view it if you navigate to the Permissions tab. Default Permission This permission is a resource-based permission , defining a set of one or more policies that are applied to all resources with a given type. 3.3.1. Changing the Default Configuration You can change the default configuration by removing the default resource, policy, or permission definitions and creating your own. The default resource is created with a URI that maps to any resource or path in your application using a /* pattern. Before creating your own resources, permissions, and policies, make sure the default configuration doesn't conflict with your own settings. Note The default configuration defines a resource that maps to all paths in your application. If you are about to write permissions to your own resources, be sure to remove the Default Resource or change its URIs field to more specific paths in your application. Otherwise, the policy associated with the default resource (which by default always grants access) will allow Red Hat Single Sign-On to grant access to any protected resource. 3.4.
Export and Import Authorization Configuration The configuration settings for a resource server (or client) can be exported and downloaded. You can also import an existing configuration file for a resource server. Importing and exporting a configuration file is helpful when you want to create an initial configuration for a resource server or to update an existing configuration. The configuration file contains definitions for: Protected resources and scopes Policies Permissions 3.4.1. Exporting a Configuration File To export a configuration file, complete the following steps: Navigate to the Resource Server Settings page. Click the Export Settings tab. On this page, click Export . Export Settings The configuration file is exported in JSON format and displayed in a text area, from which you can copy and paste. You can also click Download to download the configuration file and save it. 3.4.2. Importing a Configuration File To import a configuration file, complete the following steps: Navigate to the Resource Server Settings page. Import Settings To import a configuration file for a resource server, click Select file to select a file containing the configuration you want to import.
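To check that the resource server evaluates authorization requests as expected, for example that the default permission still grants access, you can request permissions from the token endpoint using the UMA grant type. This is a sketch only: the Red Hat Single Sign-On host, the realm name myrealm , and the access token are assumptions, while my-resource-server is the client ID used earlier in this chapter.

# Exchange an access token for the permissions (an RPT) evaluated by the my-resource-server client.
curl -X POST "https://sso.example.com/auth/realms/myrealm/protocol/openid-connect/token" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  --data "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket" \
  --data "audience=my-resource-server"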
[ "http://USD{host}:USD{port}/my-resource-server", "// by default, grants any permission associated with this policy USDevaluation.grant();" ]
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/authorization_services_guide/resource_server_overview
Chapter 8. Updating systems in a group
Chapter 8. Updating systems in a group If you have several systems using the same image in the edge management application, you can update the systems in a group simultaneously after you update the image. Use the More options icon (...) to select the images that you plan to update. You can then choose the package version to update your images to. Warning Using RHEL for Edge customized images that were created using the on-site version of RHEL image builder is not supported by the edge management application. There is no support for updating an Edge system by using the CLI. You can only update your Edge systems by using the Red Hat Hybrid Cloud Console. Prerequisites You have a Red Hat Hybrid Cloud Console account. You have the systems registered with Remote Host Configuration and Management. You have an updated image that contains the changes you want to push to your systems. For more information, see Updating an image . Procedure Go to the edge management application in the Red Hat Hybrid Cloud Console platform and log in. In the edge management menu, click Inventory > Groups . Select the group from the list that contains the systems you want to update. Select the systems in the group that have the Update available status and that use the same image. You can check the image each system uses in the Image column. In the Group toolbar, click the Actions for group details menu, which is represented by three vertical dots. Click Update selected . Review the information about the update. Click Update system . Verification Check Inventory > Groups and click the group from which the updates were triggered. If the update was sent to the system, the system status is Up to date . Additional resources How to start the upgrade process of an Edge Management deployed system . Edge Management supportability .
null
https://docs.redhat.com/en/documentation/edge_management/1-latest/html/working_with_systems_in_the_insights_inventory_application/proc-rhem-update-groups
Chapter 6. Search
Chapter 6. Search Use automation controller's search tool for search and filter capabilities across multiple functions. An expandable list of search conditions is available from the Advanced option from the Name menu in the search field. From there, use the combination of Set type , Key , and Lookup type to filter. Important If you receive an error when using the Advanced search option ensure that you enter a valid input. Example When you use the Advanced search option for adding team or user permissions and want to search an exact project, enter a valid project ID and not a name: From the navigation panel, select Access Teams . Select a team and click the Roles tab. Click Add and select a resource type, such as Job templates . Select the Advanced option in the search drop-down menu. Select project in the Key drop-down and select exact in the Lookup type . Enter a valid project ID, not a name. The following error message appears if you input an invalid project ID: 6.1. Rules for searching These searching tips assume that you are not searching hosts. Most of this section still applies to hosts but with some subtle differences. The typical syntax of a search consists of a field (left-hand side) and a value (right-hand side). A colon is used to separate the field that you want to search from the value. If the search has no colon (see example 3) it is treated as a simple string search where ?search=foobar is sent. The following are examples of syntax used for searching: name:localhost In this example, the user is searching for the string `localhost' in the name attribute. If that string does not match something from Fields or Related Fields , the entire search is treated as a string. organization.name:Default This example shows a Related Field Search. The period in organization.name separates the model from the field. Depending on how deep or complex the search is, you can have multiple periods in that part of the query. foobar This is a simple string (key term) search that finds all instances of the search term using an icontains search against the name and description fields. If you use a space between terms, for example foo bar , then results that contain both terms are returned. If the terms are wrapped in quotes, for example, "foo bar" , automation controller searches for the string with the terms appearing together. Specific name searches search against the API name. For example, Management job in the user interface is system_job in the API. . organization:Default This example shows a Related Field search but without specifying a field to go along with the organization. This is supported by the API and is analogous to a simple string search but carried out against the organization (does an icontains search against both the name and description). 6.1.1. Values for search fields To find values for certain fields, refer to the API endpoint for extensive options and their valid values. For example, if you want to search against /api/v2/jobs > type field, you can find the values by performing an OPTIONS request to /api/v2/jobs and look for entries in the API for "type" . Additionally, you can view the related searches by scrolling to the bottom of each screen. 
In the example for /api/v2/jobs , the related search shows: "related_search_fields": [ "modified_by__search", "project__search", "project_update__search", "credentials__search", "unified_job_template__search", "created_by__search", "inventory__search", "labels__search", "schedule__search", "webhook_credential__search", "job_template__search", "job_events__search", "dependent_jobs__search", "launch_config__search", "unifiedjob_ptr__search", "notifications__search", "unified_job_node__search", "instance_group__search", "hosts__search", "job_host_summaries__search" The values for Fields come from the keys in a GET request. url , related , and summary_fields are not used. The values for Related Fields also come from the OPTIONS response, but from a different attribute. Related Fields is populated by taking all the values from related_search_fields and stripping off the __search from the end. Any search that does not start with a value from Fields or a value from the Related Fields, is treated as a generic string search. Searching for localhost , for example, results in the UI sending ?search=localhost as a query parameter to the API endpoint. This is a shortcut for an icontains search on the name and description fields. 6.1.2. Searching using values from related fields Searching a Related Field requires you to start the search string with the Related Field. The following example describes how to search using values from the Related Field, organization . The left-hand side of the search string must start with organization , for example, organization:Default . Depending on the related field, you can provide more specific direction for the search by providing secondary and tertiary fields. An example of this is to specify that you want to search for all job templates that use a project matching a certain name. The syntax on this would look like: job_template.project.name:"A Project" . Note This query executes against the unified_job_templates endpoint which is why it starts with job_template . If you were searching against the job_templates endpoint, then you would not need the job_template portion of the query. 6.1.3. Other search considerations Be aware of the following issues when searching in automation controller: There is currently no supported syntax for OR queries. All search terms are AND ed in the query parameters. The left-hand portion of a search parameter can be wrapped in quotes to support searching for strings with spaces. For more information, see Tips for searching . Currently, the values in the Fields are direct attributes expected to be returned in a GET request. Whenever you search against one of the values, automation controller carries out an __icontains search. So, for example, name:localhost sends back ?name__icontains=localhost . Automation controller currently performs this search for every Field value, even id . 6.2. Sort Where applicable, use the arrows in each column to sort by ascending order. The following is an example from the schedules list: The direction of the arrow indicates the sort order of the column.
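To make the mapping between the search field and the API concrete, the sketch below shows the equivalent raw queries with curl. The controller host, the OAuth token, and the chosen endpoints are placeholders, and the related-field lookup shown last is an assumption based on the double-underscore convention described above; check the OPTIONS output of your endpoint for the lookups it actually supports.

CONTROLLER=https://controller.example.com
TOKEN=<oauth-token>   # placeholder

# Simple string search, the same query the UI sends for a bare term.
curl -sk -H "Authorization: Bearer $TOKEN" "$CONTROLLER/api/v2/hosts/?search=localhost"

# Field search equivalent to name:localhost (an icontains match against name).
curl -sk -H "Authorization: Bearer $TOKEN" "$CONTROLLER/api/v2/hosts/?name__icontains=localhost"

# Related field search equivalent to organization.name:Default (assumed lookup).
curl -sk -H "Authorization: Bearer $TOKEN" "$CONTROLLER/api/v2/job_templates/?organization__name=Default"

# Discover valid field values and related search fields for an endpoint.
curl -sk -X OPTIONS -H "Authorization: Bearer $TOKEN" "$CONTROLLER/api/v2/jobs/"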
[ "\"related_search_fields\": [ \"modified_by__search\", \"project__search\", \"project_update__search\", \"credentials__search\", \"unified_job_template__search\", \"created_by__search\", \"inventory__search\", \"labels__search\", \"schedule__search\", \"webhook_credential__search\", \"job_template__search\", \"job_events__search\", \"dependent_jobs__search\", \"launch_config__search\", \"unifiedjob_ptr__search\", \"notifications__search\", \"unified_job_node__search\", \"instance_group__search\", \"hosts__search\", \"job_host_summaries__search\"" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_user_guide/assembly-controller-search
Chapter 33. Unregistering from Red Hat Subscription Management Services
Chapter 33. Unregistering from Red Hat Subscription Management Services A system can only be registered with one subscription service. If you need to change which service your system is registered with or need to delete the registration in general, then the method to unregister depends on which type of subscription service the system was originally registered with. 33.1. Systems Registered with Red Hat Subscription Management Several different subscription services use the same, certificate-based framework to identify systems, installed products, and attached subscriptions. These services are Customer Portal Subscription Management (hosted), Subscription Asset Manager (on-premise subscription service), and CloudForms System Engine (on-premise subscription and content delivery services). These are all part of Red Hat Subscription Management . For all services within Red Hat Subscription Management, the systems are managed with the Red Hat Subscription Manager client tools. To unregister a system registered with a Red Hat Subscription Management server, use the unregister command as root without any additional parameters: For additional information, see Using and Configuring Red Hat Subscription Manager .
[ "subscription-manager unregister" ]
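A short verification sketch (run as root) that brackets the unregister command above; the clean step is optional and only removes locally cached subscription data.

# Show the current registration identity before making changes.
subscription-manager identity

# Unregister the system from Red Hat Subscription Management.
subscription-manager unregister

# Optional: remove locally cached subscription data as well.
subscription-manager clean

# Confirm the result; the output should report that the system is not registered.
subscription-manager status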
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/chap-subscription-management-unregistering
Chapter 9. File Systems
Chapter 9. File Systems XFS runtime statistics are available per file system in the /sys/fs/ directory The existing XFS global statistics directory has been moved from the /proc/fs/xfs/ directory to the /sys/fs/xfs/ directory while maintaining compatibility with earlier versions through a symbolic link in /proc/fs/xfs/stat . New subdirectories will be created and maintained for statistics per file system in /sys/fs/xfs/ , for example /sys/fs/xfs/sdb7/stats and /sys/fs/xfs/sdb8/stats . Previously, XFS runtime statistics were available only per server. Now, XFS runtime statistics are available per device. (BZ#1269281) A progress indicator has been added to mkfs.gfs2 The mkfs.gfs2 tool now reports its progress when building journals and resource groups. As mkfs.gfs2 can take some time to complete with large or slow devices, it was not previously clear if mkfs.gfs2 was working correctly until a report was printed. A progress bar has been added to mkfs.gfs2 to indicate progress. (BZ# 1196321 ) fsck.gfs2 has been enhanced to require considerably less memory on large file systems Prior to this update, the Global File System 2 (GFS2) file system checker, fsck.gfs2, required a large amount of memory to run on large file systems, and running fsck.gfs2 on file systems larger than 100 TB was therefore impractical. With this update, fsck.gfs2 has been enhanced to run in considerably less memory, which allows for better scalability and makes it practical to run fsck.gfs2 on much larger file systems. (BZ# 1268045 ) GFS2 has been enhanced to allow better scalability of its glocks In the Global File System 2 (GFS2), opening or creating a large number of files, even if they are closed again, leaves a lot of GFS2 cluster locks (glocks) in slab memory. When the number of glocks was in the millions, GFS2 previously started to slow down, especially with file creates: GFS2 became gradually slower to create files. With this update, GFS2 has been enhanced to allow better scalability of its glocks, and it can now maintain good performance across millions of file creates. (BZ#1172819) xfsprogs rebased to version 4.5.0 The xfsprogs packages have been upgraded to upstream version 4.5.0, which provides a number of bug fixes and enhancements over the previous version. The Red Hat Enterprise Linux 7.3 kernel RPM requires the upgraded version of xfsprogs because the new default on-disk format requires special handling of log cycle numbers when running the xfs_repair utility. Notable changes include: Metadata cyclic redundancy checks (CRCs) and directory entry file types are now enabled by default. To replicate the older mkfs on-disk format used in earlier versions of Red Hat Enterprise Linux 7, use the -m crc=0 -n ftype=0 options on the mkfs.xfs command line. The GETNEXTQUOTA interface is now implemented in xfs_quota , which allows fast iteration over all on-disk quotas even when the number of entries in the user database is extremely large. Also, note the following differences between upstream and Red Hat Enterprise Linux 7.3: The experimental sparse inode feature is not available. The free inode btree (finobt) feature is disabled by default to ensure compatibility with earlier Red Hat Enterprise Linux 7 kernel versions. (BZ# 1309498 ) The CIFS kernel module rebased to version 6.4 The Common Internet File System (CIFS) has been upgraded to upstream version 6.4, which provides a number of bug fixes and enhancements over the previous version. Notably: Support for Kerberos authentication has been added.
Support for MFSymlink has been added. The mknod and mkfifo named pipes are now allowed. Also, several memory leaks have been identified and fixed. (BZ#1337587) quota now supports suppressing warnings about NFS mount points with unavailable quota RPC service If a user listed disk quotas with the quota tool, and the local system mounted a network file system with an NFS server that did not provide the quota RPC service, the quota tool returned the error while getting quota from server error message. Now, the quota tools can distinguish between unreachable NFS server and a reachable NFS server without the quota RPC service, and no error is reported in the second case. (BZ# 1155584 ) The /proc/ directory now uses the red-black tree implementation to improve the performance Previously, the /proc/ directory entries implementation used a single linked list, which slowed down the manipulation of directories with a large number of entries. With this update, the single linked list implementation has been replaced by a red-black tree implementation, which improves the performance of directory entries manipulation. (BZ#1210350)
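A brief sketch tying together the XFS notes above; the device name sdb7 is only an example, and mkfs.xfs destroys any data on the target device, so the last command is illustrative rather than something to run as-is.

# Read the per-file-system XFS runtime statistics (device name is an example).
cat /sys/fs/xfs/sdb7/stats

# The global statistics remain reachable through the compatibility symbolic link.
cat /proc/fs/xfs/stat

# Recreate the older on-disk format (no metadata CRCs, no directory entry file
# types) on a new file system, as described in the xfsprogs note above.
mkfs.xfs -m crc=0 -n ftype=0 /dev/sdb7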
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/new_features_file_systems
function::mem_page_size
function::mem_page_size Name function::mem_page_size - Number of bytes in a page for this architecture Synopsis Arguments None
[ "mem_page_size:long()" ]
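A minimal SystemTap one-liner (assuming the systemtap package and any debug information it requires are installed) that prints the value returned by this function and exits:

stap -e 'probe begin { printf("page size: %d bytes\n", mem_page_size()); exit() }'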
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-mem-page-size
Chapter 2. Understanding authentication
Chapter 2. Understanding authentication For users to interact with OpenShift Container Platform, they must first authenticate to the cluster. The authentication layer identifies the user associated with requests to the OpenShift Container Platform API. The authorization layer then uses information about the requesting user to determine if the request is allowed. As an administrator, you can configure authentication for OpenShift Container Platform. 2.1. Users A user in OpenShift Container Platform is an entity that can make requests to the OpenShift Container Platform API. An OpenShift Container Platform User object represents an actor which can be granted permissions in the system by adding roles to them or to their groups. Typically, this represents the account of a developer or administrator that is interacting with OpenShift Container Platform. Several types of users can exist: User type Description Regular users This is the way most interactive OpenShift Container Platform users are represented. Regular users are created automatically in the system upon first login or can be created via the API. Regular users are represented with the User object. Examples: joe alice System users Many of these are created automatically when the infrastructure is defined, mainly for the purpose of enabling the infrastructure to interact with the API securely. They include a cluster administrator (with access to everything), a per-node user, users for use by routers and registries, and various others. Finally, there is an anonymous system user that is used by default for unauthenticated requests. Examples: system:admin system:openshift-registry system:node:node1.example.com Service accounts These are special system users associated with projects; some are created automatically when the project is first created, while project administrators can create more for the purpose of defining access to the contents of each project. Service accounts are represented with the ServiceAccount object. Examples: system:serviceaccount:default:deployer system:serviceaccount:foo:builder Each user must authenticate in some way to access OpenShift Container Platform. API requests with no authentication or invalid authentication are authenticated as requests by the anonymous system user. Once authenticated, policy determines what the user is authorized to do. 2.2. Groups A user can be assigned to one or more groups , each of which represent a certain set of users. Groups are useful when managing authorization policies to grant permissions to multiple users at once, for example allowing access to objects within a project, versus granting them to users individually. In addition to explicitly defined groups, there are also system groups, or virtual groups , that are automatically provisioned by the cluster. The following default virtual groups are most important: Virtual group Description system:authenticated Automatically associated with all authenticated users. system:authenticated:oauth Automatically associated with all users authenticated with an OAuth access token. system:unauthenticated Automatically associated with all unauthenticated users. 2.3. API authentication Requests to the OpenShift Container Platform API are authenticated using the following methods: OAuth access tokens Obtained from the OpenShift Container Platform OAuth server using the <namespace_route> /oauth/authorize and <namespace_route> /oauth/token endpoints. Sent as an Authorization: Bearer... header. 
Sent as a websocket subprotocol header in the form base64url.bearer.authorization.k8s.io.<base64url-encoded-token> for websocket requests. X.509 client certificates Requires an HTTPS connection to the API server. Verified by the API server against a trusted certificate authority bundle. The API server creates and distributes certificates to controllers to authenticate themselves. Any request with an invalid access token or an invalid certificate is rejected by the authentication layer with a 401 error. If no access token or certificate is presented, the authentication layer assigns the system:anonymous virtual user and the system:unauthenticated virtual group to the request. This allows the authorization layer to determine which requests, if any, an anonymous user is allowed to make. 2.3.1. OpenShift Container Platform OAuth server The OpenShift Container Platform master includes a built-in OAuth server. Users obtain OAuth access tokens to authenticate themselves to the API. When a person requests a new OAuth token, the OAuth server uses the configured identity provider to determine the identity of the person making the request. It then determines what user that identity maps to, creates an access token for that user, and returns the token for use. 2.3.1.1. OAuth token requests Every request for an OAuth token must specify the OAuth client that will receive and use the token. The following OAuth clients are automatically created when starting the OpenShift Container Platform API: OAuth client Usage openshift-browser-client Requests tokens at <namespace_route>/oauth/token/request with a user-agent that can handle interactive logins. [1] openshift-challenging-client Requests tokens with a user-agent that can handle WWW-Authenticate challenges. <namespace_route> refers to the namespace route. This is found by running the following command: USD oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host All requests for OAuth tokens involve a request to <namespace_route>/oauth/authorize . Most authentication integrations place an authenticating proxy in front of this endpoint, or configure OpenShift Container Platform to validate credentials against a backing identity provider. Requests to <namespace_route>/oauth/authorize can come from user-agents that cannot display interactive login pages, such as the CLI. Therefore, OpenShift Container Platform supports authenticating using a WWW-Authenticate challenge in addition to interactive login flows. If an authenticating proxy is placed in front of the <namespace_route>/oauth/authorize endpoint, it sends unauthenticated, non-browser user-agents WWW-Authenticate challenges rather than displaying an interactive login page or redirecting to an interactive login flow. Note To prevent cross-site request forgery (CSRF) attacks against browser clients, only send Basic authentication challenges with if a X-CSRF-Token header is on the request. Clients that expect to receive Basic WWW-Authenticate challenges must set this header to a non-empty value. If the authenticating proxy cannot support WWW-Authenticate challenges, or if OpenShift Container Platform is configured to use an identity provider that does not support WWW-Authenticate challenges, you must use a browser to manually obtain a token from <namespace_route>/oauth/token/request . 2.3.1.2. API impersonation You can configure a request to the OpenShift Container Platform API to act as though it originated from another user. 
For more information, see User impersonation in the Kubernetes documentation. 2.3.1.3. Authentication metrics for Prometheus OpenShift Container Platform captures the following Prometheus system metrics during authentication attempts: openshift_auth_basic_password_count counts the number of oc login user name and password attempts. openshift_auth_basic_password_count_result counts the number of oc login user name and password attempts by result, success or error . openshift_auth_form_password_count counts the number of web console login attempts. openshift_auth_form_password_count_result counts the number of web console login attempts by result, success or error . openshift_auth_password_total counts the total number of oc login and web console login attempts.
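As a small illustration of the token flow described above, the sketch below logs in through the CLI, extracts the resulting OAuth access token, and presents it as a bearer token to the API. The API server URL and user are placeholders, jq is assumed to be installed for the route lookup, and the users/~ path is a commonly used self-lookup endpoint rather than something defined in this section.

# Look up the OAuth route, as shown above.
oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host

# Log in interactively, then print the access token the CLI obtained from the OAuth server.
oc login https://api.cluster.example.com:6443 -u developer
TOKEN=$(oc whoami -t)

# Present the token as a bearer token; a missing or invalid token would instead be
# treated as the anonymous user, as described above.
curl -k -H "Authorization: Bearer $TOKEN" \
  "https://api.cluster.example.com:6443/apis/user.openshift.io/v1/users/~"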
[ "oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/authentication_and_authorization/understanding-authentication
Chapter 11. Object Bucket Claim
Chapter 11. Object Bucket Claim An Object Bucket Claim can be used to request an S3 compatible bucket backend for your workloads. You can create an Object Bucket Claim in three ways: Section 11.1, "Dynamic Object Bucket Claim" Section 11.2, "Creating an Object Bucket Claim using the command line interface" Section 11.3, "Creating an Object Bucket Claim using the OpenShift Web Console" An object bucket claim creates a new bucket and an application account in NooBaa with permissions to the bucket, including a new access key and secret access key. The application account is allowed to access only a single bucket and can't create new buckets by default. 11.1. Dynamic Object Bucket Claim Similar to Persistent Volumes, you can add the details of the Object Bucket claim (OBC) to your application's YAML, and get the object service endpoint, access key, and secret access key available in a configuration map and secret. It is easy to read this information dynamically into environment variables of your application. Note The Multicloud Object Gateway endpoints uses self-signed certificates only if OpenShift uses self-signed certificates. Using signed certificates in OpenShift automatically replaces the Multicloud Object Gateway endpoints certificates with signed certificates. Get the certificate currently used by Multicloud Object Gateway by accessing the endpoint via the browser. See Accessing the Multicloud Object Gateway with your applications for more information. Procedure Add the following lines to your application YAML: These lines are the OBC itself. Replace <obc-name> with the a unique OBC name. Replace <obc-bucket-name> with a unique bucket name for your OBC. To automate the use of the OBC add more lines to the YAML file. For example: The example is the mapping between the bucket claim result, which is a configuration map with data and a secret with the credentials. This specific job claims the Object Bucket from NooBaa, which creates a bucket and an account. Replace all instances of <obc-name> with your OBC name. Replace <your application image> with your application image. Apply the updated YAML file: Replace <yaml.file> with the name of your YAML file. To view the new configuration map, run the following: Replace obc-name with the name of your OBC. You can expect the following environment variables in the output: BUCKET_HOST - Endpoint to use in the application. BUCKET_PORT - The port available for the application. The port is related to the BUCKET_HOST . For example, if the BUCKET_HOST is https://my.example.com , and the BUCKET_PORT is 443, the endpoint for the object service would be https://my.example.com:443 . BUCKET_NAME - Requested or generated bucket name. AWS_ACCESS_KEY_ID - Access key that is part of the credentials. AWS_SECRET_ACCESS_KEY - Secret access key that is part of the credentials. Important Retrieve the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY . The names are used so that it is compatible with the AWS S3 API. You need to specify the keys while performing S3 operations, especially when you read, write or list from the Multicloud Object Gateway (MCG) bucket. The keys are encoded in Base64. Decode the keys before using them. <obc_name> Specify the name of the object bucket claim. 11.2. Creating an Object Bucket Claim using the command line interface When creating an Object Bucket Claim (OBC) using the command-line interface, you get a configuration map and a Secret that together contain all the information your application needs to use the object storage service. 
Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Use the command-line interface to generate the details of a new bucket and credentials. Run the following command: Replace <obc-name> with a unique OBC name, for example, myappobc . Additionally, you can use the --app-namespace option to specify the namespace where the OBC configuration map and secret will be created, for example, myapp-namespace . For example: The MCG command-line-interface has created the necessary configuration and has informed OpenShift about the new OBC. Run the following command to view the OBC: For example: Run the following command to view the YAML file for the new OBC: For example: Inside of your openshift-storage namespace, you can find the configuration map and the secret to use this OBC. The CM and the secret have the same name as the OBC. Run the following command to view the secret: For example: The secret gives you the S3 access credentials. Run the following command to view the configuration map: For example: The configuration map contains the S3 endpoint information for your application. 11.3. Creating an Object Bucket Claim using the OpenShift Web Console You can create an Object Bucket Claim (OBC) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. In order for your applications to communicate with the OBC, you need to use the configmap and secret. For more information about this, see Section 11.1, "Dynamic Object Bucket Claim" . Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage Object Storage Object Bucket Claims Create Object Bucket Claim . Enter a name for your object bucket claim and select the appropriate storage class based on your deployment, internal or external, from the dropdown menu: Internal mode The following storage classes, which were created after deployment, are available for use: ocs-storagecluster-ceph-rgw uses the Ceph Object Gateway (RGW) openshift-storage.noobaa.io uses the Multicloud Object Gateway (MCG) External mode The following storage classes, which were created after deployment, are available for use: ocs-external-storagecluster-ceph-rgw uses the RGW openshift-storage.noobaa.io uses the MCG Note The RGW OBC storage class is only available with fresh installations of OpenShift Data Foundation version 4.5. It does not apply to clusters upgraded from OpenShift Data Foundation releases. Click Create . Once you create the OBC, you are redirected to its detail page. 11.4. Attaching an Object Bucket Claim to a deployment Once created, Object Bucket Claims (OBCs) can be attached to specific deployments. Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage Object Storage Object Bucket Claims . Click the Action menu (...) to the OBC you created. From the drop-down menu, select Attach to Deployment . Select the desired deployment from the Deployment Name list, then click Attach . 11.5. Viewing object buckets using the OpenShift Web Console You can view the details of object buckets created for Object Bucket Claims (OBCs) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. Procedure Log into the OpenShift Web Console. 
On the left navigation bar, click Storage Object Storage Object Buckets . Optional: You can also navigate to the details page of a specific OBC and click the Resource link to view the object buckets for that OBC. Select the object bucket whose details you want to see. Once selected, you are navigated to the Object Bucket Details page. 11.6. Deleting Object Bucket Claims Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage Object Storage Object Bucket Claims . Click the Action menu (...) next to the Object Bucket Claim (OBC) you want to delete. Select Delete Object Bucket Claim . Click Delete .
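The following sketch shows one way to consume an OBC from the command line, pulling the endpoint from the config map and the Base64-encoded keys from the secret described in Section 11.1, then listing the bucket with an S3-compatible client. The OBC name, the namespace, and the use of the aws CLI are assumptions; any S3-compatible client works.

OBC=my-obc
NS=my-app-namespace

# Endpoint details come from the config map created for the OBC.
BUCKET_HOST=$(oc get cm "$OBC" -n "$NS" -o jsonpath='{.data.BUCKET_HOST}')
BUCKET_PORT=$(oc get cm "$OBC" -n "$NS" -o jsonpath='{.data.BUCKET_PORT}')
BUCKET_NAME=$(oc get cm "$OBC" -n "$NS" -o jsonpath='{.data.BUCKET_NAME}')

# Credentials come from the secret; the values are Base64-encoded and must be decoded.
export AWS_ACCESS_KEY_ID=$(oc get secret "$OBC" -n "$NS" -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
export AWS_SECRET_ACCESS_KEY=$(oc get secret "$OBC" -n "$NS" -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)

# List the bucket with any S3-compatible client, here the aws CLI.
aws s3 ls "s3://$BUCKET_NAME" --endpoint-url "https://$BUCKET_HOST:$BUCKET_PORT"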
[ "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <obc-name> spec: generateBucketName: <obc-bucket-name> storageClassName: openshift-storage.noobaa.io", "apiVersion: batch/v1 kind: Job metadata: name: testjob spec: template: spec: restartPolicy: OnFailure containers: - image: <your application image> name: test env: - name: BUCKET_NAME valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_NAME - name: BUCKET_HOST valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_HOST - name: BUCKET_PORT valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_PORT - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: <obc-name> key: AWS_ACCESS_KEY_ID - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: <obc-name> key: AWS_SECRET_ACCESS_KEY", "oc apply -f <yaml.file>", "oc get cm <obc-name> -o yaml", "oc get secret <obc_name> -o yaml", "noobaa obc create <obc-name> -n openshift-storage", "INFO[0001] ✅ Created: ObjectBucketClaim \"test21obc\"", "oc get obc -n openshift-storage", "NAME STORAGE-CLASS PHASE AGE test21obc openshift-storage.noobaa.io Bound 38s", "oc get obc test21obc -o yaml -n openshift-storage", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer generation: 2 labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage resourceVersion: \"40756\" selfLink: /apis/objectbucket.io/v1alpha1/namespaces/openshift-storage/objectbucketclaims/test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af spec: ObjectBucketName: obc-openshift-storage-test21obc bucketName: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 generateBucketName: test21obc storageClassName: openshift-storage.noobaa.io status: phase: Bound", "oc get -n openshift-storage secret test21obc -o yaml", "apiVersion: v1 data: AWS_ACCESS_KEY_ID: c0M0R2xVanF3ODR3bHBkVW94cmY= AWS_SECRET_ACCESS_KEY: Wi9kcFluSWxHRzlWaFlzNk1hc0xma2JXcjM1MVhqa051SlBleXpmOQ== kind: Secret metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40751\" selfLink: /api/v1/namespaces/openshift-storage/secrets/test21obc uid: 65117c1c-f662-11e9-9094-0a5305de57bb type: Opaque", "oc get -n openshift-storage cm test21obc -o yaml", "apiVersion: v1 data: BUCKET_HOST: 10.0.171.35 BUCKET_NAME: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 BUCKET_PORT: \"31242\" BUCKET_REGION: \"\" BUCKET_SUBREGION: \"\" kind: ConfigMap metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40752\" selfLink: /api/v1/namespaces/openshift-storage/configmaps/test21obc uid: 651c6501-f662-11e9-9094-0a5305de57bb" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/managing_hybrid_and_multicloud_resources/object-bucket-claim
B.2. Maven Repository Configuration Example
B.2. Maven Repository Configuration Example A sample Maven repository file named example-settings.xml is available in the root directory of the Maven repository folder after it is unzipped. The following is an excerpt that contains the relevant parts of the example-settings.xml file: Example B.1. Sample Maven Repository Configuration
[ "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <settings xmlns=\"http://maven.apache.org/SETTINGS/1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd\"> <proxies> <!-- proxy Specification for one proxy, to be used in connecting to the network. <proxy> <id>optional</id> <active>true</active> <protocol>http</protocol> <username>proxyuser</</username> <password>proxypass</password> <host>proxy.host.net</host> <port>80</port> <nonProxyHosts>local.net|some.host.com</nonProxyHosts> </proxy> --> </proxies> <profiles> <!-- Configure the JBoss GA Maven repository --> <profile> <id>jboss-ga-repository</id> <repositories> <repository> <id>jboss-ga-repository</id> <url>http://maven.repository.redhat.com/techpreview/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>jboss-ga-plugin-repository</id> <url>http://maven.repository.redhat.com/techpreview/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> <!-- Configure the JBoss Early Access Maven repository --> <profile> <id>jboss-earlyaccess-repository</id> <repositories> <repository> <id>jboss-earlyaccess-repository</id> <url>http://maven.repository.redhat.com/earlyaccess/all/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>jboss-earlyaccess-plugin-repository</id> <url>http://maven.repository.redhat.com/earlyaccess/all/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <!-- Optionally, make the repositories active by default --> <activeProfile>jboss-ga-repository</activeProfile> <activeProfile>jboss-earlyaccess-repository</activeProfile> </activeProfiles> </settings>" ]
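A quick way to confirm that the settings file above is picked up is to point Maven at it explicitly; both goals are standard Maven plugins, and the commands assume Maven is installed and, for the second one, that you run from a project directory containing a pom.xml.

# Print the effective settings Maven resolves from this file.
mvn -s example-settings.xml help:effective-settings

# Resolve a project's dependencies against the configured Red Hat repositories.
mvn -s example-settings.xml dependency:resolve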
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/Maven_Repository_Configuration_Example
Chapter 12. Monitoring
Chapter 12. Monitoring 12.1. Monitoring overview You can monitor the health of your cluster and virtual machines (VMs) with the following tools: Monitoring OpenShift Virtualization VM health status View the overall health of your OpenShift Virtualization environment in the web console by navigating to the Home Overview page in the OpenShift Container Platform web console. The Status card displays the overall health of OpenShift Virtualization based on the alerts and conditions. OpenShift Container Platform cluster checkup framework Run automated tests on your cluster with the OpenShift Container Platform cluster checkup framework to check the following conditions: Network connectivity and latency between two VMs attached to a secondary network interface VM running a Data Plane Development Kit (DPDK) workload with zero packet loss Cluster storage is optimally configured for OpenShift Virtualization Prometheus queries for virtual resources Query vCPU, network, storage, and guest memory swapping usage and live migration progress. VM custom metrics Configure the node-exporter service to expose internal VM metrics and processes. VM health checks Configure readiness, liveness, and guest agent ping probes and a watchdog for VMs. Runbooks Diagnose and resolve issues that trigger OpenShift Virtualization alerts in the OpenShift Container Platform web console. 12.2. OpenShift Virtualization cluster checkup framework OpenShift Virtualization includes the following predefined checkups that can be used for cluster maintenance and troubleshooting: Latency checkup, which verifies network connectivity and measures latency between two virtual machines (VMs) that are attached to a secondary network interface. Important Before you run a latency checkup, you must first create a bridge interface on the cluster nodes to connect the VM's secondary interface to any interface on the node. If you do not create a bridge interface, the VMs do not start and the job fails. Storage checkup, which verifies if the cluster storage is optimally configured for OpenShift Virtualization. DPDK checkup, which verifies that a node can run a VM with a Data Plane Development Kit (DPDK) workload with zero packet loss. Important The OpenShift Virtualization cluster checkup framework is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 12.2.1. About the OpenShift Virtualization cluster checkup framework A checkup is an automated test workload that allows you to verify if a specific cluster functionality works as expected. The cluster checkup framework uses native Kubernetes resources to configure and execute the checkup. By using predefined checkups, cluster administrators and developers can improve cluster maintainability, troubleshoot unexpected behavior, minimize errors, and save time. They can also review the results of the checkup and share them with experts for further analysis. Vendors can write and publish checkups for features or services that they provide and verify that their customer environments are configured correctly. 
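All of the predefined checkups in this section follow the same lifecycle, summarized in the hedged sketch below; the file names, the checkup name, and the namespace are placeholders, and the checkup-specific manifests and parameters are listed in the procedures that follow.

NS=my-checkup-namespace

# 1. Grant the checkup the permissions it needs.
oc apply -n "$NS" -f checkup-sa-roles-rolebinding.yaml   # ServiceAccount, Role, RoleBinding

# 2. Provide the input parameters and a place for results.
oc apply -n "$NS" -f checkup-config.yaml                 # input config map

# 3. Run the checkup and wait for it to finish.
oc apply -n "$NS" -f checkup-job.yaml
oc wait job my-checkup -n "$NS" --for condition=complete --timeout 10m

# 4. Read the results written back into the config map, then clean up.
oc get configmap my-checkup-config -n "$NS" -o yaml
oc delete job my-checkup -n "$NS"
oc delete configmap my-checkup-config -n "$NS"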
Running a predefined checkup in an existing namespace involves setting up a service account for the checkup, creating the Role and RoleBinding objects for the service account, enabling permissions for the checkup, and creating the input config map and the checkup job. You can run a checkup multiple times. Important You must always: Verify that the checkup image is from a trustworthy source before applying it. Review the checkup permissions before creating the Role and RoleBinding objects. 12.2.2. Running checkups by using the web console Use the following procedures the first time you run checkups by using the web console. For additional checkups, click Run checkup on either checkup tab, and select the appropriate checkup from the drop down menu. 12.2.2.1. Running a latency checkup by using the web console Run a latency checkup to verify network connectivity and measure the latency between two virtual machines attached to a secondary network interface. Prerequisites You must add a NetworkAttachmentDefinition to the namespace. Procedure Navigate to Virtualization Checkups in the web console. Click the Network latency tab. Click Install permissions . Click Run checkup . Enter a name for the checkup in the Name field. Select a NetworkAttachmentDefinition from the drop-down menu. Optional: Set a duration for the latency sample in the Sample duration (seconds) field. Optional: Define a maximum latency time interval by enabling Set maximum desired latency (milliseconds) and defining the time interval. Optional: Target specific nodes by enabling Select nodes and specifying the Source node and Target node . Click Run . You can view the status of the latency checkup in the Checkups list on the Latency checkup tab. Click on the name of the checkup for more details. 12.2.2.2. Running a storage checkup by using the web console Run a storage checkup to validate that storage is working correctly for virtual machines. Procedure Navigate to Virtualization Checkups in the web console. Click the Storage tab. Click Install permissions . Click Run checkup . Enter a name for the checkup in the Name field. Enter a timeout value for the checkup in the Timeout (minutes) fields. Click Run . You can view the status of the storage checkup in the Checkups list on the Storage tab. Click on the name of the checkup for more details. 12.2.3. Running checkups by using the command line Use the following procedures the first time you run checkups by using the command line. 12.2.3.1. Running a latency checkup by using the command line You use a predefined checkup to verify network connectivity and measure latency between two virtual machines (VMs) that are attached to a secondary network interface. The latency checkup uses the ping utility. You run a latency checkup by performing the following steps: Create a service account, roles, and rolebindings to provide cluster access permissions to the latency checkup. Create a config map to provide the input to run the checkup and to store the results. Create a job to run the checkup. Review the results in the config map. Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job. When you are finished, delete the latency checkup resources. Prerequisites You installed the OpenShift CLI ( oc ). The cluster has at least two worker nodes. You configured a network attachment definition for a namespace. Procedure Create a ServiceAccount , Role , and RoleBinding manifest for the latency checkup: Example 12.1. 
Example role manifest file --- apiVersion: v1 kind: ServiceAccount metadata: name: vm-latency-checkup-sa --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kubevirt-vm-latency-checker rules: - apiGroups: ["kubevirt.io"] resources: ["virtualmachineinstances"] verbs: ["get", "create", "delete"] - apiGroups: ["subresources.kubevirt.io"] resources: ["virtualmachineinstances/console"] verbs: ["get"] - apiGroups: ["k8s.cni.cncf.io"] resources: ["network-attachment-definitions"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kubevirt-vm-latency-checker subjects: - kind: ServiceAccount name: vm-latency-checkup-sa roleRef: kind: Role name: kubevirt-vm-latency-checker apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kiagnose-configmap-access rules: - apiGroups: [ "" ] resources: [ "configmaps" ] verbs: ["get", "update"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kiagnose-configmap-access subjects: - kind: ServiceAccount name: vm-latency-checkup-sa roleRef: kind: Role name: kiagnose-configmap-access apiGroup: rbac.authorization.k8s.io Apply the ServiceAccount , Role , and RoleBinding manifest: USD oc apply -n <target_namespace> -f <latency_sa_roles_rolebinding>.yaml 1 1 <target_namespace> is the namespace where the checkup is to be run. This must be an existing namespace where the NetworkAttachmentDefinition object resides. Create a ConfigMap manifest that contains the input parameters for the checkup: Example input config map apiVersion: v1 kind: ConfigMap metadata: name: kubevirt-vm-latency-checkup-config labels: kiagnose/checkup-type: kubevirt-vm-latency data: spec.timeout: 5m spec.param.networkAttachmentDefinitionNamespace: <target_namespace> spec.param.networkAttachmentDefinitionName: "blue-network" 1 spec.param.maxDesiredLatencyMilliseconds: "10" 2 spec.param.sampleDurationSeconds: "5" 3 spec.param.sourceNode: "worker1" 4 spec.param.targetNode: "worker2" 5 1 The name of the NetworkAttachmentDefinition object. 2 Optional: The maximum desired latency, in milliseconds, between the virtual machines. If the measured latency exceeds this value, the checkup fails. 3 Optional: The duration of the latency check, in seconds. 4 Optional: When specified, latency is measured from this node to the target node. If the source node is specified, the spec.param.targetNode field cannot be empty. 5 Optional: When specified, latency is measured from the source node to this node. 
Apply the config map manifest in the target namespace: USD oc apply -n <target_namespace> -f <latency_config_map>.yaml Create a Job manifest to run the checkup: Example job manifest apiVersion: batch/v1 kind: Job metadata: name: kubevirt-vm-latency-checkup labels: kiagnose/checkup-type: kubevirt-vm-latency spec: backoffLimit: 0 template: spec: serviceAccountName: vm-latency-checkup-sa restartPolicy: Never containers: - name: vm-latency-checkup image: registry.redhat.io/container-native-virtualization/vm-network-latency-checkup-rhel9:v4.16.0 securityContext: allowPrivilegeEscalation: false capabilities: drop: ["ALL"] runAsNonRoot: true seccompProfile: type: "RuntimeDefault" env: - name: CONFIGMAP_NAMESPACE value: <target_namespace> - name: CONFIGMAP_NAME value: kubevirt-vm-latency-checkup-config - name: POD_UID valueFrom: fieldRef: fieldPath: metadata.uid Apply the Job manifest: USD oc apply -n <target_namespace> -f <latency_job>.yaml Wait for the job to complete: USD oc wait job kubevirt-vm-latency-checkup -n <target_namespace> --for condition=complete --timeout 6m Review the results of the latency checkup by running the following command. If the maximum measured latency is greater than the value of the spec.param.maxDesiredLatencyMilliseconds attribute, the checkup fails and returns an error. USD oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yaml Example output config map (success) apiVersion: v1 kind: ConfigMap metadata: name: kubevirt-vm-latency-checkup-config namespace: <target_namespace> labels: kiagnose/checkup-type: kubevirt-vm-latency data: spec.timeout: 5m spec.param.networkAttachmentDefinitionNamespace: <target_namespace> spec.param.networkAttachmentDefinitionName: "blue-network" spec.param.maxDesiredLatencyMilliseconds: "10" spec.param.sampleDurationSeconds: "5" spec.param.sourceNode: "worker1" spec.param.targetNode: "worker2" status.succeeded: "true" status.failureReason: "" status.completionTimestamp: "2022-01-01T09:00:00Z" status.startTimestamp: "2022-01-01T09:00:07Z" status.result.avgLatencyNanoSec: "177000" status.result.maxLatencyNanoSec: "244000" 1 status.result.measurementDurationSec: "5" status.result.minLatencyNanoSec: "135000" status.result.sourceNode: "worker1" status.result.targetNode: "worker2" 1 The maximum measured latency in nanoseconds. Optional: To view the detailed job log in case of checkup failure, use the following command: USD oc logs job.batch/kubevirt-vm-latency-checkup -n <target_namespace> Delete the job and config map that you previously created by running the following commands: USD oc delete job -n <target_namespace> kubevirt-vm-latency-checkup USD oc delete config-map -n <target_namespace> kubevirt-vm-latency-checkup-config Optional: If you do not plan to run another checkup, delete the roles manifest: USD oc delete -f <latency_sa_roles_rolebinding>.yaml 12.2.3.2. Running a storage checkup by using the command line Use a predefined checkup to verify that the OpenShift Container Platform cluster storage is configured optimally to run OpenShift Virtualization workloads. Prerequisites You have installed the OpenShift CLI ( oc ). 
The cluster administrator has created the required cluster-reader permissions for the storage checkup service account and namespace, such as in the following example: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: kubevirt-storage-checkup-clustereader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-reader subjects: - kind: ServiceAccount name: storage-checkup-sa namespace: <target_namespace> 1 1 The namespace where the checkup is to be run. Procedure Create a ServiceAccount , Role , and RoleBinding manifest file for the storage checkup: Example 12.2. Example service account, role, and rolebinding manifest --- apiVersion: v1 kind: ServiceAccount metadata: name: storage-checkup-sa --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: storage-checkup-role rules: - apiGroups: [ "" ] resources: [ "configmaps" ] verbs: ["get", "update"] - apiGroups: [ "kubevirt.io" ] resources: [ "virtualmachines" ] verbs: [ "create", "delete" ] - apiGroups: [ "kubevirt.io" ] resources: [ "virtualmachineinstances" ] verbs: [ "get" ] - apiGroups: [ "subresources.kubevirt.io" ] resources: [ "virtualmachineinstances/addvolume", "virtualmachineinstances/removevolume" ] verbs: [ "update" ] - apiGroups: [ "kubevirt.io" ] resources: [ "virtualmachineinstancemigrations" ] verbs: [ "create" ] - apiGroups: [ "cdi.kubevirt.io" ] resources: [ "datavolumes" ] verbs: [ "create", "delete" ] - apiGroups: [ "" ] resources: [ "persistentvolumeclaims" ] verbs: [ "delete" ] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: storage-checkup-role subjects: - kind: ServiceAccount name: storage-checkup-sa roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: storage-checkup-role Apply the ServiceAccount , Role , and RoleBinding manifest in the target namespace: USD oc apply -n <target_namespace> -f <storage_sa_roles_rolebinding>.yaml Create a ConfigMap and Job manifest file. The config map contains the input parameters for the checkup job. 
Example input config map and job manifest --- apiVersion: v1 kind: ConfigMap metadata: name: storage-checkup-config namespace: USDCHECKUP_NAMESPACE data: spec.timeout: 10m spec.param.storageClass: ocs-storagecluster-ceph-rbd-virtualization spec.param.vmiTimeout: 3m --- apiVersion: batch/v1 kind: Job metadata: name: storage-checkup namespace: USDCHECKUP_NAMESPACE spec: backoffLimit: 0 template: spec: serviceAccount: storage-checkup-sa restartPolicy: Never containers: - name: storage-checkup image: quay.io/kiagnose/kubevirt-storage-checkup:main imagePullPolicy: Always env: - name: CONFIGMAP_NAMESPACE value: USDCHECKUP_NAMESPACE - name: CONFIGMAP_NAME value: storage-checkup-config Apply the ConfigMap and Job manifest file in the target namespace to run the checkup: USD oc apply -n <target_namespace> -f <storage_configmap_job>.yaml Wait for the job to complete: USD oc wait job storage-checkup -n <target_namespace> --for condition=complete --timeout 10m Review the results of the checkup by running the following command: USD oc get configmap storage-checkup-config -n <target_namespace> -o yaml Example output config map (success) apiVersion: v1 kind: ConfigMap metadata: name: storage-checkup-config labels: kiagnose/checkup-type: kubevirt-storage data: spec.timeout: 10m status.succeeded: "true" 1 status.failureReason: "" 2 status.startTimestamp: "2023-07-31T13:14:38Z" 3 status.completionTimestamp: "2023-07-31T13:19:41Z" 4 status.result.cnvVersion: 4.16.2 5 status.result.defaultStorageClass: trident-nfs 6 status.result.goldenImagesNoDataSource: <data_import_cron_list> 7 status.result.goldenImagesNotUpToDate: <data_import_cron_list> 8 status.result.ocpVersion: 4.16.0 9 status.result.pvcBound: "true" 10 status.result.storageProfileMissingVolumeSnapshotClass: <storage_class_list> 11 status.result.storageProfilesWithEmptyClaimPropertySets: <storage_profile_list> 12 status.result.storageProfilesWithSmartClone: <storage_profile_list> 13 status.result.storageProfilesWithSpecClaimPropertySets: <storage_profile_list> 14 status.result.storageProfilesWithRWX: |- ocs-storagecluster-ceph-rbd ocs-storagecluster-ceph-rbd-virtualization ocs-storagecluster-cephfs trident-iscsi trident-minio trident-nfs windows-vms status.result.vmBootFromGoldenImage: VMI "vmi-under-test-dhkb8" successfully booted status.result.vmHotplugVolume: |- VMI "vmi-under-test-dhkb8" hotplug volume ready VMI "vmi-under-test-dhkb8" hotplug volume removed status.result.vmLiveMigration: VMI "vmi-under-test-dhkb8" migration completed status.result.vmVolumeClone: 'DV cloneType: "csi-clone"' status.result.vmsWithNonVirtRbdStorageClass: <vm_list> 15 status.result.vmsWithUnsetEfsStorageClass: <vm_list> 16 1 Specifies if the checkup is successful ( true ) or not ( false ). 2 The reason for failure if the checkup fails. 3 The time when the checkup started, in RFC 3339 time format. 4 The time when the checkup has completed, in RFC 3339 time format. 5 The OpenShift Virtualization version. 6 Specifies if there is a default storage class. 7 The list of golden images whose data source is not ready. 8 The list of golden images whose data import cron is not up-to-date. 9 The OpenShift Container Platform version. 10 Specifies if a PVC of 10Mi has been created and bound by the provisioner. 11 The list of storage profiles using snapshot-based clone but missing VolumeSnapshotClass. 12 The list of storage profiles with unknown provisioners. 13 The list of storage profiles with smart clone support (CSI/snapshot). 
14 The list of storage profiles spec-overriden claimPropertySets. 15 The list of virtual machines that use the Ceph RBD storage class when the virtualization storage class exists. 16 The list of virtual machines that use an Elastic File Store (EFS) storage class where the GID and UID are not set in the storage class. Delete the job and config map that you previously created by running the following commands: USD oc delete job -n <target_namespace> storage-checkup USD oc delete config-map -n <target_namespace> storage-checkup-config Optional: If you do not plan to run another checkup, delete the ServiceAccount , Role , and RoleBinding manifest: USD oc delete -f <storage_sa_roles_rolebinding>.yaml 12.2.3.3. Running a DPDK checkup by using the command line Use a predefined checkup to verify that your OpenShift Container Platform cluster node can run a virtual machine (VM) with a Data Plane Development Kit (DPDK) workload with zero packet loss. The DPDK checkup runs traffic between a traffic generator and a VM running a test DPDK application. You run a DPDK checkup by performing the following steps: Create a service account, role, and role bindings for the DPDK checkup. Create a config map to provide the input to run the checkup and to store the results. Create a job to run the checkup. Review the results in the config map. Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job. When you are finished, delete the DPDK checkup resources. Prerequisites You have installed the OpenShift CLI ( oc ). The cluster is configured to run DPDK applications. The project is configured to run DPDK applications. Procedure Create a ServiceAccount , Role , and RoleBinding manifest for the DPDK checkup: Example 12.3. 
Example service account, role, and rolebinding manifest file --- apiVersion: v1 kind: ServiceAccount metadata: name: dpdk-checkup-sa --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kiagnose-configmap-access rules: - apiGroups: [ "" ] resources: [ "configmaps" ] verbs: [ "get", "update" ] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kiagnose-configmap-access subjects: - kind: ServiceAccount name: dpdk-checkup-sa roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kiagnose-configmap-access --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kubevirt-dpdk-checker rules: - apiGroups: [ "kubevirt.io" ] resources: [ "virtualmachineinstances" ] verbs: [ "create", "get", "delete" ] - apiGroups: [ "subresources.kubevirt.io" ] resources: [ "virtualmachineinstances/console" ] verbs: [ "get" ] - apiGroups: [ "" ] resources: [ "configmaps" ] verbs: [ "create", "delete" ] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kubevirt-dpdk-checker subjects: - kind: ServiceAccount name: dpdk-checkup-sa roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kubevirt-dpdk-checker Apply the ServiceAccount , Role , and RoleBinding manifest: USD oc apply -n <target_namespace> -f <dpdk_sa_roles_rolebinding>.yaml Create a ConfigMap manifest that contains the input parameters for the checkup: Example input config map apiVersion: v1 kind: ConfigMap metadata: name: dpdk-checkup-config labels: kiagnose/checkup-type: kubevirt-dpdk data: spec.timeout: 10m spec.param.networkAttachmentDefinitionName: <network_name> 1 spec.param.trafficGenContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.4.0" 2 spec.param.vmUnderTestContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.4.0" 3 1 The name of the NetworkAttachmentDefinition object. 2 The container disk image for the traffic generator. In this example, the image is pulled from the upstream Project Quay Container Registry. 3 The container disk image for the VM under test. In this example, the image is pulled from the upstream Project Quay Container Registry.
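Optionally, before applying the manifest, you can validate it client-side. The following is a minimal sketch rather than part of the official procedure; it assumes the config map is saved as <dpdk_config_map>.yaml, the file name used in the next step:

# Render the ConfigMap locally without creating it, then filter for the input keys
$ oc apply -n <target_namespace> -f <dpdk_config_map>.yaml --dry-run=client -o yaml | grep 'spec.param'

A YAML syntax error, such as a missing quotation mark, causes the dry run to fail, and a missing or misspelled spec.param key is easy to spot in the filtered output.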
Apply the ConfigMap manifest in the target namespace: USD oc apply -n <target_namespace> -f <dpdk_config_map>.yaml Create a Job manifest to run the checkup: Example job manifest apiVersion: batch/v1 kind: Job metadata: name: dpdk-checkup labels: kiagnose/checkup-type: kubevirt-dpdk spec: backoffLimit: 0 template: spec: serviceAccountName: dpdk-checkup-sa restartPolicy: Never containers: - name: dpdk-checkup image: registry.redhat.io/container-native-virtualization/kubevirt-dpdk-checkup-rhel9:v4.16.0 imagePullPolicy: Always securityContext: allowPrivilegeEscalation: false capabilities: drop: ["ALL"] runAsNonRoot: true seccompProfile: type: "RuntimeDefault" env: - name: CONFIGMAP_NAMESPACE value: <target-namespace> - name: CONFIGMAP_NAME value: dpdk-checkup-config - name: POD_UID valueFrom: fieldRef: fieldPath: metadata.uid Apply the Job manifest: USD oc apply -n <target_namespace> -f <dpdk_job>.yaml Wait for the job to complete: USD oc wait job dpdk-checkup -n <target_namespace> --for condition=complete --timeout 10m Review the results of the checkup by running the following command: USD oc get configmap dpdk-checkup-config -n <target_namespace> -o yaml Example output config map (success) apiVersion: v1 kind: ConfigMap metadata: name: dpdk-checkup-config labels: kiagnose/checkup-type: kubevirt-dpdk data: spec.timeout: 10m spec.param.NetworkAttachmentDefinitionName: "dpdk-network-1" spec.param.trafficGenContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.4.0" spec.param.vmUnderTestContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.4.0" status.succeeded: "true" 1 status.failureReason: "" 2 status.startTimestamp: "2023-07-31T13:14:38Z" 3 status.completionTimestamp: "2023-07-31T13:19:41Z" 4 status.result.trafficGenSentPackets: "480000000" 5 status.result.trafficGenOutputErrorPackets: "0" 6 status.result.trafficGenInputErrorPackets: "0" 7 status.result.trafficGenActualNodeName: worker-dpdk1 8 status.result.vmUnderTestActualNodeName: worker-dpdk2 9 status.result.vmUnderTestReceivedPackets: "480000000" 10 status.result.vmUnderTestRxDroppedPackets: "0" 11 status.result.vmUnderTestTxDroppedPackets: "0" 12 1 Specifies if the checkup is successful ( true ) or not ( false ). 2 The reason for failure if the checkup fails. 3 The time when the checkup started, in RFC 3339 time format. 4 The time when the checkup has completed, in RFC 3339 time format. 5 The number of packets sent from the traffic generator. 6 The number of error packets sent from the traffic generator. 7 The number of error packets received by the traffic generator. 8 The node on which the traffic generator VM was scheduled. 9 The node on which the VM under test was scheduled. 10 The number of packets received on the VM under test. 11 The ingress traffic packets that were dropped by the DPDK application. 12 The egress traffic packets that were dropped from the DPDK application. Delete the job and config map that you previously created by running the following commands: USD oc delete job -n <target_namespace> dpdk-checkup USD oc delete config-map -n <target_namespace> dpdk-checkup-config Optional: If you do not plan to run another checkup, delete the ServiceAccount , Role , and RoleBinding manifest: USD oc delete -f <dpdk_sa_roles_rolebinding>.yaml 12.2.3.3.1. DPDK checkup config map parameters The following table shows the mandatory and optional parameters that you can set in the data stanza of the input ConfigMap manifest when you run a cluster DPDK readiness checkup: Table 12.1. 
DPDK checkup config map input parameters Parameter Description Is Mandatory spec.timeout The time, in minutes, before the checkup fails. True spec.param.networkAttachmentDefinitionName The name of the NetworkAttachmentDefinition object of the SR-IOV NICs connected. True spec.param.trafficGenContainerDiskImage The container disk image for the traffic generator. True spec.param.trafficGenTargetNodeName The node on which the traffic generator VM is to be scheduled. The node should be configured to allow DPDK traffic. False spec.param.trafficGenPacketsPerSecond The number of packets per second, in kilo (k) or million (m). The default value is 8m. False spec.param.vmUnderTestContainerDiskImage The container disk image for the VM under test. True spec.param.vmUnderTestTargetNodeName The node on which the VM under test is to be scheduled. The node should be configured to allow DPDK traffic. False spec.param.testDuration The duration, in minutes, for which the traffic generator runs. The default value is 5 minutes. False spec.param.portBandwidthGbps The maximum bandwidth of the SR-IOV NIC. The default value is 10Gbps. False spec.param.verbose When set to true , it increases the verbosity of the checkup log. The default value is false . False 12.2.3.3.2. Building a container disk image for RHEL virtual machines You can build a custom Red Hat Enterprise Linux (RHEL) 8 OS image in qcow2 format and use it to create a container disk image. You can store the container disk image in a registry that is accessible from your cluster and specify the image location in the spec.param.vmUnderTestContainerDiskImage attribute of the DPDK checkup config map. To build a container disk image, you must create an image builder virtual machine (VM). The image builder VM is a RHEL 8 VM that can be used to build custom RHEL images. Prerequisites The image builder VM must run RHEL 8.7 and must have a minimum of 2 CPU cores, 4 GiB RAM, and 20 GB of free space in the /var directory. You have installed the image builder tool and its CLI ( composer-cli ) on the VM. You have installed the virt-customize tool: # dnf install libguestfs-tools You have installed the Podman CLI tool ( podman ). Procedure Verify that you can build a RHEL 8.7 image: # composer-cli distros list Note To run the composer-cli commands as non-root, add your user to the weldr or root groups: # usermod -a -G weldr user USD newgrp weldr Enter the following command to create an image blueprint file in TOML format that contains the packages to be installed, kernel customizations, and the services to be disabled during boot time: USD cat << EOF > dpdk-vm.toml name = "dpdk_image" description = "Image to use with the DPDK checkup" version = "0.0.1" distro = "rhel-87" [[customizations.user]] name = "root" password = "redhat" [[packages]] name = "dpdk" [[packages]] name = "dpdk-tools" [[packages]] name = "driverctl" [[packages]] name = "tuned-profiles-cpu-partitioning" [customizations.kernel] append = "default_hugepagesz=1GB hugepagesz=1G hugepages=1" [customizations.services] disabled = ["NetworkManager-wait-online", "sshd"] EOF Push the blueprint file to the image builder tool by running the following command: # composer-cli blueprints push dpdk-vm.toml Generate the system image by specifying the blueprint name and output file format. The Universally Unique Identifier (UUID) of the image is displayed when you start the compose process. # composer-cli compose start dpdk_image qcow2 Wait for the compose process to complete.
The compose status must show FINISHED before you can continue to the step. # composer-cli compose status Enter the following command to download the qcow2 image file by specifying its UUID: # composer-cli compose image <UUID> Create the customization scripts by running the following commands: USD cat <<EOF >customize-vm #!/bin/bash # Setup hugepages mount mkdir -p /mnt/huge echo "hugetlbfs /mnt/huge hugetlbfs defaults,pagesize=1GB 0 0" >> /etc/fstab # Create vfio-noiommu.conf echo "options vfio enable_unsafe_noiommu_mode=1" > /etc/modprobe.d/vfio-noiommu.conf # Enable guest-exec,guest-exec-status on the qemu-guest-agent configuration sed -i '/^BLACKLIST_RPC=/ { s/guest-exec-status//; s/guest-exec//g }' /etc/sysconfig/qemu-ga sed -i '/^BLACKLIST_RPC=/ { s/,\+/,/g; s/^,\|,USD//g }' /etc/sysconfig/qemu-ga EOF Use the virt-customize tool to customize the image generated by the image builder tool: USD virt-customize -a <UUID>-disk.qcow2 --run=customize-vm --selinux-relabel To create a Dockerfile that contains all the commands to build the container disk image, enter the following command: USD cat << EOF > Dockerfile FROM scratch COPY --chown=107:107 <UUID>-disk.qcow2 /disk/ EOF where: <UUID>-disk.qcow2 Specifies the name of the custom image in qcow2 format. Build and tag the container by running the following command: USD podman build . -t dpdk-rhel:latest Push the container disk image to a registry that is accessible from your cluster by running the following command: USD podman push dpdk-rhel:latest Provide a link to the container disk image in the spec.param.vmUnderTestContainerDiskImage attribute in the DPDK checkup config map. 12.2.4. Additional resources Attaching a virtual machine to multiple networks Using a virtual function in DPDK mode with an Intel NIC Using SR-IOV and the Node Tuning Operator to achieve a DPDK line rate Installing image builder How to register and subscribe a RHEL system to the Red Hat Customer Portal using Red Hat Subscription Manager 12.3. Prometheus queries for virtual resources OpenShift Virtualization provides metrics that you can use to monitor the consumption of cluster infrastructure resources, including vCPU, network, storage, and guest memory swapping. You can also use metrics to query live migration status. 12.3.1. Prerequisites To use the vCPU metric, the schedstats=enable kernel argument must be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler. For more information, see Adding kernel arguments to nodes . For guest memory swapping queries to return data, memory swapping must be enabled on the virtual guests. 12.3.2. Querying metrics for all projects with the OpenShift Container Platform web console You can use the OpenShift Container Platform metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring. As a cluster administrator or as a user with view permissions for all projects, you can access metrics for all default OpenShift Container Platform and user-defined projects in the Metrics UI. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects. You have installed the OpenShift CLI ( oc ). 
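Because the oc CLI is available, you can also run the PromQL queries shown in the following sections without the web console. This is a minimal sketch rather than a supported procedure; it assumes the default thanos-querier route in the openshift-monitoring namespace and a user token that is authorized to query metrics:

# Obtain a token and the Thanos querier endpoint
$ TOKEN=$(oc whoami -t)
$ HOST=$(oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.spec.host}')

# Run a PromQL query against the Prometheus HTTP API
$ curl -sk -H "Authorization: Bearer $TOKEN" "https://$HOST/api/v1/query" \
  --data-urlencode 'query=topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds_total[6m])))'

The query in this example is the vCPU wait time query that is described later in this section.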
Procedure From the Administrator perspective in the OpenShift Container Platform web console, select Observe Metrics . To add one or more queries, do any of the following: Option Description Create a custom query. Add your Prometheus Query Language (PromQL) query to the Expression field. As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. You can use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. You can also move your mouse pointer over a suggested item to view a brief description of that item. Add multiple queries. Select Add query . Duplicate an existing query. Select the Options menu to the query, then choose Duplicate query . Disable a query from being run. Select the Options menu to the query and choose Disable query . To run queries that you created, select Run queries . The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message. Note Queries that operate on large amounts of data might time out or overload the browser when drawing time series graphs. To avoid this, select Hide graph and calibrate your query using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs. Note By default, the query table shows an expanded view that lists every metric and its current value. You can select ˅ to minimize the expanded view for a query. Optional: Save the page URL to use this set of queries again in the future. Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. You can select which metrics are shown by doing any of the following: Option Description Hide all metrics from a query. Click the Options menu for the query and click Hide all series . Hide a specific metric. Go to the query table and click the colored square near the metric name. Zoom into the plot and change the time range. Either: Visually select the time range by clicking and dragging on the plot horizontally. Use the menu in the left upper corner to select the time range. Reset the time range. Select Reset zoom . Display outputs for all queries at a specific point in time. Hold the mouse cursor on the plot at that point. The query outputs will appear in a pop-up box. Hide the plot. Select Hide graph . 12.3.3. Querying metrics for user-defined projects with the OpenShift Container Platform web console You can use the OpenShift Container Platform metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about any user-defined workloads that you are monitoring. As a developer, you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project. In the Developer perspective, the Metrics UI includes some predefined CPU, memory, bandwidth, and network packet queries for the selected project. You can also run custom Prometheus Query Language (PromQL) queries for CPU, memory, bandwidth, network packet and application metrics for the project. Note Developers can only use the Developer perspective and not the Administrator perspective. As a developer, you can only query metrics for one project at a time. Prerequisites You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for. 
You have enabled monitoring for user-defined projects. You have deployed a service in a user-defined project. You have created a ServiceMonitor custom resource definition (CRD) for the service to define how the service is monitored. Procedure From the Developer perspective in the OpenShift Container Platform web console, select Observe Metrics . Select the project that you want to view metrics for from the Project: list. Select a query from the Select query list, or create a custom PromQL query based on the selected query by selecting Show PromQL . The metrics from the queries are visualized on the plot. Note In the Developer perspective, you can only run one query at a time. Explore the visualized metrics by doing any of the following: Option Description Zoom into the plot and change the time range. Either: Visually select the time range by clicking and dragging on the plot horizontally. Use the menu in the left upper corner to select the time range. Reset the time range. Select Reset zoom . Display outputs for all queries at a specific point in time. Hold the mouse cursor on the plot at that point. The query outputs appear in a pop-up box. 12.3.4. Virtualization metrics The following metric descriptions include example Prometheus Query Language (PromQL) queries. These metrics are not an API and might change between versions. For a complete list of virtualization metrics, see KubeVirt components metrics . Note The following examples use topk queries that specify a time period. If virtual machines are deleted during that time period, they can still appear in the query output. 12.3.4.1. vCPU metrics The following query can identify virtual machines that are waiting for Input/Output (I/O): kubevirt_vmi_vcpu_wait_seconds_total Returns the wait time (in seconds) for a virtual machine's vCPU. Type: Counter. A value above '0' means that the vCPU wants to run, but the host scheduler cannot run it yet. This inability to run indicates that there is an issue with I/O. Note To query the vCPU metric, the schedstats=enable kernel argument must first be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler. Example vCPU wait time query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds_total[6m]))) > 0 1 1 This query returns the top 3 VMs waiting for I/O at every given moment over a six-minute time period. 12.3.4.2. Network metrics The following queries can identify virtual machines that are saturating the network: kubevirt_vmi_network_receive_bytes_total Returns the total amount of traffic received (in bytes) on the virtual machine's network. Type: Counter. kubevirt_vmi_network_transmit_bytes_total Returns the total amount of traffic transmitted (in bytes) on the virtual machine's network. Type: Counter. Example network traffic query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0 1 1 This query returns the top 3 VMs transmitting the most network traffic at every given moment over a six-minute time period. 12.3.4.3. Storage metrics 12.3.4.3.1. Storage-related traffic The following queries can identify VMs that are writing large amounts of data: kubevirt_vmi_storage_read_traffic_bytes_total Returns the total amount (in bytes) of the virtual machine's storage-related traffic. Type: Counter. 
kubevirt_vmi_storage_write_traffic_bytes_total Returns the total amount of storage writes (in bytes) of the virtual machine's storage-related traffic. Type: Counter. Example storage-related traffic query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0 1 1 This query returns the top 3 VMs performing the most storage traffic at every given moment over a six-minute time period. 12.3.4.3.2. Storage snapshot data kubevirt_vmsnapshot_disks_restored_from_source Returns the total number of virtual machine disks restored from the source virtual machine. Type: Gauge. kubevirt_vmsnapshot_disks_restored_from_source_bytes Returns the amount of space in bytes restored from the source virtual machine. Type: Gauge. Examples of storage snapshot data queries kubevirt_vmsnapshot_disks_restored_from_source{vm_name="simple-vm", vm_namespace="default"} 1 1 This query returns the total number of virtual machine disks restored from the source virtual machine. kubevirt_vmsnapshot_disks_restored_from_source_bytes{vm_name="simple-vm", vm_namespace="default"} 1 1 This query returns the amount of space in bytes restored from the source virtual machine. 12.3.4.3.3. I/O performance The following queries can determine the I/O performance of storage devices: kubevirt_vmi_storage_iops_read_total Returns the number of read I/O operations the virtual machine is performing per second. Type: Counter. kubevirt_vmi_storage_iops_write_total Returns the number of write I/O operations the virtual machine is performing per second. Type: Counter. Example I/O performance query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0 1 1 This query returns the top 3 VMs performing the most I/O operations per second at every given moment over a six-minute time period. 12.3.4.4. Guest memory swapping metrics The following queries can identify which swap-enabled guests are performing the most memory swapping: kubevirt_vmi_memory_swap_in_traffic_bytes Returns the total amount (in bytes) of memory the virtual guest is swapping in. Type: Gauge. kubevirt_vmi_memory_swap_out_traffic_bytes Returns the total amount (in bytes) of memory the virtual guest is swapping out. Type: Gauge. Example memory swapping query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes[6m]))) > 0 1 1 This query returns the top 3 VMs where the guest is performing the most memory swapping at every given moment over a six-minute time period. Note Memory swapping indicates that the virtual machine is under memory pressure. Increasing the memory allocation of the virtual machine can mitigate this issue. 12.3.4.5. Live migration metrics The following metrics can be queried to show live migration status: kubevirt_vmi_migration_data_processed_bytes The amount of guest operating system data that has migrated to the new virtual machine (VM). Type: Gauge. kubevirt_vmi_migration_data_remaining_bytes The amount of guest operating system data that remains to be migrated. Type: Gauge. kubevirt_vmi_migration_memory_transfer_rate_bytes The rate at which memory is becoming dirty in the guest operating system. Dirty memory is data that has been changed but not yet written to disk. Type: Gauge.
kubevirt_vmi_migrations_in_pending_phase The number of pending migrations. Type: Gauge. kubevirt_vmi_migrations_in_scheduling_phase The number of scheduling migrations. Type: Gauge. kubevirt_vmi_migrations_in_running_phase The number of running migrations. Type: Gauge. kubevirt_vmi_migration_succeeded The number of successfully completed migrations. Type: Gauge. kubevirt_vmi_migration_failed The number of failed migrations. Type: Gauge. 12.3.5. Additional resources About OpenShift Container Platform monitoring Querying Prometheus Prometheus query examples 12.4. Exposing custom metrics for virtual machines OpenShift Container Platform includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. This monitoring stack is based on the Prometheus monitoring system. Prometheus is a time-series database and a rule evaluation engine for metrics. In addition to using the OpenShift Container Platform monitoring stack, you can enable monitoring for user-defined projects by using the CLI and query custom metrics that are exposed for virtual machines through the node-exporter service. 12.4.1. Configuring the node exporter service The node-exporter agent is deployed on every virtual machine in the cluster from which you want to collect metrics. Configure the node-exporter agent as a service to expose internal metrics and processes that are associated with virtual machines. Prerequisites Install the OpenShift Container Platform CLI oc . Log in to the cluster as a user with cluster-admin privileges. Create the cluster-monitoring-config ConfigMap object in the openshift-monitoring project. Configure the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project by setting enableUserWorkload to true . Procedure Create the Service YAML file. In the following example, the file is called node-exporter-service.yaml . kind: Service apiVersion: v1 metadata: name: node-exporter-service 1 namespace: dynamation 2 labels: servicetype: metrics 3 spec: ports: - name: exmet 4 protocol: TCP port: 9100 5 targetPort: 9100 6 type: ClusterIP selector: monitor: metrics 7 1 The node-exporter service that exposes the metrics from the virtual machines. 2 The namespace where the service is created. 3 The label for the service. The ServiceMonitor uses this label to match this service. 4 The name given to the port that exposes metrics on port 9100 for the ClusterIP service. 5 The target port used by node-exporter-service to listen for requests. 6 The TCP port number of the virtual machine that is configured with the monitor label. 7 The label used to match the virtual machine's pods. In this example, any virtual machine's pod with the label monitor and a value of metrics will be matched. Create the node-exporter service: USD oc create -f node-exporter-service.yaml 12.4.2. Configuring a virtual machine with the node exporter service Download the node-exporter file on to the virtual machine. Then, create a systemd service that runs the node-exporter service when the virtual machine boots. Prerequisites The pods for the component are running in the openshift-user-workload-monitoring project. Grant the monitoring-edit role to users who need to monitor this user-defined project. Procedure Log on to the virtual machine. Download the node-exporter file on to the virtual machine by using the directory path that applies to the version of node-exporter file. 
USD wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz Extract the executable and place it in the /usr/bin directory. USD sudo tar xvf node_exporter-1.3.1.linux-amd64.tar.gz \ --directory /usr/bin --strip 1 "*/node_exporter" Create a node_exporter.service file in this directory path: /etc/systemd/system . This systemd service file runs the node-exporter service when the virtual machine reboots. [Unit] Description=Prometheus Metrics Exporter After=network.target StartLimitIntervalSec=0 [Service] Type=simple Restart=always RestartSec=1 User=root ExecStart=/usr/bin/node_exporter [Install] WantedBy=multi-user.target Enable and start the systemd service. USD sudo systemctl enable node_exporter.service USD sudo systemctl start node_exporter.service Verification Verify that the node-exporter agent is reporting metrics from the virtual machine. USD curl http://localhost:9100/metrics Example output go_gc_duration_seconds{quantile="0"} 1.5244e-05 go_gc_duration_seconds{quantile="0.25"} 3.0449e-05 go_gc_duration_seconds{quantile="0.5"} 3.7913e-05 12.4.3. Creating a custom monitoring label for virtual machines To enable queries to multiple virtual machines from a single service, add a custom label in the virtual machine's YAML file. Prerequisites Install the OpenShift Container Platform CLI oc . Log in as a user with cluster-admin privileges. Access to the web console for stop and restart a virtual machine. Procedure Edit the template spec of your virtual machine configuration file. In this example, the label monitor has the value metrics . spec: template: metadata: labels: monitor: metrics Stop and restart the virtual machine to create a new pod with the label name given to the monitor label. 12.4.3.1. Querying the node-exporter service for metrics Metrics are exposed for virtual machines through an HTTP service endpoint under the /metrics canonical name. When you query for metrics, Prometheus directly scrapes the metrics from the metrics endpoint exposed by the virtual machines and presents these metrics for viewing. Prerequisites You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role. You have enabled monitoring for the user-defined project by configuring the node-exporter service. Procedure Obtain the HTTP service endpoint by specifying the namespace for the service: USD oc get service -n <namespace> <node-exporter-service> To list all available metrics for the node-exporter service, query the metrics resource. 
USD curl http://<172.30.226.162:9100>/metrics | grep -vE "^#|^USD" Example output node_arp_entries{device="eth0"} 1 node_boot_time_seconds 1.643153218e+09 node_context_switches_total 4.4938158e+07 node_cooling_device_cur_state{name="0",type="Processor"} 0 node_cooling_device_max_state{name="0",type="Processor"} 0 node_cpu_guest_seconds_total{cpu="0",mode="nice"} 0 node_cpu_guest_seconds_total{cpu="0",mode="user"} 0 node_cpu_seconds_total{cpu="0",mode="idle"} 1.10586485e+06 node_cpu_seconds_total{cpu="0",mode="iowait"} 37.61 node_cpu_seconds_total{cpu="0",mode="irq"} 233.91 node_cpu_seconds_total{cpu="0",mode="nice"} 551.47 node_cpu_seconds_total{cpu="0",mode="softirq"} 87.3 node_cpu_seconds_total{cpu="0",mode="steal"} 86.12 node_cpu_seconds_total{cpu="0",mode="system"} 464.15 node_cpu_seconds_total{cpu="0",mode="user"} 1075.2 node_disk_discard_time_seconds_total{device="vda"} 0 node_disk_discard_time_seconds_total{device="vdb"} 0 node_disk_discarded_sectors_total{device="vda"} 0 node_disk_discarded_sectors_total{device="vdb"} 0 node_disk_discards_completed_total{device="vda"} 0 node_disk_discards_completed_total{device="vdb"} 0 node_disk_discards_merged_total{device="vda"} 0 node_disk_discards_merged_total{device="vdb"} 0 node_disk_info{device="vda",major="252",minor="0"} 1 node_disk_info{device="vdb",major="252",minor="16"} 1 node_disk_io_now{device="vda"} 0 node_disk_io_now{device="vdb"} 0 node_disk_io_time_seconds_total{device="vda"} 174 node_disk_io_time_seconds_total{device="vdb"} 0.054 node_disk_io_time_weighted_seconds_total{device="vda"} 259.79200000000003 node_disk_io_time_weighted_seconds_total{device="vdb"} 0.039 node_disk_read_bytes_total{device="vda"} 3.71867136e+08 node_disk_read_bytes_total{device="vdb"} 366592 node_disk_read_time_seconds_total{device="vda"} 19.128 node_disk_read_time_seconds_total{device="vdb"} 0.039 node_disk_reads_completed_total{device="vda"} 5619 node_disk_reads_completed_total{device="vdb"} 96 node_disk_reads_merged_total{device="vda"} 5 node_disk_reads_merged_total{device="vdb"} 0 node_disk_write_time_seconds_total{device="vda"} 240.66400000000002 node_disk_write_time_seconds_total{device="vdb"} 0 node_disk_writes_completed_total{device="vda"} 71584 node_disk_writes_completed_total{device="vdb"} 0 node_disk_writes_merged_total{device="vda"} 19761 node_disk_writes_merged_total{device="vdb"} 0 node_disk_written_bytes_total{device="vda"} 2.007924224e+09 node_disk_written_bytes_total{device="vdb"} 0 12.4.4. Creating a ServiceMonitor resource for the node exporter service You can use a Prometheus client library and scrape metrics from the /metrics endpoint to access and view the metrics exposed by the node-exporter service. Use a ServiceMonitor custom resource definition (CRD) to monitor the node exporter service. Prerequisites You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role. You have enabled monitoring for the user-defined project by configuring the node-exporter service. Procedure Create a YAML file for the ServiceMonitor resource configuration. In this example, the service monitor matches any service with the label metrics and queries the exmet port every 30 seconds. apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: k8s-app: node-exporter-metrics-monitor name: node-exporter-metrics-monitor 1 namespace: dynamation 2 spec: endpoints: - interval: 30s 3 port: exmet 4 scheme: http selector: matchLabels: servicetype: metrics 1 The name of the ServiceMonitor . 
2 The namespace where the ServiceMonitor is created. 3 The interval at which the port will be queried. 4 The name of the port that is queried every 30 seconds Create the ServiceMonitor configuration for the node-exporter service. USD oc create -f node-exporter-metrics-monitor.yaml 12.4.4.1. Accessing the node exporter service outside the cluster You can access the node-exporter service outside the cluster and view the exposed metrics. Prerequisites You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role. You have enabled monitoring for the user-defined project by configuring the node-exporter service. Procedure Expose the node-exporter service. USD oc expose service -n <namespace> <node_exporter_service_name> Obtain the FQDN (Fully Qualified Domain Name) for the route. USD oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host Example output NAME DNS node-exporter-service node-exporter-service-dynamation.apps.cluster.example.org Use the curl command to display metrics for the node-exporter service. USD curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics Example output go_gc_duration_seconds{quantile="0"} 1.5382e-05 go_gc_duration_seconds{quantile="0.25"} 3.1163e-05 go_gc_duration_seconds{quantile="0.5"} 3.8546e-05 go_gc_duration_seconds{quantile="0.75"} 4.9139e-05 go_gc_duration_seconds{quantile="1"} 0.000189423 12.4.5. Additional resources Core platform monitoring first steps Enabling monitoring for user-defined projects Accessing metrics as a developer Reviewing monitoring dashboards as a developer Monitoring application health by using health checks Creating and using config maps Controlling virtual machine states 12.5. Exposing downward metrics for virtual machines As an administrator, you can expose a limited set of host and virtual machine (VM) metrics to a guest VM by first enabling a downwardMetrics feature gate and then configuring a downwardMetrics device. Users can view the metrics results by using the command line or the vm-dump-metrics tool . Note On Red Hat Enterprise Linux (RHEL) 9, use the command line to view downward metrics. See Viewing downward metrics by using the command line . The vm-dump-metrics tool is not supported on the Red Hat Enterprise Linux (RHEL) 9 platform. 12.5.1. Enabling or disabling the downwardMetrics feature gate You can enable or disable the downwardMetrics feature gate by performing either of the following actions: Editing the HyperConverged custom resource (CR) in your default editor Using the command line 12.5.1.1. Enabling or disabling the downward metrics feature gate in a YAML file To expose downward metrics for a host virtual machine, you can enable the downwardMetrics feature gate by editing a YAML file. Prerequisites You must have administrator privileges to enable the feature gate. Procedure Open the HyperConverged custom resource (CR) in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Choose to enable or disable the downwardMetrics feature gate as follows: To enable the downwardMetrics feature gate, add and then set spec.featureGates.downwardMetrics to true . For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: downwardMetrics: true # ... To disable the downwardMetrics feature gate, set spec.featureGates.downwardMetrics to false . 
For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: downwardMetrics: false # ... 12.5.1.2. Enabling or disabling the downward metrics feature gate from the command line To expose downward metrics for a host virtual machine, you can enable the downwardMetrics feature gate by using the command line. Prerequisites You must have administrator privileges to enable the feature gate. Procedure Choose to enable or disable the downwardMetrics feature gate as follows: Enable the downwardMetrics feature gate by running the command shown in the following example: USD oc patch hco kubevirt-hyperconverged -n openshift-cnv \ --type json -p '[{"op": "replace", "path": \ "/spec/featureGates/downwardMetrics" \ "value": true}]' Disable the downwardMetrics feature gate by running the command shown in the following example: USD oc patch hco kubevirt-hyperconverged -n openshift-cnv \ --type json -p '[{"op": "replace", "path": \ "/spec/featureGates/downwardMetrics" \ "value": false}]' 12.5.2. Configuring a downward metrics device You enable the capturing of downward metrics for a host VM by creating a configuration file that includes a downwardMetrics device. Adding this device establishes that the metrics are exposed through a virtio-serial port. Prerequisites You must first enable the downwardMetrics feature gate. Procedure Edit or create a YAML file that includes a downwardMetrics device, as shown in the following example: Example downwardMetrics configuration file apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: fedora namespace: default spec: dataVolumeTemplates: - metadata: name: fedora-volume spec: sourceRef: kind: DataSource name: fedora namespace: openshift-virtualization-os-images storage: resources: {} storageClassName: hostpath-csi-basic instancetype: name: u1.medium preference: name: fedora running: true template: metadata: labels: app.kubernetes.io/name: headless spec: domain: devices: downwardMetrics: {} 1 subdomain: headless volumes: - dataVolume: name: fedora-volume name: rootdisk - cloudInitNoCloud: userData: | #cloud-config chpasswd: expire: false password: '<password>' 2 user: fedora name: cloudinitdisk 1 The downwardMetrics device. 2 The password for the fedora user. 12.5.3. Viewing downward metrics You can view downward metrics by using either of the following options: The command line interface (CLI) The vm-dump-metrics tool Note On Red Hat Enterprise Linux (RHEL) 9, use the command line to view downward metrics. The vm-dump-metrics tool is not supported on the Red Hat Enterprise Linux (RHEL) 9 platform. 12.5.3.1. Viewing downward metrics by using the command line You can view downward metrics by entering a command from inside a guest virtual machine (VM). Procedure Run the following commands: USD sudo sh -c 'printf "GET /metrics/XML\n\n" > /dev/virtio-ports/org.github.vhostmd.1' USD sudo cat /dev/virtio-ports/org.github.vhostmd.1 12.5.3.2. Viewing downward metrics by using the vm-dump-metrics tool To view downward metrics, install the vm-dump-metrics tool and then use the tool to expose the metrics results. Note On Red Hat Enterprise Linux (RHEL) 9, use the command line to view downward metrics. The vm-dump-metrics tool is not supported on the Red Hat Enterprise Linux (RHEL) 9 platform. 
Procedure Install the vm-dump-metrics tool by running the following command: USD sudo dnf install -y vm-dump-metrics Retrieve the metrics results by running the following command: USD sudo vm-dump-metrics Example output <metrics> <metric type="string" context="host"> <name>HostName</name> <value>node01</value> [...] <metric type="int64" context="host" unit="s"> <name>Time</name> <value>1619008605</value> </metric> <metric type="string" context="host"> <name>VirtualizationVendor</name> <value>kubevirt.io</value> </metric> </metrics> 12.6. Virtual machine health checks You can configure virtual machine (VM) health checks by defining readiness and liveness probes in the VirtualMachine resource. 12.6.1. About readiness and liveness probes Use readiness and liveness probes to detect and handle unhealthy virtual machines (VMs). You can include one or more probes in the specification of the VM to ensure that traffic does not reach a VM that is not ready for it and that a new VM is created when a VM becomes unresponsive. A readiness probe determines whether a VM is ready to accept service requests. If the probe fails, the VM is removed from the list of available endpoints until the VM is ready. A liveness probe determines whether a VM is responsive. If the probe fails, the VM is deleted and a new VM is created to restore responsiveness. You can configure readiness and liveness probes by setting the spec.readinessProbe and the spec.livenessProbe fields of the VirtualMachine object. These fields support the following tests: HTTP GET The probe determines the health of the VM by using a web hook. The test is successful if the HTTP response code is between 200 and 399. You can use an HTTP GET test with applications that return HTTP status codes when they are completely initialized. TCP socket The probe attempts to open a socket to the VM. The VM is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete. Guest agent ping The probe uses the guest-ping command to determine if the QEMU guest agent is running on the virtual machine. 12.6.1.1. Defining an HTTP readiness probe Define an HTTP readiness probe by setting the spec.readinessProbe.httpGet field of the virtual machine (VM) configuration. Procedure Include details of the readiness probe in the VM configuration file. Sample readiness probe with an HTTP GET test apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace # ... spec: template: spec: readinessProbe: httpGet: 1 port: 1500 2 path: /healthz 3 httpHeaders: - name: Custom-Header value: Awesome initialDelaySeconds: 120 4 periodSeconds: 20 5 timeoutSeconds: 10 6 failureThreshold: 3 7 successThreshold: 3 8 # ... 1 The HTTP GET request to perform to connect to the VM. 2 The port of the VM that the probe queries. In the above example, the probe queries port 1500. 3 The path to access on the HTTP server. In the above example, if the handler for the server's /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is removed from the list of available endpoints. 4 The time, in seconds, after the VM starts before the readiness probe is initiated. 5 The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds . 
6 The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds . 7 The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready . 8 The number of times that the probe must report success, after a failure, to be considered successful. The default is 1. Create the VM by running the following command: USD oc create -f <file_name>.yaml 12.6.1.2. Defining a TCP readiness probe Define a TCP readiness probe by setting the spec.readinessProbe.tcpSocket field of the virtual machine (VM) configuration. Procedure Include details of the TCP readiness probe in the VM configuration file. Sample readiness probe with a TCP socket test apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace # ... spec: template: spec: readinessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 tcpSocket: 3 port: 1500 4 timeoutSeconds: 10 5 # ... 1 The time, in seconds, after the VM starts before the readiness probe is initiated. 2 The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds . 3 The TCP action to perform. 4 The port of the VM that the probe queries. 5 The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds . Create the VM by running the following command: USD oc create -f <file_name>.yaml 12.6.1.3. Defining an HTTP liveness probe Define an HTTP liveness probe by setting the spec.livenessProbe.httpGet field of the virtual machine (VM) configuration. You can define both HTTP and TCP tests for liveness probes in the same way as readiness probes. This procedure configures a sample liveness probe with an HTTP GET test. Procedure Include details of the HTTP liveness probe in the VM configuration file. Sample liveness probe with an HTTP GET test apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace # ... spec: template: spec: livenessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 httpGet: 3 port: 1500 4 path: /healthz 5 httpHeaders: - name: Custom-Header value: Awesome timeoutSeconds: 10 6 # ... 1 The time, in seconds, after the VM starts before the liveness probe is initiated. 2 The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds . 3 The HTTP GET request to perform to connect to the VM. 4 The port of the VM that the probe queries. In the above example, the probe queries port 1500. The VM installs and runs a minimal HTTP server on port 1500 via cloud-init. 5 The path to access on the HTTP server. In the above example, if the handler for the server's /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is deleted and a new VM is created. 6 The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds . Create the VM by running the following command: USD oc create -f <file_name>.yaml 12.6.2. 
Defining a watchdog You can define a watchdog to monitor the health of the guest operating system by performing the following steps: Configure a watchdog device for the virtual machine (VM). Install the watchdog agent on the guest. The watchdog device monitors the agent and performs one of the following actions if the guest operating system is unresponsive: poweroff : The VM powers down immediately. If spec.running is set to true or spec.runStrategy is not set to manual , then the VM reboots. reset : The VM reboots in place and the guest operating system cannot react. Note The reboot time might cause liveness probes to time out. If cluster-level protections detect a failed liveness probe, the VM might be forcibly rescheduled, increasing the reboot time. shutdown : The VM gracefully powers down by stopping all services. Note Watchdog is not available for Windows VMs. 12.6.2.1. Configuring a watchdog device for the virtual machine You configure a watchdog device for the virtual machine (VM). Prerequisites The VM must have kernel support for an i6300esb watchdog device. Red Hat Enterprise Linux (RHEL) images support i6300esb . Procedure Create a YAML file with the following contents: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog name: <vm-name> spec: running: false template: metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog spec: domain: devices: watchdog: name: <watchdog> i6300esb: action: "poweroff" 1 # ... 1 Specify poweroff , reset , or shutdown . The example above configures the i6300esb watchdog device on a RHEL8 VM with the poweroff action and exposes the device as /dev/watchdog . This device can now be used by the watchdog binary. Apply the YAML file to your cluster by running the following command: USD oc apply -f <file_name>.yaml Important This procedure is provided for testing watchdog functionality only and must not be run on production machines. Run the following command to verify that the VM is connected to the watchdog device: USD lspci | grep watchdog -i Run one of the following commands to confirm the watchdog is active: Trigger a kernel panic: # echo c > /proc/sysrq-trigger Stop the watchdog service: # pkill -9 watchdog 12.6.2.2. Installing the watchdog agent on the guest You install the watchdog agent on the guest and start the watchdog service. Procedure Log in to the virtual machine as root user. Install the watchdog package and its dependencies: # yum install watchdog Uncomment the following line in the /etc/watchdog.conf file and save the changes: #watchdog-device = /dev/watchdog Enable the watchdog service to start on boot: # systemctl enable --now watchdog.service 12.6.3. Defining a guest agent ping probe Define a guest agent ping probe by setting the spec.readinessProbe.guestAgentPing field of the virtual machine (VM) configuration. Important The guest agent ping probe is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites The QEMU guest agent must be installed and enabled on the virtual machine. 
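Before you configure the probe, you can confirm that the guest agent is connected to a running virtual machine. The following is a minimal sketch that reuses the fedora-vm name and example-namespace namespace from the samples in this section; the AgentConnected condition on the VirtualMachineInstance reports the agent status:

# Print the AgentConnected condition of the running VMI; "True" means the agent is reachable
$ oc get vmi fedora-vm -n example-namespace -o jsonpath='{.status.conditions[?(@.type=="AgentConnected")].status}'

If the command prints nothing or False, install and enable the QEMU guest agent in the guest before you add the guestAgentPing probe.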
Procedure Include details of the guest agent ping probe in the VM configuration file. For example: Sample guest agent ping probe apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace # ... spec: template: spec: readinessProbe: guestAgentPing: {} 1 initialDelaySeconds: 120 2 periodSeconds: 20 3 timeoutSeconds: 10 4 failureThreshold: 3 5 successThreshold: 3 6 # ... 1 The guest agent ping probe to connect to the VM. 2 Optional: The time, in seconds, after the VM starts before the guest agent probe is initiated. 3 Optional: The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds . 4 Optional: The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds . 5 Optional: The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready . 6 Optional: The number of times that the probe must report success, after a failure, to be considered successful. The default is 1. Create the VM by running the following command: USD oc create -f <file_name>.yaml 12.6.4. Additional resources Monitoring application health by using health checks 12.7. OpenShift Virtualization runbooks Runbooks for the OpenShift Virtualization Operator are maintained in the openshift/runbooks Git repository, and you can view them on GitHub. To diagnose and resolve issues that trigger OpenShift Virtualization alerts , follow the procedures in the runbooks. OpenShift Virtualization alerts are displayed in the Virtualization Overview tab in the web console. 12.7.1. CDIDataImportCronOutdated View the runbook for the CDIDataImportCronOutdated alert. 12.7.2. CDIDataVolumeUnusualRestartCount View the runbook for the CDIDataVolumeUnusualRestartCount alert. 12.7.3. CDIDefaultStorageClassDegraded View the runbook for the CDIDefaultStorageClassDegraded alert. 12.7.4. CDIMultipleDefaultVirtStorageClasses View the runbook for the CDIMultipleDefaultVirtStorageClasses alert. 12.7.5. CDINoDefaultStorageClass View the runbook for the CDINoDefaultStorageClass alert. 12.7.6. CDINotReady View the runbook for the CDINotReady alert. 12.7.7. CDIOperatorDown View the runbook for the CDIOperatorDown alert. 12.7.8. CDIStorageProfilesIncomplete View the runbook for the CDIStorageProfilesIncomplete alert. 12.7.9. CnaoDown View the runbook for the CnaoDown alert. 12.7.10. CnaoNMstateMigration View the runbook for the CnaoNMstateMigration alert. 12.7.11. HCOInstallationIncomplete View the runbook for the HCOInstallationIncomplete alert. 12.7.12. HPPNotReady View the runbook for the HPPNotReady alert. 12.7.13. HPPOperatorDown View the runbook for the HPPOperatorDown alert. 12.7.14. HPPSharingPoolPathWithOS View the runbook for the HPPSharingPoolPathWithOS alert. 12.7.15. KubemacpoolDown View the runbook for the KubemacpoolDown alert. 12.7.16. KubeMacPoolDuplicateMacsFound View the runbook for the KubeMacPoolDuplicateMacsFound alert. 12.7.17. KubeVirtComponentExceedsRequestedCPU The KubeVirtComponentExceedsRequestedCPU alert is deprecated . 12.7.18. KubeVirtComponentExceedsRequestedMemory The KubeVirtComponentExceedsRequestedMemory alert is deprecated . 12.7.19. KubeVirtCRModified View the runbook for the KubeVirtCRModified alert. 12.7.20. KubeVirtDeprecatedAPIRequested View the runbook for the KubeVirtDeprecatedAPIRequested alert. 12.7.21. 
KubeVirtNoAvailableNodesToRunVMs View the runbook for the KubeVirtNoAvailableNodesToRunVMs alert. 12.7.22. KubevirtVmHighMemoryUsage View the runbook for the KubevirtVmHighMemoryUsage alert. 12.7.23. KubeVirtVMIExcessiveMigrations View the runbook for the KubeVirtVMIExcessiveMigrations alert. 12.7.24. LowKVMNodesCount View the runbook for the LowKVMNodesCount alert. 12.7.25. LowReadyVirtControllersCount View the runbook for the LowReadyVirtControllersCount alert. 12.7.26. LowReadyVirtOperatorsCount View the runbook for the LowReadyVirtOperatorsCount alert. 12.7.27. LowVirtAPICount View the runbook for the LowVirtAPICount alert. 12.7.28. LowVirtControllersCount View the runbook for the LowVirtControllersCount alert. 12.7.29. LowVirtOperatorCount View the runbook for the LowVirtOperatorCount alert. 12.7.30. NetworkAddonsConfigNotReady View the runbook for the NetworkAddonsConfigNotReady alert. 12.7.31. NoLeadingVirtOperator View the runbook for the NoLeadingVirtOperator alert. 12.7.32. NoReadyVirtController View the runbook for the NoReadyVirtController alert. 12.7.33. NoReadyVirtOperator View the runbook for the NoReadyVirtOperator alert. 12.7.34. OrphanedVirtualMachineInstances View the runbook for the OrphanedVirtualMachineInstances alert. 12.7.35. OutdatedVirtualMachineInstanceWorkloads View the runbook for the OutdatedVirtualMachineInstanceWorkloads alert. 12.7.36. SingleStackIPv6Unsupported View the runbook for the SingleStackIPv6Unsupported alert. 12.7.37. SSPCommonTemplatesModificationReverted View the runbook for the SSPCommonTemplatesModificationReverted alert. 12.7.38. SSPDown View the runbook for the SSPDown alert. 12.7.39. SSPFailingToReconcile View the runbook for the SSPFailingToReconcile alert. 12.7.40. SSPHighRateRejectedVms View the runbook for the SSPHighRateRejectedVms alert. 12.7.41. SSPTemplateValidatorDown View the runbook for the SSPTemplateValidatorDown alert. 12.7.42. UnsupportedHCOModification View the runbook for the UnsupportedHCOModification alert. 12.7.43. VirtAPIDown View the runbook for the VirtAPIDown alert. 12.7.44. VirtApiRESTErrorsBurst View the runbook for the VirtApiRESTErrorsBurst alert. 12.7.45. VirtApiRESTErrorsHigh View the runbook for the VirtApiRESTErrorsHigh alert. 12.7.46. VirtControllerDown View the runbook for the VirtControllerDown alert. 12.7.47. VirtControllerRESTErrorsBurst View the runbook for the VirtControllerRESTErrorsBurst alert. 12.7.48. VirtControllerRESTErrorsHigh View the runbook for the VirtControllerRESTErrorsHigh alert. 12.7.49. VirtHandlerDaemonSetRolloutFailing View the runbook for the VirtHandlerDaemonSetRolloutFailing alert. 12.7.50. VirtHandlerRESTErrorsBurst View the runbook for the VirtHandlerRESTErrorsBurst alert. 12.7.51. VirtHandlerRESTErrorsHigh View the runbook for the VirtHandlerRESTErrorsHigh alert. 12.7.52. VirtOperatorDown View the runbook for the VirtOperatorDown alert. 12.7.53. VirtOperatorRESTErrorsBurst View the runbook for the VirtOperatorRESTErrorsBurst alert. 12.7.54. VirtOperatorRESTErrorsHigh View the runbook for the VirtOperatorRESTErrorsHigh alert. 12.7.55. VirtualMachineCRCErrors The runbook for the VirtualMachineCRCErrors alert is deprecated because the alert was renamed to VMStorageClassWarning . View the runbook for the VMStorageClassWarning alert. 12.7.56. VMCannotBeEvicted View the runbook for the VMCannotBeEvicted alert. 12.7.57. VMStorageClassWarning View the runbook for the VMStorageClassWarning alert.
[ "--- apiVersion: v1 kind: ServiceAccount metadata: name: vm-latency-checkup-sa --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kubevirt-vm-latency-checker rules: - apiGroups: [\"kubevirt.io\"] resources: [\"virtualmachineinstances\"] verbs: [\"get\", \"create\", \"delete\"] - apiGroups: [\"subresources.kubevirt.io\"] resources: [\"virtualmachineinstances/console\"] verbs: [\"get\"] - apiGroups: [\"k8s.cni.cncf.io\"] resources: [\"network-attachment-definitions\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kubevirt-vm-latency-checker subjects: - kind: ServiceAccount name: vm-latency-checkup-sa roleRef: kind: Role name: kubevirt-vm-latency-checker apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kiagnose-configmap-access rules: - apiGroups: [ \"\" ] resources: [ \"configmaps\" ] verbs: [\"get\", \"update\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kiagnose-configmap-access subjects: - kind: ServiceAccount name: vm-latency-checkup-sa roleRef: kind: Role name: kiagnose-configmap-access apiGroup: rbac.authorization.k8s.io", "oc apply -n <target_namespace> -f <latency_sa_roles_rolebinding>.yaml 1", "apiVersion: v1 kind: ConfigMap metadata: name: kubevirt-vm-latency-checkup-config labels: kiagnose/checkup-type: kubevirt-vm-latency data: spec.timeout: 5m spec.param.networkAttachmentDefinitionNamespace: <target_namespace> spec.param.networkAttachmentDefinitionName: \"blue-network\" 1 spec.param.maxDesiredLatencyMilliseconds: \"10\" 2 spec.param.sampleDurationSeconds: \"5\" 3 spec.param.sourceNode: \"worker1\" 4 spec.param.targetNode: \"worker2\" 5", "oc apply -n <target_namespace> -f <latency_config_map>.yaml", "apiVersion: batch/v1 kind: Job metadata: name: kubevirt-vm-latency-checkup labels: kiagnose/checkup-type: kubevirt-vm-latency spec: backoffLimit: 0 template: spec: serviceAccountName: vm-latency-checkup-sa restartPolicy: Never containers: - name: vm-latency-checkup image: registry.redhat.io/container-native-virtualization/vm-network-latency-checkup-rhel9:v4.16.0 securityContext: allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] runAsNonRoot: true seccompProfile: type: \"RuntimeDefault\" env: - name: CONFIGMAP_NAMESPACE value: <target_namespace> - name: CONFIGMAP_NAME value: kubevirt-vm-latency-checkup-config - name: POD_UID valueFrom: fieldRef: fieldPath: metadata.uid", "oc apply -n <target_namespace> -f <latency_job>.yaml", "oc wait job kubevirt-vm-latency-checkup -n <target_namespace> --for condition=complete --timeout 6m", "oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yaml", "apiVersion: v1 kind: ConfigMap metadata: name: kubevirt-vm-latency-checkup-config namespace: <target_namespace> labels: kiagnose/checkup-type: kubevirt-vm-latency data: spec.timeout: 5m spec.param.networkAttachmentDefinitionNamespace: <target_namespace> spec.param.networkAttachmentDefinitionName: \"blue-network\" spec.param.maxDesiredLatencyMilliseconds: \"10\" spec.param.sampleDurationSeconds: \"5\" spec.param.sourceNode: \"worker1\" spec.param.targetNode: \"worker2\" status.succeeded: \"true\" status.failureReason: \"\" status.completionTimestamp: \"2022-01-01T09:00:00Z\" status.startTimestamp: \"2022-01-01T09:00:07Z\" status.result.avgLatencyNanoSec: \"177000\" status.result.maxLatencyNanoSec: \"244000\" 1 status.result.measurementDurationSec: \"5\" status.result.minLatencyNanoSec: \"135000\" 
status.result.sourceNode: \"worker1\" status.result.targetNode: \"worker2\"", "oc logs job.batch/kubevirt-vm-latency-checkup -n <target_namespace>", "oc delete job -n <target_namespace> kubevirt-vm-latency-checkup", "oc delete config-map -n <target_namespace> kubevirt-vm-latency-checkup-config", "oc delete -f <latency_sa_roles_rolebinding>.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: kubevirt-storage-checkup-clustereader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-reader subjects: - kind: ServiceAccount name: storage-checkup-sa namespace: <target_namespace> 1", "--- apiVersion: v1 kind: ServiceAccount metadata: name: storage-checkup-sa --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: storage-checkup-role rules: - apiGroups: [ \"\" ] resources: [ \"configmaps\" ] verbs: [\"get\", \"update\"] - apiGroups: [ \"kubevirt.io\" ] resources: [ \"virtualmachines\" ] verbs: [ \"create\", \"delete\" ] - apiGroups: [ \"kubevirt.io\" ] resources: [ \"virtualmachineinstances\" ] verbs: [ \"get\" ] - apiGroups: [ \"subresources.kubevirt.io\" ] resources: [ \"virtualmachineinstances/addvolume\", \"virtualmachineinstances/removevolume\" ] verbs: [ \"update\" ] - apiGroups: [ \"kubevirt.io\" ] resources: [ \"virtualmachineinstancemigrations\" ] verbs: [ \"create\" ] - apiGroups: [ \"cdi.kubevirt.io\" ] resources: [ \"datavolumes\" ] verbs: [ \"create\", \"delete\" ] - apiGroups: [ \"\" ] resources: [ \"persistentvolumeclaims\" ] verbs: [ \"delete\" ] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: storage-checkup-role subjects: - kind: ServiceAccount name: storage-checkup-sa roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: storage-checkup-role", "oc apply -n <target_namespace> -f <storage_sa_roles_rolebinding>.yaml", "--- apiVersion: v1 kind: ConfigMap metadata: name: storage-checkup-config namespace: USDCHECKUP_NAMESPACE data: spec.timeout: 10m spec.param.storageClass: ocs-storagecluster-ceph-rbd-virtualization spec.param.vmiTimeout: 3m --- apiVersion: batch/v1 kind: Job metadata: name: storage-checkup namespace: USDCHECKUP_NAMESPACE spec: backoffLimit: 0 template: spec: serviceAccount: storage-checkup-sa restartPolicy: Never containers: - name: storage-checkup image: quay.io/kiagnose/kubevirt-storage-checkup:main imagePullPolicy: Always env: - name: CONFIGMAP_NAMESPACE value: USDCHECKUP_NAMESPACE - name: CONFIGMAP_NAME value: storage-checkup-config", "oc apply -n <target_namespace> -f <storage_configmap_job>.yaml", "oc wait job storage-checkup -n <target_namespace> --for condition=complete --timeout 10m", "oc get configmap storage-checkup-config -n <target_namespace> -o yaml", "apiVersion: v1 kind: ConfigMap metadata: name: storage-checkup-config labels: kiagnose/checkup-type: kubevirt-storage data: spec.timeout: 10m status.succeeded: \"true\" 1 status.failureReason: \"\" 2 status.startTimestamp: \"2023-07-31T13:14:38Z\" 3 status.completionTimestamp: \"2023-07-31T13:19:41Z\" 4 status.result.cnvVersion: 4.16.2 5 status.result.defaultStorageClass: trident-nfs 6 status.result.goldenImagesNoDataSource: <data_import_cron_list> 7 status.result.goldenImagesNotUpToDate: <data_import_cron_list> 8 status.result.ocpVersion: 4.16.0 9 status.result.pvcBound: \"true\" 10 status.result.storageProfileMissingVolumeSnapshotClass: <storage_class_list> 11 status.result.storageProfilesWithEmptyClaimPropertySets: <storage_profile_list> 12 status.result.storageProfilesWithSmartClone: 
<storage_profile_list> 13 status.result.storageProfilesWithSpecClaimPropertySets: <storage_profile_list> 14 status.result.storageProfilesWithRWX: |- ocs-storagecluster-ceph-rbd ocs-storagecluster-ceph-rbd-virtualization ocs-storagecluster-cephfs trident-iscsi trident-minio trident-nfs windows-vms status.result.vmBootFromGoldenImage: VMI \"vmi-under-test-dhkb8\" successfully booted status.result.vmHotplugVolume: |- VMI \"vmi-under-test-dhkb8\" hotplug volume ready VMI \"vmi-under-test-dhkb8\" hotplug volume removed status.result.vmLiveMigration: VMI \"vmi-under-test-dhkb8\" migration completed status.result.vmVolumeClone: 'DV cloneType: \"csi-clone\"' status.result.vmsWithNonVirtRbdStorageClass: <vm_list> 15 status.result.vmsWithUnsetEfsStorageClass: <vm_list> 16", "oc delete job -n <target_namespace> storage-checkup", "oc delete config-map -n <target_namespace> storage-checkup-config", "oc delete -f <storage_sa_roles_rolebinding>.yaml", "--- apiVersion: v1 kind: ServiceAccount metadata: name: dpdk-checkup-sa --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kiagnose-configmap-access rules: - apiGroups: [ \"\" ] resources: [ \"configmaps\" ] verbs: [ \"get\", \"update\" ] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kiagnose-configmap-access subjects: - kind: ServiceAccount name: dpdk-checkup-sa roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kiagnose-configmap-access --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kubevirt-dpdk-checker rules: - apiGroups: [ \"kubevirt.io\" ] resources: [ \"virtualmachineinstances\" ] verbs: [ \"create\", \"get\", \"delete\" ] - apiGroups: [ \"subresources.kubevirt.io\" ] resources: [ \"virtualmachineinstances/console\" ] verbs: [ \"get\" ] - apiGroups: [ \"\" ] resources: [ \"configmaps\" ] verbs: [ \"create\", \"delete\" ] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kubevirt-dpdk-checker subjects: - kind: ServiceAccount name: dpdk-checkup-sa roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kubevirt-dpdk-checker", "oc apply -n <target_namespace> -f <dpdk_sa_roles_rolebinding>.yaml", "apiVersion: v1 kind: ConfigMap metadata: name: dpdk-checkup-config labels: kiagnose/checkup-type: kubevirt-dpdk data: spec.timeout: 10m spec.param.networkAttachmentDefinitionName: <network_name> 1 spec.param.trafficGenContainerDiskImage: \"quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.4.0 2 spec.param.vmUnderTestContainerDiskImage: \"quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.4.0\" 3", "oc apply -n <target_namespace> -f <dpdk_config_map>.yaml", "apiVersion: batch/v1 kind: Job metadata: name: dpdk-checkup labels: kiagnose/checkup-type: kubevirt-dpdk spec: backoffLimit: 0 template: spec: serviceAccountName: dpdk-checkup-sa restartPolicy: Never containers: - name: dpdk-checkup image: registry.redhat.io/container-native-virtualization/kubevirt-dpdk-checkup-rhel9:v4.16.0 imagePullPolicy: Always securityContext: allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] runAsNonRoot: true seccompProfile: type: \"RuntimeDefault\" env: - name: CONFIGMAP_NAMESPACE value: <target-namespace> - name: CONFIGMAP_NAME value: dpdk-checkup-config - name: POD_UID valueFrom: fieldRef: fieldPath: metadata.uid", "oc apply -n <target_namespace> -f <dpdk_job>.yaml", "oc wait job dpdk-checkup -n <target_namespace> --for condition=complete --timeout 10m", "oc get configmap dpdk-checkup-config -n <target_namespace> -o yaml", "apiVersion: v1 kind: 
ConfigMap metadata: name: dpdk-checkup-config labels: kiagnose/checkup-type: kubevirt-dpdk data: spec.timeout: 10m spec.param.NetworkAttachmentDefinitionName: \"dpdk-network-1\" spec.param.trafficGenContainerDiskImage: \"quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.4.0\" spec.param.vmUnderTestContainerDiskImage: \"quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.4.0\" status.succeeded: \"true\" 1 status.failureReason: \"\" 2 status.startTimestamp: \"2023-07-31T13:14:38Z\" 3 status.completionTimestamp: \"2023-07-31T13:19:41Z\" 4 status.result.trafficGenSentPackets: \"480000000\" 5 status.result.trafficGenOutputErrorPackets: \"0\" 6 status.result.trafficGenInputErrorPackets: \"0\" 7 status.result.trafficGenActualNodeName: worker-dpdk1 8 status.result.vmUnderTestActualNodeName: worker-dpdk2 9 status.result.vmUnderTestReceivedPackets: \"480000000\" 10 status.result.vmUnderTestRxDroppedPackets: \"0\" 11 status.result.vmUnderTestTxDroppedPackets: \"0\" 12", "oc delete job -n <target_namespace> dpdk-checkup", "oc delete config-map -n <target_namespace> dpdk-checkup-config", "oc delete -f <dpdk_sa_roles_rolebinding>.yaml", "dnf install libguestfs-tools", "composer-cli distros list", "usermod -a -G weldr user", "newgrp weldr", "cat << EOF > dpdk-vm.toml name = \"dpdk_image\" description = \"Image to use with the DPDK checkup\" version = \"0.0.1\" distro = \"rhel-87\" [[customizations.user]] name = \"root\" password = \"redhat\" [[packages]] name = \"dpdk\" [[packages]] name = \"dpdk-tools\" [[packages]] name = \"driverctl\" [[packages]] name = \"tuned-profiles-cpu-partitioning\" [customizations.kernel] append = \"default_hugepagesz=1GB hugepagesz=1G hugepages=1\" [customizations.services] disabled = [\"NetworkManager-wait-online\", \"sshd\"] EOF", "composer-cli blueprints push dpdk-vm.toml", "composer-cli compose start dpdk_image qcow2", "composer-cli compose status", "composer-cli compose image <UUID>", "cat <<EOF >customize-vm #!/bin/bash Setup hugepages mount mkdir -p /mnt/huge echo \"hugetlbfs /mnt/huge hugetlbfs defaults,pagesize=1GB 0 0\" >> /etc/fstab Create vfio-noiommu.conf echo \"options vfio enable_unsafe_noiommu_mode=1\" > /etc/modprobe.d/vfio-noiommu.conf Enable guest-exec,guest-exec-status on the qemu-guest-agent configuration sed -i '/^BLACKLIST_RPC=/ { s/guest-exec-status//; s/guest-exec//g }' /etc/sysconfig/qemu-ga sed -i '/^BLACKLIST_RPC=/ { s/,\\+/,/g; s/^,\\|,USD//g }' /etc/sysconfig/qemu-ga EOF", "virt-customize -a <UUID>-disk.qcow2 --run=customize-vm --selinux-relabel", "cat << EOF > Dockerfile FROM scratch COPY --chown=107:107 <UUID>-disk.qcow2 /disk/ EOF", "podman build . 
-t dpdk-rhel:latest", "podman push dpdk-rhel:latest", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0 1", "kubevirt_vmsnapshot_disks_restored_from_source{vm_name=\"simple-vm\", vm_namespace=\"default\"} 1", "kubevirt_vmsnapshot_disks_restored_from_source_bytes{vm_name=\"simple-vm\", vm_namespace=\"default\"} 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes[6m]))) > 0 1", "kind: Service apiVersion: v1 metadata: name: node-exporter-service 1 namespace: dynamation 2 labels: servicetype: metrics 3 spec: ports: - name: exmet 4 protocol: TCP port: 9100 5 targetPort: 9100 6 type: ClusterIP selector: monitor: metrics 7", "oc create -f node-exporter-service.yaml", "wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz", "sudo tar xvf node_exporter-1.3.1.linux-amd64.tar.gz --directory /usr/bin --strip 1 \"*/node_exporter\"", "[Unit] Description=Prometheus Metrics Exporter After=network.target StartLimitIntervalSec=0 [Service] Type=simple Restart=always RestartSec=1 User=root ExecStart=/usr/bin/node_exporter [Install] WantedBy=multi-user.target", "sudo systemctl enable node_exporter.service sudo systemctl start node_exporter.service", "curl http://localhost:9100/metrics", "go_gc_duration_seconds{quantile=\"0\"} 1.5244e-05 go_gc_duration_seconds{quantile=\"0.25\"} 3.0449e-05 go_gc_duration_seconds{quantile=\"0.5\"} 3.7913e-05", "spec: template: metadata: labels: monitor: metrics", "oc get service -n <namespace> <node-exporter-service>", "curl http://<172.30.226.162:9100>/metrics | grep -vE \"^#|^USD\"", "node_arp_entries{device=\"eth0\"} 1 node_boot_time_seconds 1.643153218e+09 node_context_switches_total 4.4938158e+07 node_cooling_device_cur_state{name=\"0\",type=\"Processor\"} 0 node_cooling_device_max_state{name=\"0\",type=\"Processor\"} 0 node_cpu_guest_seconds_total{cpu=\"0\",mode=\"nice\"} 0 node_cpu_guest_seconds_total{cpu=\"0\",mode=\"user\"} 0 node_cpu_seconds_total{cpu=\"0\",mode=\"idle\"} 1.10586485e+06 node_cpu_seconds_total{cpu=\"0\",mode=\"iowait\"} 37.61 node_cpu_seconds_total{cpu=\"0\",mode=\"irq\"} 233.91 node_cpu_seconds_total{cpu=\"0\",mode=\"nice\"} 551.47 node_cpu_seconds_total{cpu=\"0\",mode=\"softirq\"} 87.3 node_cpu_seconds_total{cpu=\"0\",mode=\"steal\"} 86.12 node_cpu_seconds_total{cpu=\"0\",mode=\"system\"} 464.15 node_cpu_seconds_total{cpu=\"0\",mode=\"user\"} 1075.2 node_disk_discard_time_seconds_total{device=\"vda\"} 0 node_disk_discard_time_seconds_total{device=\"vdb\"} 0 node_disk_discarded_sectors_total{device=\"vda\"} 0 node_disk_discarded_sectors_total{device=\"vdb\"} 0 node_disk_discards_completed_total{device=\"vda\"} 0 node_disk_discards_completed_total{device=\"vdb\"} 0 node_disk_discards_merged_total{device=\"vda\"} 0 node_disk_discards_merged_total{device=\"vdb\"} 0 node_disk_info{device=\"vda\",major=\"252\",minor=\"0\"} 1 
node_disk_info{device=\"vdb\",major=\"252\",minor=\"16\"} 1 node_disk_io_now{device=\"vda\"} 0 node_disk_io_now{device=\"vdb\"} 0 node_disk_io_time_seconds_total{device=\"vda\"} 174 node_disk_io_time_seconds_total{device=\"vdb\"} 0.054 node_disk_io_time_weighted_seconds_total{device=\"vda\"} 259.79200000000003 node_disk_io_time_weighted_seconds_total{device=\"vdb\"} 0.039 node_disk_read_bytes_total{device=\"vda\"} 3.71867136e+08 node_disk_read_bytes_total{device=\"vdb\"} 366592 node_disk_read_time_seconds_total{device=\"vda\"} 19.128 node_disk_read_time_seconds_total{device=\"vdb\"} 0.039 node_disk_reads_completed_total{device=\"vda\"} 5619 node_disk_reads_completed_total{device=\"vdb\"} 96 node_disk_reads_merged_total{device=\"vda\"} 5 node_disk_reads_merged_total{device=\"vdb\"} 0 node_disk_write_time_seconds_total{device=\"vda\"} 240.66400000000002 node_disk_write_time_seconds_total{device=\"vdb\"} 0 node_disk_writes_completed_total{device=\"vda\"} 71584 node_disk_writes_completed_total{device=\"vdb\"} 0 node_disk_writes_merged_total{device=\"vda\"} 19761 node_disk_writes_merged_total{device=\"vdb\"} 0 node_disk_written_bytes_total{device=\"vda\"} 2.007924224e+09 node_disk_written_bytes_total{device=\"vdb\"} 0", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: k8s-app: node-exporter-metrics-monitor name: node-exporter-metrics-monitor 1 namespace: dynamation 2 spec: endpoints: - interval: 30s 3 port: exmet 4 scheme: http selector: matchLabels: servicetype: metrics", "oc create -f node-exporter-metrics-monitor.yaml", "oc expose service -n <namespace> <node_exporter_service_name>", "oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host", "NAME DNS node-exporter-service node-exporter-service-dynamation.apps.cluster.example.org", "curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics", "go_gc_duration_seconds{quantile=\"0\"} 1.5382e-05 go_gc_duration_seconds{quantile=\"0.25\"} 3.1163e-05 go_gc_duration_seconds{quantile=\"0.5\"} 3.8546e-05 go_gc_duration_seconds{quantile=\"0.75\"} 4.9139e-05 go_gc_duration_seconds{quantile=\"1\"} 0.000189423", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: downwardMetrics: true", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: downwardMetrics: false", "oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/downwardMetrics\" \"value\": true}]'", "oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/downwardMetrics\" \"value\": false}]'", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: fedora namespace: default spec: dataVolumeTemplates: - metadata: name: fedora-volume spec: sourceRef: kind: DataSource name: fedora namespace: openshift-virtualization-os-images storage: resources: {} storageClassName: hostpath-csi-basic instancetype: name: u1.medium preference: name: fedora running: true template: metadata: labels: app.kubernetes.io/name: headless spec: domain: devices: downwardMetrics: {} 1 subdomain: headless volumes: - dataVolume: name: fedora-volume name: rootdisk - cloudInitNoCloud: userData: | #cloud-config chpasswd: expire: false password: '<password>' 2 user: fedora name: 
cloudinitdisk", "sudo sh -c 'printf \"GET /metrics/XML\\n\\n\" > /dev/virtio-ports/org.github.vhostmd.1'", "sudo cat /dev/virtio-ports/org.github.vhostmd.1", "sudo dnf install -y vm-dump-metrics", "sudo vm-dump-metrics", "<metrics> <metric type=\"string\" context=\"host\"> <name>HostName</name> <value>node01</value> [...] <metric type=\"int64\" context=\"host\" unit=\"s\"> <name>Time</name> <value>1619008605</value> </metric> <metric type=\"string\" context=\"host\"> <name>VirtualizationVendor</name> <value>kubevirt.io</value> </metric> </metrics>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace spec: template: spec: readinessProbe: httpGet: 1 port: 1500 2 path: /healthz 3 httpHeaders: - name: Custom-Header value: Awesome initialDelaySeconds: 120 4 periodSeconds: 20 5 timeoutSeconds: 10 6 failureThreshold: 3 7 successThreshold: 3 8", "oc create -f <file_name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace spec: template: spec: readinessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 tcpSocket: 3 port: 1500 4 timeoutSeconds: 10 5", "oc create -f <file_name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace spec: template: spec: livenessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 httpGet: 3 port: 1500 4 path: /healthz 5 httpHeaders: - name: Custom-Header value: Awesome timeoutSeconds: 10 6", "oc create -f <file_name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog name: <vm-name> spec: running: false template: metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog spec: domain: devices: watchdog: name: <watchdog> i6300esb: action: \"poweroff\" 1", "oc apply -f <file_name>.yaml", "lspci | grep watchdog -i", "echo c > /proc/sysrq-trigger", "pkill -9 watchdog", "yum install watchdog", "#watchdog-device = /dev/watchdog", "systemctl enable --now watchdog.service", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace spec: template: spec: readinessProbe: guestAgentPing: {} 1 initialDelaySeconds: 120 2 periodSeconds: 20 3 timeoutSeconds: 10 4 failureThreshold: 3 5 successThreshold: 3 6", "oc create -f <file_name>.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/virtualization/monitoring
Chapter 1. Migration from OpenShift Container Platform 3 to 4 overview
Chapter 1. Migration from OpenShift Container Platform 3 to 4 overview OpenShift Container Platform 4 clusters are different from OpenShift Container Platform 3 clusters. OpenShift Container Platform 4 clusters contain new technologies and functionality that result in a cluster that is self-managing, flexible, and automated. To learn more about migrating from OpenShift Container Platform 3 to 4 see About migrating from OpenShift Container Platform 3 to 4 . 1.1. Differences between OpenShift Container Platform 3 and 4 Before migrating from OpenShift Container Platform 3 to 4, you can check differences between OpenShift Container Platform 3 and 4 . Review the following information: Architecture Installation and update Storage , network , logging , security , and monitoring considerations 1.2. Planning network considerations Before migrating from OpenShift Container Platform 3 to 4, review the differences between OpenShift Container Platform 3 and 4 for information about the following areas: DNS considerations Isolating the DNS domain of the target cluster from the clients . Setting up the target cluster to accept the source DNS domain . You can migrate stateful application workloads from OpenShift Container Platform 3 to 4 at the granularity of a namespace. To learn more about MTC see Understanding MTC . Note If you are migrating from OpenShift Container Platform 3, see About migrating from OpenShift Container Platform 3 to 4 and Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 . 1.3. Installing MTC Review the following tasks to install the MTC: Install the Migration Toolkit for Containers Operator on target cluster by using Operator Lifecycle Manager (OLM) . Install the legacy Migration Toolkit for Containers Operator on the source cluster manually . Configure object storage to use as a replication repository . 1.4. Upgrading MTC You upgrade the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4.13 by using OLM. You upgrade MTC on OpenShift Container Platform 3 by reinstalling the legacy Migration Toolkit for Containers Operator. 1.5. Reviewing premigration checklists Before you migrate your application workloads with the Migration Toolkit for Containers (MTC), review the premigration checklists . 1.6. Migrating applications You can migrate your applications by using the MTC web console or the command line . 1.7. Advanced migration options You can automate your migrations and modify MTC custom resources to improve the performance of large-scale migrations by using the following options: Running a state migration Creating migration hooks Editing, excluding, and mapping migrated resources Configuring the migration controller for large migrations 1.8. Troubleshooting migrations You can perform the following troubleshooting tasks: Viewing migration plan resources by using the MTC web console Viewing the migration plan aggregated log file Using the migration log reader Accessing performance metrics Using the must-gather tool Using the Velero CLI to debug Backup and Restore CRs Using MTC custom resources for troubleshooting Checking common issues and concerns 1.9. Rolling back a migration You can roll back a migration by using the MTC web console, by using the CLI, or manually. 1.10. Uninstalling MTC and deleting resources You can uninstall the MTC and delete its resources to clean up the cluster.
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/migrating_from_version_3_to_4/migration-from-version-3-to-4-overview
Chapter 8. Cruise Control for cluster rebalancing
Chapter 8. Cruise Control for cluster rebalancing You can deploy Cruise Control to your AMQ Streams cluster and use it to rebalance the Kafka cluster. Cruise Control is an open source system for automating Kafka operations, such as monitoring cluster workload, rebalancing a cluster based on predefined constraints, and detecting and fixing anomalies. It consists of four main components- the Load Monitor, the Analyzer, the Anomaly Detector, and the Executor- and a REST API for client interactions. AMQ Streams utilizes the REST API to support the following Cruise Control features: Generating optimization proposals from multiple optimization goals . Rebalancing a Kafka cluster based on an optimization proposal. Other Cruise Control features are not currently supported, including anomaly detection, notifications, write-your-own goals, and changing the topic replication factor. AMQ Streams provides example configuration files . Example YAML configuration files for Cruise Control are provided in examples/cruise-control/ . 8.1. Why use Cruise Control? Cruise Control reduces the time and effort involved in running an efficient and balanced Kafka cluster. A typical cluster can become unevenly loaded over time. Partitions that handle large amounts of message traffic might be unevenly distributed across the available brokers. To rebalance the cluster, administrators must monitor the load on brokers and manually reassign busy partitions to brokers with spare capacity. Cruise Control automates the cluster rebalancing process. It constructs a workload model of resource utilization for the cluster- based on CPU, disk, and network load- and generates optimization proposals (that you can approve or reject) for more balanced partition assignments. A set of configurable optimization goals is used to calculate these proposals. When you approve an optimization proposal, Cruise Control applies it to your Kafka cluster. When the cluster rebalancing operation is complete, the broker pods are used more effectively and the Kafka cluster is more evenly balanced. Additional resources Cruise Control Wiki 8.2. Optimization goals overview To rebalance a Kafka cluster, Cruise Control uses optimization goals to generate optimization proposals , which you can approve or reject. Optimization goals are constraints on workload redistribution and resource utilization across a Kafka cluster. AMQ Streams supports most of the optimization goals developed in the Cruise Control project. The supported goals, in the default descending order of priority, are as follows: Rack-awareness Minimum number of leader replicas per broker for a set of topics Replica capacity Capacity : Disk capacity, Network inbound capacity, Network outbound capacity, CPU capacity Replica distribution Potential network output Resource distribution : Disk utilization distribution, Network inbound utilization distribution, Network outbound utilization distribution, CPU utilization distribution Note The resource distribution goals are controlled using capacity limits on broker resources. Leader bytes-in rate distribution Topic replica distribution Leader replica distribution Preferred leader election Intra-broker disk capacity Intra-broker disk usage distribution For more information on each optimization goal, see Goals in the Cruise Control Wiki. Note "Write your own" goals and Kafka assigner goals are not yet supported. Goals configuration in AMQ Streams custom resources You configure optimization goals in Kafka and KafkaRebalance custom resources. 
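For orientation, the KafkaRebalance side of this configuration can be sketched as follows. This is a minimal sketch only: the resource name, cluster name, and listed goals are placeholders, and the apiVersion assumes the same API version as the Kafka examples in this chapter (check the KafkaRebalance CRD installed with your AMQ Streams version if it differs). The KafkaRebalance resource must carry the strimzi.io/cluster label of the Kafka cluster it targets, and, as described under "User-provided optimization goals" later in this section, the goals you list must include all configured hard goals unless skipHardGoalCheck: true is set.
Example KafkaRebalance resource with user-provided optimization goals (minimal sketch)
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster 1
spec:
  goals: 2
    - NetworkInboundCapacityGoal
    - NetworkOutboundCapacityGoal
    - RackAwareGoal
1 References the Kafka cluster that the rebalance applies to.
2 User-provided optimization goals applied when generating the optimization proposal for this resource.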
Cruise Control has configurations for hard optimization goals that must be satisfied, as well as main , default , and user-provided optimization goals. Optimization goals for resource distribution (disk, network inbound, network outbound, and CPU) are subject to capacity limits on broker resources. The following sections describe each goal configuration in more detail. Hard goals and soft goals Hard goals are goals that must be satisfied in optimization proposals. Goals that are not configured as hard goals are known as soft goals . You can think of soft goals as best effort goals: they do not need to be satisfied in optimization proposals, but are included in optimization calculations. An optimization proposal that violates one or more soft goals, but satisfies all hard goals, is valid. Cruise Control will calculate optimization proposals that satisfy all the hard goals and as many soft goals as possible (in their priority order). An optimization proposal that does not satisfy all the hard goals is rejected by Cruise Control and not sent to the user for approval. Note For example, you might have a soft goal to distribute a topic's replicas evenly across the cluster (the topic replica distribution goal). Cruise Control will ignore this goal if doing so enables all the configured hard goals to be met. In Cruise Control, the following main optimization goals are preset as hard goals: You configure hard goals in the Cruise Control deployment configuration, by editing the hard.goals property in Kafka.spec.cruiseControl.config . To inherit the preset hard goals from Cruise Control, do not specify the hard.goals property in Kafka.spec.cruiseControl.config To change the preset hard goals, specify the desired goals in the hard.goals property, using their fully-qualified domain names. Example Kafka configuration for hard optimization goals apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: topicOperator: {} userOperator: {} cruiseControl: brokerCapacity: inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s config: hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal # ... Increasing the number of configured hard goals will reduce the likelihood of Cruise Control generating valid optimization proposals. If skipHardGoalCheck: true is specified in the KafkaRebalance custom resource, Cruise Control does not check that the list of user-provided optimization goals (in KafkaRebalance.spec.goals ) contains all the configured hard goals ( hard.goals ). Therefore, if some, but not all, of the user-provided optimization goals are in the hard.goals list, Cruise Control will still treat them as hard goals even if skipHardGoalCheck: true is specified. Main optimization goals The main optimization goals are available to all users. Goals that are not listed in the main optimization goals are not available for use in Cruise Control operations. Unless you change the Cruise Control deployment configuration , AMQ Streams will inherit the following main optimization goals from Cruise Control, in descending priority order: Six of these goals are preset as hard goals . To reduce complexity, we recommend that you use the inherited main optimization goals, unless you need to completely exclude one or more goals from use in KafkaRebalance resources. 
The priority order of the main optimization goals can be modified, if desired, in the configuration for default optimization goals . You configure main optimization goals, if necessary, in the Cruise Control deployment configuration: Kafka.spec.cruiseControl.config.goals To accept the inherited main optimization goals, do not specify the goals property in Kafka.spec.cruiseControl.config . If you need to modify the inherited main optimization goals, specify a list of goals, in descending priority order, in the goals configuration option. Note If you change the inherited main optimization goals, you must ensure that the hard goals, if configured in the hard.goals property in Kafka.spec.cruiseControl.config , are a subset of the main optimization goals that you configured. Otherwise, errors will occur when generating optimization proposals. Default optimization goals Cruise Control uses the default optimization goals to generate the cached optimization proposal . For more information about the cached optimization proposal, see Section 8.3, "Optimization proposals overview" . You can override the default optimization goals by setting user-provided optimization goals in a KafkaRebalance custom resource. Unless you specify default.goals in the Cruise Control deployment configuration , the main optimization goals are used as the default optimization goals. In this case, the cached optimization proposal is generated using the main optimization goals. To use the main optimization goals as the default goals, do not specify the default.goals property in Kafka.spec.cruiseControl.config . To modify the default optimization goals, edit the default.goals property in Kafka.spec.cruiseControl.config . You must use a subset of the main optimization goals. Example Kafka configuration for default optimization goals apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: topicOperator: {} userOperator: {} cruiseControl: brokerCapacity: inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s config: default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal # ... If no default optimization goals are specified, the cached proposal is generated using the main optimization goals. User-provided optimization goals User-provided optimization goals narrow down the configured default goals for a particular optimization proposal. You can set them, as required, in spec.goals in a KafkaRebalance custom resource: User-provided optimization goals can generate optimization proposals for different scenarios. For example, you might want to optimize leader replica distribution across the Kafka cluster without considering disk capacity or disk utilization. So, you create a KafkaRebalance custom resource containing a single user-provided goal for leader replica distribution. User-provided optimization goals must: Include all configured hard goals , or an error occurs Be a subset of the main optimization goals To ignore the configured hard goals when generating an optimization proposal, add the skipHardGoalCheck: true property to the KafkaRebalance custom resource. See Section 8.7, "Generating optimization proposals" . Additional resources Section 8.5, "Cruise Control configuration" Configurations in the Cruise Control Wiki. 8.3. 
Optimization proposals overview An optimization proposal is a summary of proposed changes that would produce a more balanced Kafka cluster, with partition workloads distributed more evenly among the brokers. Each optimization proposal is based on the set of optimization goals that was used to generate it, subject to any configured capacity limits on broker resources . All optimization proposals are estimates of the impact of a proposed rebalance. You can approve or reject a proposal. You cannot approve a cluster rebalance without first generating the optimization proposal. Contents of optimization proposals An optimization proposal comprises a summary and broker load. The summary is contained in the KafkaRebalance resource. Summary The summary provides an overview of the proposed cluster rebalance and indicates the scale of the changes involved. A summary of a successfully generated optimization proposal is contained in the Status.OptimizationResult property of the KafkaRebalance resource. The information provided is a summary of the full optimization proposal. Broker load The broker load shows before and after values for the proposed rebalance, so you can see the impact on each of the brokers in the cluster. A broker load is stored in a ConfigMap that contains data as a JSON string. 8.3.1. Approving or rejecting an optimization proposal An optimization proposal summary shows the proposed scope of changes. You can use the name of the KafkaRebalance resource to return a summary from the command line. Returning an optimization proposal summary oc describe kafkarebalance <kafka_rebalance_resource_name> -n <namespace> You can also use the jq command line JSON parser tool. Returning an optimization proposal summary using jq oc get kafkarebalance -o json | jq <jq_query> . Use the summary to decide whether to approve or reject an optimization proposal. Approving an optimization proposal You approve the optimization proposal by setting the strimzi.io/rebalance annotation of the KafkaRebalance resource to approve . Cruise Control applies the proposal to the Kafka cluster and starts a cluster rebalance operation. Rejecting an optimization proposal If you choose not to approve an optimization proposal, you can change the optimization goals or update any of the rebalance performance tuning options , and then generate another proposal. You can use the strimzi.io/refresh annotation to generate a new optimization proposal for a KafkaRebalance resource. Use optimization proposals to assess the movements required for a rebalance. For example, a summary describes inter-broker and intra-broker movements. Inter-broker rebalancing moves data between separate brokers. Intra-broker rebalancing moves data between disks on the same broker when you are using a JBOD storage configuration. Such information can be useful even if you don't go ahead and approve the proposal. You might reject an optimization proposal, or delay its approval, because of the additional load on a Kafka cluster when rebalancing. In the following example, the proposal suggests the rebalancing of data between separate brokers. The rebalance involves the movement of 55 partition replicas, totaling 12MB of data, across the brokers. Though the inter-broker movement of partition replicas has a high impact on performance, the total amount of data is not large. If the total data was much larger, you could reject the proposal, or time when to approve the rebalance to limit the impact on the performance of the Kafka cluster. 
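As a sketch of the approval workflow described above, assuming a KafkaRebalance resource named my-rebalance in the myproject namespace (matching the example summary that follows), you apply the annotation from the command line. The resource name and namespace are placeholders.
Approving an optimization proposal by annotating the KafkaRebalance resource
oc annotate kafkarebalance my-rebalance -n myproject strimzi.io/rebalance=approve
Requesting a fresh optimization proposal instead of approving the current one (the refresh value of the strimzi.io/rebalance annotation)
oc annotate kafkarebalance my-rebalance -n myproject strimzi.io/rebalance=refresh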
Rebalance performance tuning options can help reduce the impact of data movement. If you can extend the rebalance period, you can divide the rebalance into smaller batches. Fewer data movements at a single time reduces the load on the cluster. Example optimization proposal summary Name: my-rebalance Namespace: myproject Labels: strimzi.io/cluster=my-cluster Annotations: API Version: kafka.strimzi.io/v1alpha1 Kind: KafkaRebalance Metadata: # ... Status: Conditions: Last Transition Time: 2022-04-05T14:36:11.900Z Status: ProposalReady Type: State Observed Generation: 1 Optimization Result: Data To Move MB: 0 Excluded Brokers For Leadership: Excluded Brokers For Replica Move: Excluded Topics: Intra Broker Data To Move MB: 12 Monitored Partitions Percentage: 100 Num Intra Broker Replica Movements: 0 Num Leader Movements: 24 Num Replica Movements: 55 On Demand Balancedness Score After: 82.91290759174306 On Demand Balancedness Score Before: 78.01176356230222 Recent Windows: 5 Session Id: a4f833bd-2055-4213-bfdd-ad21f95bf184 The proposal will also move 24 partition leaders to different brokers. This requires a change to the ZooKeeper configuration, which has a low impact on performance. The balancedness scores are measurements of the overall balance of the Kafka Cluster before and after the optimization proposal is approved. A balancedness score is based on optimization goals. If all goals are satisfied, the score is 100. The score is reduced for each goal that will not be met. Compare the balancedness scores to see whether the Kafka cluster is less balanced than it could be following a rebalance. Optimization proposal summary properties The following table explains the properties contained in the optimization proposal's summary section: Table 8.1. Properties contained in an optimization proposal summary JSON property Description numIntraBrokerReplicaMovements The total number of partition replicas that will be transferred between the disks of the cluster's brokers. Performance impact during rebalance operation : Relatively high, but lower than numReplicaMovements . excludedBrokersForLeadership Not yet supported. An empty list is returned. numReplicaMovements The number of partition replicas that will be moved between separate brokers. Performance impact during rebalance operation : Relatively high. onDemandBalancednessScoreBefore, onDemandBalancednessScoreAfter A measurement of the overall balancedness of a Kafka Cluster, before and after the optimization proposal was generated. The score is calculated by subtracting the sum of the BalancednessScore of each violated soft goal from 100. Cruise Control assigns a BalancednessScore to every optimization goal based on several factors, including priority- the goal's position in the list of default.goals or user-provided goals. The Before score is based on the current configuration of the Kafka cluster. The After score is based on the generated optimization proposal. intraBrokerDataToMoveMB The sum of the size of each partition replica that will be moved between disks on the same broker (see also numIntraBrokerReplicaMovements ). Performance impact during rebalance operation : Variable. The larger the number, the longer the cluster rebalance will take to complete. Moving a large amount of data between disks on the same broker has less impact than between separate brokers (see dataToMoveMB ). recentWindows The number of metrics windows upon which the optimization proposal is based. 
dataToMoveMB The sum of the size of each partition replica that will be moved to a separate broker (see also numReplicaMovements ). Performance impact during rebalance operation : Variable. The larger the number, the longer the cluster rebalance will take to complete. monitoredPartitionsPercentage The percentage of partitions in the Kafka cluster covered by the optimization proposal. Affected by the number of excludedTopics . excludedTopics If you specified a regular expression in the spec.excludedTopicsRegex property in the KafkaRebalance resource, all topic names matching that expression are listed here. These topics are excluded from the calculation of partition replica/leader movements in the optimization proposal. numLeaderMovements The number of partitions whose leaders will be switched to different replicas. This involves a change to ZooKeeper configuration. Performance impact during rebalance operation : Relatively low. excludedBrokersForReplicaMove Not yet supported. An empty list is returned. Broker load properties The broker load is stored in a ConfigMap (with the same name as the KafkaRebalance custom resource) as a JSON formatted string. This JSON string consists of a JSON object with keys for each broker IDs linking to a number of metrics for each broker. Each metric consist of three values. The first is the metric value before the optimization proposal is applied, the second is the expected value of the metric after the proposal is applied, and the third is the difference between the first two values (after minus before). Note The ConfigMap appears when the KafkaRebalance resource is in the ProposalReady state and remains after the rebalance is complete. You can use the name of the ConfigMap to view its data from the command line. Returning ConfigMap data oc describe configmaps <my_rebalance_configmap_name> -n <namespace> You can also use the jq command line JSON parser tool to extract the JSON string from the ConfigMap. Extracting the JSON string from the ConfigMap using jq oc get configmaps <my_rebalance_configmap_name> -o json | jq '.["data"]["brokerLoad.json"]|fromjson|.' The following table explains the properties contained in the optimization proposal's broker load ConfigMap: JSON property Description leaders The number of replicas on this broker that are partition leaders. replicas The number of replicas on this broker. cpuPercentage The CPU utilization as a percentage of the defined capacity. diskUsedPercentage The disk utilization as a percentage of the defined capacity. diskUsedMB The absolute disk usage in MB. networkOutRate The total network output rate for the broker. leaderNetworkInRate The network input rate for all partition leader replicas on this broker. followerNetworkInRate The network input rate for all follower replicas on this broker. potentialMaxNetworkOutRate The hypothetical maximum network output rate that would be realized if this broker became the leader of all the replicas it currently hosts. Cached optimization proposal Cruise Control maintains a cached optimization proposal based on the configured default optimization goals. Generated from the workload model, the cached optimization proposal is updated every 15 minutes to reflect the current state of the Kafka cluster. If you generate an optimization proposal using the default optimization goals, Cruise Control returns the most recent cached proposal. 
To change the cached optimization proposal refresh interval, edit the proposal.expiration.ms setting in the Cruise Control deployment configuration. Consider a shorter interval for fast changing clusters, although this increases the load on the Cruise Control server. Additional resources Section 8.2, "Optimization goals overview" Section 8.7, "Generating optimization proposals" Section 8.8, "Approving an optimization proposal" 8.4. Rebalance performance tuning overview You can adjust several performance tuning options for cluster rebalances. These options control how partition replica and leadership movements in a rebalance are executed, as well as the bandwidth that is allocated to a rebalance operation. 8.4.1. Partition reassignment commands Optimization proposals are comprised of separate partition reassignment commands. When you approve a proposal, the Cruise Control server applies these commands to the Kafka cluster. A partition reassignment command consists of either of the following types of operations: Partition movement: Involves transferring the partition replica and its data to a new location. Partition movements can take one of two forms: Inter-broker movement: The partition replica is moved to a log directory on a different broker. Intra-broker movement: The partition replica is moved to a different log directory on the same broker. Leadership movement: This involves switching the leader of the partition's replicas. Cruise Control issues partition reassignment commands to the Kafka cluster in batches. The performance of the cluster during the rebalance is affected by the number of each type of movement contained in each batch. 8.4.2. Replica movement strategies Cluster rebalance performance is also influenced by the replica movement strategy that is applied to the batches of partition reassignment commands. By default, Cruise Control uses the BaseReplicaMovementStrategy , which simply applies the commands in the order they were generated. However, if there are some very large partition reassignments early in the proposal, this strategy can slow down the application of the other reassignments. Cruise Control provides four alternative replica movement strategies that can be applied to optimization proposals: PrioritizeSmallReplicaMovementStrategy : Order reassignments in order of ascending size. PrioritizeLargeReplicaMovementStrategy : Order reassignments in order of descending size. PostponeUrpReplicaMovementStrategy : Prioritize reassignments for replicas of partitions which have no out-of-sync replicas. PrioritizeMinIsrWithOfflineReplicasStrategy : Prioritize reassignments with (At/Under)MinISR partitions with offline replicas. This strategy will only work if cruiseControl.config.concurrency.adjuster.min.isr.check.enabled is set to true in the Kafka custom resource's spec. These strategies can be configured as a sequence. The first strategy attempts to compare two partition reassignments using its internal logic. If the reassignments are equivalent, then it passes them to the strategy in the sequence to decide the order, and so on. 8.4.3. Intra-broker disk balancing Moving a large amount of data between disks on the same broker has less impact than between separate brokers. If you are running a Kafka deployment that uses JBOD storage with multiple disks on the same broker, Cruise Control can balance partitions between the disks. 
Note If you are using JBOD storage with a single disk, intra-broker disk balancing will result in a proposal with 0 partition movements since there are no disks to balance between. To perform an intra-broker disk balance, set rebalanceDisk to true under the KafkaRebalance.spec . When setting rebalanceDisk to true , do not set a goals field in the KafkaRebalance.spec , as Cruise Control will automatically set the intra-broker goals and ignore the inter-broker goals. Cruise Control does not perform inter-broker and intra-broker balancing at the same time. 8.4.4. Rebalance tuning options Cruise Control provides several configuration options for tuning the rebalance parameters discussed above. You can set these tuning options at either the Cruise Control server or optimization proposal levels: The Cruise Control server setting can be set in the Kafka custom resource under Kafka.spec.cruiseControl.config . The individual rebalance performance configurations can be set under KafkaRebalance.spec . The relevant configurations are summarized in the following table. Table 8.2. Rebalance performance tuning configuration Cruise Control properties KafkaRebalance properties Default Description num.concurrent.partition.movements.per.broker concurrentPartitionMovementsPerBroker 5 The maximum number of inter-broker partition movements in each partition reassignment batch num.concurrent.intra.broker.partition.movements concurrentIntraBrokerPartitionMovements 2 The maximum number of intra-broker partition movements in each partition reassignment batch num.concurrent.leader.movements concurrentLeaderMovements 1000 The maximum number of partition leadership changes in each partition reassignment batch default.replication.throttle replicationThrottle Null (no limit) The bandwidth (in bytes per second) to assign to partition reassignment default.replica.movement.strategies replicaMovementStrategies BaseReplicaMovementStrategy The list of strategies (in priority order) used to determine the order in which partition reassignment commands are executed for generated proposals. For the server setting, use a comma separated string with the fully qualified names of the strategy class (add com.linkedin.kafka.cruisecontrol.executor.strategy. to the start of each class name). For the KafkaRebalance resource setting use a YAML array of strategy class names. - rebalanceDisk false Enables intra-broker disk balancing, which balances disk space utilization between disks on the same broker. Only applies to Kafka deployments that use JBOD storage with multiple disks. Changing the default settings affects the length of time that the rebalance takes to complete, as well as the load placed on the Kafka cluster during the rebalance. Using lower values reduces the load but increases the amount of time taken, and vice versa. Additional resources Section 13.2.50, " CruiseControlSpec schema reference" . Section 13.2.131, " KafkaRebalanceSpec schema reference" . 8.5. Cruise Control configuration The config property in Kafka.spec.cruiseControl contains configuration options as keys with values as one of the following JSON types: String Number Boolean You can specify and configure all the options listed in the "Configurations" section of the Cruise Control documentation , apart from those managed directly by AMQ Streams. Specifically, you cannot modify configuration options with keys equal to or starting with one of the keys mentioned here . 
If restricted options are specified, they are ignored and a warning message is printed to the Cluster Operator log file. All the supported options are passed to Cruise Control. An example Cruise Control configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # ... cruiseControl: # ... config: default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal cpu.balance.threshold: 1.1 metadata.max.age.ms: 300000 send.buffer.bytes: 131072 # ... Cross-Origin Resource Sharing configuration Cross-Origin Resource Sharing (CORS) allows you to specify allowed methods and originating URLs for accessing REST APIs. By default, CORS is disabled for the Cruise Control REST API. When enabled, only GET requests for read-only access to the Kafka cluster state are allowed. This means that external applications, which are running in different origins than the AMQ Streams components, cannot make POST requests to the Cruise Control API. However, those applications can make GET requests to access read-only information about the Kafka cluster, such as the current cluster load or the most recent optimization proposal. Enabling CORS for Cruise Control You enable and configure CORS in Kafka.spec.cruiseControl.config . apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # ... cruiseControl: # ... config: webserver.http.cors.enabled: true webserver.http.cors.origin: "*" webserver.http.cors.exposeheaders: "User-Task-ID,Content-Type" # ... For more information, see REST APIs in the Cruise Control Wiki . Capacity configuration Cruise Control uses capacity limits to determine if optimization goals for resource distribution are being broken. There are four goals of this type: DiskUsageDistributionGoal - Disk utilization distribution CpuUsageDistributionGoal - CPU utilization distribution NetworkInboundUsageDistributionGoal - Network inbound utilization distribution NetworkOutboundUsageDistributionGoal - Network outbound utilization distribution You specify capacity limits for Kafka broker resources in the brokerCapacity property in Kafka.spec.cruiseControl . They are enabled by default and you can change their default values. Capacity limits can be set for the following broker resources: inboundNetwork - Inbound network throughput in byte units per second (Default: 10000KiB/s) outboundNetwork - Outbound network throughput in byte units per second (Default: 10000KiB/s) For network throughput, use an integer value with standard OpenShift byte units (K, M, G) or their bibyte (power of two) equivalents (Ki, Mi, Gi) per second. Note Disk and CPU capacity limits are automatically generated by AMQ Streams, so you do not need to set them. Note In order to guarantee accurate rebalance proposal when using CPU goals, you can set CPU requests equal to CPU limits in Kafka.spec.kafka.resources . That way, all CPU resources are reserved upfront and are always available. This configuration allows Cruise Control to properly evaluate the CPU utilization when preparing the rebalance proposals based on CPU goals. An example Cruise Control brokerCapacity configuration using bibyte units apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # ... cruiseControl: # ... brokerCapacity: inboundNetwork: 10000KiB/s outboundNetwork: 10000KiB/s # ... Additional resources For more information, refer to the Section 13.2.52, " BrokerCapacity schema reference" . 
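To illustrate the note above about reserving CPU upfront when CPU goals are used, the following minimal sketch sets the CPU request equal to the CPU limit for the Kafka brokers. The resource values shown are placeholders; choose values that match your cluster sizing.
Example Kafka broker resources with CPU requests equal to CPU limits (minimal sketch)
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    resources:
      requests:
        cpu: "4"
        memory: 8Gi
      limits:
        cpu: "4"
        memory: 8Gi
  # ...
  cruiseControl: {}
  # ...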
Logging configuration Cruise Control has its own configurable logger: rootLogger.level Cruise Control uses the Apache log4j 2 logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. Here we see examples of inline and external logging. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka # ... spec: cruiseControl: # ... logging: type: inline loggers: rootLogger.level: "INFO" # ... External logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka # ... spec: cruiseControl: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: cruise-control-log4j.properties # ... Cruise Control REST API security The Cruise Control REST API is secured with HTTP Basic authentication and SSL to protect the cluster against potentially destructive Cruise Control operations, such as decommissioning Kafka brokers. We recommend that Cruise Control in AMQ Streams is only used with these settings enabled . You should not disable the built-in HTTP Basic authentication or SSL settings described below. To disable the built-in HTTP Basic authentication, set webserver.security.enable to false . To disable the built-in SSL, set webserver.ssl.enable to false . Example Cruise Control configuration to disable API authorization, authentication, and SSL apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # ... cruiseControl: config: webserver.security.enable: false webserver.ssl.enable: false # ... 8.6. Deploying Cruise Control To deploy Cruise Control to your AMQ Streams cluster, define the configuration using the cruiseControl property in the Kafka resource, and then create or update the resource. Deploy one instance of Cruise Control per Kafka cluster. Prerequisites An OpenShift cluster A running Cluster Operator Procedure Edit the Kafka resource and add the cruiseControl property. The properties you can configure are shown in this example configuration: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # ... cruiseControl: brokerCapacity: 1 inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s # ... config: 2 default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal # ... cpu.balance.threshold: 1.1 metadata.max.age.ms: 300000 send.buffer.bytes: 131072 # ... 
resources: 3 requests: cpu: 1 memory: 512Mi limits: cpu: 2 memory: 2Gi logging: 4 type: inline loggers: rootLogger.level: "INFO" template: 5 pod: metadata: labels: label1: value1 securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120 readinessProbe: 6 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: 7 initialDelaySeconds: 15 timeoutSeconds: 5 # ... 1 Specifies capacity limits for broker resources. For more information, see Capacity configuration . 2 Defines the Cruise Control configuration, including the default optimization goals (in default.goals ) and any customizations to the main optimization goals (in goals ) or the hard goals (in hard.goals ). You can provide any standard Cruise Control configuration option apart from those managed directly by AMQ Streams. For more information on configuring optimization goals, see Section 8.2, "Optimization goals overview" . 3 CPU and memory resources reserved for Cruise Control. For more information, see Section 13.1.5, " resources " . 4 Defined loggers and log levels added directly (inline) or indirectly (external) through a ConfigMap. A custom ConfigMap must be placed under the log4j.properties key. Cruise Control has a single logger named rootLogger.level . You can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. For more information, see Logging configuration . 5 Customization of deployment templates and pods . 6 Healthcheck readiness probes . 7 Healthcheck liveness probes . Create or update the resource: oc apply -f kafka.yaml Check the status of the deployment: oc get deployments -n <my_cluster_operator_namespace> Output shows the deployment name and readiness NAME READY UP-TO-DATE AVAILABLE my-cluster-cruise-control 1/1 1 1 my-cluster is the name of the Kafka cluster. READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1 . Auto-created topics The following table shows the three topics that are automatically created when Cruise Control is deployed. These topics are required for Cruise Control to work properly and must not be deleted or changed. Table 8.3. Auto-created topics Auto-created topic Created by Function strimzi.cruisecontrol.metrics AMQ Streams Metrics Reporter Stores the raw metrics from the Metrics Reporter in each Kafka broker. strimzi.cruisecontrol.partitionmetricsamples Cruise Control Stores the derived metrics for each partition. These are created by the Metric Sample Aggregator . strimzi.cruisecontrol.modeltrainingsamples Cruise Control Stores the metrics samples used to create the Cluster Workload Model . To prevent the removal of records that are needed by Cruise Control, log compaction is disabled in the auto-created topics. What to do After configuring and deploying Cruise Control, you can generate optimization proposals . Additional resources Section 13.2.51, " CruiseControlTemplate schema reference" . 8.7. Generating optimization proposals When you create or update a KafkaRebalance resource, Cruise Control generates an optimization proposal for the Kafka cluster based on the configured optimization goals . Analyze the information in the optimization proposal and decide whether to approve it. Prerequisites You have deployed Cruise Control to your AMQ Streams cluster. You have configured optimization goals and, optionally, capacity limits on broker resources . 
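Optionally, before creating the KafkaRebalance resource, you can confirm that Cruise Control is collecting metrics by checking that its auto-created topics exist. The following is a sketch; the pod name, script path, and listener port are assumptions based on a default deployment with an unauthenticated plain listener:

# List the Cruise Control topics from inside a Kafka broker pod
oc exec my-cluster-kafka-0 -- /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --list | grep strimzi.cruisecontrol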
Procedure Create a KafkaRebalance resource: To use the default optimization goals defined in the Kafka resource, leave the spec property empty: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: {} To configure user-provided optimization goals instead of using the default goals, add the goals property and enter one or more goals. In the following example, rack awareness and replica capacity are configured as user-provided optimization goals: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: goals: - RackAwareGoal - ReplicaCapacityGoal To ignore the configured hard goals, add the skipHardGoalCheck: true property: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: goals: - RackAwareGoal - ReplicaCapacityGoal skipHardGoalCheck: true Create or update the resource: oc apply -f your-file The Cluster Operator requests the optimization proposal from Cruise Control. This might take a few minutes depending on the size of the Kafka cluster. Check the status of the KafkaRebalance resource: oc describe kafkarebalance rebalance-cr-name Cruise Control returns one of two statuses: PendingProposal : The rebalance operator is polling the Cruise Control API to check if the optimization proposal is ready. ProposalReady : The optimization proposal is ready for review and, if desired, approval. The optimization proposal is contained in the Status.Optimization Result property of the KafkaRebalance resource. Review the optimization proposal. oc describe kafkarebalance rebalance-cr-name Here is an example proposal: Status: Conditions: Last Transition Time: 2020-05-19T13:50:12.533Z Status: ProposalReady Type: State Observed Generation: 1 Optimization Result: Data To Move MB: 0 Excluded Brokers For Leadership: Excluded Brokers For Replica Move: Excluded Topics: Intra Broker Data To Move MB: 0 Monitored Partitions Percentage: 100 Num Intra Broker Replica Movements: 0 Num Leader Movements: 0 Num Replica Movements: 26 On Demand Balancedness Score After: 81.8666802863978 On Demand Balancedness Score Before: 78.01176356230222 Recent Windows: 1 Session Id: 05539377-ca7b-45ef-b359-e13564f1458c The properties in the Optimization Result section describe the pending cluster rebalance operation. For descriptions of each property, see Contents of optimization proposals . Insufficient CPU capacity If a Kafka cluster is overloaded in terms of CPU utilization, you might see an insufficient CPU capacity error in the KafkaRebalance status. It's worth noting that this utilization value is unaffected by the excludedTopics configuration. Although optimization proposals will not reassign replicas of excluded topics, their load is still considered in the utilization calculation. Example CPU utilization error com.linkedin.kafka.cruisecontrol.exception.OptimizationFailureException: [CpuCapacityGoal] Insufficient capacity for cpu (Utilization 615.21, Allowed Capacity 420.00, Threshold: 0.70). Add at least 3 brokers with the same cpu capacity (100.00) as broker-0. Add at least 3 brokers with the same cpu capacity (100.00) as broker-0. Note The error shows CPU capacity as a percentage rather than CPU cores. For that reason, it does not directly map to the number of CPUs configured in Kafka CR. 
It is like having a single virtual CPU per broker, which has the cycles of Kafka.spec.kafka.resources.limits.cpu CPUs. This has no effect on the rebalance behavior, since the ratio between CPU utilization and capacity remains the same. What to do Section 8.8, "Approving an optimization proposal" Additional resources Section 8.3, "Optimization proposals overview" 8.8. Approving an optimization proposal You can approve an optimization proposal generated by Cruise Control, if its status is ProposalReady . Cruise Control will then apply the optimization proposal to the Kafka cluster, reassigning partitions to brokers and changing partition leadership. Caution This is not a dry run. Before you approve an optimization proposal, you must: Refresh the proposal in case it has become out of date. Carefully review the contents of the proposal . Prerequisites You have generated an optimization proposal from Cruise Control. The KafkaRebalance custom resource status is ProposalReady . Procedure Perform these steps for the optimization proposal that you want to approve: Unless the optimization proposal is newly generated, check that it is based on current information about the state of the Kafka cluster. To do so, refresh the optimization proposal to make sure it uses the latest cluster metrics: Annotate the KafkaRebalance resource in OpenShift with refresh : oc annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=refresh Check the status of the KafkaRebalance resource: oc describe kafkarebalance rebalance-cr-name Wait until the status changes to ProposalReady . Approve the optimization proposal that you want Cruise Control to apply. Annotate the KafkaRebalance resource in OpenShift: oc annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=approve The Cluster Operator detects the annotated resource and instructs Cruise Control to rebalance the Kafka cluster. Check the status of the KafkaRebalance resource: oc describe kafkarebalance rebalance-cr-name Cruise Control returns one of three statuses: Rebalancing: The cluster rebalance operation is in progress. Ready: The cluster rebalancing operation completed successfully. To use the same KafkaRebalance custom resource to generate another optimization proposal, apply the refresh annotation to the custom resource. This moves the custom resource to the PendingProposal or ProposalReady state. You can then review the optimization proposal and approve it, if desired. NotReady: An error occurred- see Section 8.10, "Fixing problems with a KafkaRebalance resource" . Additional resources Section 8.3, "Optimization proposals overview" Section 8.9, "Stopping a cluster rebalance" 8.9. Stopping a cluster rebalance Once started, a cluster rebalance operation might take some time to complete and affect the overall performance of the Kafka cluster. If you want to stop a cluster rebalance operation that is in progress, apply the stop annotation to the KafkaRebalance custom resource. This instructs Cruise Control to finish the current batch of partition reassignments and then stop the rebalance. When the rebalance has stopped, completed partition reassignments have already been applied; therefore, the state of the Kafka cluster is different when compared to prior to the start of the rebalance operation. If further rebalancing is required, you should generate a new optimization proposal. Note The performance of the Kafka cluster in the intermediate (stopped) state might be worse than in the initial state. 
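Whether you are approving or stopping a rebalance, it can be convenient to poll just the rebalance state instead of reading the full describe output while you wait for a transition such as Rebalancing, Ready, or Stopped. A sketch based on the status layout shown earlier; the resource name is illustrative:

# Print only the State condition of the KafkaRebalance resource
oc get kafkarebalance my-rebalance -o jsonpath='{.status.conditions[?(@.type=="State")].status}{"\n"}'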
Prerequisites You have approved the optimization proposal by annotating the KafkaRebalance custom resource with approve . The status of the KafkaRebalance custom resource is Rebalancing . Procedure Annotate the KafkaRebalance resource in OpenShift: oc annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=stop Check the status of the KafkaRebalance resource: oc describe kafkarebalance rebalance-cr-name Wait until the status changes to Stopped . Additional resources Section 8.3, "Optimization proposals overview" 8.10. Fixing problems with a KafkaRebalance resource If an issue occurs when creating a KafkaRebalance resource or interacting with Cruise Control, the error is reported in the resource status, along with details of how to fix it. The resource also moves to the NotReady state. To continue with the cluster rebalance operation, you must fix the problem in the KafkaRebalance resource itself or with the overall Cruise Control deployment. Problems might include the following: A misconfigured parameter in the KafkaRebalance resource. The strimzi.io/cluster label for specifying the Kafka cluster in the KafkaRebalance resource is missing. The Cruise Control server is not deployed as the cruiseControl property in the Kafka resource is missing. The Cruise Control server is not reachable. After fixing the issue, you need to add the refresh annotation to the KafkaRebalance resource. During a "refresh", a new optimization proposal is requested from the Cruise Control server. Prerequisites You have approved an optimization proposal . The status of the KafkaRebalance custom resource for the rebalance operation is NotReady . Procedure Get information about the error from the KafkaRebalance status: oc describe kafkarebalance rebalance-cr-name Attempt to resolve the issue in the KafkaRebalance resource. Annotate the KafkaRebalance resource in OpenShift: oc annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=refresh Check the status of the KafkaRebalance resource: oc describe kafkarebalance rebalance-cr-name Wait until the status changes to PendingProposal , or directly to ProposalReady . Additional resources Section 8.3, "Optimization proposals overview"
[ "RackAwareGoal; MinTopicLeadersPerBrokerGoal; ReplicaCapacityGoal; DiskCapacityGoal; NetworkInboundCapacityGoal; NetworkOutboundCapacityGoal; CpuCapacityGoal", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: topicOperator: {} userOperator: {} cruiseControl: brokerCapacity: inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s config: hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal #", "RackAwareGoal; ReplicaCapacityGoal; DiskCapacityGoal; NetworkInboundCapacityGoal; NetworkOutboundCapacityGoal; CpuCapacityGoal; ReplicaDistributionGoal; PotentialNwOutGoal; DiskUsageDistributionGoal; NetworkInboundUsageDistributionGoal; NetworkOutboundUsageDistributionGoal; CpuUsageDistributionGoal; TopicReplicaDistributionGoal; LeaderReplicaDistributionGoal; LeaderBytesInDistributionGoal; PreferredLeaderElectionGoal", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: topicOperator: {} userOperator: {} cruiseControl: brokerCapacity: inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s config: default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal #", "KafkaRebalance.spec.goals", "describe kafkarebalance <kafka_rebalance_resource_name> -n <namespace>", "get kafkarebalance -o json | jq <jq_query> .", "Name: my-rebalance Namespace: myproject Labels: strimzi.io/cluster=my-cluster Annotations: API Version: kafka.strimzi.io/v1alpha1 Kind: KafkaRebalance Metadata: Status: Conditions: Last Transition Time: 2022-04-05T14:36:11.900Z Status: ProposalReady Type: State Observed Generation: 1 Optimization Result: Data To Move MB: 0 Excluded Brokers For Leadership: Excluded Brokers For Replica Move: Excluded Topics: Intra Broker Data To Move MB: 12 Monitored Partitions Percentage: 100 Num Intra Broker Replica Movements: 0 Num Leader Movements: 24 Num Replica Movements: 55 On Demand Balancedness Score After: 82.91290759174306 On Demand Balancedness Score Before: 78.01176356230222 Recent Windows: 5 Session Id: a4f833bd-2055-4213-bfdd-ad21f95bf184", "describe configmaps <my_rebalance_configmap_name> -n <namespace>", "get configmaps <my_rebalance_configmap_name> -o json | jq '.[\"data\"][\"brokerLoad.json\"]|fromjson|.'", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: # config: default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal cpu.balance.threshold: 1.1 metadata.max.age.ms: 300000 send.buffer.bytes: 131072 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: # config: webserver.http.cors.enabled: true webserver.http.cors.origin: \"*\" webserver.http.cors.exposeheaders: \"User-Task-ID,Content-Type\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: # brokerCapacity: inboundNetwork: 10000KiB/s outboundNetwork: 10000KiB/s #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: cruiseControl: # logging: type: inline loggers: rootLogger.level: \"INFO\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: cruiseControl: # logging: type: external valueFrom: 
configMapKeyRef: name: customConfigMap key: cruise-control-log4j.properties #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: config: webserver.security.enable: false webserver.ssl.enable: false", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: brokerCapacity: 1 inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s # config: 2 default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal # cpu.balance.threshold: 1.1 metadata.max.age.ms: 300000 send.buffer.bytes: 131072 # resources: 3 requests: cpu: 1 memory: 512Mi limits: cpu: 2 memory: 2Gi logging: 4 type: inline loggers: rootLogger.level: \"INFO\" template: 5 pod: metadata: labels: label1: value1 securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120 readinessProbe: 6 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: 7 initialDelaySeconds: 15 timeoutSeconds: 5", "apply -f kafka.yaml", "get deployments -n <my_cluster_operator_namespace>", "NAME READY UP-TO-DATE AVAILABLE my-cluster-cruise-control 1/1 1 1", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: {}", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: goals: - RackAwareGoal - ReplicaCapacityGoal", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: goals: - RackAwareGoal - ReplicaCapacityGoal skipHardGoalCheck: true", "apply -f your-file", "describe kafkarebalance rebalance-cr-name", "describe kafkarebalance rebalance-cr-name", "Status: Conditions: Last Transition Time: 2020-05-19T13:50:12.533Z Status: ProposalReady Type: State Observed Generation: 1 Optimization Result: Data To Move MB: 0 Excluded Brokers For Leadership: Excluded Brokers For Replica Move: Excluded Topics: Intra Broker Data To Move MB: 0 Monitored Partitions Percentage: 100 Num Intra Broker Replica Movements: 0 Num Leader Movements: 0 Num Replica Movements: 26 On Demand Balancedness Score After: 81.8666802863978 On Demand Balancedness Score Before: 78.01176356230222 Recent Windows: 1 Session Id: 05539377-ca7b-45ef-b359-e13564f1458c", "com.linkedin.kafka.cruisecontrol.exception.OptimizationFailureException: [CpuCapacityGoal] Insufficient capacity for cpu (Utilization 615.21, Allowed Capacity 420.00, Threshold: 0.70). Add at least 3 brokers with the same cpu capacity (100.00) as broker-0. Add at least 3 brokers with the same cpu capacity (100.00) as broker-0.", "annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=refresh", "describe kafkarebalance rebalance-cr-name", "annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=approve", "describe kafkarebalance rebalance-cr-name", "annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=stop", "describe kafkarebalance rebalance-cr-name", "describe kafkarebalance rebalance-cr-name", "annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=refresh", "describe kafkarebalance rebalance-cr-name" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/configuring_amq_streams_on_openshift/cruise-control-concepts-str
10.5.4. Timeout
10.5.4. Timeout The Timeout directive defines, in seconds, the amount of time that the server waits for receipts and transmissions during communications, such as the time it waits to receive a GET request or to receive packets on a POST or PUT request. Timeout is set to 300 seconds by default, which is appropriate for most situations.
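For example, the directive is set on a single line in httpd.conf. A value higher than the default might be used for clients on very slow connections; the value shown here is illustrative:

# Wait up to 10 minutes for receipts and transmissions
Timeout 600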
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-timeout
Chapter 11. IngressClass [networking.k8s.io/v1]
Chapter 11. IngressClass [networking.k8s.io/v1] Description IngressClass represents the class of the Ingress, referenced by the Ingress Spec. The ingressclass.kubernetes.io/is-default-class annotation can be used to indicate that an IngressClass should be considered default. When a single IngressClass resource has this annotation set to true, new Ingress resources without a class specified will be assigned this default class. Type object 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object IngressClassSpec provides information about the class of an Ingress. 11.1.1. .spec Description IngressClassSpec provides information about the class of an Ingress. Type object Property Type Description controller string controller refers to the name of the controller that should handle this class. This allows for different "flavors" that are controlled by the same controller. For example, you may have different parameters for the same implementing controller. This should be specified as a domain-prefixed path no more than 250 characters in length, e.g. "acme.io/ingress-controller". This field is immutable. parameters object IngressClassParametersReference identifies an API object. This can be used to specify a cluster or namespace-scoped resource. 11.1.2. .spec.parameters Description IngressClassParametersReference identifies an API object. This can be used to specify a cluster or namespace-scoped resource. Type object Required kind name Property Type Description apiGroup string apiGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string kind is the type of resource being referenced. name string name is the name of resource being referenced. namespace string namespace is the namespace of the resource being referenced. This field is required when scope is set to "Namespace" and must be unset when scope is set to "Cluster". scope string scope represents if this refers to a cluster or namespace scoped resource. This may be set to "Cluster" (default) or "Namespace". 11.2. API endpoints The following API endpoints are available: /apis/networking.k8s.io/v1/ingressclasses DELETE : delete collection of IngressClass GET : list or watch objects of kind IngressClass POST : create an IngressClass /apis/networking.k8s.io/v1/watch/ingressclasses GET : watch individual changes to a list of IngressClass. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/networking.k8s.io/v1/ingressclasses/{name} DELETE : delete an IngressClass GET : read the specified IngressClass PATCH : partially update the specified IngressClass PUT : replace the specified IngressClass /apis/networking.k8s.io/v1/watch/ingressclasses/{name} GET : watch changes to an object of kind IngressClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 11.2.1. /apis/networking.k8s.io/v1/ingressclasses Table 11.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of IngressClass Table 11.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 11.3. 
Body parameters Parameter Type Description body DeleteOptions schema Table 11.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind IngressClass Table 11.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 11.6. HTTP responses HTTP code Reponse body 200 - OK IngressClassList schema 401 - Unauthorized Empty HTTP method POST Description create an IngressClass Table 11.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.8. Body parameters Parameter Type Description body IngressClass schema Table 11.9. HTTP responses HTTP code Reponse body 200 - OK IngressClass schema 201 - Created IngressClass schema 202 - Accepted IngressClass schema 401 - Unauthorized Empty 11.2.2. /apis/networking.k8s.io/v1/watch/ingressclasses Table 11.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of IngressClass. deprecated: use the 'watch' parameter with a list operation instead. Table 11.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 11.2.3. /apis/networking.k8s.io/v1/ingressclasses/{name} Table 11.12. Global path parameters Parameter Type Description name string name of the IngressClass Table 11.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an IngressClass Table 11.14. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 11.15. Body parameters Parameter Type Description body DeleteOptions schema Table 11.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified IngressClass Table 11.17. HTTP responses HTTP code Reponse body 200 - OK IngressClass schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified IngressClass Table 11.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 11.19. Body parameters Parameter Type Description body Patch schema Table 11.20. HTTP responses HTTP code Reponse body 200 - OK IngressClass schema 201 - Created IngressClass schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified IngressClass Table 11.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.22. Body parameters Parameter Type Description body IngressClass schema Table 11.23. HTTP responses HTTP code Reponse body 200 - OK IngressClass schema 201 - Created IngressClass schema 401 - Unauthorized Empty 11.2.4. /apis/networking.k8s.io/v1/watch/ingressclasses/{name} Table 11.24. Global path parameters Parameter Type Description name string name of the IngressClass Table 11.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - If resourceVersionMatch is set to any other value or unset, an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind IngressClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 11.26. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
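For readers who script against this endpoint, the following minimal sketch shows one way to consume it with the Python kubernetes client. It follows the deprecation note above by watching the list endpoint and narrowing the stream to a single object with a fieldSelector; the kubeconfig context and the name my-ingress-class are assumptions for illustration, not values from this reference.

# Minimal sketch: watching IngressClass objects with the Python kubernetes client.
# Assumes a reachable cluster and a valid kubeconfig; the object name is illustrative.
from kubernetes import client, config, watch

config.load_kube_config()          # or config.load_incluster_config() inside a pod
api = client.NetworkingV1Api()

# The per-name watch endpoint is deprecated; watch the list endpoint instead and
# narrow it to one object with a fieldSelector, as the API reference advises.
w = watch.Watch()
for event in w.stream(api.list_ingress_class,
                      field_selector="metadata.name=my-ingress-class",
                      timeout_seconds=60):
    obj = event["object"]
    print(event["type"], obj.metadata.name, obj.metadata.resource_version)
w.stop()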
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/network_apis/ingressclass-networking-k8s-io-v1
Chapter 8. Optional - Enable SAP HA interface for Management of Cluster-controlled ASCS/ERS instances using SAP Management Tools
Chapter 8. Optional - Enable SAP HA interface for Management of Cluster-controlled ASCS/ERS instances using SAP Management Tools When a system administrator controls an SAP instance that is running inside the Pacemaker cluster, either manually or using tools such as SAP Management Console (MC/MMC), the change needs to be made through the HA interface that is provided by the HA cluster software. The SAP Start Service sapstartsrv controls the SAP instances and needs to be configured to communicate with the Pacemaker cluster software through the HA interface. To configure the HA library (halib), follow the Knowledgebase article How to enable the SAP HA Interface for SAP ABAP application server instances managed by the RHEL HA Add-On? .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/configuring_a_cost-optimized_sap_s4hana_ha_cluster_hana_system_replication_ensa2_using_the_rhel_ha_add-on/asmb_cco_enable_sap_ha_interface_configuring-cost-optimized-sap
4.314. system-config-printer
4.314. system-config-printer 4.314.1. RHBA-2011:1638 - system-config-printer bug fix update Updated system-config-printer packages that fix multiple bugs are now available for Red Hat Enterprise Linux 6. The system-config-printer package contains a print queue configuration tool with a graphical user interface. Bug Fixes BZ# 556548 Previously, when a printer queue was added, CUPS (Common Unix Printing System) could leave a symbolic link in the /tmp directory. With this update, CUPS is modified to clean this data. BZ# 579864 Prior to this update, the probe_printer.py file contained a typo. As a consequence, the system-config-printer utility could terminate with a traceback if authentication was required for a CIFS (Common Internet File System) share. The typo has been corrected and tracebacks no longer occur. BZ# 591633 Prior to this update, the default firewall could prevent discovery of Multicast DNS (mDNS) devices. As a consequence, a device could not be found over the network. With this update, system-config-printer uses the D-Bus API of the system-config-firewall utility, which adjusts the firewall so that it allows network printer discovery. BZ# 608070 Due to a bug in the source code, the system-config-printer utility could terminate unexpectedly with an error message on 32-bit architectures. This problem occurred when the user changed the number of copies on the Job Options page, then pressed the Reset button to return the number of copies back to 1, and applied the changes. With this update, the system-config-printer utility is now modified and no longer terminates. BZ# 613708 Previously, only the system-config-printer base package contained the COPYING file. With this update, the COPYING file is also included in the system-config-printer-libs sub-package. BZ# 633595 Prior to this update, Korean characters were not aligned properly in certain dialog boxes. This update corrects the alignment of Korean characters. BZ# 634252 Previously, the system-config-printer utility could become unresponsive if the user provided an empty or wrong credential on a password request and closed the "Not authorized" dialog box. With this update, a D-Bus timeout is set. A new printer window now appears if the user closes the "Not authorized" dialog box. BZ# 634436 Prior to this update, multiple strings were not translated in various translations. With this update, these texts are now translated. BZ# 636523 When renaming a printer queue with a name different only in the case of some characters (lowercase/uppercase), the printer queue was deleted instead of being renamed. With this update, this type of renaming is not allowed, which prevents the queue from being unexpectedly deleted. BZ# 639624 Previously, the getJockeyDriver_thread() call tried to use D-Bus from a separate thread. As a consequence, system-config-printer could terminate unexpectedly with a segmentation fault. With this update, an error message informs users that Jockey drivers cannot be used. BZ# 645426 Prior to this update, the system-config-printer-applet could repeatedly query the CUPS scheduler for printers and jobs. As a consequence, the applet would cause high CPU consumption. With this update, system-config-printer-applet is modified and does not cause high CPU consumption any longer. BZ# 676339 , BZ# 676343 When executing the system-config-printer and system-config-printer-applet utilities in a non-graphical environment using the Secure Shell (SSH) connection, the utilities failed with a traceback. 
With this update, the utilities are now modified to provide an error message instead of a traceback. All users of system-config-printer are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/system-config-printer
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/creating_and_managing_images/making-open-source-more-inclusive
Chapter 5. KafkaClusterSpec schema reference
Chapter 5. KafkaClusterSpec schema reference Used in: KafkaSpec Full list of KafkaClusterSpec schema properties Configures a Kafka cluster using the Kafka custom resource. The config properties are one part of the overall configuration for the resource. Use the config properties to configure Kafka broker options as keys. Example Kafka configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: version: 3.9.0 metadataVersion: 3.9 # ... config: auto.create.topics.enable: "false" offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 2 default.replication.factor: 3 min.insync.replicas: 2 # ... The values can be one of the following JSON types: String Number Boolean Exceptions You can specify and configure the options listed in the Apache Kafka documentation . However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed: Security (encryption, authentication, and authorization) Listener configuration Broker ID configuration Configuration of log data directories Inter-broker communication ZooKeeper connectivity Properties with the following prefixes cannot be set: advertised. authorizer. broker. controller cruise.control.metrics.reporter.bootstrap. cruise.control.metrics.topic host.name inter.broker.listener.name listener. listeners. log.dir password. port process.roles sasl. security. servers,node.id ssl. super.user zookeeper.clientCnxnSocket zookeeper.connect zookeeper.set.acl zookeeper.ssl If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to Kafka, including the following exceptions to the options configured by Streams for Apache Kafka: Any ssl configuration for supported TLS versions and cipher suites Configuration for the zookeeper.connection.timeout.ms property to set the maximum time allowed for establishing a ZooKeeper connection Cruise Control metrics properties: cruise.control.metrics.topic.num.partitions cruise.control.metrics.topic.replication.factor cruise.control.metrics.topic.retention.ms cruise.control.metrics.topic.auto.create.retries cruise.control.metrics.topic.auto.create.timeout.ms cruise.control.metrics.topic.min.insync.replicas Controller properties: controller.quorum.election.backoff.max.ms controller.quorum.election.timeout.ms controller.quorum.fetch.timeout.ms 5.1. Configuring rack awareness and init container images Rack awareness is enabled using the rack property. When rack awareness is enabled, Kafka broker pods use init container to collect the labels from the OpenShift cluster nodes. The container image for this init container can be specified using the brokerRackInitImage property. If the brokerRackInitImage field is not provided, the images used are prioritized as follows: Container image specified in STRIMZI_DEFAULT_KAFKA_INIT_IMAGE environment variable in the Cluster Operator configuration. registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.9.0 container image. Example brokerRackInitImage configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... rack: topologyKey: topology.kubernetes.io/zone brokerRackInitImage: my-org/my-image:latest # ... 
Note Overriding container images is recommended only in special situations, such as when your network does not allow access to the container registry used by Streams for Apache Kafka. In such cases, you should either copy the Streams for Apache Kafka images or build them from the source. Be aware that if the configured image is not compatible with Streams for Apache Kafka images, it might not work properly. 5.2. Logging Kafka has its own configurable loggers, which include the following: log4j.logger.org.apache.zookeeper log4j.logger.kafka log4j.logger.org.apache.kafka log4j.logger.kafka.request.logger log4j.logger.kafka.network.Processor log4j.logger.kafka.server.KafkaApis log4j.logger.kafka.network.RequestChannelUSD log4j.logger.kafka.controller log4j.logger.kafka.log.LogCleaner log4j.logger.state.change.logger log4j.logger.kafka.authorizer.logger Kafka uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. The inline logging specifies the root logger level. You can also set log levels for specific classes or loggers by adding them to the loggers property. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... kafka: # ... logging: type: inline loggers: kafka.root.logger.level: INFO log4j.logger.kafka.coordinator.transaction: TRACE log4j.logger.kafka.log.LogCleanerManager: DEBUG log4j.logger.kafka.request.logger: DEBUG log4j.logger.io.strimzi.kafka.oauth: DEBUG log4j.logger.org.openpolicyagents.kafka.OpaAuthorizer: DEBUG # ... Note Setting a log level to DEBUG may result in a large amount of log output and may have performance implications. External logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: kafka-log4j.properties # ... Any available loggers that are not configured have their level set to OFF . If Kafka was deployed using the Cluster Operator, changes to Kafka logging levels are applied dynamically. If you use external logging, a rolling update is triggered when logging appenders are changed. Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 5.3. KafkaClusterSpec schema properties Property Property type Description version string The Kafka broker version. Defaults to the latest version. Consult the user documentation to understand the process required to upgrade or downgrade the version. metadataVersion string Added in Streams for Apache Kafka 2.7. The KRaft metadata version used by the Kafka cluster. 
This property is ignored when running in ZooKeeper mode. If the property is not set, it defaults to the metadata version that corresponds to the version property. replicas integer The number of pods in the cluster. This property is required when node pools are not used. image string The container image used for Kafka pods. If the property is not set, the default Kafka image version is determined based on the version configuration. The image names are specifically mapped to corresponding versions in the Cluster Operator configuration. Changing the Kafka image version does not automatically update the image versions for other components, such as Kafka Exporter. listeners GenericKafkaListener array Configures listeners to provide access to Kafka brokers. config map Kafka broker config properties with the following prefixes cannot be set: listeners, advertised., broker., listener., host.name, port, inter.broker.listener.name, sasl., ssl., security., password., log.dir, zookeeper.connect, zookeeper.set.acl, zookeeper.ssl, zookeeper.clientCnxnSocket, authorizer., super.user, cruise.control.metrics.topic, cruise.control.metrics.reporter.bootstrap.servers, node.id, process.roles, controller., metadata.log.dir, zookeeper.metadata.migration.enable, client.quota.callback.static.kafka.admin., client.quota.callback.static.produce, client.quota.callback.static.fetch, client.quota.callback.static.storage.per.volume.limit.min.available., client.quota.callback.static.excluded.principal.name.list (with the exception of: zookeeper.connection.timeout.ms, sasl.server.max.receive.size, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols, ssl.secure.random.implementation, cruise.control.metrics.topic.num.partitions, cruise.control.metrics.topic.replication.factor, cruise.control.metrics.topic.retention.ms, cruise.control.metrics.topic.auto.create.retries, cruise.control.metrics.topic.auto.create.timeout.ms, cruise.control.metrics.topic.min.insync.replicas, controller.quorum.election.backoff.max.ms, controller.quorum.election.timeout.ms, controller.quorum.fetch.timeout.ms). storage EphemeralStorage , PersistentClaimStorage , JbodStorage Storage configuration (disk). Cannot be updated. This property is required when node pools are not used. authorization KafkaAuthorizationSimple , KafkaAuthorizationOpa , KafkaAuthorizationKeycloak , KafkaAuthorizationCustom Authorization configuration for Kafka brokers. rack Rack Configuration of the broker.rack broker config. brokerRackInitImage string The image of the init container used for initializing the broker.rack . livenessProbe Probe Pod liveness checking. readinessProbe Probe Pod readiness checking. jvmOptions JvmOptions JVM Options for pods. jmxOptions KafkaJmxOptions JMX Options for Kafka brokers. resources ResourceRequirements CPU and memory resources to reserve. metricsConfig JmxPrometheusExporterMetrics Metrics configuration. logging InlineLogging , ExternalLogging Logging configuration for Kafka. template KafkaClusterTemplate Template for Kafka cluster resources. The template allows users to specify how the OpenShift resources are generated. tieredStorage TieredStorageCustom Configure the tiered storage feature for Kafka brokers. quotas QuotasPluginKafka , QuotasPluginStrimzi Quotas plugin configuration for Kafka brokers allows setting quotas for disk usage, produce/fetch rates, and more. Supported plugin types include kafka (default) and strimzi . If not specified, the default kafka quotas plugin is used.
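As an illustration of how the config map described above is typically updated on a running cluster, the following sketch patches spec.kafka.config on an existing Kafka resource with the Python kubernetes client. The namespace kafka, the cluster name my-cluster, and the option values are assumptions for the example; options with forbidden prefixes would be disregarded by the Cluster Operator and logged as warnings, as noted above.

# Minimal sketch: merging broker options into spec.kafka.config of an existing
# Kafka custom resource. Cluster name, namespace, and values are illustrative.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

patch = {
    "spec": {
        "kafka": {
            "config": {
                "auto.create.topics.enable": "false",
                "min.insync.replicas": 2,
            }
        }
    }
}

custom.patch_namespaced_custom_object(
    group="kafka.strimzi.io",
    version="v1beta2",
    namespace="kafka",          # assumed namespace
    plural="kafkas",
    name="my-cluster",          # assumed cluster name
    body=patch,
)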
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: version: 3.9.0 metadataVersion: 3.9 # config: auto.create.topics.enable: \"false\" offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 2 default.replication.factor: 3 min.insync.replicas: 2", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # rack: topologyKey: topology.kubernetes.io/zone brokerRackInitImage: my-org/my-image:latest #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: # logging: type: inline loggers: kafka.root.logger.level: INFO log4j.logger.kafka.coordinator.transaction: TRACE log4j.logger.kafka.log.LogCleanerManager: DEBUG log4j.logger.kafka.request.logger: DEBUG log4j.logger.io.strimzi.kafka.oauth: DEBUG log4j.logger.org.openpolicyagents.kafka.OpaAuthorizer: DEBUG #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: kafka-log4j.properties #" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkaclusterspec-reference
Preface
Preface This document describes steps for an in-place upgrade from RHEL 6 to RHEL 7. The available in-place upgrade path is from RHEL 6.10 to RHEL 7.9. Note that for RHEL 6.10, only the Extended Life Phase (ELP) support is available. If you are using SAP HANA, follow How do I upgrade from RHEL 6 to RHEL 7 with SAP HANA instead. Note that the upgrade path for RHEL with SAP HANA might differ. The process of upgrading from the latest version of RHEL 6 to the latest version of RHEL 7 consists of the following steps: Check that an upgrade of your system is available. See Chapter 1, Planning an upgrade for more information. Prepare your system for the upgrade by installing required repositories and packages and by removing unsupported packages. See Chapter 2, Preparing a RHEL 6 system for the upgrade for more information. Check your system for problems that might affect your upgrade using the Preupgrade Assistant. See Chapter 3, Assessing upgrade suitability for more information. Upgrade your system by running the Red Hat Upgrade Tool. See Chapter 4, Upgrading your system from RHEL 6 to RHEL 7 for more information.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/upgrading_from_rhel_6_to_rhel_7/pr01
Chapter 6. Test environment supportability test
Chapter 6. Test environment supportability test The Supportability tests, also known as openstack/supportable, ensure that the test environment is compliant with Red Hat's support policy. This test is required for all OpenStack software certifications. The test confirms that the test node (an OpenStack deployment-under-test) consists only of components supported by Red Hat (Red Hat OpenStack Platform, Red Hat Enterprise Linux) or supported by the Partner. An OpenStack deployment-under-test refers to the node where the plugin/application-under-test is installed and also the Undercloud Director node. The openstack/supportable tests include the following subtests. 6.1. Kernel subtest The kernel subtest checks the kernel module running on the test environment. The version of the kernel can be either the original General Availability (GA) version or any subsequent kernel update released for the RHEL major and minor releases. The kernel subtest also ensures that the kernel is not tainted when running in the environment. Success criteria The running kernel is a Red Hat kernel. The running kernel is released by Red Hat for use with the RHEL version. The running kernel is not tainted. The running kernel has not been modified. Additional resources Red Hat Enterprise Linux Life Cycle Red Hat Enterprise Linux Release Dates Why is the kernel "tainted" and how are the taint values deciphered? 6.2. Kernel modules subtest The kernel modules subtest verifies that loaded kernel modules are released by Red Hat, either as part of the kernel's package or added through a Red Hat Driver Update. The kernel module subtest also ensures that kernel modules do not identify as Technology Preview. Success criteria The kernel modules are released by Red Hat and supported. Additional resources What does a "Technology Preview" feature mean? 6.3. Hardware Health subtest The Hardware Health subtest checks the system's health by testing if the hardware is supported, meets the requirements, and has any known hardware vulnerabilities. The subtest does the following: Checks that the Red Hat Enterprise Linux (RHEL) kernel does not identify hardware as unsupported. When the kernel identifies unsupported hardware, it will display an unsupported hardware message in the system logs and/or trigger an unsupported kernel taint. This subtest prevents customers from possible production risks which may arise from running Red Hat products on unsupported configurations and environments. In hypervisor, partitioning, cloud instances, and other virtual machine situations, the kernel may trigger an unsupported hardware message or taint based on the hardware data presented to RHEL by the virtual machine (VM). Checks that the system under test (SUT) meets the minimum hardware requirements. RHEL 8 and 9 : Minimum system RAM should be 1.5GB, per CPU logical core count. Checks if the kernel has reported any known hardware vulnerabilities, if those vulnerabilities have mitigations and if those mitigations have resolved the vulnerability. Many mitigations are automatic to ensure that customers do not need to take active steps to resolve vulnerabilities. In some cases this is not possible; where most of these remaining cases require changes to the configuration of the system BIOS/firmware which may not be modifiable by customers in all situations. Confirms the system does not have any offline CPUs. Confirms if Simultaneous Multithreading (SMT) is available, enabled, and active in the system. 
If any of these checks fail, the test suite reports a WARN, and the partner should verify that the behavior is correct and intended. Success criteria The kernel does not have the UNSUPPORTEDHARDWARE taint bit set. The kernel does not report an unsupported hardware system message. The kernel does not report any vulnerabilities with mitigations as vulnerable. The kernel does not report the logical core to installed memory ratio as out of range. The kernel does not report CPUs in an offline state. Additional resources Minimum required memory Hardware support available in RHEL 8 but removed from RHEL 9 . 6.4. Installed RPMs subtest The installed RPMs subtest verifies that RPM packages installed on the system are released by Red Hat and not modified. Modified packages may introduce risks and impact the supportability of the customer's environment. You might install non-Red Hat packages if necessary, but you must add them to your product's documentation, and they must not modify or conflict with any Red Hat packages. Red Hat will review the output of this test if you install non-Red Hat packages. Success criteria The installed Red Hat RPMs are not modified. The installed non-Red Hat RPMs are necessary and documented. The installed non-Red Hat RPMs do not conflict with Red Hat RPMs or software. For example, you may develop custom packages to manage CPU affinity of interrupt requests (IRQs) for network interfaces. However, such packages might conflict with Red Hat's tuned package, which already provides similar functionality for performance tuning. Additional resources Production Support Scope of Coverage 6.5. SELinux subtest This subtest confirms that Security-Enhanced Linux (SELinux) is running in enforcing mode on the OpenStack deployment-under-test. SELinux adds Mandatory Access Control (MAC) to the Linux kernel and is enabled by default in Red Hat Enterprise Linux. SELinux policy is administratively defined, enforced system-wide, and is not set at user discretion. This reduces vulnerability to privilege escalation attacks and helps limit the damage caused by configuration mistakes. If a process becomes compromised, the attacker only has access to the normal functions of that process and to the files the process has been configured to access. Success criteria SELinux is configured and running in enforcing mode on the OpenStack deployment-under-test. Additional Resources For more information on SELinux in RHEL, see SELinux Users and Administrators Guide .
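As a rough illustration of the system state these subtests inspect, the sketch below reads the kernel taint mask and the SELinux mode on a RHEL host using standard interfaces (/proc/sys/kernel/tainted and the getenforce command). It is a convenience pre-check partners might run before the suite; it is not part of the openstack/supportable tests themselves.

# Illustrative pre-check only; not the openstack/supportable test suite.
# Reads the kernel taint mask and the SELinux mode on a RHEL system.
import subprocess

def kernel_taint() -> int:
    # A non-zero value means the kernel is tainted; individual bits identify why.
    with open("/proc/sys/kernel/tainted") as f:
        return int(f.read().strip())

def selinux_mode() -> str:
    # getenforce prints Enforcing, Permissive, or Disabled.
    return subprocess.run(["getenforce"], capture_output=True,
                          text=True, check=True).stdout.strip()

if __name__ == "__main__":
    print("kernel taint mask:", kernel_taint())
    print("SELinux mode:", selinux_mode())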
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_certification_policy_guide/assembly-test-environment-supportability_rhosp-designate
Release Notes
Release Notes Red Hat Trusted Profile Analyzer 1.3 Release notes for Red Hat Trusted Profile Analyzer 1.3.1 Red Hat Trusted Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_trusted_profile_analyzer/1/html/release_notes/index
Chapter 3. BareMetalHost [metal3.io/v1alpha1]
Chapter 3. BareMetalHost [metal3.io/v1alpha1] Description BareMetalHost is the Schema for the baremetalhosts API Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object BareMetalHostSpec defines the desired state of BareMetalHost. status object BareMetalHostStatus defines the observed state of BareMetalHost. 3.1.1. .spec Description BareMetalHostSpec defines the desired state of BareMetalHost. Type object Required online Property Type Description architecture string CPU architecture of the host, e.g. "x86_64" or "aarch64". If unset, eventually populated by inspection. automatedCleaningMode string When set to disabled, automated cleaning will be skipped during provisioning and deprovisioning. bmc object How do we connect to the BMC (Baseboard Management Controller) on the host? bootMACAddress string The MAC address of the NIC used for provisioning the host. In case of network boot, this is the MAC address of the PXE booting interface. The MAC address of the BMC must never be used here! bootMode string Select the method of initializing the hardware during boot. Defaults to UEFI. Legacy boot should only be used for hardware that does not support UEFI correctly. Set to UEFISecureBoot to turn secure boot on automatically after provisioning. consumerRef object ConsumerRef can be used to store information about something that is using a host. When it is not empty, the host is considered "in use". The common use case is a link to a Machine resource when the host is used by Cluster API. customDeploy object A custom deploy procedure. This is an advanced feature that allows using a custom deploy step provided by a site-specific deployment ramdisk. Most users will want to use "image" instead. Setting this field triggers provisioning. description string Description is a human-entered text used to help identify the host. externallyProvisioned boolean ExternallyProvisioned means something else has provisioned the image running on the host, and the operator should only manage the power status. This field is used for integration with already provisioned hosts and when pivoting hosts between clusters. If unsure, leave this field as false. firmware object Firmware (BIOS) configuration for bare metal server. If set, the requested settings will be applied before the host is provisioned. Only some vendor drivers support this field. An alternative is to use HostFirmwareSettings resources that allow changing arbitrary values and support the generic Redfish-based drivers. hardwareProfile string What is the name of the hardware profile for this host? Hardware profiles are deprecated and should not be used. Use the separate fields Architecture and RootDeviceHints instead. 
Set to "empty" to prepare for the future version of the API without hardware profiles. image object Image holds the details of the image to be provisioned. Populating the image will cause the host to start provisioning. metaData object MetaData holds the reference to the Secret containing host metadata which is passed to the Config Drive. By default, metadata will be generated for the host, so most users do not need to set this field. networkData object NetworkData holds the reference to the Secret containing network configuration which is passed to the Config Drive and interpreted by the first boot software such as cloud-init. online boolean Should the host be powered on? If the host is currently in a stable state (e.g. provisioned), its power state will be forced to match this value. preprovisioningNetworkDataName string PreprovisioningNetworkDataName is the name of the Secret in the local namespace containing network configuration which is passed to the preprovisioning image, and to the Config Drive if not overridden by specifying NetworkData. raid object RAID configuration for bare metal server. If set, the RAID settings will be applied before the host is provisioned. If not, the current settings will not be modified. Only one of the sub-fields hardwareRAIDVolumes and softwareRAIDVolumes can be set at the same time. rootDeviceHints object Provide guidance about how to choose the device for the image being provisioned. The default is currently to use /dev/sda as the root device. taints array Taints is the full, authoritative list of taints to apply to the corresponding Machine. This list will overwrite any modifications made to the Machine on an ongoing basis. taints[] object The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. userData object UserData holds the reference to the Secret containing the user data which is passed to the Config Drive and interpreted by the first-boot software such as cloud-init. The format of user data is specific to the first-boot software. 3.1.2. .spec.bmc Description How do we connect to the BMC (Baseboard Management Controller) on the host? Type object Required address credentialsName Property Type Description address string Address holds the URL for accessing the controller on the network. The scheme part designates the driver to use with the host. credentialsName string The name of the secret containing the BMC credentials (requires keys "username" and "password"). disableCertificateVerification boolean DisableCertificateVerification disables verification of server certificates when using HTTPS to connect to the BMC. This is required when the server certificate is self-signed, but is insecure because it allows a man-in-the-middle to intercept the connection. 3.1.3. .spec.consumerRef Description ConsumerRef can be used to store information about something that is using a host. When it is not empty, the host is considered "in use". The common use case is a link to a Machine resource when the host is used by Cluster API. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. 
For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 3.1.4. .spec.customDeploy Description A custom deploy procedure. This is an advanced feature that allows using a custom deploy step provided by a site-specific deployment ramdisk. Most users will want to use "image" instead. Setting this field triggers provisioning. Type object Required method Property Type Description method string Custom deploy method name. This name is specific to the deploy ramdisk used. If you don't have a custom deploy ramdisk, you shouldn't use CustomDeploy. 3.1.5. .spec.firmware Description Firmware (BIOS) configuration for bare metal server. If set, the requested settings will be applied before the host is provisioned. Only some vendor drivers support this field. An alternative is to use HostFirmwareSettings resources that allow changing arbitrary values and support the generic Redfish-based drivers. Type object Property Type Description simultaneousMultithreadingEnabled boolean Allows a single physical processor core to appear as several logical processors. sriovEnabled boolean SR-IOV support enables a hypervisor to create virtual instances of a PCI-express device, potentially increasing performance. virtualizationEnabled boolean Supports the virtualization of platform hardware. 3.1.6. .spec.image Description Image holds the details of the image to be provisioned. Populating the image will cause the host to start provisioning. Type object Required url Property Type Description checksum string Checksum is the checksum for the image. Required for all formats except for "live-iso". checksumType string ChecksumType is the checksum algorithm for the image, e.g md5, sha256 or sha512. The special value "auto" can be used to detect the algorithm from the checksum. If missing, MD5 is used. If in doubt, use "auto". format string Format contains the format of the image (raw, qcow2, ... ). When set to "live-iso", an ISO 9660 image referenced by the url will be live-booted and not deployed to disk. url string URL is a location of an image to deploy. 3.1.7. .spec.metaData Description MetaData holds the reference to the Secret containing host metadata which is passed to the Config Drive. By default, metadata will be generated for the host, so most users do not need to set this field. Type object Property Type Description name string name is unique within a namespace to reference a secret resource. 
namespace string namespace defines the space within which the secret name must be unique. 3.1.8. .spec.networkData Description NetworkData holds the reference to the Secret containing network configuration which is passed to the Config Drive and interpreted by the first boot software such as cloud-init. Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.1.9. .spec.raid Description RAID configuration for bare metal server. If set, the RAID settings will be applied before the host is provisioned. If not, the current settings will not be modified. Only one of the sub-fields hardwareRAIDVolumes and softwareRAIDVolumes can be set at the same time. Type object Property Type Description hardwareRAIDVolumes `` The list of logical disks for hardware RAID, if rootDeviceHints isn't used, first volume is root volume. You can set the value of this field to [] to clear all the hardware RAID configurations. softwareRAIDVolumes `` The list of logical disks for software RAID, if rootDeviceHints isn't used, first volume is root volume. If HardwareRAIDVolumes is set this item will be invalid. The number of created Software RAID devices must be 1 or 2. If there is only one Software RAID device, it has to be a RAID-1. If there are two, the first one has to be a RAID-1, while the RAID level for the second one can be 0, 1, or 1+0. As the first RAID device will be the deployment device, enforcing a RAID-1 reduces the risk of ending up with a non-booting host in case of a disk failure. Software RAID will always be deleted. 3.1.10. .spec.rootDeviceHints Description Provide guidance about how to choose the device for the image being provisioned. The default is currently to use /dev/sda as the root device. Type object Property Type Description deviceName string A Linux device name like "/dev/vda", or a by-path link to it like "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0". The hint must match the actual value exactly. hctl string A SCSI bus address like 0:0:0:0. The hint must match the actual value exactly. minSizeGigabytes integer The minimum size of the device in Gigabytes. model string A vendor-specific device identifier. The hint can be a substring of the actual value. rotational boolean True if the device should use spinning media, false otherwise. serialNumber string Device serial number. The hint must match the actual value exactly. vendor string The name of the vendor or manufacturer of the device. The hint can be a substring of the actual value. wwn string Unique storage identifier. The hint must match the actual value exactly. wwnVendorExtension string Unique vendor storage identifier. The hint must match the actual value exactly. wwnWithExtension string Unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. 3.1.11. .spec.taints Description Taints is the full, authoritative list of taints to apply to the corresponding Machine. This list will overwrite any modifications made to the Machine on an ongoing basis. Type array 3.1.12. .spec.taints[] Description The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. Type object Required effect key Property Type Description effect string Required. The effect of the taint on pods that do not tolerate the taint. Valid effects are NoSchedule, PreferNoSchedule and NoExecute. key string Required. 
The taint key to be applied to a node. timeAdded string TimeAdded represents the time at which the taint was added. It is only written for NoExecute taints. value string The taint value corresponding to the taint key. 3.1.13. .spec.userData Description UserData holds the reference to the Secret containing the user data which is passed to the Config Drive and interpreted by the first-boot software such as cloud-init. The format of user data is specific to the first-boot software. Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.1.14. .status Description BareMetalHostStatus defines the observed state of BareMetalHost. Type object Required errorCount errorMessage operationalStatus poweredOn provisioning Property Type Description errorCount integer ErrorCount records how many times the host has encoutered an error since the last successful operation errorMessage string The last error message reported by the provisioning subsystem. errorType string ErrorType indicates the type of failure encountered when the OperationalStatus is OperationalStatusError goodCredentials object The last credentials we were able to validate as working. hardware object The hardware discovered to exist on the host. This field will be removed in the API version in favour of the separate HardwareData resource. hardwareProfile string The name of the profile matching the hardware details. Hardware profiles are deprecated and should not be relied on. lastUpdated string LastUpdated identifies when this status was last observed. operationHistory object OperationHistory holds information about operations performed on this host. operationalStatus string OperationalStatus holds the status of the host poweredOn boolean The currently detected power state of the host. This field may get briefly out of sync with the actual state of the hardware while provisioning processes are running. provisioning object Information tracked by the provisioner. triedCredentials object The last credentials we sent to the provisioning backend. 3.1.15. .status.goodCredentials Description The last credentials we were able to validate as working. Type object Property Type Description credentials object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace credentialsVersion string 3.1.16. .status.goodCredentials.credentials Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.1.17. .status.hardware Description The hardware discovered to exist on the host. This field will be removed in the API version in favour of the separate HardwareData resource. Type object Property Type Description cpu object Details of the CPU(s) in the system. firmware object System firmware information. hostname string nics array List of network interfaces for the host. nics[] object NIC describes one network interface on the host. ramMebibytes integer The host's amount of memory in Mebibytes. storage array List of storage (disk, SSD, etc.) available to the host. storage[] object Storage describes one storage device (disk, SSD, etc.) on the host. systemVendor object System vendor information. 
3.1.18. .status.hardware.cpu Description Details of the CPU(s) in the system. Type object Property Type Description arch string clockMegahertz number ClockSpeed is a clock speed in MHz count integer flags array (string) model string 3.1.19. .status.hardware.firmware Description System firmware information. Type object Property Type Description bios object The BIOS for this firmware 3.1.20. .status.hardware.firmware.bios Description The BIOS for this firmware Type object Property Type Description date string The release/build date for this BIOS vendor string The vendor name for this BIOS version string The version of the BIOS 3.1.21. .status.hardware.nics Description List of network interfaces for the host. Type array 3.1.22. .status.hardware.nics[] Description NIC describes one network interface on the host. Type object Property Type Description ip string The IP address of the interface. This will be an IPv4 or IPv6 address if one is present. If both IPv4 and IPv6 addresses are present in a dual-stack environment, two nics will be output, one with each IP. mac string The device MAC address model string The vendor and product IDs of the NIC, e.g. "0x8086 0x1572" name string The name of the network interface, e.g. "en0" pxe boolean Whether the NIC is PXE Bootable speedGbps integer The speed of the device in Gigabits per second vlanId integer The untagged VLAN ID vlans array The VLANs available vlans[] object VLAN represents the name and ID of a VLAN. 3.1.23. .status.hardware.nics[].vlans Description The VLANs available Type array 3.1.24. .status.hardware.nics[].vlans[] Description VLAN represents the name and ID of a VLAN. Type object Property Type Description id integer VLANID is a 12-bit 802.1Q VLAN identifier name string 3.1.25. .status.hardware.storage Description List of storage (disk, SSD, etc.) available to the host. Type array 3.1.26. .status.hardware.storage[] Description Storage describes one storage device (disk, SSD, etc.) on the host. Type object Property Type Description alternateNames array (string) A list of alternate Linux device names of the disk, e.g. "/dev/sda". Note that this list is not exhaustive, and names may not be stable across reboots. hctl string The SCSI location of the device model string Hardware model name string A Linux device name of the disk, e.g. "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0". This will be a name that is stable across reboots if one is available. rotational boolean Whether this disk represents rotational storage. This field is not recommended for usage, please prefer using 'Type' field instead, this field will be deprecated eventually. serialNumber string The serial number of the device sizeBytes integer The size of the disk in Bytes type string Device type, one of: HDD, SSD, NVME. vendor string The name of the vendor of the device wwn string The WWN of the device wwnVendorExtension string The WWN Vendor extension of the device wwnWithExtension string The WWN with the extension 3.1.27. .status.hardware.systemVendor Description System vendor information. Type object Property Type Description manufacturer string productName string serialNumber string 3.1.28. .status.operationHistory Description OperationHistory holds information about operations performed on this host. Type object Property Type Description deprovision object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. inspect object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) 
used for tracking metrics. provision object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. register object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. 3.1.29. .status.operationHistory.deprovision Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.30. .status.operationHistory.inspect Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.31. .status.operationHistory.provision Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.32. .status.operationHistory.register Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.33. .status.provisioning Description Information tracked by the provisioner. Type object Required ID state Property Type Description ID string The hosts's ID from the underlying provisioning tool (e.g. the Ironic node UUID). bootMode string BootMode indicates the boot mode used to provision the host. customDeploy object Custom deploy procedure applied to the host. firmware object The firmware settings that have been applied. image object Image holds the details of the last image successfully provisioned to the host. raid object The RAID configuration that has been applied. rootDeviceHints object The root device hints used to provision the host. state string An indicator for what the provisioner is doing with the host. 3.1.34. .status.provisioning.customDeploy Description Custom deploy procedure applied to the host. Type object Required method Property Type Description method string Custom deploy method name. This name is specific to the deploy ramdisk used. If you don't have a custom deploy ramdisk, you shouldn't use CustomDeploy. 3.1.35. .status.provisioning.firmware Description The firmware settings that have been applied. Type object Property Type Description simultaneousMultithreadingEnabled boolean Allows a single physical processor core to appear as several logical processors. sriovEnabled boolean SR-IOV support enables a hypervisor to create virtual instances of a PCI-express device, potentially increasing performance. virtualizationEnabled boolean Supports the virtualization of platform hardware. 3.1.36. .status.provisioning.image Description Image holds the details of the last image successfully provisioned to the host. Type object Required url Property Type Description checksum string Checksum is the checksum for the image. Required for all formats except for "live-iso". checksumType string ChecksumType is the checksum algorithm for the image, e.g md5, sha256 or sha512. The special value "auto" can be used to detect the algorithm from the checksum. If missing, MD5 is used. If in doubt, use "auto". format string Format contains the format of the image (raw, qcow2, ... ). When set to "live-iso", an ISO 9660 image referenced by the url will be live-booted and not deployed to disk. url string URL is a location of an image to deploy. 3.1.37. .status.provisioning.raid Description The RAID configuration that has been applied. 
Type object Property Type Description hardwareRAIDVolumes `` The list of logical disks for hardware RAID, if rootDeviceHints isn't used, first volume is root volume. You can set the value of this field to [] to clear all the hardware RAID configurations. softwareRAIDVolumes `` The list of logical disks for software RAID, if rootDeviceHints isn't used, first volume is root volume. If HardwareRAIDVolumes is set this item will be invalid. The number of created Software RAID devices must be 1 or 2. If there is only one Software RAID device, it has to be a RAID-1. If there are two, the first one has to be a RAID-1, while the RAID level for the second one can be 0, 1, or 1+0. As the first RAID device will be the deployment device, enforcing a RAID-1 reduces the risk of ending up with a non-booting host in case of a disk failure. Software RAID will always be deleted. 3.1.38. .status.provisioning.rootDeviceHints Description The root device hints used to provision the host. Type object Property Type Description deviceName string A Linux device name like "/dev/vda", or a by-path link to it like "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0". The hint must match the actual value exactly. hctl string A SCSI bus address like 0:0:0:0. The hint must match the actual value exactly. minSizeGigabytes integer The minimum size of the device in Gigabytes. model string A vendor-specific device identifier. The hint can be a substring of the actual value. rotational boolean True if the device should use spinning media, false otherwise. serialNumber string Device serial number. The hint must match the actual value exactly. vendor string The name of the vendor or manufacturer of the device. The hint can be a substring of the actual value. wwn string Unique storage identifier. The hint must match the actual value exactly. wwnVendorExtension string Unique vendor storage identifier. The hint must match the actual value exactly. wwnWithExtension string Unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. 3.1.39. .status.triedCredentials Description The last credentials we sent to the provisioning backend. Type object Property Type Description credentials object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace credentialsVersion string 3.1.40. .status.triedCredentials.credentials Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/baremetalhosts GET : list objects of kind BareMetalHost /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts DELETE : delete collection of BareMetalHost GET : list objects of kind BareMetalHost POST : create a BareMetalHost /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name} DELETE : delete a BareMetalHost GET : read the specified BareMetalHost PATCH : partially update the specified BareMetalHost PUT : replace the specified BareMetalHost /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name}/status GET : read status of the specified BareMetalHost PATCH : partially update status of the specified BareMetalHost PUT : replace status of the specified BareMetalHost 3.2.1. 
/apis/metal3.io/v1alpha1/baremetalhosts HTTP method GET Description list objects of kind BareMetalHost Table 3.1. HTTP responses HTTP code Reponse body 200 - OK BareMetalHostList schema 401 - Unauthorized Empty 3.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts HTTP method DELETE Description delete collection of BareMetalHost Table 3.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind BareMetalHost Table 3.3. HTTP responses HTTP code Reponse body 200 - OK BareMetalHostList schema 401 - Unauthorized Empty HTTP method POST Description create a BareMetalHost Table 3.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.5. Body parameters Parameter Type Description body BareMetalHost schema Table 3.6. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 201 - Created BareMetalHost schema 202 - Accepted BareMetalHost schema 401 - Unauthorized Empty 3.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name} Table 3.7. Global path parameters Parameter Type Description name string name of the BareMetalHost HTTP method DELETE Description delete a BareMetalHost Table 3.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified BareMetalHost Table 3.10. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified BareMetalHost Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.12. HTTP responses HTTP code Response body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified BareMetalHost Table 3.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.14. Body parameters Parameter Type Description body BareMetalHost schema Table 3.15. HTTP responses HTTP code Response body 200 - OK BareMetalHost schema 201 - Created BareMetalHost schema 401 - Unauthorized Empty 3.2.4. /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name}/status Table 3.16. Global path parameters Parameter Type Description name string name of the BareMetalHost HTTP method GET Description read status of the specified BareMetalHost Table 3.17. HTTP responses HTTP code Response body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified BareMetalHost Table 3.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.19. HTTP responses HTTP code Response body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified BareMetalHost Table 3.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.21. Body parameters Parameter Type Description body BareMetalHost schema Table 3.22. HTTP responses HTTP code Response body 200 - OK BareMetalHost schema 201 - Created BareMetalHost schema 401 - Unauthorized Empty
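The root device hint and RAID fields documented above also appear, with the same property names, under the BareMetalHost spec. The following is a minimal sketch of how a host might declare them; the host name, namespace, MAC address, BMC address, and credentials secret name are illustrative assumptions rather than values taken from this reference:

apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: example-host                      # illustrative name
  namespace: openshift-machine-api
spec:
  online: true
  bootMACAddress: "00:11:22:33:44:55"     # illustrative MAC address
  bmc:
    address: ipmi://192.168.111.10        # illustrative BMC address
    credentialsName: example-host-bmc-secret
  rootDeviceHints:
    deviceName: /dev/sda                  # must match the actual device name exactly
    minSizeGigabytes: 100
    rotational: false

The same hint semantics described in the table (exact match for deviceName, serialNumber, and wwn; substring match for model and vendor) apply whether the hints are set in the spec or read back from .status.provisioning.rootDeviceHints.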
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/provisioning_apis/baremetalhost-metal3-io-v1alpha1
Chapter 19. Network policy
Chapter 19. Network policy 19.1. About network policy As a developer, you can define network policies that restrict traffic to pods in your cluster. 19.1.1. About network policy In a cluster using a network plugin that supports Kubernetes network policy, network isolation is controlled entirely by NetworkPolicy objects. In OpenShift Container Platform 4.14, OpenShift SDN supports using network policy in its default network isolation mode. Warning Network policy does not apply to the host network namespace. Pods with host networking enabled are unaffected by network policy rules. However, pods connecting to the host-networked pods might be affected by the network policy rules. Network policies cannot block traffic from localhost or from their resident nodes. By default, all pods in a project are accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project. If a pod is matched by selectors in one or more NetworkPolicy objects, then the pod will accept only connections that are allowed by at least one of those NetworkPolicy objects. A pod that is not selected by any NetworkPolicy objects is fully accessible. A network policy applies to only the TCP, UDP, ICMP, and SCTP protocols. Other protocols are not affected. The following example NetworkPolicy objects demonstrate supporting different scenarios: Deny all traffic: To make a project deny by default, add a NetworkPolicy object that matches all pods but accepts no traffic: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: [] Only allow connections from the OpenShift Container Platform Ingress Controller: To make a project allow only connections from the OpenShift Container Platform Ingress Controller, add the following NetworkPolicy object. apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress Only accept connections from pods within a project: Important To allow ingress connections from hostNetwork pods in the same namespace, you need to apply the allow-from-hostnetwork policy together with the allow-same-namespace policy. 
To make pods accept connections from other pods in the same project, but reject all other connections from pods in other projects, add the following NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} Only allow HTTP and HTTPS traffic based on pod labels: To enable only HTTP and HTTPS access to the pods with a specific label ( role=frontend in the following example), add a NetworkPolicy object similar to the following: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443 Accept connections by using both namespace and pod selectors: To match network traffic by combining namespace and pod selectors, you can use a NetworkPolicy object similar to the following: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods NetworkPolicy objects are additive, which means you can combine multiple NetworkPolicy objects together to satisfy complex network requirements. For example, for the NetworkPolicy objects defined in the previous samples, you can define both allow-same-namespace and allow-http-and-https policies within the same project, allowing the pods with the label role=frontend to accept any connection allowed by either policy. That is, connections on any port from pods in the same namespace, and connections on ports 80 and 443 from pods in any namespace. 19.1.1.1. Using the allow-from-router network policy Use the following NetworkPolicy to allow external traffic regardless of the router configuration: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-router spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: "" 1 podSelector: {} policyTypes: - Ingress 1 The policy-group.network.openshift.io/ingress: "" label supports both OpenShift SDN and OVN-Kubernetes. 19.1.1.2. Using the allow-from-hostnetwork network policy Add the following allow-from-hostnetwork NetworkPolicy object to direct traffic from the host network pods. apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-hostnetwork spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/host-network: "" podSelector: {} policyTypes: - Ingress 19.1.2. Optimizations for network policy with OpenShift SDN Use a network policy to isolate pods that are differentiated from one another by labels within a namespace. It is inefficient to apply NetworkPolicy objects to large numbers of individual pods in a single namespace. Pod labels do not exist at the IP address level, so a network policy generates a separate Open vSwitch (OVS) flow rule for every possible link between every pod selected with a podSelector . For example, if the spec podSelector and the ingress podSelector within a NetworkPolicy object each match 200 pods, then 40,000 (200*200) OVS flow rules are generated. This might slow down a node. When designing your network policy, refer to the following guidelines: Reduce the number of OVS flow rules by using namespaces to contain groups of pods that need to be isolated. 
NetworkPolicy objects that select a whole namespace, by using the namespaceSelector or an empty podSelector , generate only a single OVS flow rule that matches the VXLAN virtual network ID (VNID) of the namespace. Keep the pods that do not need to be isolated in their original namespace, and move the pods that require isolation into one or more different namespaces. Create additional targeted cross-namespace network policies to allow the specific traffic that you do want to allow from the isolated pods. 19.1.3. Optimizations for network policy with OVN-Kubernetes network plugin When designing your network policy, refer to the following guidelines: For network policies with the same spec.podSelector spec, it is more efficient to use one network policy with multiple ingress or egress rules than multiple network policies with subsets of ingress or egress rules. Every ingress or egress rule based on the podSelector or namespaceSelector spec generates a number of OVS flows proportional to (number of pods selected by the network policy) + (number of pods selected by the ingress or egress rule). Therefore, it is preferable to use the podSelector or namespaceSelector spec that can select as many pods as you need in one rule, instead of creating individual rules for every pod. For example, the following policy contains two rules: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchLabels: role: frontend - from: - podSelector: matchLabels: role: backend The following policy expresses those same two rules as one: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchExpressions: - {key: role, operator: In, values: [frontend, backend]} The same guideline applies to the spec.podSelector spec. If you have the same ingress or egress rules for different network policies, it might be more efficient to create one network policy with a common spec.podSelector spec. For example, the following two policies have different rules: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy1 spec: podSelector: matchLabels: role: db ingress: - from: - podSelector: matchLabels: role: frontend --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy2 spec: podSelector: matchLabels: role: client ingress: - from: - podSelector: matchLabels: role: frontend The following network policy expresses those same two rules as one: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy3 spec: podSelector: matchExpressions: - {key: role, operator: In, values: [db, client]} ingress: - from: - podSelector: matchLabels: role: frontend You can apply this optimization only when multiple selectors can be expressed as one. In cases where selectors are based on different labels, it may not be possible to apply this optimization. In those cases, consider applying some new labels for network policy optimization specifically. 19.1.4. Next steps Creating a network policy Optional: Defining a default network policy 19.1.5. Additional resources Projects and namespaces Configuring multitenant network policy NetworkPolicy API 19.2. Creating a network policy As a user with the admin role, you can create a network policy for a namespace. 19.2.1. 
Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 19.2.2. Creating a network policy using the CLI To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a network policy. Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy rule: Create a <policy_name>.yaml file: USD touch <policy_name>.yaml where: <policy_name> Specifies the network policy file name. Define a network policy in the file that you just created, such as in the following examples: Deny ingress from all pods in all namespaces This is a fundamental policy, blocking all cross-pod networking other than cross-pod traffic allowed by the configuration of other Network Policies. kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} policyTypes: - Ingress ingress: [] Allow ingress from all pods in the same namespace kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} Allow ingress traffic to one pod from a particular namespace This policy allows traffic to pods labelled pod-a from pods running in namespace-y . kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-traffic-pod spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y To create the network policy object, enter the following command: USD oc apply -f <policy_name>.yaml -n <namespace> where: <policy_name> Specifies the network policy file name. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output networkpolicy.networking.k8s.io/deny-by-default created Note If you log in to the web console with cluster-admin privileges, you have a choice of creating a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 19.2.3. Creating a default deny all network policy This is a fundamental policy, blocking all cross-pod networking other than network traffic allowed by the configuration of other deployed network policies. This procedure enforces a default deny-by-default policy. 
Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create the following YAML that defines a deny-by-default policy to deny ingress from all pods in all namespaces. Save the YAML in the deny-by-default.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: default 1 spec: podSelector: {} 2 ingress: [] 3 1 namespace: default deploys this policy to the default namespace. 2 podSelector: is empty, this means it matches all the pods. Therefore, the policy applies to all pods in the default namespace. 3 There are no ingress rules specified. This causes incoming traffic to be dropped to all pods. Apply the policy by entering the following command: USD oc apply -f deny-by-default.yaml Example output networkpolicy.networking.k8s.io/deny-by-default created 19.2.4. Creating a network policy to allow traffic from external clients With the deny-by-default policy in place you can proceed to configure a policy that allows traffic from external clients to a pod with the label app=web . Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows external service from the public Internet directly or by using a Load Balancer to access the pod. Traffic is only allowed to a pod with the label app=web . Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy that allows traffic from the public Internet directly or by using a load balancer to access the pod. Save the YAML in the web-allow-external.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-external namespace: default spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {} Apply the policy by entering the following command: USD oc apply -f web-allow-external.yaml Example output networkpolicy.networking.k8s.io/web-allow-external created This policy allows traffic from all resources, including external traffic as illustrated in the following diagram: 19.2.5. Creating a network policy allowing traffic to an application from all namespaces Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows traffic from all pods in all namespaces to a particular application. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. 
You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy that allows traffic from all pods in all namespaces to a particular application. Save the YAML in the web-allow-all-namespaces.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-all-namespaces namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2 1 Applies the policy only to app:web pods in the default namespace. 2 Selects all pods in all namespaces. Note By default, if you omit specifying a namespaceSelector, it does not select any namespaces, which means the policy allows traffic only from the namespace the network policy is deployed to. Apply the policy by entering the following command: USD oc apply -f web-allow-all-namespaces.yaml Example output networkpolicy.networking.k8s.io/web-allow-all-namespaces created Verification Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80 Run the following command to deploy an alpine image in the secondary namespace and to start a shell: USD oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is allowed: # wget -qO- --timeout=2 http://web.default Expected output <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 19.2.6. Creating a network policy allowing traffic to an application from a namespace Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows traffic to a pod with the label app=web from a particular namespace. You might want to do this to: Restrict traffic to a production database only to namespaces where production workloads are deployed. Enable monitoring tools deployed to a particular namespace to scrape metrics from the current namespace. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy that allows traffic from all pods in namespaces with the label purpose=production . 
Save the YAML in the web-allow-prod.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-prod namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2 1 Applies the policy only to app:web pods in the default namespace. 2 Restricts traffic to only pods in namespaces that have the label purpose=production . Apply the policy by entering the following command: USD oc apply -f web-allow-prod.yaml Example output networkpolicy.networking.k8s.io/web-allow-prod created Verification Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80 Run the following command to create the prod namespace: USD oc create namespace prod Run the following command to label the prod namespace: USD oc label namespace/prod purpose=production Run the following command to create the dev namespace: USD oc create namespace dev Run the following command to label the dev namespace: USD oc label namespace/dev purpose=testing Run the following command to deploy an alpine image in the dev namespace and to start a shell: USD oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is blocked: # wget -qO- --timeout=2 http://web.default Expected output wget: download timed out Run the following command to deploy an alpine image in the prod namespace and start a shell: USD oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is allowed: # wget -qO- --timeout=2 http://web.default Expected output <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 19.2.7. Additional resources Accessing the web console Logging for egress firewall and network policy rules 19.3. Viewing a network policy As a user with the admin role, you can view a network policy for a namespace. 19.3.1. Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 19.3.2. Viewing network policies using the CLI You can examine the network policies in a namespace. 
Note If you log in with a user with the cluster-admin role, then you can view any network policy in the cluster. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace where the network policy exists. Procedure List network policies in a namespace: To view network policy objects defined in a namespace, enter the following command: USD oc get networkpolicy Optional: To examine a specific network policy, enter the following command: USD oc describe networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy to inspect. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. For example: USD oc describe networkpolicy allow-same-namespace Output for oc describe command Name: allow-same-namespace Namespace: ns1 Created on: 2021-05-24 22:28:56 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: PodSelector: <none> Not affecting egress traffic Policy Types: Ingress Note If you log in to the web console with cluster-admin privileges, you have a choice of viewing a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 19.4. Editing a network policy As a user with the admin role, you can edit an existing network policy for a namespace. 19.4.1. Editing a network policy You can edit a network policy in a namespace. Note If you log in with a user with the cluster-admin role, then you can edit a network policy in any namespace in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace where the network policy exists. Procedure Optional: To list the network policy objects in a namespace, enter the following command: USD oc get networkpolicy where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Edit the network policy object. If you saved the network policy definition in a file, edit the file and make any necessary changes, and then enter the following command. USD oc apply -n <namespace> -f <policy_file>.yaml where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. <policy_file> Specifies the name of the file containing the network policy. If you need to update the network policy object directly, enter the following command: USD oc edit networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Confirm that the network policy object is updated. USD oc describe networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. 
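As an illustration of the file-based editing workflow, the following is a minimal sketch that assumes an existing policy saved in allow-http-and-https.yaml, to which an additional allowed port is added before the file is reapplied; the added port 8443 is illustrative:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-http-and-https
spec:
  podSelector:
    matchLabels:
      role: frontend
  ingress:
  - ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
    - protocol: TCP
      port: 8443      # newly added entry; reapply with: oc apply -n <namespace> -f allow-http-and-https.yaml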
Note If you log in to the web console with cluster-admin privileges, you have a choice of editing a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 19.4.2. Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 19.4.3. Additional resources Creating a network policy 19.5. Deleting a network policy As a user with the admin role, you can delete a network policy from a namespace. 19.5.1. Deleting a network policy using the CLI You can delete a network policy in a namespace. Note If you log in with a user with the cluster-admin role, then you can delete any network policy in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace where the network policy exists. Procedure To delete a network policy object, enter the following command: USD oc delete networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output networkpolicy.networking.k8s.io/default-deny deleted Note If you log in to the web console with cluster-admin privileges, you have a choice of deleting a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 19.6. Defining a default network policy for projects As a cluster administrator, you can modify the new project template to automatically include network policies when you create a new project. If you do not yet have a customized template for new projects, you must first create one. 19.6.1. Modifying the template for new projects As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements. To create your own custom project template: Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Log in as a user with cluster-admin privileges. Generate the default project template: USD oc adm create-bootstrap-project-template -o yaml > template.yaml Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects. The project template must be created in the openshift-config namespace. Load your modified template: USD oc create -f template.yaml -n openshift-config Edit the project configuration resource using the web console or CLI. 
Using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request . Project configuration resource with custom project template apiVersion: config.openshift.io/v1 kind: Project metadata: # ... spec: projectRequestTemplate: name: <template_name> # ... After you save your changes, create a new project to verify that your changes were successfully applied. 19.6.2. Adding network policies to the new project template As a cluster administrator, you can add network policies to the default template for new projects. OpenShift Container Platform will automatically create all the NetworkPolicy objects specified in the template in the project. Prerequisites Your cluster uses a default CNI network plugin that supports NetworkPolicy objects, such as the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You must log in to the cluster with a user with cluster-admin privileges. You must have created a custom default project template for new projects. Procedure Edit the default template for a new project by running the following command: USD oc edit template <project_template> -n openshift-config Replace <project_template> with the name of the default template that you configured for your cluster. The default template name is project-request . In the template, add each NetworkPolicy object as an element to the objects parameter. The objects parameter accepts a collection of one or more objects. In the following example, the objects parameter collection includes several NetworkPolicy objects. objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress ... Optional: Create a new project to confirm that your network policy objects are created successfully by running the following commands: Create a new project: USD oc new-project <project> 1 1 Replace <project> with the name for the project you are creating. Confirm that the network policy objects in the new project template exist in the new project: USD oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s 19.7. Configuring multitenant isolation with network policy As a cluster administrator, you can configure your network policies to provide multitenant network isolation. Note If you are using the OpenShift SDN network plugin, configuring network policies as described in this section provides network isolation similar to multitenant mode but with network policy mode set. 
19.7.1. Configuring multitenant isolation by using network policy You can configure your project to isolate it from pods and services in other project namespaces. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. Procedure Create the following NetworkPolicy objects: A policy named allow-from-openshift-ingress : USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: "" podSelector: {} policyTypes: - Ingress EOF Note policy-group.network.openshift.io/ingress: "" is the preferred namespace selector label for OpenShift SDN. You can use the network.openshift.io/policy-group: ingress namespace selector label, but this is a legacy label. A policy named allow-from-openshift-monitoring : USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF A policy named allow-same-namespace : USD cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF A policy named allow-from-kube-apiserver-operator : USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress EOF For more details, see New kube-apiserver-operator webhook controller validating health of webhook . Optional: To confirm that the network policies exist in your current project, enter the following command: USD oc describe networkpolicy Example output Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress 19.7.2. Next steps Defining a default network policy 19.7.3. Additional resources OpenShift SDN network isolation modes
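If you prefer to keep these project-scoped policies under version control instead of creating them from heredocs, they can also be stored in a single multi-document YAML file and applied with one oc apply command. The following is a minimal sketch showing two of the four policies; the file name multitenant-policies.yaml is illustrative:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          policy-group.network.openshift.io/ingress: ""
  podSelector: {}
  policyTypes:
  - Ingress
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}

Apply the file to the target project with oc apply -f multitenant-policies.yaml -n <namespace>.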
[ "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: []", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {}", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-router spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" 1 podSelector: {} policyTypes: - Ingress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-hostnetwork spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/host-network: \"\" podSelector: {} policyTypes: - Ingress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchLabels: role: frontend - from: - podSelector: matchLabels: role: backend", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchExpressions: - {key: role, operator: In, values: [frontend, backend]}", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy1 spec: podSelector: matchLabels: role: db ingress: - from: - podSelector: matchLabels: role: frontend --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy2 spec: podSelector: matchLabels: role: client ingress: - from: - podSelector: matchLabels: role: frontend", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy3 spec: podSelector: matchExpressions: - {key: role, operator: In, values: [db, client]} ingress: - from: - podSelector: matchLabels: role: frontend", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "touch <policy_name>.yaml", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} policyTypes: - Ingress ingress: []", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {}", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-traffic-pod spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y", "oc apply -f <policy_name>.yaml -n <namespace>", "networkpolicy.networking.k8s.io/deny-by-default created", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: default 1 spec: podSelector: {} 2 ingress: [] 3", 
"oc apply -f deny-by-default.yaml", "networkpolicy.networking.k8s.io/deny-by-default created", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-external namespace: default spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {}", "oc apply -f web-allow-external.yaml", "networkpolicy.networking.k8s.io/web-allow-external created", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-all-namespaces namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2", "oc apply -f web-allow-all-namespaces.yaml", "networkpolicy.networking.k8s.io/web-allow-all-namespaces created", "oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80", "oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh", "wget -qO- --timeout=2 http://web.default", "<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-prod namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2", "oc apply -f web-allow-prod.yaml", "networkpolicy.networking.k8s.io/web-allow-prod created", "oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80", "oc create namespace prod", "oc label namespace/prod purpose=production", "oc create namespace dev", "oc label namespace/dev purpose=testing", "oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh", "wget -qO- --timeout=2 http://web.default", "wget: download timed out", "oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh", "wget -qO- --timeout=2 http://web.default", "<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. 
Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "oc get networkpolicy", "oc describe networkpolicy <policy_name> -n <namespace>", "oc describe networkpolicy allow-same-namespace", "Name: allow-same-namespace Namespace: ns1 Created on: 2021-05-24 22:28:56 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: PodSelector: <none> Not affecting egress traffic Policy Types: Ingress", "oc get networkpolicy", "oc apply -n <namespace> -f <policy_file>.yaml", "oc edit networkpolicy <policy_name> -n <namespace>", "oc describe networkpolicy <policy_name> -n <namespace>", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "oc delete networkpolicy <policy_name> -n <namespace>", "networkpolicy.networking.k8s.io/default-deny deleted", "oc adm create-bootstrap-project-template -o yaml > template.yaml", "oc create -f template.yaml -n openshift-config", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>", "oc edit template <project_template> -n openshift-config", "objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress", "oc new-project <project> 1", "oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" podSelector: {} policyTypes: - Ingress EOF", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF", "cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: 
name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress EOF", "oc describe networkpolicy", "Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/networking/network-policy
Chapter 4. How to write an OpenAPI document for use as a 3scale API Management OpenAPI spec
Chapter 4. How to write an OpenAPI document for use as a 3scale API Management OpenAPI spec If you only want to read the code, all the examples are on OAS Petstore example source code . 3scale ActiveDocs are based on the specification of RESTful web services called Swagger (from Wordnik ). This example is based on the Extended OpenAPI Specification Petstore example and draws all the specification data from the OpenAPI Specification 2.0 specification document . Prerequisites An OpenAPI Specification (OAS) compliant specification for your REST API is required to power ActiveDocs on your Developer Portal. OAS is not only a specification. It also provides a full-featured framework: Servers for the specification of the resources in multiple languages (NodeJS, Scala, and others). A set of HTML/CSS/JavaScript assets that take the specification file and generate an attractive UI. An OAS codegen project , which allows generation of client libraries automatically from a Swagger-compliant server. Support to create client-side libraries in a number of modern languages. 4.1. Setting up 3scale API Management ActiveDocs and OAS ActiveDocs is an instance of OAS. With ActiveDocs, you do not have to run your own OAS server or deal with the user interface components of the interactive documentation. The interactive documentation is served and rendered from your 3scale Developer Portal. 3scale 2.8 introduced OAS 3.0 with limited support in ActiveDocs. This means that some features working with ActiveDocs, such as autocompletion, are not yet fully integrated, and consequently 3scale defaults to OAS 2.0 when creating new accounts. For more details about OAS 3.0 and ActiveDocs, refer to Section 2.1, "OpenAPI Specification 3.0 usage with 3scale API Management" . Prerequisites Ensure that the template used in the Developer Portal implements the same OAS version specified in the Admin Portal. Procedure Build a specification of your API compliant with OAS. Add the specification to your Admin Portal. Results Interactive documentation for your API is now available. API consumers can send requests to your API through your Developer Portal. If you already have an OAS-compliant specification of your API, you can add it in your Developer Portal. See the tutorial on the ActiveDocs configuration . 3scale extends OAS in several ways to accommodate certain features that are needed for Developer Portal interactive API documentation: Auto-fill of API keys. 4.2. OpenAPI document example: Petstore API To read the specification from the original source, see the OpenAPI Specification . On the OAS site, there are multiple examples of OpenAPI documents that define APIs. If you like to learn by example, you can follow the example of the Petstore API by the OAS API Team. The Petstore API is an extremely simple API. It is meant as a learning tool, not for production. Petstore API methods The Petstore API is composed of 4 methods: GET /api/pets returns all pets from the system POST /api/pets creates a new pet in the store GET /api/pets/{id} returns a pet based on a single ID DELETE /api/pets/{id} deletes a single pet based on the ID The Petstore API is integrated with 3scale, and for this reason you must add an additional parameter for authentication. For example, with the user key authentication method, an API consumer must put the user key parameter in the header of each request. For information about other authentication methods, see Authentication patterns . 
User key parameters user_key: {user_key} The user_key will be sent by the API consumers in their requests to your API. The API consumers will obtain those keys in the 3scale administrator's Developer Portal. On receiving the key, the 3scale administrator must perform the authorization check against 3scale, using the Service Management API. More on the OpenAPI Specification For your API consumers, the documentation of your API represented in cURL calls would look like this:
4.3. Additional OAS specification information If you want your documentation to look like the OAS Petstore Documentation, you must create a Swagger-compliant specification like the associated Petstore swagger.json file. You can use this specification out-of-the-box to test your ActiveDocs. But remember that this is not your API. OAS relies on a resource declaration that maps to a hash encoded in JSON. Use the Petstore swagger.json file as an example and learn about each object. OAS object This is the root document object for the API specification. It lists all the highest-level fields. info object The info object provides the metadata about the API. This content is presented in the ActiveDocs page. paths object The paths object holds the relative paths to the individual endpoints. The path is appended to the basePath to construct the full URL. The paths might be empty because of access control list (ACL) constraints. Parameters that are not objects use primitive data types. In Swagger, primitive data types are based on the types supported by the JSON-Schema Draft 4. There is an additional primitive data type file, but 3scale uses it only if the API endpoint has CORS enabled. With CORS enabled, the upload does not go through the api-docs gateway, where it would be rejected. Currently, OAS supports the following dataTypes: integer (formats: int32 and int64, both signed), number (formats: float and double), string (plain, or with the formats byte, date, date-time, password, and binary), and boolean. Additional resources OpenAPI Object Info Object Paths Object API Server and Base URL
4.4. OAS design and editing tools The following tools are useful for designing and editing the OpenAPI specification that defines your API: The open source Apicurio Studio enables you to design and edit your OpenAPI-based APIs in a web-based application. Apicurio Studio provides a design view, so you do not need detailed knowledge of the OpenAPI specification. The source view enables expert users to edit directly in YAML or JSON. For more details, see Getting Started with Apicurio Studio. Red Hat also provides a lightweight version of Apicurio Studio named API Designer, which is included with Fuse Online on OpenShift. For more details, see Developing and Deploying API Provider Integrations. The JSON Editor Online is useful if you are very familiar with the JSON notation. It gives a pretty format to compact JSON and provides a JSON object browser. The Swagger Editor enables you to create and edit your OAS API specification written in YAML in your browser and preview it in real time. You can also generate a valid JSON specification, which you can upload later to your 3scale Admin Portal. You can use the live demo version with limited functionality, or deploy your own OAS Editor.
4.5. ActiveDocs auto-fill of API credentials Auto-fill of API credentials is a useful extension to OAS in 3scale ActiveDocs.
You can define the x-data-threescale-name field with the following values depending on your API authentication mode: user_keys : Returns the user keys for applications of the services that use API key authentication only. app_ids : Returns the IDs for applications of the services that use App ID/App Key. OAuth and OpenID Connect are also supported for backwards compatibility. app_keys : Returns the keys for applications of services that use App ID/App Key. OAuth and OpenID Connect are also supported for backwards compatibility. Note The x-data-threescale-name field is an OAS extension that is ignored outside the domain of ActiveDocs. API key authentication example The following example shows using "x-data-threescale-name": "user_keys" for API key authentication only: "parameters": [ { "name": "user_key", "in": "query", "description": "Your API access key", "required": true, "schema": { "type": "string" }, "x-data-threescale-name": "user_keys" } ] For the parameters declared with x-data-threescale-name, when you log in to the Developer Portal you will see a drop-down list with the five latest keys (user key, App ID, or App key, according to the value configured in the specification), so you can auto-fill the input without having to copy and paste the value.
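For services that use the App ID/App Key pattern, the declaration is analogous. The following sketch is an assumption modeled on the user_keys example above rather than a verbatim excerpt from the 3scale documentation; it pairs the app_id and app_key parameters with the app_ids and app_keys values.

"parameters": [
  {
    "name": "app_id",
    "in": "query",
    "description": "Your application ID",
    "required": true,
    "schema": { "type": "string" },
    "x-data-threescale-name": "app_ids"
  },
  {
    "name": "app_key",
    "in": "query",
    "description": "Your application key",
    "required": true,
    "schema": { "type": "string" },
    "x-data-threescale-name": "app_keys"
  }
]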
[ "curl -X GET \"http://example.com/api/pets?tags=TAGS&limit=LIMIT\" -H \"user_key: {user_key}\" curl -X POST \"http://example.com/api/pets\" -H \"user_key: {user_key}\" -d \"{ \"name\": \"NAME\", \"tag\": \"TAG\", \"id\": ID }\" curl -X GET \"http://example.com/api/pets/{id}\" -H \"user_key: {user_key}\" curl -X DELETE \"http://example.com/api/pets/{id}\" -H \"user_key: {user_key}\"", "\"parameters\": [ { \"name\": \"user_key\", \"in\": \"query\", \"description\": \"Your API access key\", \"required\": true, \"schema\": { \"type\": \"string\" }, \"x-data-threescale-name\": \"user_keys\" } ]" ]
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/providing_apis_in_the_developer_portal/how-to-write-an-openapi-document-for-use-as-a-threescale-openapi-spec_creating-a-new-service-based-on-oas
probe::scsi.iocompleted
probe::scsi.iocompleted Name probe::scsi.iocompleted - SCSI mid-layer running the completion processing for block device I/O requests Synopsis scsi.iocompleted Values device_state The current state of the device dev_id The scsi device id req_addr The current struct request pointer, as a number data_direction_str Data direction, as a string device_state_str The current state of the device, as a string lun The lun number goodbytes The bytes completed data_direction The data_direction specifies whether this command is from/to the device channel The channel number host_no The host number
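A minimal SystemTap script that uses this probe and prints the values listed above could look like the following; the output format is illustrative only, and running it requires the usual SystemTap prerequisites (matching kernel debuginfo packages and root privileges).

# scsi_iocompleted.stp - print one line per completed SCSI block device I/O request
probe scsi.iocompleted {
  printf("host=%d chan=%d id=%d lun=%d dir=%s goodbytes=%d state=%s\n",
         host_no, channel, dev_id, lun,
         data_direction_str, goodbytes, device_state_str)
}

Run it with stap scsi_iocompleted.stp and stop it with Ctrl+C.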
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-scsi-iocompleted
Appendix C. Ceph Monitor configuration options
Appendix C. Ceph Monitor configuration options The following are Ceph monitor configuration options that can be set up during deployment. You can set these configuration options with the ceph config set mon CONFIGURATION_OPTION VALUE command.
Configuration option Description Type Default
mon_force_quorum_join Force monitor to join quorum even if it has been previously removed from the map. Boolean False
mon_dns_srv_name The service name used for querying the DNS for the monitor hosts/addresses. String ceph-mon
fsid The cluster ID. One per cluster. UUID N/A. May be generated by a deployment tool if not specified.
mon_data_size_warn Ceph issues a HEALTH_WARN status in the cluster log when the monitor's data store reaches this threshold. The default value is 15GB. Integer 15*1024*1024*1024
mon_data_avail_warn Ceph issues a HEALTH_WARN status in the cluster log when the available disk space of the monitor's data store is lower than or equal to this percentage. Integer 30
mon_data_avail_crit Ceph issues a HEALTH_ERR status in the cluster log when the available disk space of the monitor's data store is lower than or equal to this percentage. Integer 5
mon_warn_on_cache_pools_without_hit_sets Ceph issues a HEALTH_WARN status in the cluster log if a cache pool does not have the hit_set_type parameter set. Boolean True
mon_warn_on_crush_straw_calc_version_zero Ceph issues a HEALTH_WARN status in the cluster log if the CRUSH's straw_calc_version is zero. See CRUSH tunables for details. Boolean True
mon_warn_on_legacy_crush_tunables Ceph issues a HEALTH_WARN status in the cluster log if CRUSH tunables are too old (older than mon_crush_min_required_version). Boolean True
mon_crush_min_required_version This setting defines the minimum tunable profile version required by the cluster. String hammer
mon_warn_on_osd_down_out_interval_zero Ceph issues a HEALTH_WARN status in the cluster log if the mon_osd_down_out_interval setting is zero, because the Leader behaves in a similar manner when the noout flag is set. Administrators find it easier to troubleshoot a cluster by setting the noout flag. Ceph issues the warning to ensure administrators know that the setting is zero. Boolean True
mon_health_to_clog This setting enables Ceph to send a health summary to the cluster log periodically. Boolean True
mon_health_detail_to_clog This setting enables Ceph to send health details to the cluster log periodically. Boolean True
mon_op_complaint_time Number of seconds after which the Ceph Monitor operation is considered blocked after no updates. Integer 30
mon_health_to_clog_tick_interval How often (in seconds) the monitor sends a health summary to the cluster log. A non-positive number disables it. If the current health summary is empty or identical to the last time, the monitor will not send the status to the cluster log. Float 60.000000
mon_health_to_clog_interval How often (in seconds) the monitor sends a health summary to the cluster log. A non-positive number disables it. The monitor will always send the summary to the cluster log. Integer 600
mon_sync_timeout The number of seconds the monitor will wait for the update message from its sync provider before it gives up and bootstraps again. Double 60.000000
mon_sync_max_payload_size The maximum size for a sync payload (in bytes). 32-bit Integer 1045676
paxos_max_join_drift The maximum Paxos iterations before we must first sync the monitor data stores. When a monitor finds that its peer is too far ahead of it, it will first sync with data stores before moving on.
Integer 10
paxos_stash_full_interval How often (in commits) to stash a full copy of the PaxosService state. Currently this setting only affects mds , mon , auth , and mgr PaxosServices. Integer 25
paxos_propose_interval Gather updates for this time interval before proposing a map update. Double 1.0
paxos_min The minimum number of paxos states to keep around. Integer 500
paxos_min_wait The minimum amount of time to gather updates after a period of inactivity. Double 0.05
paxos_trim_min Number of extra proposals tolerated before trimming. Integer 250
paxos_trim_max The maximum number of extra proposals to trim at a time. Integer 500
paxos_service_trim_min The minimum amount of versions to trigger a trim (0 disables it). Integer 250
paxos_service_trim_max The maximum amount of versions to trim during a single proposal (0 disables it). Integer 500
mon_mds_force_trim_to Force monitor to trim mdsmaps to this point (0 disables it. Dangerous, use with care). Integer 0
mon_osd_force_trim_to Force monitor to trim osdmaps to this point, even if there are PGs that are not clean at the specified epoch (0 disables it. Dangerous, use with care). Integer 0
mon_osd_cache_size The size of the osdmap cache, so that the monitor does not rely on the underlying store's cache. Integer 500
mon_election_timeout On election proposer, maximum waiting time for all ACKs in seconds. Float 5
mon_lease The length (in seconds) of the lease on the monitor's versions. Float 5
mon_lease_renew_interval_factor mon lease * mon lease renew interval factor will be the interval for the Leader to renew the other monitor's leases. The factor should be less than 1.0 . Float 0.6
mon_lease_ack_timeout_factor The Leader will wait mon lease * mon lease ack timeout factor for the Providers to acknowledge the lease extension. Float 2.0
mon_min_osdmap_epochs Minimum number of OSD map epochs to keep at all times. 32-bit Integer 500
mon_max_log_epochs Maximum number of Log epochs the monitor should keep. 32-bit Integer 500
mon_tick_interval A monitor's tick interval in seconds. 32-bit Integer 5
mon_clock_drift_allowed The clock drift in seconds allowed between monitors. Float .050
mon_clock_drift_warn_backoff Exponential backoff for clock drift warnings. Float 5
mon_timecheck_interval The time check interval (clock drift check) in seconds for the leader. Float 300.0
mon_timecheck_skew_interval The time check interval (clock drift check) in seconds for the Leader when a clock skew is present. Float 30.0
mon_max_osd The maximum number of OSDs allowed in the cluster. 32-bit Integer 10000
mon_globalid_prealloc The number of global IDs to pre-allocate for clients and daemons in the cluster. 32-bit Integer 10000
mon_subscribe_interval The refresh interval, in seconds, for subscriptions. The subscription mechanism enables obtaining the cluster maps and log information. Double 86400.000000
mon_stat_smooth_intervals Ceph will smooth statistics over the last N PG maps. Integer 6
mon_probe_timeout Number of seconds the monitor will wait to find peers before bootstrapping. Double 2.0
mon_daemon_bytes The message memory cap for metadata server and OSD messages (in bytes). 64-bit Integer Unsigned 400ul << 20
mon_max_log_entries_per_event The maximum number of log entries per event. Integer 4096
mon_osd_prime_pg_temp Enables or disables priming the PGMap with the previous OSDs when an out OSD comes back into the cluster. With the true setting, clients continue to use the previous OSDs until the newly in OSDs for that PG have peered.
Boolean true
mon_osd_prime_pg_temp_max_time How much time in seconds the monitor should spend trying to prime the PGMap when an out OSD comes back into the cluster. Float 0.5
mon_accept_timeout_factor The Leader will wait mon lease * mon accept timeout factor for the Requesters to accept a Paxos update. It is also used during the Paxos recovery phase for similar purposes. Float 2.0
mon_max_pgmap_epochs Maximum number of PG map epochs the monitor should keep. 32-bit Integer 500
clock_offset How much to offset the system clock. See Clock.cc for details. Double 0
mon_sync_fs_threshold Synchronize with the filesystem when writing the specified number of objects. Set it to 0 to disable it. 32-bit Integer 5
mon_mds_skip_sanity Skip safety assertions on FSMap, in case of bugs where we want to continue anyway. Monitor terminates if the FSMap sanity check fails, but we can disable it by enabling this option. Boolean False
mon_max_mdsmap_epochs The maximum amount of mdsmap epochs to trim during a single proposal. Integer 500
mon_config_key_max_entry_size The maximum size of config-key entry (in bytes). Integer 65536
mon_warn_pg_not_scrubbed_ratio The percentage of the scrub max interval past the scrub max interval to warn.
Float 0.5
mon_warn_pg_not_deep_scrubbed_ratio The percentage of the deep scrub interval past the deep scrub interval to warn. Float 0.75
mon_scrub_interval How often, in seconds, the monitor scrubs its store by comparing the stored checksums with the computed ones of all the stored keys. Integer 3600*24
mon_scrub_timeout The timeout to restart the scrub if the mon quorum participant does not respond for the latest chunk. Integer 5 min
mon_scrub_max_keys The maximum number of keys to scrub each time. Integer 100
mon_scrub_inject_missing_keys The probability of injecting missing keys into mon scrub. Float 0
mon_compact_on_start Compact the database used as Ceph Monitor store on ceph-mon start. A manual compaction helps to shrink the monitor database and improve its performance if the regular compaction fails to work. Boolean False
mon_compact_on_bootstrap Compact the database used as Ceph Monitor store on bootstrap. The monitors start probing each other to create a quorum after bootstrap. If a monitor times out before joining the quorum, it will start over and bootstrap itself again. Boolean False
mon_compact_on_trim Compact a certain prefix (including paxos) when we trim its old states. Boolean True
mon_osd_mapping_pgs_per_chunk We calculate the mapping from the placement group to OSDs in chunks. This option specifies the number of placement groups per chunk. Integer 4096
rados_mon_op_timeout Number of seconds to wait for a response from the monitor before returning an error from a rados operation. A value of 0 means no limit (no timeout). Double 0
Additional Resources Pool Values CRUSH tunables
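As a quick illustration of the ceph config set mon CONFIGURATION_OPTION VALUE syntax mentioned at the beginning of this appendix, the following commands adjust two of the options listed above and read the values back; the values shown are examples, not recommendations.

# Allow up to 100 ms of clock drift between monitors
ceph config set mon mon_clock_drift_allowed 0.100
# Warn when a monitor data store exceeds about 20 GB instead of the 15 GB default
ceph config set mon mon_data_size_warn 21474836480
# Read the values back from the configuration database
ceph config get mon mon_clock_drift_allowed
ceph config get mon mon_data_size_warn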
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/configuration_guide/ceph-monitor-configuration-options_conf
Chapter 3. Using realmd to Connect to an Active Directory Domain
Chapter 3. Using realmd to Connect to an Active Directory Domain The realmd system provides a clear and simple way to discover and join identity domains to achieve direct domain integration. It configures underlying Linux system services, such as SSSD or Winbind, to connect to the domain. Chapter 2, Using Active Directory as an Identity Provider for SSSD describes how to use the System Security Services Daemon (SSSD) on a local system and Active Directory as a back-end identity provider. Ensuring that the system is properly configured for this can be a complex task: there are a number of different configuration parameters for each possible identity provider and for SSSD itself. In addition, all domain information must be available in advance and then properly formatted in the SSSD configuration for SSSD to integrate the local system with AD. The realmd system simplifies that configuration. It can run a discovery search to identify available AD and Identity Management domains and then join the system to the domain, as well as set up the required client services used to connect to the given identity domain and manage user access. Additionally, because SSSD as an underlying service supports multiple domains, realmd can discover and support multiple domains as well. 3.1. Supported Domain Types and Clients The realmd system supports the following domain types: Microsoft Active Directory Red Hat Enterprise Linux Identity Management The following domain clients are supported by realmd : SSSD for both Red Hat Enterprise Linux Identity Management and Microsoft Active Directory Winbind for Microsoft Active Directory
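A typical discovery-and-join session with the realm client looks like the following sketch; the domain name and user are placeholders, and the exact packages pulled in depend on your configuration.

# Discover the domain and show its type, required packages, and client software
realm discover ad.example.com
# Join the domain with the default client (SSSD), authenticating as a privileged domain user
realm join --user=Administrator ad.example.com
# Alternatively, join using Winbind instead of SSSD
realm join --client-software=winbind --user=Administrator ad.example.com
# Verify the join and the resulting login policy
realm list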
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/ch-Configuring_Authentication
Chapter 19. OverlappingRangeIPReservation [whereabouts.cni.cncf.io/v1alpha1]
Chapter 19. OverlappingRangeIPReservation [whereabouts.cni.cncf.io/v1alpha1] Description OverlappingRangeIPReservation is the Schema for the OverlappingRangeIPReservations API Type object Required spec 19.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OverlappingRangeIPReservationSpec defines the desired state of OverlappingRangeIPReservation 19.1.1. .spec Description OverlappingRangeIPReservationSpec defines the desired state of OverlappingRangeIPReservation Type object Required podref Property Type Description containerid string ifname string podref string 19.2. API endpoints The following API endpoints are available: /apis/whereabouts.cni.cncf.io/v1alpha1/overlappingrangeipreservations GET : list objects of kind OverlappingRangeIPReservation /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/overlappingrangeipreservations DELETE : delete collection of OverlappingRangeIPReservation GET : list objects of kind OverlappingRangeIPReservation POST : create an OverlappingRangeIPReservation /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/overlappingrangeipreservations/{name} DELETE : delete an OverlappingRangeIPReservation GET : read the specified OverlappingRangeIPReservation PATCH : partially update the specified OverlappingRangeIPReservation PUT : replace the specified OverlappingRangeIPReservation 19.2.1. /apis/whereabouts.cni.cncf.io/v1alpha1/overlappingrangeipreservations HTTP method GET Description list objects of kind OverlappingRangeIPReservation Table 19.1. HTTP responses HTTP code Reponse body 200 - OK OverlappingRangeIPReservationList schema 401 - Unauthorized Empty 19.2.2. /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/overlappingrangeipreservations HTTP method DELETE Description delete collection of OverlappingRangeIPReservation Table 19.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OverlappingRangeIPReservation Table 19.3. HTTP responses HTTP code Reponse body 200 - OK OverlappingRangeIPReservationList schema 401 - Unauthorized Empty HTTP method POST Description create an OverlappingRangeIPReservation Table 19.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.5. Body parameters Parameter Type Description body OverlappingRangeIPReservation schema Table 19.6. HTTP responses HTTP code Reponse body 200 - OK OverlappingRangeIPReservation schema 201 - Created OverlappingRangeIPReservation schema 202 - Accepted OverlappingRangeIPReservation schema 401 - Unauthorized Empty 19.2.3. /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/overlappingrangeipreservations/{name} Table 19.7. Global path parameters Parameter Type Description name string name of the OverlappingRangeIPReservation HTTP method DELETE Description delete an OverlappingRangeIPReservation Table 19.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 19.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OverlappingRangeIPReservation Table 19.10. HTTP responses HTTP code Reponse body 200 - OK OverlappingRangeIPReservation schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OverlappingRangeIPReservation Table 19.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.12. 
HTTP responses HTTP code Reponse body 200 - OK OverlappingRangeIPReservation schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OverlappingRangeIPReservation Table 19.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.14. Body parameters Parameter Type Description body OverlappingRangeIPReservation schema Table 19.15. HTTP responses HTTP code Reponse body 200 - OK OverlappingRangeIPReservation schema 201 - Created OverlappingRangeIPReservation schema 401 - Unauthorized Empty
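For reference, a manifest for this resource follows the usual pattern shown below. The values are placeholders that only illustrate the spec fields from the table above; in practice these objects are created and managed by the Whereabouts IPAM plugin itself rather than by hand.

apiVersion: whereabouts.cni.cncf.io/v1alpha1
kind: OverlappingRangeIPReservation
metadata:
  name: 192.168.2.10          # example name; reservations are typically named after the reserved IP
  namespace: openshift-multus  # example namespace
spec:
  podref: default/example-pod        # required: namespace/name of the pod holding the reservation
  containerid: 0123456789abcdef      # example container ID
  ifname: net1                       # example secondary network interface name

You can list existing reservations with oc get overlappingrangeipreservations -A.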
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/network_apis/overlappingrangeipreservation-whereabouts-cni-cncf-io-v1alpha1
Chapter 3. Using the Cluster Samples Operator with an alternate registry
Chapter 3. Using the Cluster Samples Operator with an alternate registry You can use the Cluster Samples Operator with an alternate registry by first creating a mirror registry. Important You must have access to the internet to obtain the necessary container images. In this procedure, you place the mirror registry on a mirror host that has access to both your network and the internet. 3.1. About the mirror registry You can mirror the images that are required for OpenShift Container Platform installation and subsequent product updates to a container mirror registry such as Red Hat Quay, JFrog Artifactory, Sonatype Nexus Repository, or Harbor. If you do not have access to a large-scale container registry, you can use the mirror registry for Red Hat OpenShift , a small-scale container registry included with OpenShift Container Platform subscriptions. You can use any container registry that supports Docker v2-2 , such as Red Hat Quay, the mirror registry for Red Hat OpenShift , Artifactory, Sonatype Nexus Repository, or Harbor. Regardless of your chosen registry, the procedure to mirror content from Red Hat hosted sites on the internet to an isolated image registry is the same. After you mirror the content, you configure each cluster to retrieve this content from your mirror registry. Important The OpenShift image registry cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. If choosing a container registry that is not the mirror registry for Red Hat OpenShift , it must be reachable by every machine in the clusters that you provision. If the registry is unreachable, installation, updating, or normal operations such as workload relocation might fail. For that reason, you must run mirror registries in a highly available way, and the mirror registries must at least match the production availability of your OpenShift Container Platform clusters. When you populate your mirror registry with OpenShift Container Platform images, you can follow two scenarios. If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring . If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring . For mirrored registries, to view the source of pulled images, you must review the Trying to access log entry in the CRI-O logs. Other methods to view the image pull source, such as using the crictl images command on a node, show the non-mirrored image name, even though the image is pulled from the mirrored location. Note Red Hat does not test third party registries with OpenShift Container Platform. Additional information For information on viewing the CRI-O logs to view the image source, see Viewing the image pull source . 3.1.1. Preparing the mirror host Before you create the mirror registry, you must prepare the mirror host. 3.1.2. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . 
Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 3.2. Configuring credentials that allow images to be mirrored Create a container image registry credentials file that allows mirroring images from Red Hat to your mirror. Prerequisites You configured a mirror registry to use in your disconnected environment. Procedure Complete the following steps on the installation host: Download your registry.redhat.io pull secret from the Red Hat OpenShift Cluster Manager . Make a copy of your pull secret in JSON format: USD cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1 1 Specify the path to the folder to store the pull secret in and a name for the JSON file that you create. The contents of the file resemble the following example: { "auths": { "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } Generate the base64-encoded user name and password or token for your mirror registry: USD echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs= 1 For <user_name> and <password> , specify the user name and password that you configured for your registry. 
Edit the JSON file and add a section that describes your registry to it: "auths": { "<mirror_registry>": { 1 "auth": "<credentials>", 2 "email": "[email protected]" } }, 1 For <mirror_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:8443 2 For <credentials> , specify the base64-encoded user name and password for the mirror registry. The file resembles the following example: { "auths": { "registry.example.com": { "auth": "BGVtbYk3ZHAtqXs=", "email": "[email protected]" }, "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } 3.3. Mirroring the OpenShift Container Platform image repository Mirror the OpenShift Container Platform image repository to your registry to use during cluster installation or upgrade. Prerequisites Your mirror host has access to the internet. You configured a mirror registry to use in your restricted network and can access the certificate and credentials that you configured. You downloaded the pull secret from the Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository. If you use self-signed certificates, you have specified a Subject Alternative Name in the certificates. Procedure Complete the following steps on the mirror host: Review the OpenShift Container Platform downloads page to determine the version of OpenShift Container Platform that you want to install and determine the corresponding tag on the Repository Tags page. Set the required environment variables: Export the release version: USD OCP_RELEASE=<release_version> For <release_version> , specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.5.4 . Export the local registry name and host port: USD LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>' For <local_registry_host_name> , specify the registry domain name for your mirror repository, and for <local_registry_host_port> , specify the port that it serves content on. Export the local repository name: USD LOCAL_REPOSITORY='<local_repository_name>' For <local_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4 . Export the name of the repository to mirror: USD PRODUCT_REPO='openshift-release-dev' For a production release, you must specify openshift-release-dev . Export the path to your registry pull secret: USD LOCAL_SECRET_JSON='<path_to_pull_secret>' For <path_to_pull_secret> , specify the absolute path to and file name of the pull secret for your mirror registry that you created. Export the release mirror: USD RELEASE_NAME="ocp-release" For a production release, you must specify ocp-release . Export the type of architecture for your cluster: USD ARCHITECTURE=<cluster_architecture> 1 1 Specify the architecture of the cluster, such as x86_64 , aarch64 , s390x , or ppc64le . Export the path to the directory to host the mirrored images: USD REMOVABLE_MEDIA_PATH=<path> 1 1 Specify the full path, including the initial forward slash (/) character. 
Mirror the version images to the mirror registry: If your mirror host does not have internet access, take the following actions: Connect the removable media to a system that is connected to the internet. Review the images and configuration manifests to mirror: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Mirror the images to a directory on the removable media: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} Take the media to the restricted network environment and upload the images to the local container registry. USD oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:USD{OCP_RELEASE}*" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1 1 For REMOVABLE_MEDIA_PATH , you must use the same path that you specified when you mirrored the images. Important Running oc image mirror might result in the following error: error: unable to retrieve source image . This error occurs when image indexes include references to images that no longer exist on the image registry. Image indexes might retain older references to allow users running those images an upgrade path to newer points on the upgrade graph. As a temporary workaround, you can use the --skip-missing option to bypass the error and continue downloading the image index. For more information, see Service Mesh Operator mirroring failed . If the local container registry is connected to the mirror host, take the following actions: Directly push the release images to the local registry by using following command: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} This command pulls the release information as a digest, and its output includes the imageContentSources data that you require when you install your cluster. Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Note The image name gets patched to Quay.io during the mirroring process, and the podman images will show Quay.io in the registry on the bootstrap virtual machine. 
To create the installation program that is based on the content that you mirrored, extract it and pin it to the release: If your mirror host does not have internet access, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --icsp-file=<file> --command=openshift-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}" If the local container registry is connected to the mirror host, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}" Important To ensure that you use the correct images for the version of OpenShift Container Platform that you selected, you must extract the installation program from the mirrored content. You must perform this step on a machine with an active internet connection. For clusters using installer-provisioned infrastructure, run the following command: USD openshift-install 3.4. Using Cluster Samples Operator image streams with alternate or mirrored registries Most image streams in the openshift namespace managed by the Cluster Samples Operator point to images located in the Red Hat registry at registry.redhat.io . Note The cli , installer , must-gather , and tests image streams, while part of the install payload, are not managed by the Cluster Samples Operator. These are not addressed in this procedure. Important The Cluster Samples Operator must be set to Managed in a disconnected environment. To install the image streams, you have a mirrored registry. Prerequisites Access to the cluster as a user with the cluster-admin role. Create a pull secret for your mirror registry. Procedure Access the images of a specific image stream to mirror, for example: USD oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io Mirror images from registry.redhat.io associated with any image streams you need USD oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest Create the cluster's image configuration object: USD oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config Add the required trusted CAs for the mirror in the cluster's image configuration object: USD oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge Update the samplesRegistry field in the Cluster Samples Operator configuration object to contain the hostname portion of the mirror location defined in the mirror configuration: USD oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator Note This is required because the image stream import process does not use the mirror or search mechanism at this time. Add any image streams that are not mirrored into the skippedImagestreams field of the Cluster Samples Operator configuration object. Or if you do not want to support any of the sample image streams, set the Cluster Samples Operator to Removed in the Cluster Samples Operator configuration object. Note The Cluster Samples Operator issues alerts if image stream imports are failing but the Cluster Samples Operator is either periodically retrying or does not appear to be retrying them. Many of the templates in the openshift namespace reference the image streams. 
So using Removed to purge both the image streams and templates will eliminate the possibility of attempts to use them if they are not functional because of any missing image streams. 3.4.1. Cluster Samples Operator assistance for mirroring During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag. The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name> . During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed . If you choose to change it to Managed , it installs samples. Note The use of samples in a network-restricted or discontinued environment may require access to services external to your network. Some example services include: Github, Maven Central, npm, RubyGems, PyPi and others. There might be additional steps to take that allow the cluster samples operators's objects to reach the services they require. You can use this config map as a reference for which images need to be mirrored for your image streams to import. While the Cluster Samples Operator is set to Removed , you can create your mirrored registry, or determine which existing mirrored registry you want to use. Mirror the samples you want to the mirrored registry using the new config map as your guide. Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object. Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry. Then set the Cluster Samples Operator to Managed to install the image streams you have mirrored. See Using Cluster Samples Operator image streams with alternate or mirrored registries for a detailed procedure.
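Putting the preceding steps together, the object opened by oc edit configs.samples.operator.openshift.io typically ends up looking something like the following sketch; the registry hostname and the skipped image stream names are placeholders for your environment.

apiVersion: samples.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  managementState: Managed
  samplesRegistry: registry.example.com   # hostname (and optional port) of your mirror registry
  skippedImagestreams:                    # image streams you chose not to mirror
  - jenkins
  - jenkins-agent-maven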
[ "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=", "\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },", "{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "OCP_RELEASE=<release_version>", "LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'", "LOCAL_REPOSITORY='<local_repository_name>'", "PRODUCT_REPO='openshift-release-dev'", "LOCAL_SECRET_JSON='<path_to_pull_secret>'", "RELEASE_NAME=\"ocp-release\"", "ARCHITECTURE=<cluster_architecture> 1", "REMOVABLE_MEDIA_PATH=<path> 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --icsp-file=<file> --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"", "openshift-install", "oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io", "oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest", "oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge", "oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/images/samples-operator-alt-registry
Chapter 1. Support policy for Red Hat build of OpenJDK
Chapter 1. Support policy for Red Hat build of OpenJDK Red Hat will support select major versions of Red Hat build of OpenJDK in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Red Hat build of OpenJDK will be supported for a minimum of six years from the time that version is first introduced. For more information, see the OpenJDK Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, RHEL 6 is not a supported configuration for Red Hat build of OpenJDK.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.18/rn-openjdk-support-policy
Chapter 7. Enabling notifications and integrations
Chapter 7. Enabling notifications and integrations You can enable the notifications service on Red Hat Hybrid Cloud Console to send notifications whenever a compliance policy is triggered. For example, you can configure the notifications service to automatically send an email message whenever a compliance policy falls below a certain threshold, or to send an email digest of all the compliance policy events that take place each day. Using the notifications service frees you from having to continually check the Red Hat Insights for RHEL dashboard for compliance event-triggered notifications. Enabling the notifications service requires three main steps: First, an Organization Administrator creates a User access group with the Notifications administrator role, and then adds account members to the group. Next, a Notifications administrator sets up behavior groups for events in the notifications service. Behavior groups specify the delivery method for each notification. For example, a behavior group can specify whether email notifications are sent to all users, or just to Organization administrators. Finally, users who receive email notifications from events must set their user preferences so that they receive individual emails for each event or a daily digest of all compliance events. In addition to sending email messages, you can configure the notifications service to send event data in other ways: Using an authenticated client to query Red Hat Insights APIs for event data Using webhooks to send events to third-party applications that accept inbound requests Integrating notifications with applications such as Splunk to route compliance events to the application dashboard Additional resources For more information about how to set up notifications for compliance events, see Configuring notifications on the Red Hat Hybrid Cloud Console .
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_compliance_service_reports/assembly-enabling-notifications-integrations-for-compliance
Chapter 3. Keeping Your System Up-to-Date
Chapter 3. Keeping Your System Up-to-Date This chapter describes the process of keeping your system up-to-date, which involves planning and configuring the way security updates are installed, applying changes introduced by newly updated packages, and using the Red Hat Customer Portal for keeping track of security advisories.
3.1. Maintaining Installed Software As security vulnerabilities are discovered, the affected software must be updated in order to limit any potential security risks. If the software is a part of a package within a Red Hat Enterprise Linux distribution that is currently supported, Red Hat is committed to releasing updated packages that fix the vulnerabilities as soon as possible. Often, announcements about a given security exploit are accompanied by a patch (or source code) that fixes the problem. This patch is then applied to the Red Hat Enterprise Linux package and tested and released as an erratum update. However, if an announcement does not include a patch, Red Hat developers first work with the maintainer of the software to fix the problem. Once the problem is fixed, the package is tested and released as an erratum update. If an erratum update is released for software used on your system, it is highly recommended that you update the affected packages as soon as possible to minimize the amount of time the system is potentially vulnerable.
3.1.1. Planning and Configuring Security Updates All software contains bugs. Often, these bugs can result in a vulnerability that can expose your system to malicious users. Packages that have not been updated are a common cause of computer intrusions. Implement a plan for installing security patches in a timely manner to quickly eliminate discovered vulnerabilities, so they cannot be exploited. Test security updates when they become available and schedule them for installation. Additional controls need to be used to protect the system during the time between the release of the update and its installation on the system. These controls depend on the exact vulnerability, but may include additional firewall rules, the use of external firewalls, or changes in software settings. Bugs in supported packages are fixed using the errata mechanism. An erratum consists of one or more RPM packages accompanied by a brief explanation of the problem that the particular erratum deals with. All errata are distributed to customers with active subscriptions through the Red Hat Subscription Management service. Errata that address security issues are called Red Hat Security Advisories . For more information on working with security errata, see Section 3.2.1, "Viewing Security Advisories on the Customer Portal" . For detailed information about the Red Hat Subscription Management service, including instructions on how to migrate from RHN Classic , see the documentation related to this service: Red Hat Subscription Management .
3.1.1.1. Using the Security Features of Yum The Yum package manager includes several security-related features that can be used to search, list, display, and install security errata. These features also make it possible to use Yum to install nothing but security updates. To check for security-related updates available for your system, run yum's security check as root , as shown in the sketch below. The check runs in a non-interactive mode, so it can be used in scripts to automatically determine whether any updates are available.
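The commands this section refers to are not included in this extract. Assuming the standard yum security features described here, the check command, and the matching command for installing only security-related updates, are likely the following (run as root):

# Check for available security updates (non-interactive, suitable for scripts)
yum check-update --security
# Install only the security-related updates
yum update --security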
The command returns an exit value of 100 when there are any security updates available and 0 when there are not. On encountering an error, it returns 1 . Analogously, use the following command to only install security-related updates: Use the updateinfo subcommand to display or act upon information provided by repositories about available updates. The updateinfo subcommand itself accepts a number of commands, some of which pertain to security-related uses. See Table 3.1, "Security-related commands usable with yum updateinfo" for an overview of these commands. Table 3.1. Security-related commands usable with yum updateinfo Command Description advisory [ advisories ] Displays information about one or more advisories. Replace advisories with an advisory number or numbers. cves Displays the subset of information that pertains to CVE ( Common Vulnerabilities and Exposures ). security or sec Displays all security-related information. severity [ severity_level ] or sev [ severity_level ] Displays information about security-relevant packages of the supplied severity_level . 3.1.2. Updating and Installing Packages When updating software on a system, it is important to download the update from a trusted source. An attacker can easily rebuild a package with the same version number as the one that is supposed to fix the problem but with a different security exploit and release it on the Internet. If this happens, using security measures, such as verifying files against the original RPM , does not detect the exploit. Thus, it is very important to only download RPMs from trusted sources, such as from Red Hat, and to check the package signatures to verify their integrity. See the Yum chapter of the Red Hat Enterprise Linux 7 System Administrator's Guide for detailed information on how to use the Yum package manager. 3.1.2.1. Verifying Signed Packages All Red Hat Enterprise Linux packages are signed with the Red Hat GPG key. GPG stands for GNU Privacy Guard , or GnuPG , a free software package used for ensuring the authenticity of distributed files. If the verification of a package signature fails, the package may be altered and therefore cannot be trusted. The Yum package manager allows for an automatic verification of all packages it installs or upgrades. This feature is enabled by default. To configure this option on your system, make sure the gpgcheck configuration directive is set to 1 in the /etc/yum.conf configuration file. Use the following command to manually verify package files on your filesystem: rpmkeys --checksig package_file.rpm See the Product Signing (GPG) Keys article on the Red Hat Customer Portal for additional information about Red Hat package-signing practices. 3.1.2.2. Installing Signed Packages To install verified packages (see Section 3.1.2.1, "Verifying Signed Packages" for information on how to verify packages) from your filesystem, use the yum install command as the root user as follows: yum install package_file.rpm Use a shell glob to install several packages at once. For example, the following command installs all .rpm packages in the current directory: yum install *.rpm Important Before installing any security errata, be sure to read any special instructions contained in the erratum report and execute them accordingly. See Section 3.1.3, "Applying Changes Introduced by Installed Updates" for general instructions about applying changes made by errata updates. 3.1.3. 
Applying Changes Introduced by Installed Updates After downloading and installing security errata and updates, it is important to halt the usage of the old software and begin using the new software. How this is done depends on the type of software that has been updated. The following list itemizes the general categories of software and provides instructions for using updated versions after a package upgrade. Note In general, rebooting the system is the surest way to ensure that the latest version of a software package is used; however, this option is not always required, nor is it always available to the system administrator. Applications User-space applications are any programs that can be initiated by the user. Typically, such applications are used only when the user, a script, or an automated task utility launches them. Once such a user-space application is updated, halt any instances of the application on the system, and launch the program again to use the updated version. Kernel The kernel is the core software component for the Red Hat Enterprise Linux 7 operating system. It manages access to memory, the processor, and peripherals, and it schedules all tasks. Because of its central role, the kernel cannot be restarted without also rebooting the computer. Therefore, an updated version of the kernel cannot be used until the system is rebooted. KVM When the qemu-kvm and libvirt packages are updated, it is necessary to stop all guest virtual machines, reload relevant virtualization modules (or reboot the host system), and restart the virtual machines. Use the lsmod command to determine which modules from the following are loaded: kvm , kvm-intel , or kvm-amd . Then use the modprobe -r command to remove and subsequently the modprobe -a command to reload the affected modules. For example: Shared Libraries Shared libraries are units of code, such as glibc , that are used by a number of applications and services. Applications utilizing a shared library typically load the shared code when the application is initialized, so any applications using an updated library must be halted and relaunched. To determine which running applications link against a particular library, use the lsof command: lsof library For example, to determine which running applications link against the libwrap.so.0 library, type: This command returns a list of all the running programs that use TCP wrappers for host-access control. Therefore, any program listed must be halted and relaunched when the tcp_wrappers package is updated. systemd Services systemd services are persistent server programs usually launched during the boot process. Examples of systemd services include sshd or vsftpd . Because these programs usually persist in memory as long as a machine is running, each updated systemd service must be halted and relaunched after its package is upgraded. This can be done as the root user using the systemctl command: systemctl restart service_name Replace service_name with the name of the service you want to restart, such as sshd . Other Software Follow the instructions outlined by the resources linked below to correctly update the following applications. Red Hat Directory Server - See the Release Notes for the version of the Red Hat Directory Server in question at https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/ . 
Red Hat Enterprise Virtualization Manager - See the Installation Guide for the version of the Red Hat Enterprise Virtualization in question at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/ .
[ "~]# yum check-update --security Loaded plugins: langpacks, product-id, subscription-manager rhel-7-workstation-rpms/x86_64 | 3.4 kB 00:00:00 No packages needed for security; 0 packages available", "~]# yum update --security", "~]# lsmod | grep kvm kvm_intel 143031 0 kvm 460181 1 kvm_intel ~]# modprobe -r kvm-intel ~]# modprobe -r kvm ~]# modprobe -a kvm kvm-intel", "~]# lsof /lib64/libwrap.so.0 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME pulseaudi 12363 test mem REG 253,0 42520 34121785 /usr/lib64/libwrap.so.0.7.6 gnome-set 12365 test mem REG 253,0 42520 34121785 /usr/lib64/libwrap.so.0.7.6 gnome-she 12454 test mem REG 253,0 42520 34121785 /usr/lib64/libwrap.so.0.7.6" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/chap-keeping_your_system_up-to-date
Chapter 28. HTTP Sink
Chapter 28. HTTP Sink Forwards an event to an HTTP endpoint 28.1. Configuration Options The following table summarizes the configuration options available for the http-sink Kamelet: Property Name Description Type Default Example url * URL The URL to send data to string "https://my-service/path" method Method The HTTP method to use string "POST" Note Fields marked with an asterisk (*) are mandatory. 28.2. Dependencies At runtime, the http-sink Kamelet relies upon the presence of the following dependencies: camel:http camel:kamelet camel:core 28.3. Usage This section describes how you can use the http-sink . 28.3.1. Knative Sink You can use the http-sink Kamelet as a Knative sink by binding it to a Knative object. http-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: http-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: http-sink properties: url: "https://my-service/path" 28.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 28.3.1.2. Procedure for using the cluster CLI Save the http-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f http-sink-binding.yaml 28.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel http-sink -p "sink.url=https://my-service/path" This command creates the KameletBinding in the current namespace on the cluster. 28.3.2. Kafka Sink You can use the http-sink Kamelet as a Kafka sink by binding it to a Kafka topic. http-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: http-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: http-sink properties: url: "https://my-service/path" 28.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 28.3.2.2. Procedure for using the cluster CLI Save the http-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f http-sink-binding.yaml 28.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic http-sink -p "sink.url=https://my-service/path" This command creates the KameletBinding in the current namespace on the cluster. 28.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/http-sink.kamelet.yaml
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: http-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: http-sink properties: url: \"https://my-service/path\"", "apply -f http-sink-binding.yaml", "kamel bind channel:mychannel http-sink -p \"sink.url=https://my-service/path\"", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: http-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: http-sink properties: url: \"https://my-service/path\"", "apply -f http-sink-binding.yaml", "kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic http-sink -p \"sink.url=https://my-service/path\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/http-sink
Chapter 7. Collecting OpenShift sandboxed containers data
Chapter 7. Collecting OpenShift sandboxed containers data When troubleshooting OpenShift sandboxed containers, you can open a support case and provide debugging information using the must-gather tool. If you are a cluster administrator, you can also review logs on your own, enabling a more detailed level of logs. 7.1. Collecting OpenShift sandboxed containers data for Red Hat Support When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. The must-gather tool enables you to collect diagnostic information about your OpenShift Container Platform cluster, including virtual machines and other data related to OpenShift sandboxed containers. For prompt support, supply diagnostic information for both OpenShift Container Platform and OpenShift sandboxed containers. 7.1.1. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including: Resource definitions Service logs By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local . Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections: To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section. For example: USD oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.0 To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section. For example: USD oc adm must-gather -- /usr/bin/gather_audit_logs Note Audit logs are not collected as part of the default set of information to reduce the size of the files. When you run oc adm must-gather , a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in the current working directory. For example: NAMESPACE NAME READY STATUS RESTARTS AGE ... openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s ... To collect OpenShift sandboxed containers data with must-gather , you must specify the OpenShift sandboxed containers image: --image=registry.redhat.io/openshift-sandboxed-containers/osc-must-gather-rhel8:1.2.0 7.2. About OpenShift sandboxed containers log data When you collect log data about your cluster, the following features and objects are associated with OpenShift sandboxed containers: All namespaces and their child objects that belong to any OpenShift sandboxed containers resources All OpenShift sandboxed containers custom resource definitions (CRDs) The following OpenShift sandboxed containers component logs are collected for each pod running with the kata runtime: Kata agent logs Kata runtime logs QEMU logs Audit logs CRI-O logs 7.3. Enabling debug logs for OpenShift sandboxed containers As a cluster administrator, you can collect a more detailed level of logs for OpenShift sandboxed containers. Enhance logging by changing the log_level in the CRI-O runtime for the worker nodes running OpenShift sandboxed containers. 
Procedure Create a YAML file for the ContainerRuntimeConfig CR with the following manifest: apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: crio-debug spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' 1 containerRuntimeConfig: logLevel: debug 1 Specify a label for the machine config pool that you want to modify. Create the ContainerRuntimeConfig CR: USD oc create -f ctrcfg.yaml Note The file name listed above is a suggestion. You can create this file using another name. Verify the CR is created: USD oc get ctrcfg Example output NAME AGE crio-debug 3m19s Verification Monitor the machine config pool until the UPDATED field for all worker nodes appears as True : USD oc get mcp worker Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-169 False True False 3 1 1 0 9h Verify that the log_level was updated in CRI-O: Open an oc debug session to a node in the machine config pool and run chroot /host . USD oc debug node/<node_name> sh-4.4# chroot /host Verify the changes in the crio.conf file: sh-4.4# crio config | egrep 'log_level' Example output log_level = "debug" 7.3.1. Viewing debug logs for OpenShift sandboxed containers Cluster administrators can use the enhanced debug logs for OpenShift sandboxed containers to troubleshoot issues. The logs for each node are printed to the node journal. You can review the logs for the following OpenShift sandboxed containers components: Kata agent Kata runtime ( containerd-shim-kata-v2 ) virtiofsd Logs for QEMU do not print to the node journal. However, a QEMU failure is reported to the runtime, and the console of the QEMU guest is printed to the node journal. You can view these logs together with the Kata agent logs. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. Procedure To review the Kata agent logs and guest console logs, run: USD oc debug node/<nodename> -- journalctl -D /host/var/log/journal -t kata -g "reading guest console" To review the kata runtime logs, run: USD oc debug node/<nodename> -- journalctl -D /host/var/log/journal -t kata To review the virtiofsd logs, run: USD oc debug node/<nodename> -- journalctl -D /host/var/log/journal -t virtiofsd 7.4. Additional resources For more information about gathering data for support, see Gathering data about your cluster .
[ "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.0", "oc adm must-gather -- /usr/bin/gather_audit_logs", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s", "--image=registry.redhat.io/openshift-sandboxed-containers/osc-must-gather-rhel8:1.2.0", "apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: crio-debug spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' 1 containerRuntimeConfig: logLevel: debug", "oc create -f ctrcfg.yaml", "oc get ctrcfg", "NAME AGE crio-debug 3m19s", "oc get mcp worker", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-169 False True False 3 1 1 0 9h", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# crio config | egrep 'log_level", "log_level = \"debug\"", "oc debug node/<nodename> -- journalctl -D /host/var/log/journal -t kata -g \"reading guest console\"", "oc debug node/<nodename> -- journalctl -D /host/var/log/journal -t kata", "oc debug node/<nodename> -- journalctl -D /host/var/log/journal -t virtiofsd" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/sandboxed_containers_support_for_openshift/troubleshooting-sandboxed-containers
5.3. Enabling Referential Integrity
5.3. Enabling Referential Integrity This section describes how to enable the Referential Integrity Postoperation plug-in. 5.3.1. Enabling Referential Integrity Using the Command Line To enable the Referential Integrity Postoperation plug-in using the command line: Use the dsconf utility to enable the plug-in: Restart the instance: 5.3.2. Enabling Referential Integrity Using the Web Console To enable the Referential Integrity plug-in using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Select the Plugins menu. Select the Referential Integrity plug-in and click Show Advanced Settings . Change the status to ON to enable the plug-in.
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin referential-integrity enable", "dsctl instance_name restart" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/enabling_referential_integrity
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of the documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_google_cloud/providing-feedback-on-red-hat-documentation_gcp
Chapter 6. Configuring tagging for your integrations
Chapter 6. Configuring tagging for your integrations The cost management application tracks cloud and infrastructure costs with tags. Tags are also known as labels in OpenShift. You can refine tags in cost management to filter and attribute resources, organize your resources by cost, and allocate costs to different parts of your cloud infrastructure. Important You can only configure tags and labels directly on an integration. You can choose the tags that you activate in cost management, however, you cannot edit tags and labels in the cost management application. To learn more about the following topics, see Managing cost data using tagging : Planning your tagging strategy to organize your view of cost data Understanding how cost management associates tags Configuring tags and labels on your integrations
null
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/integrating_microsoft_azure_data_into_cost_management/configure-tagging-next-step_next-steps-azure
2.8.9.6. IPTables and IPv6
2.8.9.6. IPTables and IPv6 If the iptables-ipv6 package is installed, netfilter in Red Hat Enterprise Linux can filter the next-generation IPv6 Internet protocol. The command used to manipulate the IPv6 netfilter is ip6tables . Most directives for this command are identical to those used for iptables , except the nat table is not yet supported. This means that it is not yet possible to perform IPv6 network address translation tasks, such as masquerading and port forwarding. Rules for ip6tables are saved in the /etc/sysconfig/ip6tables file. Rules saved by the ip6tables initscripts are saved in the /etc/sysconfig/ip6tables.save file. Configuration options for the ip6tables init script are stored in /etc/sysconfig/ip6tables-config , and the names for each directive vary slightly from their iptables counterparts. For example, for the iptables-config directive IPTABLES_MODULES , the equivalent in the ip6tables-config file is IP6TABLES_MODULES .
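Because the ip6tables syntax mirrors iptables, rules are added and persisted in the familiar way. The following sketch is illustrative only; the rule itself (accepting inbound SSH over IPv6) is an arbitrary example, and it assumes the ip6tables init script described above is in use.

# Example only: accept inbound SSH connections over IPv6
ip6tables -A INPUT -p tcp --dport 22 -j ACCEPT
# Persist the running rule set to /etc/sysconfig/ip6tables
service ip6tables save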
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-Security_Guide-IPTables-IPTables_and_IPv6
Appendix B. Image configuration parameters
Appendix B. Image configuration parameters You can use the following keys with the property option for both the glance image-create and glance image-update commands. Table B.1. Property keys Specific to Key Description Supported values All architecture The CPU architecture that must be supported by the hypervisor. For example, x86_64 , arm , or ppc64 . Run uname -m to get the architecture of a machine. aarch - ARM 64-bit alpha - DEC 64-bit RISC armv7l - ARM Cortex-A7 MPCore cris - Ethernet, Token Ring, AXis-Code Reduced Instruction Set i686 - Intel sixth-generation x86 (P6 micro architecture) ia64 - Itanium lm32 - Lattice Micro32 m68k - Motorola 68000 microblaze - Xilinx 32-bit FPGA (Big Endian) microblazeel - Xilinx 32-bit FPGA (Little Endian) mips - MIPS 32-bit RISC (Big Endian) mipsel - MIPS 32-bit RISC (Little Endian) mips64 - MIPS 64-bit RISC (Big Endian) mips64el - MIPS 64-bit RISC (Little Endian) openrisc - OpenCores RISC parisc - HP Precision Architecture RISC parisc64 - HP Precision Architecture 64-bit RISC ppc - PowerPC 32-bit ppc64 - PowerPC 64-bit ppcemb - PowerPC (Embedded 32-bit) s390 - IBM Enterprise Systems Architecture/390 s390x - S/390 64-bit sh4 - SuperH SH-4 (Little Endian) sh4eb - SuperH SH-4 (Big Endian) sparc - Scalable Processor Architecture, 32-bit sparc64 - Scalable Processor Architecture, 64-bit unicore32 - Microprocessor Research and Development Center RISC Unicore32 x86_64 - 64-bit extension of IA-32 xtensa - Tensilica Xtensa configurable microprocessor core xtensaeb - Tensilica Xtensa configurable microprocessor core (Big Endian) All hypervisor_type The hypervisor type. kvm , vmware All instance_uuid For snapshot images, this is the UUID of the server used to create this image. Valid server UUID All kernel_id The ID of an image stored in the Image Service that should be used as the kernel when booting an AMI-style image. Valid image ID All os_distro The common name of the operating system distribution in lowercase. arch - Arch Linux. Do not use archlinux or org.archlinux . centos - Community Enterprise Operating System. Do not use org.centos or CentOS . debian - Debian. Do not use Debian or org.debian . fedora - Fedora. Do not use Fedora , org.fedora , or org.fedoraproject . freebsd - FreeBSD. Do not use org.freebsd , freeBSD , or FreeBSD . gentoo - Gentoo Linux. Do not use Gentoo or org.gentoo . mandrake - Mandrakelinux (MandrakeSoft) distribution. Do not use mandrakelinux or MandrakeLinux . mandriva - Mandriva Linux. Do not use mandrivalinux . mes - Mandriva Enterprise Server. Do not use mandrivaent or mandrivaES . msdos - Microsoft Disc Operating System. Do not use ms-dos . netbsd - NetBSD. Do not use NetBSD or org.netbsd . netware - Novell NetWare. Do not use novell or NetWare . openbsd - OpenBSD. Do not use OpenBSD or org.openbsd . opensolaris - OpenSolaris. Do not use OpenSolaris or org.opensolaris . opensuse - openSUSE. Do not use suse , SuSE , or org.opensuse . rhel - Red Hat Enterprise Linux. Do not use redhat , RedHat , or com.redhat . sled - SUSE Linux Enterprise Desktop. Do not use com.suse . ubuntu - Ubuntu. Do not use Ubuntu , com.ubuntu , org.ubuntu , or canonical . windows - Microsoft Windows. Do not use com.microsoft.server . All os_version The operating system version as specified by the distributor. Version number (for example, "11.10") All ramdisk_id The ID of image stored in the Image Service that should be used as the ramdisk when booting an AMI-style image. Valid image ID All vm_mode The virtual machine mode. 
This represents the host/guest ABI (application binary interface) used for the virtual machine. hvm -Fully virtualized. This is the mode used by QEMU and KVM. libvirt API driver hw_cdrom_bus Specifies the type of disk controller to attach CD-ROM devices to. scsi , virtio , ide , or usb . If you specify iscsi , you must set the hw_scsi_model parameter to virtio-scsi . libvirt API driver hw_disk_bus Specifies the type of disk controller to attach disk devices to. scsi , virtio , ide , or usb . Note that if using iscsi , the hw_scsi_model needs to be set to virtio-scsi . libvirt API driver hw_firmware_type Specifies the type of firmware to use to boot the instance. Set to one of the following valid values: bios uefi libvirt API driver hw_machine_type Enables booting an ARM system using the specified machine type. If an ARM image is used and its machine type is not explicitly specified, then Compute uses the virt machine type as the default for ARMv7 and AArch64. Valid types can be viewed by using the virsh capabilities command. The machine types are displayed in the machine tag. libvirt API driver hw_numa_nodes Number of NUMA nodes to expose to the instance (does not override flavor definition). Integer. libvirt API driver hw_numa_cpus.0 Mapping of vCPUs N-M to NUMA node 0 (does not override flavor definition). Comma-separated list of integers. libvirt API driver hw_numa_cpus.1 Mapping of vCPUs N-M to NUMA node 1 (does not override flavor definition). Comma-separated list of integers. libvirt API driver hw_numa_mem.0 Mapping N MB of RAM to NUMA node 0 (does not override flavor definition). Integer libvirt API driver hw_numa_mem.1 Mapping N MB of RAM to NUMA node 1 (does not override flavor definition). Integer libvirt API driver hw_pci_numa_affinity_policy Specifies the NUMA affinity policy for PCI passthrough devices and SR-IOV interfaces. Set to one of the following valid values: required : The Compute service creates an instance that requests a PCI device only when at least one of the NUMA nodes of the instance has affinity with the PCI device. This option provides the best performance. preferred : The Compute service attempts a best effort selection of PCI devices based on NUMA affinity. If affinity is not possible, then the Compute service schedules the instance on a NUMA node that has no affinity with the PCI device. legacy : (Default) The Compute service creates instances that request a PCI device in one of the following cases: The PCI device has affinity with at least one of the NUMA nodes. The PCI devices do not provide information about their NUMA affinities. libvirt API driver hw_qemu_guest_agent Guest agent support. If set to yes , and if qemu-ga is also installed, file systems can be quiesced (frozen) and snapshots created automatically. yes / no libvirt API driver hw_rng_model Adds a random number generator (RNG) device to instances launched with this image. The instance flavor enables the RNG device by default. To disable the RNG device, the cloud administrator must set hw_rng:allowed to False on the flavor. The default entropy source is /dev/random . To specify a hardware RNG device, set rng_dev_path to /dev/hwrng in your Compute environment file. virtio , or other supported device. libvirt API driver hw_scsi_model Enables the use of VirtIO SCSI (virtio-scsi) to provide block device access for compute instances; by default, instances use VirtIO Block (virtio-blk). 
VirtIO SCSI is a para-virtualized SCSI controller device that provides improved scalability and performance, and supports advanced SCSI hardware. virtio-scsi libvirt API driver hw_video_model The video device driver for the display device to use in virtual machine instances. Set to one of the following values to specify the supported driver to use: virtio - (Default) Recommended Driver for the virtual machine display device, supported by most architectures. The VirtIO GPU driver is included in RHEL-7 and later, and Linux kernel versions 4.4 and later. If an instance kernel has the VirtIO GPU driver, then the instance can use all the VirtIO GPU features. If an instance kernel does not have the VirtIO GPU driver, the VirtIO GPU device gracefully falls back to VGA compatibility mode, which provides a working display for the instance. qxl - Deprecated Driver for Spice or noVNC environments that is no longer maintained. cirrus - Legacy driver, supported only for backward compatibility. Do not use for new instances. vga - Use this driver for IBM Power environments. gop - Not supported for QEMU/KVM environments. xen - Not supported for KVM environments. vmvga - Legacy driver, do not use. none - Use this value to disable emulated graphics or video in virtual GPU (vGPU) instances where the driver is configured separately. libvirt API driver hw_video_ram Maximum RAM for the video image. Used only if a hw_video:ram_max_mb value has been set in the flavor's extra_specs and that value is higher than the value set in hw_video_ram . Integer in MB (for example, 64 ) libvirt API driver hw_watchdog_action Enables a virtual hardware watchdog device that carries out the specified action if the server hangs. The watchdog uses the i6300esb device (emulating a PCI Intel 6300ESB). If hw_watchdog_action is not specified, the watchdog is disabled. disabled-The device is not attached. Allows the user to disable the watchdog for the image, even if it has been enabled using the image's flavor. The default value for this parameter is disabled. reset-Forcefully reset the guest. poweroff-Forcefully power off the guest. pause-Pause the guest. none-Only enable the watchdog; do nothing if the server hangs. libvirt API driver os_command_line The kernel command line to be used by the libvirt driver, instead of the default. For Linux Containers(LXC), the value is used as arguments for initialization. This key is valid only for Amazon kernel, ramdisk, or machine images (aki, ari, or ami). libvirt API driver os_secure_boot Use to create an instance that is protected with UEFI Secure Boot. Set to one of the following valid values: required : Enables Secure Boot for instances launched with this image. The instance is only launched if the Compute service locates a host that can support Secure Boot. If no host is found, the Compute service returns a "No valid host" error. disabled : Disables Secure Boot for instances launched with this image. Disabled by default. optional : Enables Secure Boot for instances launched with this image only when the Compute service determines that the host can support Secure Boot. libvirt API driver and VMware API driver hw_vif_model Specifies the model of virtual network interface device to use. The valid options depend on the configured hypervisor. KVM and QEMU: e1000, ne2k_pci, pcnet, rtl8139, and virtio. VMware: e1000, e1000e, VirtualE1000, VirtualE1000e, VirtualPCNet32, VirtualSriovEthernetCard, and VirtualVmxnet. Xen: e1000, netfront, ne2k_pci, pcnet, and rtl8139. 
VMware API driver vmware_adaptertype The virtual SCSI or IDE controller used by the hypervisor. lsiLogic , busLogic , or ide VMware API driver vmware_ostype A VMware GuestID which describes the operating system installed in the image. This value is passed to the hypervisor when creating a virtual machine. If not specified, the key defaults to otherGuest . For more information, see Images with VMware vSphere . VMware API driver vmware_image_version Currently unused. 1 XenAPI driver auto_disk_config If true, the root partition on the disk is automatically resized before the instance boots. This value is only taken into account by the Compute service when using a Xen-based hypervisor with the XenAPI driver. The Compute service will only attempt to resize if there is a single partition on the image, and only if the partition is in ext3 or ext4 format. true / false libvirt API driver and XenAPI driver os_type The operating system installed on the image. The XenAPI driver contains logic that takes different actions depending on the value of the os_type parameter of the image. For example, for os_type=windows images, it creates a FAT32-based swap partition instead of a Linux swap partition, and it limits the injected host name to less than 16 characters. linux or windows
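As noted at the top of this appendix, these keys are attached to an image with the property option of the glance image-create and glance image-update commands. The following is a hedged sketch; the image ID and the particular key-value pairs are illustrative only.

# Illustrative only: request virtio-scsi disk controllers for instances booted
# from an existing image (replace <IMAGE_ID> with a real image ID).
glance image-update --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi <IMAGE_ID>
# The same --property arguments can also be supplied to glance image-create when uploading a new image.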
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/creating_and_managing_images/assembly_image-config-parameters_osp
Replacing devices
Replacing devices Red Hat OpenShift Data Foundation 4.14 Instructions for safely replacing operational or failed devices Red Hat Storage Documentation Team Abstract This document explains how to safely replace storage devices for Red Hat OpenShift Data Foundation.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/replacing_devices/index
Chapter 5. Configuring Visual Studio Code - Open Source ("Code - OSS")
Chapter 5. Configuring Visual Studio Code - Open Source ("Code - OSS") Learn how to configure Visual Studio Code - Open Source ("Code - OSS"). Section 5.1, "Configuring single and multiroot workspaces" 5.1. Configuring single and multiroot workspaces With the multi-root workspace feature, you can work with multiple project folders in the same workspace. This is useful when you are working on several related projects at once, such as product documentation and product code repositories. Tip See What is a VS Code "workspace" for a better understanding of how to author workspace files. Note The workspace is set to open in multi-root mode by default. Once the workspace is started, the /projects/.code-workspace workspace file is generated. The workspace file will contain all the projects described in the devfile. { "folders": [ { "name": "project-1", "path": "/projects/project-1" }, { "name": "project-2", "path": "/projects/project-2" } ] } If the workspace file already exists, it will be updated and all missing projects will be taken from the devfile. If you remove a project from the devfile, it will be left in the workspace file. You can change the default behavior and provide your own workspace file or switch to a single-root workspace. Procedure Provide your own workspace file. Put a workspace file with the name .code-workspace into the root of your repository. After workspace creation, Visual Studio Code - Open Source ("Code - OSS") will use the workspace file as it is. { "folders": [ { "name": "project-name", "path": "." } ] } Important Be careful when creating a workspace file. In case of errors, an empty Visual Studio Code - Open Source ("Code - OSS") will be opened instead. Important If you have several projects, the workspace file will be taken from the first project. If the workspace file does not exist in the first project, a new one will be created and placed in the /projects directory. Specify an alternative workspace file. Define the VSCODE_DEFAULT_WORKSPACE environment variable in your devfile and specify the correct location of the workspace file. env: - name: VSCODE_DEFAULT_WORKSPACE value: "/projects/project-name/workspace-file" Open a workspace in single-root mode. Define the VSCODE_DEFAULT_WORKSPACE environment variable and set it to the root. env: - name: VSCODE_DEFAULT_WORKSPACE value: "/"
[ "{ \"folders\": [ { \"name\": \"project-1\", \"path\": \"/projects/project-1\" }, { \"name\": \"project-2\", \"path\": \"/projects/project-2\" } ] }", "{ \"folders\": [ { \"name\": \"project-name\", \"path\": \".\" } ] }", "env: - name: VSCODE_DEFAULT_WORKSPACE value: \"/projects/project-name/workspace-file\"", "env: - name: VSCODE_DEFAULT_WORKSPACE value: \"/\"" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.14/html/administration_guide/configuring-visual-studio-code
23.3. SMBIOS System Information
23.3. SMBIOS System Information Some hypervisors allow control over what system information is presented to the guest virtual machine (for example, SMBIOS fields can be populated by a hypervisor and inspected using the dmidecode command in the guest virtual machine). The optional sysinfo element covers all such categories of information. ... <os> <smbios mode='sysinfo'/> ... </os> <sysinfo type='smbios'> <bios> <entry name='vendor'>LENOVO</entry> </bios> <system> <entry name='manufacturer'>Fedora</entry> <entry name='vendor'>Virt-Manager</entry> </system> </sysinfo> ... Figure 23.5. SMBIOS system information The <sysinfo> element has a mandatory attribute type that determines the layout of sub-elements, and may be defined as follows: <smbios> - Sub-elements call out specific SMBIOS values, which will affect the guest virtual machine if used in conjunction with the smbios sub-element of the <os> element. Each sub-element of <sysinfo> names a SMBIOS block, and within those elements can be a list of entry elements that describe a field within the block. The following blocks and entries are recognized: <bios> - This is block 0 of SMBIOS, with entry names drawn from vendor , version , date , and release . <system> - This is block 1 of SMBIOS, with entry names drawn from manufacturer , product , version , serial , uuid , sku , and family . If a uuid entry is provided alongside a top-level uuid element, the two values must match.
[ "<os> <smbios mode='sysinfo'/> </os> <sysinfo type='smbios'> <bios> <entry name='vendor'>LENOVO</entry> </bios> <system> <entry name='manufacturer'>Fedora</entry> <entry name='vendor'>Virt-Manager</entry> </system> </sysinfo>" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Manipulating_the_domain_xml-SMBIOS_system_information
8.7. Configuring Debug Options
8.7. Configuring Debug Options You can enable debugging for all daemons in a cluster, or you can enable logging for specific cluster processes. To enable debugging for all daemons, add the following to the /etc/cluster/cluster.conf file. By default, logging is directed to the /var/log/cluster/ daemon .log file. To enable debugging for individual cluster processes, add the following lines to the /etc/cluster/cluster.conf file. Per-daemon logging configuration overrides the global settings. For a list of the logging daemons for which you can enable logging, as well as the additional logging options you can configure for both global and per-daemon logging, see the cluster.conf (5) man page.
[ "<cluster config_version=\"7\" name=\"rh6cluster\"> <logging debug=\"on\"/> </cluster>", "<cluster config_version=\"7\" name=\"rh6cluster\"> <logging> <!-- turning on per-subsystem debug logging --> <logging_daemon name=\"corosync\" debug=\"on\" /> <logging_daemon name=\"fenced\" debug=\"on\" /> <logging_daemon name=\"qdiskd\" debug=\"on\" /> <logging_daemon name=\"rgmanager\" debug=\"on\" /> <logging_daemon name=\"dlm_controld\" debug=\"on\" /> <logging_daemon name=\"gfs_controld\" debug=\"on\" /> </logging> </cluster>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-debug-cli-CA
Chapter 9. User statistics in Red Hat Developer Hub
Chapter 9. User statistics in Red Hat Developer Hub In Red Hat Developer Hub, the licensed-users-info-backend plugin provides statistical information about the logged-in users using the Web UI or REST API endpoints. The licensed-users-info-backend plugin enables administrators to monitor the number of active users on Developer Hub. Using this feature, organizations can compare their actual usage with the number of licenses they have purchased. Additionally, you can share the user metrics with Red Hat for transparency and accurate licensing. The licensed-users-info-backend plugin is enabled by default. This plugin enables a Download User List link at the bottom of the Administration RBAC tab. 9.1. Downloading active users list in Red Hat Developer Hub You can download the list of users in CSV format using the Developer Hub web interface. Prerequisites RBAC plugins ( @janus-idp/backstage-plugin-rbac and @janus-idp/backstage-plugin-rbac-backend ) must be enabled in Red Hat Developer Hub. An administrator role must be assigned. Procedure In Red Hat Developer Hub, navigate to Administration and select the RBAC tab. At the bottom of the RBAC page, click Download User List . Optional: Modify the file name in the Save as field and click Save . To access the downloaded users list, go to the Downloads folder on your local machine and open the CSV file.
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/authorization/con-user-stats-rhdh_title-authorization
Chapter 9. Managing Directory Quotas
Chapter 9. Managing Directory Quotas Warning Quota is considered deprecated as of Red Hat Gluster Storage 3.5. Red Hat no longer recommends its use, and does not support Quota in new deployments and existing deployments that upgrade to Red Hat Gluster Storage 3.5.3. Quotas allow you to set limits on the disk space used by a directory. Storage administrators can control the disk space utilization at the directory and volume levels. This is particularly useful in cloud deployments to facilitate the use of utility billing models. 9.1. Enabling and Disabling Quotas To limit disk usage, you need to enable quota usage on a volume by running the following command: This command only enables quota behavior on the volume; it does not set any default disk usage limits. Note On a gluster volume with quota enabled, the CPU and memory consumption increases based on various factors. For example, the complexity of the file system tree, number of bricks, nodes in the pool, number of quota limits placed across the filesystem, and the frequency of quota traversals across the filesystem. To disable quota behavior on a volume, including any set disk usage limits, run the following command: Important When you disable quotas on Red Hat Gluster Storage 3.1.1 and earlier, all previously configured limits are removed from the volume by a cleanup process, quota-remove-xattr.sh . If you re-enable quotas while the cleanup process is still running, the extended attributes that enable quotas may be removed by the cleanup process. This has negative effects on quota accounting.
[ "gluster volume quota VOLNAME enable", "gluster volume quota VOLNAME disable" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-Managing_Directory_Quotas
Troubleshooting Central
Troubleshooting Central Red Hat Advanced Cluster Security for Kubernetes 4.6 Troubleshooting Central Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/troubleshooting_central/index
Chapter 4. Selecting a system-wide archive Red Hat build of OpenJDK version
Chapter 4. Selecting a system-wide archive Red Hat build of OpenJDK version If you have multiple versions of Red Hat build of OpenJDK installed with the archive on RHEL, you can select a specific Red Hat build of OpenJDK version to use system-wide. Prerequisites Know the locations of the Red Hat build of OpenJDK versions installed using the archive. Procedure To specify the Red Hat build of OpenJDK version to use for a single session: Configure JAVA_HOME with the path to the Red Hat build of OpenJDK version you want used system-wide. USD export JAVA_HOME=/opt/jdk/jdk-11.0.9 Add USDJAVA_HOME/bin to the PATH environment variable. USD export PATH="USDJAVA_HOME/bin:USDPATH" To specify the Red Hat build of OpenJDK version to use permanently for a single user, add these commands into ~/.bashrc : To specify the Red Hat build of OpenJDK version to use permanently for all users, add these commands into /etc/bashrc : Note If you do not want to redefine JAVA_HOME , add only the PATH command to bashrc , specifying the path to the Java binary. For example, export PATH="/opt/jdk/jdk-11.0.3/bin:USDPATH" . Additional resources Be aware of the exact meaning of JAVA_HOME . For more information, see Changes/Decouple system java setting from java command setting .
[ "export JAVA_HOME=/opt/jdk/jdk-11.0.9 export PATH=\"USDJAVA_HOME/bin:USDPATH\"", "export JAVA_HOME=/opt/jdk/jdk-11.0.9 export PATH=\"USDJAVA_HOME/bin:USDPATH\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/configuring_red_hat_build_of_openjdk_11_on_rhel/selecting-systemwide-archive-openjdk-version
Chapter 9. Migration from PostgreSQL 9 to PostgreSQL 13
Chapter 9. Migration from PostgreSQL 9 to PostgreSQL 13 PostgreSQL version 9.6 went out of support on 11 November 2021, and the CodeReady Workspaces team recommends that all users migrate to version 13. Follow the procedure below to migrate to a newer version of PostgreSQL without any data loss. Prerequisites The oc tool is available. An instance of CodeReady Workspaces running in OpenShift. Procedure Save and push changes back to the Git repositories for all running workspaces of the CodeReady Workspaces instance. Stop all workspaces in the CodeReady Workspaces instance. Scale down the CodeReady Workspaces and RH-SSO deployments: Back up the available databases: Copy the obtained backups to a local file system: Scale down the PostgreSQL deployment: Delete the corresponding PVC unit to clean up old data: After deleting the PVC from the step above, a new PVC will automatically appear in a few seconds. Set the version of the new PostgreSQL database to 13.3: Scale up the PostgreSQL deployments: Provision a database: Copy the backups to the PostgreSQL Pod: Restore the database: Scale up the RH-SSO and CodeReady Workspaces deployments:
[ "scale deployment codeready --replicas=0 -n openshift-workspaces scale deployment keycloak --replicas=0 -n openshift-workspaces", "POSTGRES_POD=USD(oc get pods -n openshift-workspaces | grep postgres | awk '{print USD1}') CHE_POSTGRES_DB=USD(oc get checluster/codeready-workspaces -n openshift-workspaces -o json | jq '.spec.database.chePostgresDb') exec -it USDPOSTGRES_POD -n openshift-workspaces -- bash -c \"pg_dump USDCHE_POSTGRES_DB > /tmp/che.sql\" exec -it USDPOSTGRES_POD -n openshift-workspaces -- bash -c \"pg_dump keycloak > /tmp/keycloak.sql\"", "cp openshift-workspaces/USDPOSTGRES_POD:/tmp/che.sql che.sql cp openshift-workspaces/USDPOSTGRES_POD:/tmp/keycloak.sql keycloak.sql", "scale deployment postgres --replicas=0 -n openshift-workspaces", "delete pvc postgres-data -n openshift-workspaces", "patch checluster codeready-workspaces -n openshift-workspaces --type=json -p '[{\"op\": \"replace\", \"path\": \"/spec/database/postgresVersion\", \"value\": \"13.3\"}]'", "scale deployment postgres --replicas=1 -n openshift-workspaces wait --for=condition=ready pod -l app.kubernetes.io/component=postgres -n openshift-workspaces --timeout=120s", "POSTGRES_POD=USD(oc get pods -n openshift-workspaces | grep postgres | awk '{print USD1}') OPERATOR_POD=USD(oc get pods -n openshift-workspaces | grep codeready-operator | awk '{print USD1}') IDENTITY_POSTGRES_SECRET=USD(oc get checluster/codeready-workspaces -n openshift-workspaces -o json | jq -r '.spec.auth.identityProviderPostgresSecret') IDENTITY_POSTGRES_PASSWORD=USD(if [ -z \"USDIDENTITY_POSTGRES_SECRET\" ] || [ USDIDENTITY_POSTGRES_SECRET = \"null\" ]; then oc get checluster/codeready-workspaces -n openshift-workspaces -o json | jq -r '.spec.auth.identityProviderPostgresPassword'; else oc get secret USDIDENTITY_POSTGRES_SECRET -n openshift-workspaces -o json | jq -r '.data.password' | base64 -d; fi) exec -it USDPOSTGRES_POD -n openshift-workspaces -- bash -c \"psql postgres -tAc \\\"CREATE USER keycloak WITH PASSWORD 'USDIDENTITY_POSTGRES_PASSWORD'\\\"\" exec -it USDPOSTGRES_POD -n openshift-workspaces -- bash -c \"psql postgres -tAc \\\"CREATE DATABASE keycloak\\\"\" exec -it USDPOSTGRES_POD -n openshift-workspaces -- bash -c \"psql postgres -tAc \\\"GRANT ALL PRIVILEGES ON DATABASE keycloak TO keycloak\\\"\" POSTGRES_SECRET=USD(oc get checluster/codeready-workspaces -n openshift-workspaces -o json | jq -r '.spec.database.chePostgresSecret') CHE_USER=USD(if [ -z \"USDPOSTGRES_SECRET\" ] || [ USDPOSTGRES_SECRET = \"null\" ]; then oc get checluster/codeready-workspaces -n openshift-workspaces -o json | jq -r '.spec.database.chePostgresUser'; else oc get secret USDPOSTGRES_SECRET -n openshift-workspaces -o json | jq -r '.data.user' | base64 -d; fi) exec -it USDPOSTGRES_POD -n openshift-workspaces -- bash -c \"psql postgres -tAc \\\"ALTER USER USDCHE_USER WITH SUPERUSER\\\"\"", "cp che.sql openshift-workspaces/USDPOSTGRES_POD:/tmp/che.sql cp keycloak.sql openshift-workspaces/USDPOSTGRES_POD:/tmp/keycloak.sql", "exec -it USDPOSTGRES_POD -n openshift-workspaces -- bash -c \"psql keycloak < /tmp/keycloak.sql\" exec -it USDPOSTGRES_POD -n openshift-workspaces -- bash -c \"psql USDCHE_POSTGRES_DB < /tmp/che.sql\"", "scale deployment keycloak --replicas=1 -n openshift-workspaces wait --for=condition=ready pod -l app.kubernetes.io/component=keycloak -n openshift-workspaces --timeout=120s scale deployment codeready --replicas=1 -n openshift-workspaces wait --for=condition=ready pod -l app.kubernetes.io/component=codeready -n openshift-workspaces 
--timeout=120s" ]
https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.15/html/administration_guide/migration-from-postgresql-9-to-postgresql-13_crw
Chapter 8. Using metrics with dashboards and alerts
Chapter 8. Using metrics with dashboards and alerts The Network Observability Operator uses the flowlogs-pipeline to generate metrics from flow logs. You can utilize these metrics by setting custom alerts and viewing dashboards. 8.1. Viewing Network Observability metrics dashboards On the Overview tab in the OpenShift Container Platform console, you can view the overall aggregated metrics of the network traffic flow on the cluster. You can choose to display the information by node, namespace, owner, pod, and service. You can also use filters and display options to further refine the metrics. Procedure In the web console Observe Dashboards , select the Netobserv dashboard. View network traffic metrics in the following categories, with each having the subset per node, namespace, source, and destination: Byte rates Packet drops DNS RTT Select the Netobserv/Health dashboard. View metrics about the health of the Operator in the following categories, with each having the subset per node, namespace, source, and destination. Flows Flows Overhead Flow rates Agents Processor Operator Infrastructure and Application metrics are shown in a split-view for namespace and workloads. 8.2. Predefined metrics Metrics generated by the flowlogs-pipeline are configurable in the spec.processor.metrics.includeList of the FlowCollector custom resource to add or remove metrics. 8.3. Network Observability metrics You can also create alerts by using the includeList metrics in Prometheus rules, as shown in the example "Creating alerts". When looking for these metrics in Prometheus, such as in the Console through Observe Metrics , or when defining alerts, all the metrics names are prefixed with netobserv_ . For example, netobserv_namespace_flows_total . Available metrics names are as follows: includeList metrics names Names followed by an asterisk * are enabled by default. namespace_egress_bytes_total namespace_egress_packets_total namespace_ingress_bytes_total namespace_ingress_packets_total namespace_flows_total * node_egress_bytes_total node_egress_packets_total node_ingress_bytes_total * node_ingress_packets_total node_flows_total workload_egress_bytes_total workload_egress_packets_total workload_ingress_bytes_total * workload_ingress_packets_total workload_flows_total PacketDrop metrics names When the PacketDrop feature is enabled in spec.agent.ebpf.features (with privileged mode), the following additional metrics are available: namespace_drop_bytes_total namespace_drop_packets_total * node_drop_bytes_total node_drop_packets_total workload_drop_bytes_total workload_drop_packets_total DNS metrics names When the DNSTracking feature is enabled in spec.agent.ebpf.features , the following additional metrics are available: namespace_dns_latency_seconds * node_dns_latency_seconds workload_dns_latency_seconds FlowRTT metrics names When the FlowRTT feature is enabled in spec.agent.ebpf.features , the following additional metrics are available: namespace_rtt_seconds * node_rtt_seconds workload_rtt_seconds 8.4. Creating alerts You can create custom alerting rules for the Netobserv dashboard metrics to trigger alerts when some defined conditions are met. Prerequisites You have access to the cluster as a user with the cluster-admin role or with view permissions for all projects. You have the Network Observability Operator installed. Procedure Create a YAML file by clicking the import icon, + . Add an alerting rule configuration to the YAML file. 
In the YAML sample that follows, an alert is created for when the cluster ingress traffic reaches a given threshold of 10 MBps per destination workload. apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: netobserv-alerts namespace: openshift-monitoring spec: groups: - name: NetObservAlerts rules: - alert: NetObservIncomingBandwidth annotations: message: |- {{ USDlabels.job }}: incoming traffic exceeding 10 MBps for 30s on {{ USDlabels.DstK8S_OwnerType }} {{ USDlabels.DstK8S_OwnerName }} ({{ USDlabels.DstK8S_Namespace }}). summary: "High incoming traffic." expr: sum(rate(netobserv_workload_ingress_bytes_total {SrcK8S_Namespace="openshift-ingress"}[1m])) by (job, DstK8S_Namespace, DstK8S_OwnerName, DstK8S_OwnerType) > 10000000 1 for: 30s labels: severity: warning 1 The netobserv_workload_ingress_bytes_total metric is enabled by default in spec.processor.metrics.includeList . Click Create to apply the configuration file to the cluster. 8.5. Custom metrics You can create custom metrics out of the flowlogs data using the FlowMetric API. In every flowlogs data that is collected, there are a number of fields labeled per log, such as source name and destination name. These fields can be leveraged as Prometheus labels to enable the customization of cluster information on your dashboard. 8.6. Configuring custom metrics by using FlowMetric API You can configure the FlowMetric API to create custom metrics by using flowlogs data fields as Prometheus labels. You can add multiple FlowMetric resources to a project to see multiple dashboard views. Procedure In the web console, navigate to Operators Installed Operators . In the Provided APIs heading for the NetObserv Operator , select FlowMetric . In the Project: dropdown list, select the project of the Network Observability Operator instance. Click Create FlowMetric . Configure the FlowMetric resource, similar to the following sample configurations: Example 8.1. Generate a metric that tracks ingress bytes received from cluster external sources apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 spec: metricName: cluster_external_ingress_bytes_total 2 type: Counter 3 valueField: Bytes direction: Ingress 4 labels: [DstK8S_HostName,DstK8S_Namespace,DstK8S_OwnerName,DstK8S_OwnerType] 5 filters: 6 - field: SrcSubnetLabel matchType: Absence 1 The FlowMetric resources need to be created in the namespace defined in the FlowCollector spec.namespace , which is netobserv by default. 2 The name of the Prometheus metric, which in the web console appears with the prefix netobserv-<metricName> . 3 The type specifies the type of metric. The Counter type is useful for counting bytes or packets. 4 The direction of traffic to capture. If not specified, both ingress and egress are captured, which can lead to duplicated counts. 5 Labels define what the metrics look like and the relationship between the different entities and also define the metrics cardinality. For example, SrcK8S_Name is a high cardinality metric. 6 Refines results based on the listed criteria. In this example, selecting only the cluster external traffic is done by matching only flows where SrcSubnetLabel is absent. This assumes the subnet labels feature is enabled (via spec.processor.subnetLabels ), which is done by default. Verification Once the pods refresh, navigate to Observe Metrics . In the Expression field, type the metric name to view the corresponding result. 
You can also enter an expression, such as topk(5, sum(rate(netobserv_cluster_external_ingress_bytes_total{DstK8S_Namespace="my-namespace"}[2m])) by (DstK8S_HostName, DstK8S_OwnerName, DstK8S_OwnerType)) Example 8.2. Show RTT latency for cluster external ingress traffic apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-rtt namespace: netobserv 1 spec: metricName: cluster_external_ingress_rtt_seconds type: Histogram 2 valueField: TimeFlowRttNs direction: Ingress labels: [DstK8S_HostName,DstK8S_Namespace,DstK8S_OwnerName,DstK8S_OwnerType] filters: - field: SrcSubnetLabel matchType: Absence - field: TimeFlowRttNs matchType: Presence divider: "1000000000" 3 buckets: [".001", ".005", ".01", ".02", ".03", ".04", ".05", ".075", ".1", ".25", "1"] 4 1 The FlowMetric resources need to be created in the namespace defined in the FlowCollector spec.namespace , which is netobserv by default. 2 The type specifies the type of metric. The Histogram type is useful for a latency value ( TimeFlowRttNs ). 3 Since the Round-trip time (RTT) is provided as nanos in flows, use a divider of 1 billion to convert into seconds, which is standard in Prometheus guidelines. 4 The custom buckets specify precision on RTT, with optimal precision ranging between 5ms and 250ms. Verification Once the pods refresh, navigate to Observe Metrics . In the Expression field, you can type the metric name to view the corresponding result. Important High cardinality can affect the memory usage of Prometheus. You can check whether specific labels have high cardinality in the Network Flows format reference . 8.7. Configuring custom charts using FlowMetric API You can generate charts for dashboards in the OpenShift Container Platform web console, which you can view as an administrator in the Dashboard menu by defining the charts section of the FlowMetric resource. Procedure In the web console, navigate to Operators Installed Operators . In the Provided APIs heading for the NetObserv Operator , select FlowMetric . In the Project: dropdown list, select the project of the Network Observability Operator instance. Click Create FlowMetric . Configure the FlowMetric resource, similar to the following sample configurations: Example 8.3. Chart for tracking ingress bytes received from cluster external sources apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 # ... charts: - dashboardName: Main 2 title: External ingress traffic unit: Bps type: SingleStat queries: - promQL: "sum(rate(USDMETRIC[2m]))" legend: "" - dashboardName: Main 3 sectionName: External title: Top external ingress traffic per workload unit: Bps type: StackArea queries: - promQL: "sum(rate(USDMETRIC{DstK8S_Namespace!=\"\"}[2m])) by (DstK8S_Namespace, DstK8S_OwnerName)" legend: "{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}" # ... 1 The FlowMetric resources need to be created in the namespace defined in the FlowCollector spec.namespace , which is netobserv by default. Verification Once the pods refresh, navigate to Observe Dashboards . Search for the NetObserv / Main dashboard. View two panels under the NetObserv / Main dashboard, or optionally a dashboard name that you create: A textual single statistic showing the global external ingress rate summed across all dimensions A timeseries graph showing the same metric per destination workload For more information about the query language, refer to the Prometheus documentation . Example 8.4. 
Chart for RTT latency for cluster external ingress traffic apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 # ... charts: - dashboardName: Main 2 title: External ingress TCP latency unit: seconds type: SingleStat queries: - promQL: "histogram_quantile(0.99, sum(rate(USDMETRIC_bucket[2m])) by (le)) > 0" legend: "p99" - dashboardName: Main 3 sectionName: External title: "Top external ingress sRTT per workload, p50 (ms)" unit: seconds type: Line queries: - promQL: "histogram_quantile(0.5, sum(rate(USDMETRIC_bucket{DstK8S_Namespace!=\"\"}[2m])) by (le,DstK8S_Namespace,DstK8S_OwnerName))*1000 > 0" legend: "{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}" - dashboardName: Main 4 sectionName: External title: "Top external ingress sRTT per workload, p99 (ms)" unit: seconds type: Line queries: - promQL: "histogram_quantile(0.99, sum(rate(USDMETRIC_bucket{DstK8S_Namespace!=\"\"}[2m])) by (le,DstK8S_Namespace,DstK8S_OwnerName))*1000 > 0" legend: "{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}" # ... 1 The FlowMetric resources need to be created in the namespace defined in the FlowCollector spec.namespace , which is netobserv by default. 2 3 4 Using a different dashboardName creates a new dashboard that is prefixed with Netobserv . For example, Netobserv / <dashboard_name> . This example uses the histogram_quantile function to show p50 and p99 . You can show averages of histograms by dividing the metric, USDMETRIC_sum , by the metric, USDMETRIC_count , which are automatically generated when you create a histogram. With the preceding example, the Prometheus query to do this is as follows: promQL: "(sum(rate(USDMETRIC_sum{DstK8S_Namespace!=\"\"}[2m])) by (DstK8S_Namespace,DstK8S_OwnerName) / sum(rate(USDMETRIC_count{DstK8S_Namespace!=\"\"}[2m])) by (DstK8S_Namespace,DstK8S_OwnerName))*1000" Verification Once the pods refresh, navigate to Observe Dashboards . Search for the NetObserv / Main dashboard. View the new panel under the NetObserv / Main dashboard, or optionally a dashboard name that you create. For more information about the query language, refer to the Prometheus documentation . 8.8. Detecting SYN flooding using the FlowMetric API and TCP flags You can create an AlertingRule resouce to alert for SYN flooding. Procedure In the web console, navigate to Operators Installed Operators . In the Provided APIs heading for the NetObserv Operator , select FlowMetric . In the Project dropdown list, select the project of the Network Observability Operator instance. Click Create FlowMetric . 
Create FlowMetric resources to add the following configurations: Configuration counting flows per destination host and resource, with TCP flags apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flows-with-flags-per-destination spec: metricName: flows_with_flags_per_destination_total type: Counter labels: [SrcSubnetLabel,DstSubnetLabel,DstK8S_Name,DstK8S_Type,DstK8S_HostName,DstK8S_Namespace,Flags] Configuration counting flows per source host and resource, with TCP flags apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flows-with-flags-per-source spec: metricName: flows_with_flags_per_source_total type: Counter labels: [DstSubnetLabel,SrcSubnetLabel,SrcK8S_Name,SrcK8S_Type,SrcK8S_HostName,SrcK8S_Namespace,Flags] Deploy the following AlertingRule resource to alert for SYN flooding: AlertingRule for SYN flooding apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: netobserv-syn-alerts namespace: openshift-monitoring # ... spec: groups: - name: NetObservSYNAlerts rules: - alert: NetObserv-SYNFlood-in annotations: message: |- {{ USDlabels.job }}: incoming SYN-flood attack suspected to Host={{ USDlabels.DstK8S_HostName}}, Namespace={{ USDlabels.DstK8S_Namespace }}, Resource={{ USDlabels.DstK8S_Name }}. This is characterized by a high volume of SYN-only flows with different source IPs and/or ports. summary: "Incoming SYN-flood" expr: sum(rate(netobserv_flows_with_flags_per_destination_total{Flags="2"}[1m])) by (job, DstK8S_HostName, DstK8S_Namespace, DstK8S_Name) > 300 1 for: 15s labels: severity: warning app: netobserv - alert: NetObserv-SYNFlood-out annotations: message: |- {{ USDlabels.job }}: outgoing SYN-flood attack suspected from Host={{ USDlabels.SrcK8S_HostName}}, Namespace={{ USDlabels.SrcK8S_Namespace }}, Resource={{ USDlabels.SrcK8S_Name }}. This is characterized by a high volume of SYN-only flows with different source IPs and/or ports. summary: "Outgoing SYN-flood" expr: sum(rate(netobserv_flows_with_flags_per_source_total{Flags="2"}[1m])) by (job, SrcK8S_HostName, SrcK8S_Namespace, SrcK8S_Name) > 300 2 for: 15s labels: severity: warning app: netobserv # ... 1 2 In this example, the threshold for the alert is 300 ; however, you can adapt this value empirically. A threshold that is too low might produce false-positives, and if it's too high it might miss actual attacks. Verification In the web console, click Manage Columns in the Network Traffic table view and click TCP flags . In the Network Traffic table view, filter on TCP protocol SYN TCPFlag . A large number of flows with the same byteSize indicates a SYN flood. Go to Observe Alerting and select the Alerting Rules tab. Filter on netobserv-synflood-in alert . The alert should fire when SYN flooding occurs. Additional resources Filtering eBPF flow data using a global rule Creating alerting rules for user-defined projects . Troubleshooting high cardinality metrics- Determining why Prometheus is consuming a lot of disk space
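Section 8.2 notes that the generated metrics are controlled through spec.processor.metrics.includeList in the FlowCollector custom resource. The following is a minimal sketch of editing that list from the command line instead of the web console. It assumes the default FlowCollector instance name cluster and the oc CLI; the metric names are taken from the list in Section 8.3, and the drop and DNS latency entries only produce data when the corresponding PacketDrop and DNSTracking features are enabled in spec.agent.ebpf.features.

# Minimal sketch (assumed instance name "cluster"): replace the list of
# generated metrics with a custom selection. Adjust the names to your needs.
oc patch flowcollector cluster --type=merge -p '{
  "spec": {
    "processor": {
      "metrics": {
        "includeList": [
          "namespace_flows_total",
          "node_ingress_bytes_total",
          "workload_ingress_bytes_total",
          "namespace_drop_packets_total",
          "namespace_dns_latency_seconds"
        ]
      }
    }
  }
}'

Once the pods refresh, the added metrics appear under Observe Metrics with the netobserv_ prefix, for example netobserv_namespace_drop_packets_total.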
[ "apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: netobserv-alerts namespace: openshift-monitoring spec: groups: - name: NetObservAlerts rules: - alert: NetObservIncomingBandwidth annotations: message: |- {{ USDlabels.job }}: incoming traffic exceeding 10 MBps for 30s on {{ USDlabels.DstK8S_OwnerType }} {{ USDlabels.DstK8S_OwnerName }} ({{ USDlabels.DstK8S_Namespace }}). summary: \"High incoming traffic.\" expr: sum(rate(netobserv_workload_ingress_bytes_total {SrcK8S_Namespace=\"openshift-ingress\"}[1m])) by (job, DstK8S_Namespace, DstK8S_OwnerName, DstK8S_OwnerType) > 10000000 1 for: 30s labels: severity: warning", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 spec: metricName: cluster_external_ingress_bytes_total 2 type: Counter 3 valueField: Bytes direction: Ingress 4 labels: [DstK8S_HostName,DstK8S_Namespace,DstK8S_OwnerName,DstK8S_OwnerType] 5 filters: 6 - field: SrcSubnetLabel matchType: Absence", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-rtt namespace: netobserv 1 spec: metricName: cluster_external_ingress_rtt_seconds type: Histogram 2 valueField: TimeFlowRttNs direction: Ingress labels: [DstK8S_HostName,DstK8S_Namespace,DstK8S_OwnerName,DstK8S_OwnerType] filters: - field: SrcSubnetLabel matchType: Absence - field: TimeFlowRttNs matchType: Presence divider: \"1000000000\" 3 buckets: [\".001\", \".005\", \".01\", \".02\", \".03\", \".04\", \".05\", \".075\", \".1\", \".25\", \"1\"] 4", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 charts: - dashboardName: Main 2 title: External ingress traffic unit: Bps type: SingleStat queries: - promQL: \"sum(rate(USDMETRIC[2m]))\" legend: \"\" - dashboardName: Main 3 sectionName: External title: Top external ingress traffic per workload unit: Bps type: StackArea queries: - promQL: \"sum(rate(USDMETRIC{DstK8S_Namespace!=\\\"\\\"}[2m])) by (DstK8S_Namespace, DstK8S_OwnerName)\" legend: \"{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}\"", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 charts: - dashboardName: Main 2 title: External ingress TCP latency unit: seconds type: SingleStat queries: - promQL: \"histogram_quantile(0.99, sum(rate(USDMETRIC_bucket[2m])) by (le)) > 0\" legend: \"p99\" - dashboardName: Main 3 sectionName: External title: \"Top external ingress sRTT per workload, p50 (ms)\" unit: seconds type: Line queries: - promQL: \"histogram_quantile(0.5, sum(rate(USDMETRIC_bucket{DstK8S_Namespace!=\\\"\\\"}[2m])) by (le,DstK8S_Namespace,DstK8S_OwnerName))*1000 > 0\" legend: \"{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}\" - dashboardName: Main 4 sectionName: External title: \"Top external ingress sRTT per workload, p99 (ms)\" unit: seconds type: Line queries: - promQL: \"histogram_quantile(0.99, sum(rate(USDMETRIC_bucket{DstK8S_Namespace!=\\\"\\\"}[2m])) by (le,DstK8S_Namespace,DstK8S_OwnerName))*1000 > 0\" legend: \"{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}\"", "promQL: \"(sum(rate(USDMETRIC_sum{DstK8S_Namespace!=\\\"\\\"}[2m])) by (DstK8S_Namespace,DstK8S_OwnerName) / sum(rate(USDMETRIC_count{DstK8S_Namespace!=\\\"\\\"}[2m])) by (DstK8S_Namespace,DstK8S_OwnerName))*1000\"", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: 
flows-with-flags-per-destination spec: metricName: flows_with_flags_per_destination_total type: Counter labels: [SrcSubnetLabel,DstSubnetLabel,DstK8S_Name,DstK8S_Type,DstK8S_HostName,DstK8S_Namespace,Flags]", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flows-with-flags-per-source spec: metricName: flows_with_flags_per_source_total type: Counter labels: [DstSubnetLabel,SrcSubnetLabel,SrcK8S_Name,SrcK8S_Type,SrcK8S_HostName,SrcK8S_Namespace,Flags]", "apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: netobserv-syn-alerts namespace: openshift-monitoring spec: groups: - name: NetObservSYNAlerts rules: - alert: NetObserv-SYNFlood-in annotations: message: |- {{ USDlabels.job }}: incoming SYN-flood attack suspected to Host={{ USDlabels.DstK8S_HostName}}, Namespace={{ USDlabels.DstK8S_Namespace }}, Resource={{ USDlabels.DstK8S_Name }}. This is characterized by a high volume of SYN-only flows with different source IPs and/or ports. summary: \"Incoming SYN-flood\" expr: sum(rate(netobserv_flows_with_flags_per_destination_total{Flags=\"2\"}[1m])) by (job, DstK8S_HostName, DstK8S_Namespace, DstK8S_Name) > 300 1 for: 15s labels: severity: warning app: netobserv - alert: NetObserv-SYNFlood-out annotations: message: |- {{ USDlabels.job }}: outgoing SYN-flood attack suspected from Host={{ USDlabels.SrcK8S_HostName}}, Namespace={{ USDlabels.SrcK8S_Namespace }}, Resource={{ USDlabels.SrcK8S_Name }}. This is characterized by a high volume of SYN-only flows with different source IPs and/or ports. summary: \"Outgoing SYN-flood\" expr: sum(rate(netobserv_flows_with_flags_per_source_total{Flags=\"2\"}[1m])) by (job, SrcK8S_HostName, SrcK8S_Namespace, SrcK8S_Name) > 300 2 for: 15s labels: severity: warning app: netobserv" ]
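The FlowMetric and AlertingRule manifests listed above can also be applied from the command line instead of through the web console. The following is a minimal sketch; the file names are illustrative assumptions, and it assumes the oc CLI is logged in to the cluster and that the FlowCollector namespace is the default netobserv.

# Minimal sketch: apply the SYN-flood resources from local files (names are
# illustrative). FlowMetric resources must live in the FlowCollector
# namespace, netobserv by default; the AlertingRule manifest already sets
# its own namespace.
oc apply -n netobserv -f flows-with-flags-per-destination.yaml
oc apply -n netobserv -f flows-with-flags-per-source.yaml
oc apply -f netobserv-syn-alerts.yaml

# Confirm that the FlowMetric resources were created:
oc get flowmetrics -n netobserv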
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/network_observability/metrics-dashboards-alerts
Chapter 13. Tuning the performance of a Samba server
Chapter 13. Tuning the performance of a Samba server Learn what settings can improve the performance of Samba in certain situations, and which settings can have a negative performance impact. Parts of this section were adapted from the Performance Tuning documentation published in the Samba Wiki. License: CC BY 4.0 . Authors and contributors: See the history tab on the Wiki page. Prerequisites Samba is set up as a file or print server. See Using Samba as a server . 13.1. Setting the SMB protocol version Each new SMB version adds features and improves the performance of the protocol. Recent Windows and Windows Server operating systems always support the latest protocol version. If Samba also uses the latest protocol version, Windows clients connecting to Samba benefit from the performance improvements. In Samba, the default value of the server max protocol parameter is set to the latest supported stable SMB protocol version. Note To always have the latest stable SMB protocol version enabled, do not set the server max protocol parameter. If you set the parameter manually, you must update the setting with each new version of the SMB protocol to keep the latest protocol version enabled. The following procedure explains how to use the default value of the server max protocol parameter. Procedure Remove the server max protocol parameter from the [global] section in the /etc/samba/smb.conf file. Reload the Samba configuration: 13.2. Tuning shares with directories that contain a large number of files Linux supports case-sensitive file names. For this reason, Samba needs to scan directories for uppercase and lowercase file names when searching or accessing a file. You can configure a share to create new files only in lowercase or uppercase, which improves performance. Prerequisites Samba is configured as a file server. Procedure Rename all files on the share to lowercase (a scripted example is provided at the end of this chapter). Note With the settings in this procedure, files with names that are not in lowercase are no longer displayed. Set the following parameters in the share's section: For details about the parameters, see their descriptions in the smb.conf(5) man page on your system. Verify the /etc/samba/smb.conf file: Reload the Samba configuration: After you apply these settings, the names of all newly created files on this share use lowercase. Because of these settings, Samba no longer needs to scan the directory for uppercase and lowercase file names, which improves performance. 13.3. Settings that can have a negative performance impact By default, the kernel in Red Hat Enterprise Linux is tuned for high network performance. For example, the kernel uses an auto-tuning mechanism for buffer sizes. Setting the socket options parameter in the /etc/samba/smb.conf file overrides these kernel settings. As a result, setting this parameter decreases the Samba network performance in most cases. To use the optimized settings from the kernel, remove the socket options parameter from the [global] section of the /etc/samba/smb.conf file.
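The procedure in Section 13.2 instructs you to rename all files on the share to lowercase but does not provide a command for it. The following is a minimal sketch that assumes the share is exported from /srv/samba/share; adjust the path to your environment and review any name collisions before relying on it.

# Minimal sketch: recursively rename files and directories on the share to
# lowercase. Processing depth-first renames contents before their parent
# directories; mv -n refuses to overwrite when two names collide after the
# conversion, so check for such collisions manually.
find /srv/samba/share -depth -name '*[A-Z]*' | while read -r path; do
    dir=$(dirname "$path")
    base=$(basename "$path")
    lower=$(printf '%s' "$base" | tr '[:upper:]' '[:lower:]')
    if [ "$base" != "$lower" ]; then
        mv -n "$path" "$dir/$lower"
    fi
done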
[ "smbcontrol all reload-config", "case sensitive = true default case = lower preserve case = no short preserve case = no", "testparm", "smbcontrol all reload-config" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/assembly_tuning-the-performance-of-a-samba-server_monitoring-and-managing-system-status-and-performance
Chapter 12. Creating a customized instance
Chapter 12. Creating a customized instance Cloud users can specify additional data to use when they launch an instance, such as a shell script that the instance runs on boot. The cloud user can use the following methods to pass data to instances: User data Use to include instructions in the instance launch command for cloud-init to execute. Instance metadata A list of key-value pairs that you can specify when you create or update an instance. You can access the additional data passed to the instance by using a config drive or the metadata service. Config drive You can attach a config drive to an instance when it boots. The config drive is presented to the instance as a read-only drive. The instance can mount this drive and read files from it. You can use the config drive as a source for cloud-init information. Config drives are useful when combined with cloud-init for server bootstrapping, and when you want to pass large files to your instances. For example, you can configure cloud-init to automatically mount the config drive and run the setup scripts during the initial instance boot. Config drives are created with the volume label of config-2 , and attached to the instance when it boots. The contents of any additional files passed to the config drive are added to the user_data file in the openstack/{version}/ directory of the config drive. cloud-init retrieves the user data from this file. Metadata service Provides a REST API to retrieve data specific to an instance. Instances access this service at 169.254.169.254 or at fe80::a9fe:a9fe . cloud-init can use both a config drive and the metadata service to consume the additional data for customizing an instance. The cloud-init package supports several data input formats. Shell scripts and the cloud-config format are the most common input formats: Shell scripts: The data declaration begins with #! or Content-Type: text/x-shellscript . Shell scripts are invoked last in the boot process. cloud-config format: The data declaration begins with #cloud-config or Content-Type: text/cloud-config . cloud-config files must be valid YAML to be parsed and executed by cloud-init . Note cloud-init has a maximum user data size of 16384 bytes for data passed to an instance. You cannot change the size limit, therefore use gzip compression when you need to exceed the size limit. Vendor-specific data The RHOSP administrator can also pass data to instances when they are being created. This data may not be visible to you as the cloud user, for example, a cryptographic token that registers the instance with Active Directory. The RHOSP administrator uses the vendordata feature to pass data to instances. Vendordata configuration is read only, and is located in one of the following files: /openstack/{version}/vendor_data.json /openstack/{version}/vendor_data2.json You can view these files using the metadata service or from the config drive on your instance. To access the files by using the metadata service, make a GET request to either http://169.254.169.254/openstack/{version}/vendor_data.json or http://169.254.169.254/openstack/{version}/vendor_data2.json . 12.1. Customizing an instance by using user data You can use user data to include instructions in the instance launch command. cloud-init executes these commands to customize the instance as the last step in the boot process. Procedure Create a file with instructions for cloud-init . 
For example, create a bash script that installs and enables a web server on the instance: Launch an instance with the --user-data option to pass the bash script: When the instance state is active, attach a floating IP address: Log in to the instance with SSH: Check that the customization was successfully performed. For example, to check that the web server has been installed and enabled, enter the following command: Review the /var/log/cloud-init.log file for relevant messages, such as whether or not the cloud-init executed: 12.2. Customizing an instance by using metadata You can use instance metadata to specify the properties of an instance in the instance launch command. Procedure Launch an instance with the --property <key=value> option. For example, to mark the instance as a webserver, set the following property: Optional: Add an additional property to the instance after it is created, for example: 12.3. Customizing an instance by using a config drive You can create a config drive for an instance that is attached during the instance boot process. You can pass content to the config drive that the config drive makes available to the instance. Procedure Enable the config drive, and specify a file that contains content that you want to make available in the config drive. For example, the following command creates a new instance named config-drive-instance and attaches a config drive that contains the contents of the file my-user-data.txt : This command creates the config drive with the volume label of config-2 , which is attached to the instance when it boots, and adds the contents of my-user-data.txt to the user_data file in the openstack/{version}/ directory of the config drive. Log in to the instance. Mount the config drive: If the instance OS uses udev : If the instance OS does not use udev , you need to first identify the block device that corresponds to the config drive:
[ "vim /home/scripts/install_httpd #!/bin/bash -y install httpd python-psycopg2 systemctl enable httpd --now", "openstack server create --image rhel8 --flavor default --nic net-id=web-server-network --security-group default --key-name web-server-keypair --user-data /home/scripts/install_httpd --wait web-server-instance", "openstack floating ip create web-server-network openstack server add floating ip web-server-instance 172.25.250.123", "ssh -i ~/.ssh/web-server-keypair [email protected]", "curl http://localhost | grep Test <title>Test Page for the Apache HTTP Server on Red Hat Enterprise Linux</title> <h1>Red Hat Enterprise Linux <strong>Test Page</strong></h1>", "sudo less /var/log/cloud-init.log ...output omitted ...util.py[DEBUG]: Cloud-init v. 0.7.9 finished at Sat, 23 Jun 2018 02:26:02 +0000. Datasource DataSourceOpenStack [net,ver=2]. Up 21.25 seconds", "openstack server create --image rhel8 --flavor default --property role=webservers --wait web-server-instance", "openstack server set --property region=emea --wait web-server-instance", "(overcloud)USD openstack server create --flavor m1.tiny --config-drive true --user-data ./my-user-data.txt --image cirros config-drive-instance", "mkdir -p /mnt/config mount /dev/disk/by-label/config-2 /mnt/config", "blkid -t LABEL=\"config-2\" -odevice /dev/vdb mkdir -p /mnt/config mount /dev/vdb /mnt/config" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/creating_and_managing_instances/assembly_creating-a-customized-instance_instances
Chapter 7. Monitoring performance with Performance Co-Pilot
Chapter 7. Monitoring performance with Performance Co-Pilot Performance Co-Pilot (PCP) is a suite of tools, services, and libraries for monitoring, visualizing, storing, and analyzing system-level performance measurements. As a system administrator, you can monitor the system's performance using the PCP application in Red Hat Enterprise Linux 9. 7.1. Monitoring postfix with pmda-postfix This procedure describes how to monitor performance metrics of the postfix mail server with pmda-postfix . It helps to check how many emails are received per second. Prerequisites PCP is installed. For more information, see Installing and enabling PCP . The pmlogger service is enabled. For more information, see Enabling the pmlogger service . Procedure Install the following packages: Install the pcp-system-tools : Install the pmda-postfix package to monitor postfix : Install the logging daemon: Install the mail client for testing: Enable the postfix and rsyslog services: Enable the SELinux boolean, so that pmda-postfix can access the required log files: Install the PMDA : Verification Verify the pmda-postfix operation: Verify the available metrics: Additional resources rsyslogd(8) , postfix(1) , and setsebool(8) man pages on your system System services and tools distributed with PCP 7.2. Visually tracing PCP log archives with the PCP Charts application After recording metric data, you can replay the PCP log archives as graphs. The metrics are sourced from one or more live hosts with alternative options to use metric data from PCP log archives as a source of historical data. To customize the PCP Charts application interface to display the data from the performance metrics, you can use line plot, bar graphs, or utilization graphs. Using the PCP Charts application, you can: Replay the data in the PCP Charts application application and use graphs to visualize the retrospective data alongside live data of the system. Plot performance metric values into graphs. Display multiple charts simultaneously. Prerequisites PCP is installed. For more information, see Installing and enabling PCP . Logged performance data with the pmlogger . For more information, see Logging performance data with pmlogger . Install the pcp-gui package: Procedure Launch the PCP Charts application from the command line: Figure 7.1. PCP Charts application The pmtime server settings are located at the bottom. The start and pause button allows you to control: The interval in which PCP polls the metric data The date and time for the metrics of historical data Click File and then New Chart to select metric from both the local machine and remote machines by specifying their host name or address. Advanced configuration options include the ability to manually set the axis values for the chart, and to manually choose the color of the plots. Record the views created in the PCP Charts application: Following are the options to take images or record the views created in the PCP Charts application: Click File and then Export to save an image of the current view. Click Record and then Start to start a recording. Click Record and then Stop to stop the recording. After stopping the recording, the recorded metrics are archived to be viewed later. Optional: In the PCP Charts application, the main configuration file, known as the view , allows the metadata associated with one or more charts to be saved. This metadata describes all chart aspects, including the metrics used and the chart columns. 
Save the custom view configuration by clicking File and then Save View , and load the view configuration later. The following example of the PCP Charts application view configuration file describes a stacking chart graph showing the total number of bytes read and written to the given XFS file system loop1 : Additional resources pmchart(1) and pmtime(1) man pages on your system System services and tools distributed with PCP 7.3. Collecting data from SQL server using PCP The SQL Server agent is available in Performance Co-Pilot (PCP), which helps you to monitor and analyze database performance issues. This procedure describes how to collect data for Microsoft SQL Server via pcp on your system. Prerequisites You have installed Microsoft SQL Server for Red Hat Enterprise Linux and established a 'trusted' connection to an SQL server. You have installed the Microsoft ODBC driver for SQL Server for Red Hat Enterprise Linux. Procedure Install PCP: Install packages required for the pyodbc driver: Install the mssql agent: Install the Microsoft SQL Server domain agent for PCP: Edit the /etc/pcp/mssql/mssql.conf file to configure the SQL server account's username and password for the mssql agent. Ensure that the account you configure has access rights to performance data. Replace user_name with the SQL Server account and user_password with the SQL Server user password for this account. Install the agent: Verification Using the pcp command, verify if the SQL Server PMDA ( mssql ) is loaded and running: View the complete list of metrics that PCP can collect from the SQL Server: After viewing the list of metrics, you can report the rate of transactions. For example, to report on the overall transaction count per second, over a five second time window: View the graphical chart of these metrics on your system by using the pmchart command. For more information, see Visually tracing PCP log archives with the PCP Charts application . Additional resources pcp(1) , pminfo(1) , pmval(1) , pmchart(1) , and pmdamssql(1) man pages on your system Performance Co-Pilot for Microsoft SQL Server with RHEL 8.2 Red Hat Developers Blog post 7.4. Generating PCP archives from sadc archives You can use the sadf tool provided by the sysstat package to generate PCP archives from native sadc archives. Prerequisites A sadc archive has been created: In this example, sadc is sampling system data 1 time in a 5 second interval. The outfile is specified as - which results in sadc writing the data to the standard system activity daily data file. This file is named saDD and is located in the /var/log/sa directory by default. Procedure Generate a PCP archive from a sadc archive: In this example, using the -2 option results in sadf generating a PCP archive from a sadc archive recorded 2 days ago. Verification You can use PCP commands to inspect and analyze the PCP archive generated from a sadc archive as you would a native PCP archive. For example: To show a list of metrics in the PCP archive generated from an sadc archive archive, run: To show the timespace of the archive and hostname of the PCP archive, run: To plot performance metrics values into graphs, run:
[ "dnf install pcp-system-tools", "dnf install pcp-pmda-postfix postfix", "dnf install rsyslog", "dnf install mutt", "systemctl enable postfix rsyslog systemctl restart postfix rsyslog", "setsebool -P pcp_read_generic_logs=on", "cd /var/lib/pcp/pmdas/postfix/ ./Install Updating the Performance Metrics Name Space (PMNS) Terminate PMDA if already installed Updating the PMCD control file, and notifying PMCD Waiting for pmcd to terminate Starting pmcd Check postfix metrics have appeared ... 7 metrics and 58 values", "echo testmail | mutt root", "pminfo postfix postfix.received postfix.sent postfix.queues.incoming postfix.queues.maildrop postfix.queues.hold postfix.queues.deferred postfix.queues.active", "dnf install pcp-gui", "pmchart", "#kmchart version 1 chart title \"Filesystem Throughput /loop1\" style stacking antialiasing off plot legend \"Read rate\" metric xfs.read_bytes instance \"loop1\" plot legend \"Write rate\" metric xfs.write_bytes instance \"loop1\"", "dnf install pcp-zeroconf", "dnf install python3-pyodbc", "dnf install pcp-pmda-mssql", "username: user_name password: user_password", "cd /var/lib/pcp/pmdas/mssql ./Install Updating the Performance Metrics Name Space (PMNS) Terminate PMDA if already installed Updating the PMCD control file, and notifying PMCD Check mssql metrics have appeared ... 168 metrics and 598 values [...]", "pcp Performance Co-Pilot configuration on rhel.local: platform: Linux rhel.local 4.18.0-167.el8.x86_64 #1 SMP Sun Dec 15 01:24:23 UTC 2019 x86_64 hardware: 2 cpus, 1 disk, 1 node, 2770MB RAM timezone: PDT+7 services: pmcd pmproxy pmcd: Version 5.0.2-1, 12 agents, 4 clients pmda: root pmcd proc pmproxy xfs linux nfsclient mmv kvm mssql jbd2 dm pmlogger: primary logger: /var/log/pcp/pmlogger/rhel.local/20200326.16.31 pmie: primary engine: /var/log/pcp/pmie/rhel.local/pmie.log", "pminfo mssql", "pmval -t 1 -T 5 mssql.databases.transactions", "/usr/lib64/sa/sadc 1 5 -", "sadf -l -O pcparchive=/tmp/recording -2", "pminfo --archive /tmp/recording Disk.dev.avactive Disk.dev.read Disk.dev.write Disk.dev.blkread [...]", "pmdumplog --label /tmp/recording Log Label (Log Format Version 2) Performance metrics from host shard commencing Tue Jul 20 00:10:30.642477 2021 ending Wed Jul 21 00:10:30.222176 2021", "pmchart --archive /tmp/recording" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/monitoring-performance-with-performance-co-pilot_monitoring-and-managing-system-status-and-performance
Chapter 4. Configuring Capsule Server with external services
Chapter 4. Configuring Capsule Server with external services If you do not want to configure the DNS, DHCP, and TFTP services on Capsule Server, use this section to configure your Capsule Server to work with external DNS, DHCP, and TFTP services. 4.1. Configuring Capsule Server with external DNS You can configure Capsule Server with external DNS. Capsule Server uses the nsupdate utility to update DNS records on the remote server. To make any changes persistent, you must enter the satellite-installer command with the options appropriate for your environment. Prerequisites You must have a configured external DNS server. This guide assumes you have an existing installation. Procedure Copy the /etc/rndc.key file from the external DNS server to Capsule Server: Configure the ownership, permissions, and SELinux context: To test the nsupdate utility, add a host remotely: Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dns.yml file: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Capsule Server and select Refresh from the list in the Actions column. Associate the DNS service with the appropriate subnets and domain. 4.2. Configuring Capsule Server with external DHCP To configure Capsule Server with external DHCP, you must complete the following procedures: Section 4.2.1, "Configuring an external DHCP server to use with Capsule Server" Section 4.2.2, "Configuring Satellite Server with an external DHCP server" 4.2.1. Configuring an external DHCP server to use with Capsule Server To configure an external DHCP server running Red Hat Enterprise Linux to use with Capsule Server, you must install the ISC DHCP Service and Berkeley Internet Name Domain (BIND) utilities packages. You must also share the DHCP configuration and lease files with Capsule Server. The example in this procedure uses the distributed Network File System (NFS) protocol to share the DHCP configuration and lease files. Note If you use dnsmasq as an external DHCP server, enable the dhcp-no-override setting. This is required because Satellite creates configuration files on the TFTP server under the grub2/ subdirectory. If the dhcp-no-override setting is disabled, hosts fetch the bootloader and its configuration from the root directory, which might cause an error. Procedure On your Red Hat Enterprise Linux host, install the ISC DHCP Service and Berkeley Internet Name Domain (BIND) utilities packages: Generate a security token: Edit the dhcpd configuration file for all subnets and add the key generated by tsig-keygen . The following is an example: Note that the option routers value is the IP address of your Satellite Server or Capsule Server that you want to use with an external DHCP service. On Satellite Server, define each subnet. Do not set DHCP Capsule for the defined Subnet yet. To prevent conflicts, set up the lease and reservation ranges separately. For example, if the lease range is 192.168.38.10 to 192.168.38.100, in the Satellite web UI define the reservation range as 192.168.38.101 to 192.168.38.250. 
Configure the firewall for external access to the DHCP server: Make the changes persistent: On Satellite Server, determine the UID and GID of the foreman user: On the DHCP server, create the foreman user and group with the same IDs as determined in a step: To ensure that the configuration files are accessible, restore the read and execute flags: Enable and start the DHCP service: Export the DHCP configuration and lease files using NFS: Create directories for the DHCP configuration and lease files that you want to export using NFS: To create mount points for the created directories, add the following line to the /etc/fstab file: Mount the file systems in /etc/fstab : Ensure the following lines are present in /etc/exports : Note that the IP address that you enter is the Satellite or Capsule IP address that you want to use with an external DHCP service. Reload the NFS server: Configure the firewall for DHCP omapi port 7911: Optional: Configure the firewall for external access to NFS. Clients are configured using NFSv3. Make the changes persistent: 4.2.2. Configuring Satellite Server with an external DHCP server You can configure Capsule Server with an external DHCP server. Prerequisites Ensure that you have configured an external DHCP server and that you have shared the DHCP configuration and lease files with Capsule Server. For more information, see Section 4.2.1, "Configuring an external DHCP server to use with Capsule Server" . Procedure Install the nfs-utils package: Create the DHCP directories for NFS: Change the file owner: Verify communication with the NFS server and the Remote Procedure Call (RPC) communication paths: Add the following lines to the /etc/fstab file: Mount the file systems on /etc/fstab : To verify that the foreman-proxy user can access the files that are shared over the network, display the DHCP configuration and lease files: Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dhcp.yml file: Associate the DHCP service with the appropriate subnets and domain. 4.3. Configuring Capsule Server with external TFTP You can configure Capsule Server with external TFTP services. Procedure Create the TFTP directory for NFS: In the /etc/fstab file, add the following line: Mount the file systems in /etc/fstab : Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/tftp.yml file: If the TFTP service is running on a different server than the DHCP service, update the tftp_servername setting with the FQDN or IP address of the server that the TFTP service is running on: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Capsule Server and select Refresh from the list in the Actions column. Associate the TFTP service with the appropriate subnets and domain. 4.4. Configuring Capsule Server with external IdM DNS When Satellite Server adds a DNS record for a host, it first determines which Capsule is providing DNS for that domain. It then communicates with the Capsule that is configured to provide DNS service for your deployment and adds the record. The hosts are not involved in this process. Therefore, you must install and configure the IdM client on the Satellite or Capsule that is currently configured to provide a DNS service for the domain you want to manage by using the IdM server. Capsule Server can be configured to use a Red Hat Identity Management (IdM) server to provide DNS service. 
For more information about Red Hat Identity Management, see the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide . To configure Capsule Server to use a Red Hat Identity Management (IdM) server to provide DNS service, use one of the following procedures: Section 4.4.1, "Configuring dynamic DNS update with GSS-TSIG authentication" Section 4.4.2, "Configuring dynamic DNS update with TSIG authentication" To revert to internal DNS service, use the following procedure: Section 4.4.3, "Reverting to internal DNS service" Note You are not required to use Capsule Server to manage DNS. When you are using the realm enrollment feature of Satellite, where provisioned hosts are enrolled automatically to IdM, the ipa-client-install script creates DNS records for the client. Configuring Capsule Server with external IdM DNS and realm enrollment are mutually exclusive. For more information about configuring realm enrollment, see Configuring Satellite to manage the lifecycle of a host registered to a Identity Management realm in Installing Satellite Server in a connected network environment . 4.4.1. Configuring dynamic DNS update with GSS-TSIG authentication You can configure the IdM server to use the generic security service algorithm for secret key transaction (GSS-TSIG) technology defined in RFC3645 . To configure the IdM server to use the GSS-TSIG technology, you must install the IdM client on the Capsule Server base operating system. Prerequisites You must ensure the IdM server is deployed and the host-based firewall is configured correctly. For more information, see Port requirements for IdM in Red Hat Enterprise Linux 9 Installing Identity Management or Port requirements for IdM in Red Hat Enterprise Linux 8 Installing Identity Management . You must contact the IdM server administrator to ensure that you obtain an account on the IdM server with permissions to create zones on the IdM server. You should create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted. For more information, see Configuring Satellite Server . Procedure To configure dynamic DNS update with GSS-TSIG authentication, complete the following steps: Creating a Kerberos principal on the IdM server Obtain a Kerberos ticket for the account obtained from the IdM administrator: Create a new Kerberos principal for Capsule Server to use to authenticate on the IdM server: Installing and configuring the idM client On the base operating system of either the Satellite or Capsule that is managing the DNS service for your deployment, install the ipa-client package: Configure the IdM client by running the installation script and following the on-screen prompts: Obtain a Kerberos ticket: Remove any preexisting keytab : Obtain the keytab for this system: Note When adding a keytab to a standby system with the same host name as the original system in service, add the r option to prevent generating new credentials and rendering the credentials on the original system invalid. For the dns.keytab file, set the group and owner to foreman-proxy : Optional: To verify that the keytab file is valid, enter the following command: Configuring DNS zones in the IdM web UI Create and configure the zone that you want to manage: Navigate to Network Services > DNS > DNS Zones . Select Add and enter the zone name. For example, example.com . Click Add and Edit . 
Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list: Set Dynamic update to True . Enable Allow PTR sync . Click Save to save the changes. Create and configure the reverse zone: Navigate to Network Services > DNS > DNS Zones . Click Add . Select Reverse zone IP network and add the network address in CIDR format to enable reverse lookups. Click Add and Edit . Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list: Set Dynamic update to True . Click Save to save the changes. Configuring the Satellite or Capsule Server that manages the DNS service for the domain Configure your Satellite Server or Capsule Server to connect to your DNS service: For each affected Capsule, update the configuration of that Capsule in the Satellite web UI: In the Satellite web UI, navigate to Infrastructure > Capsules , locate the Capsule Server, and from the list in the Actions column, select Refresh . Configure the domain: In the Satellite web UI, navigate to Infrastructure > Domains and select the domain name. In the Domain tab, ensure DNS Capsule is set to the Capsule where the subnet is connected. Configure the subnet: In the Satellite web UI, navigate to Infrastructure > Subnets and select the subnet name. In the Subnet tab, set IPAM to None . In the Domains tab, select the domain that you want to manage using the IdM server. In the Capsules tab, ensure Reverse DNS Capsule is set to the Capsule where the subnet is connected. Click Submit to save the changes. 4.4.2. Configuring dynamic DNS update with TSIG authentication You can configure an IdM server to use the secret key transaction authentication for DNS (TSIG) technology that uses the rndc.key key file for authentication. The TSIG protocol is defined in RFC2845 . Prerequisites You must ensure the IdM server is deployed and the host-based firewall is configured correctly. For more information, see Port Requirements in the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide . You must obtain root user access on the IdM server. You must confirm whether Satellite Server or Capsule Server is configured to provide DNS service for your deployment. You must configure DNS, DHCP and TFTP services on the base operating system of either the Satellite or Capsule that is managing the DNS service for your deployment. You must create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted. For more information, see Configuring Satellite Server . Procedure To configure dynamic DNS update with TSIG authentication, complete the following steps: Enabling external updates to the DNS zone in the IdM server On the IdM Server, add the following to the top of the /etc/named.conf file: ######################################################################## include "/etc/rndc.key"; controls { inet _IdM_Server_IP_Address_ port 953 allow { _Satellite_IP_Address_; } keys { "rndc-key"; }; }; ######################################################################## Reload the named service to make the changes take effect: In the IdM web UI, navigate to Network Services > DNS > DNS Zones and click the name of the zone. In the Settings tab, apply the following changes: Add the following in the BIND update policy box: grant "rndc-key" zonesub ANY; Set Dynamic update to True . Click Update to save the changes. 
Copy the /etc/rndc.key file from the IdM server to the base operating system of your Satellite Server. Enter the following command: To set the correct ownership, permissions, and SELinux context for the rndc.key file, enter the following command: Assign the foreman-proxy user to the named group manually. Normally, satellite-installer ensures that the foreman-proxy user belongs to the named UNIX group, however, in this scenario Satellite does not manage users and groups, therefore you need to assign the foreman-proxy user to the named group manually. On Satellite Server, enter the following satellite-installer command to configure Satellite to use the external DNS server: Testing external updates to the DNS zone in the IdM server Ensure that the key in the /etc/rndc.key file on Satellite Server is the same key file that is used on the IdM server: key "rndc-key" { algorithm hmac-md5; secret " secret-key =="; }; On Satellite Server, create a test DNS entry for a host. For example, host test.example.com with an A record of 192.168.25.20 on the IdM server at 192.168.25.1 . On Satellite Server, test the DNS entry: Example output: Server: 192.168.25.1 Address: 192.168.25.1#53 Name: test.example.com Address: 192.168.25.20 To view the entry in the IdM web UI, navigate to Network Services > DNS > DNS Zones . Click the name of the zone and search for the host by name. If resolved successfully, remove the test DNS entry: Confirm that the DNS entry was removed: The above nslookup command fails and returns the SERVFAIL error message if the record was successfully deleted. 4.4.3. Reverting to internal DNS service You can revert to using Satellite Server and Capsule Server as your DNS providers. You can use a backup of the answer file that was created before configuring external DNS, or you can create a backup of the answer file. For more information about answer files, see Configuring Satellite Server . Procedure On the Satellite or Capsule Server that you want to configure to manage DNS service for the domain, complete the following steps: Configuring Satellite or Capsule as a DNS server If you have created a backup of the answer file before configuring external DNS, restore the answer file and then enter the satellite-installer command: If you do not have a suitable backup of the answer file, create a backup of the answer file now. To configure Satellite or Capsule as DNS server without using an answer file, enter the following satellite-installer command on Satellite or Capsule: For more information, see Section 3.6, "Configuring DNS, DHCP, and TFTP on Capsule Server" . After you run the satellite-installer command to make any changes to your Capsule configuration, you must update the configuration of each affected Capsule in the Satellite web UI. Updating the configuration in the Satellite web UI In the Satellite web UI, navigate to Infrastructure > Capsules . For each Capsule that you want to update, from the Actions list, select Refresh . Configure the domain: In the Satellite web UI, navigate to Infrastructure > Domains and click the domain name that you want to configure. In the Domain tab, set DNS Capsule to the Capsule where the subnet is connected. Configure the subnet: In the Satellite web UI, navigate to Infrastructure > Subnets and select the subnet name. In the Subnet tab, set IPAM to DHCP or Internal DB . In the Domains tab, select the domain that you want to manage using Satellite or Capsule. In the Capsules tab, set Reverse DNS Capsule to the Capsule where the subnet is connected. 
Click Submit to save the changes. 4.5. Configuring Satellite to manage the lifecycle of a host registered to a Identity Management realm As well as providing access to Satellite Server, hosts provisioned with Satellite can also be integrated with Identity Management realms. Red Hat Satellite has a realm feature that automatically manages the lifecycle of any system registered to a realm or domain provider. Use this section to configure Satellite Server or Capsule Server for Identity Management realm support, then add hosts to the Identity Management realm group. Prerequisites Satellite Server that is registered to the Content Delivery Network or an external Capsule Server that is registered to Satellite Server. A deployed realm or domain provider such as Identity Management. To install and configure Identity Management packages on Satellite Server or Capsule Server: To use Identity Management for provisioned hosts, complete the following steps to install and configure Identity Management packages on Satellite Server or Capsule Server: Install the ipa-client package on Satellite Server or Capsule Server: Configure the server as a Identity Management client: Create a realm proxy user, realm-capsule , and the relevant roles in Identity Management: Note the principal name that returns and your Identity Management server configuration details because you require them for the following procedure. To configure Satellite Server or Capsule Server for Identity Management realm support: Complete the following procedure on Satellite and every Capsule that you want to use: Copy the /root/freeipa.keytab file to any Capsule Server that you want to include in the same principal and realm: Move the /root/freeipa.keytab file to the /etc/foreman-proxy directory and set the ownership settings to the foreman-proxy user: Enter the following command on all Capsules that you want to include in the realm. If you use the integrated Capsule on Satellite, enter this command on Satellite Server: You can also use these options when you first configure the Satellite Server. Ensure that the most updated versions of the ca-certificates package is installed and trust the Identity Management Certificate Authority: Optional: If you configure Identity Management on an existing Satellite Server or Capsule Server, complete the following steps to ensure that the configuration changes take effect: Restart the foreman-proxy service: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Capsule you have configured for Identity Management and from the list in the Actions column, select Refresh . To create a realm for the Identity Management-enabled Capsule After you configure your integrated or external Capsule with Identity Management, you must create a realm and add the Identity Management-configured Capsule to the realm. Procedure In the Satellite web UI, navigate to Infrastructure > Realms and click Create Realm . In the Name field, enter a name for the realm. From the Realm Type list, select the type of realm. From the Realm Capsule list, select Capsule Server where you have configured Identity Management. Click the Locations tab and from the Locations list, select the location where you want to add the new realm. Click the Organizations tab and from the Organizations list, select the organization where you want to add the new realm. Click Submit . Updating host groups with realm information You must update any host groups that you want to use with the new realm information. 
In the Satellite web UI, navigate to Configure > Host Groups , select the host group that you want to update, and click the Network tab. From the Realm list, select the realm you created as part of this procedure, and then click Submit . Adding hosts to an Identity Management host group Identity Management supports the ability to set up automatic membership rules based on a system's attributes. Red Hat Satellite's realm feature provides administrators with the ability to map Red Hat Satellite host groups to the Identity Management userclass parameter, which allows administrators to configure automembership. When nested host groups are used, they are sent to the Identity Management server as they are displayed in the Red Hat Satellite User Interface. For example, "Parent/Child/Child". Satellite Server or Capsule Server sends updates to the Identity Management server; however, automembership rules are applied only at initial registration. A way to re-apply the rules to hosts that already exist is shown at the end of this section. To add hosts to an Identity Management host group: On the Identity Management server, create a host group: Create an automembership rule: Where you can use the following options: automember-add flags the group as an automember group. --type=hostgroup identifies that the target group is a host group, not a user group. automember_rule adds the name you want to identify the automember rule by. Define an automembership condition based on the userclass attribute: Where you can use the following options: automember-add-condition adds regular expression conditions to identify group members. --key=userclass specifies the key attribute as userclass . --type=hostgroup identifies that the target group is a host group, not a user group. --inclusive-regex= ^webserver identifies matching values with a regular expression pattern. hostgroup_name - identifies the target host group's name. When a system is added to Satellite Server's hostgroup_name host group, it is added automatically to the Identity Management server's " hostgroup_name " host group. Identity Management host groups allow for Host-Based Access Control (HBAC), sudo policies, and other Identity Management functions.
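As noted earlier in this section, automembership rules are applied only at initial registration. For host entries that already exist in Identity Management when you add or change a rule, you can re-evaluate the rules manually. The following is a minimal sketch; it assumes that the automember-rebuild command is available in your Identity Management version and that you have administrative credentials.

# Minimal sketch: re-apply host group automembership rules to existing host
# entries. Run on an IdM-enrolled system after obtaining an admin ticket.
kinit admin
ipa automember-rebuild --type=hostgroup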
[ "scp root@ dns.example.com :/etc/rndc.key /etc/foreman-proxy/rndc.key", "restorecon -v /etc/foreman-proxy/rndc.key chown -v root:foreman-proxy /etc/foreman-proxy/rndc.key chmod -v 640 /etc/foreman-proxy/rndc.key", "echo -e \"server DNS_IP_Address \\n update add aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key nslookup aaa.example.com DNS_IP_Address echo -e \"server DNS_IP_Address \\n update delete aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key", "satellite-installer --foreman-proxy-dns=true --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" DNS_IP_Address \" --foreman-proxy-keyfile=/etc/foreman-proxy/rndc.key", "dnf install dhcp-server bind-utils", "tsig-keygen -a hmac-md5 omapi_key", "cat /etc/dhcp/dhcpd.conf default-lease-time 604800; max-lease-time 2592000; log-facility local7; subnet 192.168.38.0 netmask 255.255.255.0 { range 192.168.38.10 192.168.38.100 ; option routers 192.168.38.1 ; option subnet-mask 255.255.255.0 ; option domain-search \" virtual.lan \"; option domain-name \" virtual.lan \"; option domain-name-servers 8.8.8.8 ; } omapi-port 7911; key omapi_key { algorithm hmac-md5; secret \" My_Secret \"; }; omapi-key omapi_key;", "firewall-cmd --add-service dhcp", "firewall-cmd --runtime-to-permanent", "id -u foreman 993 id -g foreman 990", "groupadd -g 990 foreman useradd -u 993 -g 990 -s /sbin/nologin foreman", "chmod o+rx /etc/dhcp/ chmod o+r /etc/dhcp/dhcpd.conf chattr +i /etc/dhcp/ /etc/dhcp/dhcpd.conf", "systemctl enable --now dhcpd", "dnf install nfs-utils systemctl enable --now nfs-server", "mkdir -p /exports/var/lib/dhcpd /exports/etc/dhcp", "/var/lib/dhcpd /exports/var/lib/dhcpd none bind,auto 0 0 /etc/dhcp /exports/etc/dhcp none bind,auto 0 0", "mount -a", "/exports 192.168.38.1 (rw,async,no_root_squash,fsid=0,no_subtree_check) /exports/etc/dhcp 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide) /exports/var/lib/dhcpd 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide)", "exportfs -rva", "firewall-cmd --add-port=7911/tcp", "firewall-cmd --add-service mountd --add-service nfs --add-service rpc-bind --zone public", "firewall-cmd --runtime-to-permanent", "satellite-maintain packages install nfs-utils", "mkdir -p /mnt/nfs/etc/dhcp /mnt/nfs/var/lib/dhcpd", "chown -R foreman-proxy /mnt/nfs", "showmount -e DHCP_Server_FQDN rpcinfo -p DHCP_Server_FQDN", "DHCP_Server_FQDN :/exports/etc/dhcp /mnt/nfs/etc/dhcp nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcp_etc_t:s0\" 0 0 DHCP_Server_FQDN :/exports/var/lib/dhcpd /mnt/nfs/var/lib/dhcpd nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcpd_state_t:s0\" 0 0", "mount -a", "su foreman-proxy -s /bin/bash cat /mnt/nfs/etc/dhcp/dhcpd.conf cat /mnt/nfs/var/lib/dhcpd/dhcpd.leases exit", "satellite-installer --enable-foreman-proxy-plugin-dhcp-remote-isc --foreman-proxy-dhcp-provider=remote_isc --foreman-proxy-dhcp-server= My_DHCP_Server_FQDN --foreman-proxy-dhcp=true --foreman-proxy-plugin-dhcp-remote-isc-dhcp-config /mnt/nfs/etc/dhcp/dhcpd.conf --foreman-proxy-plugin-dhcp-remote-isc-dhcp-leases /mnt/nfs/var/lib/dhcpd/dhcpd.leases --foreman-proxy-plugin-dhcp-remote-isc-key-name=omapi_key --foreman-proxy-plugin-dhcp-remote-isc-key-secret= My_Secret --foreman-proxy-plugin-dhcp-remote-isc-omapi-port=7911", "mkdir -p /mnt/nfs/var/lib/tftpboot", "TFTP_Server_IP_Address :/exports/var/lib/tftpboot /mnt/nfs/var/lib/tftpboot nfs 
rw,vers=3,auto,nosharecache,context=\"system_u:object_r:tftpdir_rw_t:s0\" 0 0", "mount -a", "satellite-installer --foreman-proxy-tftp-root /mnt/nfs/var/lib/tftpboot --foreman-proxy-tftp=true", "satellite-installer --foreman-proxy-tftp-servername= TFTP_Server_FQDN", "kinit idm_user", "ipa service-add capsule.example.com", "satellite-maintain packages install ipa-client", "ipa-client-install", "kinit admin", "rm /etc/foreman-proxy/dns.keytab", "ipa-getkeytab -p capsule/ [email protected] -s idm1.example.com -k /etc/foreman-proxy/dns.keytab", "chown foreman-proxy:foreman-proxy /etc/foreman-proxy/dns.keytab", "kinit -kt /etc/foreman-proxy/dns.keytab capsule/ [email protected]", "grant capsule\\047 [email protected] wildcard * ANY;", "grant capsule\\047 [email protected] wildcard * ANY;", "satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate_gss --foreman-proxy-dns-server=\" idm1.example.com \" --foreman-proxy-dns-tsig-keytab=/etc/foreman-proxy/dns.keytab --foreman-proxy-dns-tsig-principal=\"capsule/ [email protected] \" --foreman-proxy-dns=true", "######################################################################## include \"/etc/rndc.key\"; controls { inet _IdM_Server_IP_Address_ port 953 allow { _Satellite_IP_Address_; } keys { \"rndc-key\"; }; }; ########################################################################", "systemctl reload named", "grant \"rndc-key\" zonesub ANY;", "scp /etc/rndc.key root@ satellite.example.com :/etc/rndc.key", "restorecon -v /etc/rndc.key chown -v root:named /etc/rndc.key chmod -v 640 /etc/rndc.key", "usermod -a -G named foreman-proxy", "satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" IdM_Server_IP_Address \" --foreman-proxy-dns-ttl=86400 --foreman-proxy-dns=true --foreman-proxy-keyfile=/etc/rndc.key", "key \"rndc-key\" { algorithm hmac-md5; secret \" secret-key ==\"; };", "echo -e \"server 192.168.25.1\\n update add test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key", "nslookup test.example.com 192.168.25.1", "Server: 192.168.25.1 Address: 192.168.25.1#53 Name: test.example.com Address: 192.168.25.20", "echo -e \"server 192.168.25.1\\n update delete test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key", "nslookup test.example.com 192.168.25.1", "satellite-installer", "satellite-installer --foreman-proxy-dns-managed=true --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\"127.0.0.1\" --foreman-proxy-dns=true", "satellite-maintain packages install ipa-client", "ipa-client-install", "foreman-prepare-realm admin realm-capsule", "scp /root/freeipa.keytab root@ capsule.example.com :/etc/foreman-proxy/freeipa.keytab", "mv /root/freeipa.keytab /etc/foreman-proxy chown foreman-proxy:foreman-proxy /etc/foreman-proxy/freeipa.keytab", "satellite-installer --foreman-proxy-realm true --foreman-proxy-realm-keytab /etc/foreman-proxy/freeipa.keytab --foreman-proxy-realm-principal [email protected] --foreman-proxy-realm-provider freeipa", "cp /etc/ipa/ca.crt /etc/pki/ca-trust/source/anchors/ipa.crt update-ca-trust enable update-ca-trust", "systemctl restart foreman-proxy", "ipa hostgroup-add hostgroup_name --desc= hostgroup_description", "ipa automember-add --type=hostgroup hostgroup_name automember_rule", "ipa automember-add-condition --key=userclass --type=hostgroup --inclusive-regex= ^webserver hostgroup_name ---------------------------------- Added condition(s) to \" hostgroup_name 
\" ---------------------------------- Automember Rule: automember_rule Inclusive Regex: userclass= ^webserver ---------------------------- Number of conditions added 1 ----------------------------" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/installing_capsule_server/configuring-external-services
Chapter 6. Device Drivers
Chapter 6. Device Drivers 6.1. New drivers Network drivers Maxlinear Ethernet GPY Driver ( mxl-gpy ) Realtek 802.11ax wireless 8852A driver ( rtw89_8852a ) Realtek 802.11ax wireless 8852AE driver ( rtw89_8852ae ) Graphics drivers and miscellaneous drivers MHI Host Interface ( mhi ) Modem Host Interface (MHI) PCI controller driver ( mhi_pci_generic ) IDXD driver dsa_bus_type driver ( idxd_bus ) AMD PassThru DMA driver ( ptdma ) Cirrus Logic DSP Support ( cs_dsp ) DRM DisplayPort helper ( drm_dp_helper ) DRM Buddy Allocator ( drm_buddy ) DRM SHMEM memory-management helpers ( drm_shmem_helper ) DRM driver using bochs dispi interface ( bochs ) Intel(R) PMT Class driver ( pmt_class ) Intel(R) PMT Crashlog driver ( pmt_crashlog ) Intel(R) PMT Telemetry driver ( pmt_telemetry ) Intel(R) speed select interface driver ( isst_if_common ) Intel(R) speed select interface mailbox driver ( isst_if_mbox_msr ) Intel(R) speed select interface pci mailbox driver ( isst_if_mbox_pci ) Intel(R) speed select interface mmio driver ( isst_if_mmio ) Intel(R) Software Defined Silicon driver ( intel_sdsi ) Intel(R) Extended Capabilities auxiliary bus driver ( intel_vsec ) ISH ISHTP eclite client opregion driver ( ishtp_eclite ) Serial multi instantiate pseudo device driver ( serial-multi-instantiate ) AMD(R) SPI Master Controller Driver ( spi-amd ) 6.2. Updated drivers Network drivers VMware vmxnet3 virtual NIC driver ( vmxnet3 ) has been updated to version 1.7.0.0-k. Intel(R) PRO/1000 Network Driver ( e1000e ) has been updated to version 4.18.0-425.3.1. Intel(R) Ethernet Switch Host Interface Driver ( fm10k ) has been updated to version 4.18.0-425.3.1. Intel(R) Ethernet Connection XL710 Network Driver ( i40e ) has been updated to version 4.18.0-425.3.1. Intel(R) Ethernet Adaptive Virtual Function Network Driver ( iavf ) has been updated to version 4.18.0-425.3.1. Intel(R) Gigabit Ethernet Network Driver ( igb ) has been updated to version 4.18.0-425.3.1. Intel(R) Gigabit Virtual Function Network Driver ( igbvf ) has been updated to version 4.18.0-425.3.1. Intel(R) 2.5G Ethernet Linux Driver ( igc ) has been updated to version 4.18.0-425.3.1. Intel(R) 10 Gigabit PCI Express Network Driver ( ixgbe ) has been updated to version 4.18.0-425.3.1. Intel(R) 10 Gigabit Virtual Function Network Driver ( ixgbevf ) has been updated to version 4.18.0-425.3.1. Mellanox 5th generation network adapters (ConnectX series) core driver ( mlx5_core ) has been updated to version 4.18.0-425.3.1. Storage drivers Emulex LightPulse Fibre Channel SCSI driver ( lpfc ) has been updated to version 14.0.0.15. MPI3 Storage Controller Device Driver ( mpi3mr ) has been updated to version 8.0.0.69.0. LSI MPT Fusion SAS 3.0 Device Driver ( mpt3sas ) has been updated to version 42.100.00.00. QLogic Fibre Channel HBA Driver ( qla2xxx ) has been updated to version 10.02.07.400-k. Driver for Microchip Smart Family Controller ( smartpqi ) has been updated to version 2.1.18-045. Graphics and miscellaneous driver updates Standalone drm driver for the VMware SVGA device ( vmwgfx ) has been updated to version 2.20.0.0.
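To confirm which driver version a given host is actually running, you can query the module metadata or the interface binding directly. This is a generic sketch rather than anything specific to this release; mlx5_core and eth0 are placeholder examples, and not every in-tree module populates a version field in modinfo:

# Show module metadata, including the version field when the module provides one
modinfo mlx5_core | grep -i -E '^(version|filename)'
# Show the driver name and version bound to a specific network interface
ethtool -i eth0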
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.7_release_notes/device_drivers
Chapter 5. Asset execution options with Red Hat Decision Manager
Chapter 5. Asset execution options with Red Hat Decision Manager After you build and deploy your Red Hat Decision Manager project to KIE Server or other environment, you can execute the deployed assets for testing or for runtime consumption. You can also execute assets locally in addition to or instead of executing them after deployment. The following options are the main methods for Red Hat Decision Manager asset execution: Table 5.1. Asset execution options Execution option Description Documentation Execution in KIE Server If you deployed Red Hat Decision Manager project assets to KIE Server, you can use the KIE Server REST API or Java client API to execute and interact with the deployed assets. You can also use Business Central or the headless Process Automation Manager controller outside of Business Central to manage the configurations and KIE containers in the KIE Server instances associated with your deployed assets. Interacting with Red Hat Decision Manager using KIE APIs Execution in an embedded Java application If you deployed Red Hat Decision Manager project assets in your own Java virtual machine (JVM) environment, microservice, or application server, you can use custom APIs or application interactions with core KIE APIs (not KIE Server APIs) to execute assets in the embedded engine. KIE Public API Execution in a local environment for extended testing As part of your development cycle, you can execute assets locally to ensure that the assets you have created in Red Hat Decision Manager function as intended. You can use local execution in addition to or instead of executing assets after deployment. "Executing rules" in Designing a decision service using DRL rules
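To make the KIE Server row of the table above concrete, the sketch below sends a batch of runtime commands to a deployed KIE container over the KIE Server REST API. It is illustrative only: the host, credentials, container ID my-container, fact type com.example.Person, and output identifiers are placeholders, and the exact endpoint path and payload schema depend on your KIE Server version, so verify them against Interacting with Red Hat Decision Manager using KIE APIs:

# Insert a fact and fire all rules in the container 'my-container' (all names are placeholders)
curl -u myuser:mypassword \
  -H "Content-Type: application/json" -H "Accept: application/json" \
  -X POST http://kie-server.example.com:8080/kie-server/services/rest/server/containers/instances/my-container \
  -d '{"lookup": null, "commands": [{"insert": {"object": {"com.example.Person": {"name": "Ada", "age": 30}}, "out-identifier": "person"}}, {"fire-all-rules": {"out-identifier": "fired"}}]}'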
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/designing_your_decision_management_architecture_for_red_hat_decision_manager/project-asset-execution-options-ref_decision-management-architecture
33.2. Installation Method
33.2. Installation Method Figure 33.2. Installation Method The Installation Method screen allows you to choose whether to perform a new installation or an upgrade. If you choose upgrade, the Partition Information and Package Selection options are disabled. They are not supported for kickstart upgrades. Choose the type of kickstart installation or upgrade from the following options: DVD - Choose this option to install or upgrade from the Red Hat Enterprise Linux DVD. NFS - Choose this option to install or upgrade from an NFS shared directory. In the text field for the NFS server, enter a fully-qualified domain name or IP address. For the NFS directory, enter the name of the NFS directory that contains the variant directory of the installation tree. For example, if the NFS server contains the directory /mirrors/redhat/i386/Server/ , enter /mirrors/redhat/i386/ for the NFS directory. FTP - Choose this option to install or upgrade from an FTP server. In the FTP server text field, enter a fully-qualified domain name or IP address. For the FTP directory, enter the name of the FTP directory that contains the variant directory. For example, if the FTP server contains the directory /mirrors/redhat/i386/Server/ , enter /mirrors/redhat/i386/Server/ for the FTP directory. If the FTP server requires a username and password, specify them as well. HTTP - Choose this option to install or upgrade from an HTTP server. In the text field for the HTTP server, enter the fully-qualified domain name or IP address. For the HTTP directory, enter the name of the HTTP directory that contains the variant directory. For example, if the HTTP server contains the directory /mirrors/redhat/i386/Server/ , enter /mirrors/redhat/i386/Server/ for the HTTP directory. Hard Drive - Choose this option to install or upgrade from a hard drive. Hard drive installations require the use of ISO images. Be sure to verify that the ISO images are intact before you start the installation. To verify them, use an md5sum program as well as the linux mediacheck boot option as discussed in Section 28.6.1, "Verifying Boot Media" . Enter the hard drive partition that contains the ISO images (for example, /dev/hda1 ) in the Hard Drive Partition text box. Enter the directory that contains the ISO images in the Hard Drive Directory text box.
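Each choice on this screen corresponds to an installation source command in the kickstart file that the Kickstart Configurator generates. The fragment below is a minimal sketch of those commands; the host names, credentials, and paths are placeholders based on the examples in this section, and a kickstart file must contain exactly one installation source:

install
# NFS source: point --dir at the directory containing the variant directory
nfs --server=nfs.example.com --dir=/mirrors/redhat/i386/
# Alternatively, an HTTP or FTP source (embed FTP credentials in the URL if the server requires them):
# url --url http://http.example.com/mirrors/redhat/i386/Server/
# url --url ftp://ftpuser:ftppassword@ftp.example.com/mirrors/redhat/i386/Server/
# Or a hard drive source holding the ISO images:
# harddrive --partition=/dev/hda1 --dir=/path/to/iso-directory/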
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-redhat-config-kickstart-install
Chapter 5. Authentication and Interoperability
Chapter 5. Authentication and Interoperability Support for central management of SSH keys Previously, it was not possible to centrally manage host and user SSH public keys. Red Hat Enterprise Linux 6.3 includes SSH public key management for Identity Management servers as a Technology Preview. OpenSSH on Identity Management clients is automatically configured to use public keys which are stored on the Identity Management server. SSH host and user identities can now be managed centrally in Identity Management. SELinux user mapping Red Hat Enterprise Linux 6.3 introduces the ability to control the SELinux context of a user on a remote system. SELinux user map rules can be defined and, optionally, associated with HBAC rules. These maps define the context a user receives depending on the host they are logging into and the group membership. When a user logs into a remote host which is configured to use SSSD with the Identity Management backend, the user's SELinux context is automatically set according to mapping rules defined for that user. For more information, refer to http://freeipa.org/page/SELinux_user_mapping . This feature is considered a Technology Preview. Multiple required methods of authentication for sshd SSH can now be set up to require multiple ways of authentication (whereas previously SSH allowed multiple ways of authentication of which only one was required for a successful login); for example, logging in to an SSH-enabled machine requires both a passphrase and a public key to be entered. The RequiredAuthentications1 and RequiredAuthentications2 options can be configured in the /etc/ssh/sshd_config file to specify authentications that are required for a successful log in. For example: For more information on the aforementioned /etc/ssh/sshd_config options, refer to the sshd_config man page. SSSD support for automount map caching In Red Hat Enterprise Linux 6.3, SSSD includes a new Technology Preview feature: support for caching automount maps. This feature provides several advantages to environments that operate with autofs : Cached automount maps make it easy for a client machine to perform mount operations even when the LDAP server is unreachable, but the NFS server remains reachable. When the autofs daemon is configured to look up automount maps via SSSD, only a single file has to be configured: /etc/sssd/sssd.conf . Previously, the /etc/sysconfig/autofs file had to be configured to fetch autofs data. Caching the automount maps results in faster performance on the client and lower traffic on the LDAP server. Change in SSSD debug_level behavior SSSD has changed the behavior of the debug_level option in the /etc/sssd/sssd.conf file. Previously, it was possible to set the debug_level option in the [sssd] configuration section and the result would be that this became the default setting for other configuration sections, unless they explicitly overrode it. Several changes to internal debug logging features necessitated that the debug_level option must always be specified independently in each section of the configuration file, instead of acquiring its default from the [sssd] section. As a result, after updating to the latest version of SSSD, users may need to update their configurations in order to continue receiving debug logging at the same level. Users that configure SSSD on a per-machine basis can use a simple Python utility that updates their existing configuration in a compatible way. 
This can be accomplished by running the following command as root: This utility makes the following changes to the configuration file: it checks whether the debug_level option was specified in the [sssd] section. If so, it adds that same level value to every other section in the sssd.conf file for which debug_level is unspecified. If the debug_level option already exists explicitly in another section, it is left unchanged. Users who rely on central configuration management tools need to make these same changes manually in the appropriate tool. New ldap_chpass_update_last_change option A new option, ldap_chpass_update_last_change , has been added to the SSSD configuration. If this option is enabled, SSSD attempts to change the shadowLastChange LDAP attribute to the current time. Note that this applies only when the LDAP password policy is used (usually handled by the LDAP server), that is, when the LDAP extended operation is used to change the password. Also note that the attribute has to be writable by the user who is changing the password.
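To illustrate the per-section behavior of debug_level and the new ldap_chpass_update_last_change option, the following /etc/sssd/sssd.conf fragment is a minimal sketch, not a recommended configuration; the domain name example.com, the level value 6, and the id_provider choice are placeholders:

[sssd]
services = nss, pam
domains = example.com
debug_level = 6

[nss]
debug_level = 6

[pam]
debug_level = 6

[domain/example.com]
id_provider = ldap
debug_level = 6
# Only takes effect when passwords are changed through the LDAP extended operation
ldap_chpass_update_last_change = True

Because debug_level set under [sssd] no longer propagates to other sections, the option is repeated explicitly in each section that should produce debug logging.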
[ "~]# echo \"RequiredAuthentications2 publickey,password\" >> /etc/ssh/sshd_config", "~]# python /usr/lib/python2.6/site-packages/sssd_update_debug_levels.py" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_release_notes/authentication_interoperability
Chapter 6. DNS Operator in OpenShift Container Platform
Chapter 6. DNS Operator in OpenShift Container Platform The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods, enabling DNS-based Kubernetes Service discovery in OpenShift Container Platform. 6.1. DNS Operator The DNS Operator implements the dns API from the operator.openshift.io API group. The Operator deploys CoreDNS using a daemon set, creates a service for the daemon set, and configures the kubelet to instruct pods to use the CoreDNS service IP address for name resolution. Procedure The DNS Operator is deployed during installation with a Deployment object. Use the oc get command to view the deployment status: USD oc get -n openshift-dns-operator deployment/dns-operator Example output NAME READY UP-TO-DATE AVAILABLE AGE dns-operator 1/1 1 1 23h Use the oc get command to view the state of the DNS Operator: USD oc get clusteroperator/dns Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE dns 4.1.0-0.11 True False False 92m AVAILABLE , PROGRESSING and DEGRADED provide information about the status of the operator. AVAILABLE is True when at least 1 pod from the CoreDNS daemon set reports an Available status condition. 6.2. Changing the DNS Operator managementState DNS manages the CoreDNS component to provide a name resolution service for pods and services in the cluster. The managementState of the DNS Operator is set to Managed by default, which means that the DNS Operator is actively managing its resources. You can change it to Unmanaged , which means the DNS Operator is not managing its resources. The following are use cases for changing the DNS Operator managementState : You are a developer and want to test a configuration change to see if it fixes an issue in CoreDNS. You can stop the DNS Operator from overwriting the fix by setting the managementState to Unmanaged . You are a cluster administrator and have reported an issue with CoreDNS, but need to apply a workaround until the issue is fixed. You can set the managementState field of the DNS Operator to Unmanaged to apply the workaround. Procedure Change managementState DNS Operator: oc patch dns.operator.openshift.io default --type merge --patch '{"spec":{"managementState":"Unmanaged"}}' 6.3. Controlling DNS pod placement The DNS Operator has two daemon sets: one for CoreDNS and one for managing the /etc/hosts file. The daemon set for /etc/hosts must run on every node host to add an entry for the cluster image registry to support pulling images. Security policies can prohibit communication between pairs of nodes, which prevents the daemon set for CoreDNS from running on every node. As a cluster administrator, you can use a custom node selector to configure the daemon set for CoreDNS to run or not run on certain nodes. Prerequisites You installed the oc CLI. You are logged in to the cluster with a user with cluster-admin privileges. 
Procedure To prevent communication between certain nodes, configure the spec.nodePlacement.nodeSelector API field: Modify the DNS Operator object named default : USD oc edit dns.operator/default Specify a node selector that includes only control plane nodes in the spec.nodePlacement.nodeSelector API field: spec: nodePlacement: nodeSelector: node-role.kubernetes.io/worker: "" To allow the daemon set for CoreDNS to run on nodes, configure a taint and toleration: Modify the DNS Operator object named default : USD oc edit dns.operator/default Specify a taint key and a toleration for the taint: spec: nodePlacement: tolerations: - effect: NoExecute key: "dns-only" operators: Equal value: abc tolerationSeconds: 3600 1 1 If the taint is dns-only , it can be tolerated indefinitely. You can omit tolerationSeconds . 6.4. View the default DNS Every new OpenShift Container Platform installation has a dns.operator named default . Procedure Use the oc describe command to view the default dns : USD oc describe dns.operator/default Example output Name: default Namespace: Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: DNS ... Status: Cluster Domain: cluster.local 1 Cluster IP: 172.30.0.10 2 ... 1 The Cluster Domain field is the base DNS domain used to construct fully qualified pod and service domain names. 2 The Cluster IP is the address pods query for name resolution. The IP is defined as the 10th address in the service CIDR range. To find the service CIDR of your cluster, use the oc get command: USD oc get networks.config/cluster -o jsonpath='{USD.status.serviceNetwork}' Example output [172.30.0.0/16] 6.5. Using DNS forwarding You can use DNS forwarding to override the default forwarding configuration in the /etc/resolv.conf file in the following ways: Specify name servers for every zone. If the forwarded zone is the Ingress domain managed by OpenShift Container Platform, then the upstream name server must be authorized for the domain. Provide a list of upstream DNS servers. Change the default forwarding policy. Note A DNS forwarding configuration for the default domain can have both the default servers specified in the /etc/resolv.conf file and the upstream DNS servers. Procedure Modify the DNS Operator object named default : USD oc edit dns.operator/default After you issue the command, the Operator creates and updates the config map named dns-default with additional server configuration blocks based on Server . If none of the servers have a zone that matches the query, then name resolution falls back to the upstream DNS servers. Configuring DNS forwarding apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: servers: - name: example-server 1 zones: 2 - example.com forwardPlugin: policy: Random 3 upstreams: 4 - 1.1.1.1 - 2.2.2.2:5353 upstreamResolvers: 5 policy: Random 6 upstreams: 7 - type: SystemResolvConf 8 - type: Network address: 1.2.3.4 9 port: 53 10 1 Must comply with the rfc6335 service name syntax. 2 Must conform to the definition of a subdomain in the rfc1123 service name syntax. The cluster domain, cluster.local , is an invalid subdomain for the zones field. 3 Defines the policy to select upstream resolvers. Default value is Random . You can also use the values RoundRobin , and Sequential . 4 A maximum of 15 upstreams is allowed per forwardPlugin . 5 Optional. You can use it to override the default policy and forward DNS resolution to the specified DNS resolvers (upstream resolvers) for the default domain. 
If you do not provide any upstream resolvers, the DNS name queries go to the servers in /etc/resolv.conf . 6 Determines the order in which upstream servers are selected for querying. You can specify one of these values: Random , RoundRobin , or Sequential . The default value is Sequential . 7 Optional. You can use it to provide upstream resolvers. 8 You can specify two types of upstreams - SystemResolvConf and Network . SystemResolvConf configures the upstream to use /etc/resolv.conf and Network defines a Networkresolver . You can specify one or both. 9 If the specified type is Network , you must provide an IP address. The address field must be a valid IPv4 or IPv6 address. 10 If the specified type is Network , you can optionally provide a port. The port field must have a value between 1 and 65535 . If you do not specify a port for the upstream, by default port 853 is tried. Optional: When working in a highly regulated environment, you might need the ability to secure DNS traffic when forwarding requests to upstream resolvers so that you can ensure additional DNS traffic and data privacy. Cluster administrators can configure transport layer security (TLS) for forwarded DNS queries. Configuring DNS forwarding with TLS apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: servers: - name: example-server 1 zones: 2 - example.com forwardPlugin: transportConfig: transport: TLS 3 tls: caBundle: name: mycacert serverName: dnstls.example.com 4 policy: Random 5 upstreams: 6 - 1.1.1.1 - 2.2.2.2:5353 upstreamResolvers: 7 transportConfig: transport: TLS tls: caBundle: name: mycacert serverName: dnstls.example.com upstreams: - type: Network 8 address: 1.2.3.4 9 port: 53 10 1 Must comply with the rfc6335 service name syntax. 2 Must conform to the definition of a subdomain in the rfc1123 service name syntax. The cluster domain, cluster.local , is an invalid subdomain for the zones field. The cluster domain, cluster.local , is an invalid subdomain for zones . 3 When configuring TLS for forwarded DNS queries, set the transport field to have the value TLS . By default, CoreDNS caches forwarded connections for 10 seconds. CoreDNS will hold a TCP connection open for those 10 seconds if no request is issued. With large clusters, ensure that your DNS server is aware that it might get many new connections to hold open because you can initiate a connection per node. Set up your DNS hierarchy accordingly to avoid performance issues. 4 When configuring TLS for forwarded DNS queries, this is a mandatory server name used as part of the server name indication (SNI) to validate the upstream TLS server certificate. 5 Defines the policy to select upstream resolvers. Default value is Random . You can also use the values RoundRobin , and Sequential . 6 Required. You can use it to provide upstream resolvers. A maximum of 15 upstreams entries are allowed per forwardPlugin entry. 7 Optional. You can use it to override the default policy and forward DNS resolution to the specified DNS resolvers (upstream resolvers) for the default domain. If you do not provide any upstream resolvers, the DNS name queries go to the servers in /etc/resolv.conf . 8 Network type indicates that this upstream resolver should handle forwarded requests separately from the upstream resolvers listed in /etc/resolv.conf . Only the Network type is allowed when using TLS and you must provide an IP address. 9 The address field must be a valid IPv4 or IPv6 address. 10 You can optionally provide a port. 
The port must have a value between 1 and 65535 . If you do not specify a port for the upstream, by default port 853 is tried. Note If servers is undefined or invalid, the config map only contains the default server. Verification View the config map: USD oc get configmap/dns-default -n openshift-dns -o yaml Sample DNS ConfigMap based on sample DNS apiVersion: v1 data: Corefile: | example.com:5353 { forward . 1.1.1.1 2.2.2.2:5353 } bar.com:5353 example.com:5353 { forward . 3.3.3.3 4.4.4.4:5454 1 } .:5353 { errors health kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure upstream fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . /etc/resolv.conf 1.2.3.4:53 { policy Random } cache 30 reload } kind: ConfigMap metadata: labels: dns.operator.openshift.io/owning-dns: default name: dns-default namespace: openshift-dns 1 Changes to the forwardPlugin triggers a rolling update of the CoreDNS daemon set. Additional resources For more information on DNS forwarding, see the CoreDNS forward documentation . 6.6. DNS Operator status You can inspect the status and view the details of the DNS Operator using the oc describe command. Procedure View the status of the DNS Operator: USD oc describe clusteroperators/dns 6.7. DNS Operator logs You can view DNS Operator logs by using the oc logs command. Procedure View the logs of the DNS Operator: USD oc logs -n openshift-dns-operator deployment/dns-operator -c dns-operator 6.8. Setting the CoreDNS log level You can configure the CoreDNS log level to determine the amount of detail in logged error messages. The valid values for CoreDNS log level are Normal , Debug , and Trace . The default logLevel is Normal . Note The errors plugin is always enabled. The following logLevel settings report different error responses: logLevel : Normal enables the "errors" class: log . { class error } . logLevel : Debug enables the "denial" class: log . { class denial error } . logLevel : Trace enables the "all" class: log . { class all } . Procedure To set logLevel to Debug , enter the following command: USD oc patch dnses.operator.openshift.io/default -p '{"spec":{"logLevel":"Debug"}}' --type=merge To set logLevel to Trace , enter the following command: USD oc patch dnses.operator.openshift.io/default -p '{"spec":{"logLevel":"Trace"}}' --type=merge Verification To ensure the desired log level was set, check the config map: USD oc get configmap/dns-default -n openshift-dns -o yaml 6.9. Setting the CoreDNS Operator log level Cluster administrators can configure the Operator log level to more quickly track down OpenShift DNS issues. The valid values for operatorLogLevel are Normal , Debug , and Trace . Trace has the most detailed information. The default operatorlogLevel is Normal . There are seven logging levels for issues: Trace, Debug, Info, Warning, Error, Fatal and Panic. After the logging level is set, log entries with that severity or anything above it will be logged. operatorLogLevel: "Normal" sets logrus.SetLogLevel("Info") . operatorLogLevel: "Debug" sets logrus.SetLogLevel("Debug") . operatorLogLevel: "Trace" sets logrus.SetLogLevel("Trace") . Procedure To set operatorLogLevel to Debug , enter the following command: USD oc patch dnses.operator.openshift.io/default -p '{"spec":{"operatorLogLevel":"Debug"}}' --type=merge To set operatorLogLevel to Trace , enter the following command: USD oc patch dnses.operator.openshift.io/default -p '{"spec":{"operatorLogLevel":"Trace"}}' --type=merge 6.10. 
Tuning the CoreDNS cache You can configure the maximum duration of both successful or unsuccessful caching, also known as positive or negative caching respectively, done by CoreDNS. Tuning the duration of caching of DNS query responses can reduce the load for any upstream DNS resolvers. Procedure Edit the DNS Operator object named default by running the following command: USD oc edit dns.operator.openshift.io/default Modify the time-to-live (TTL) caching values: Configuring DNS caching apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: cache: positiveTTL: 1h 1 negativeTTL: 0.5h10m 2 1 The string value 1h is converted to its respective number of seconds by CoreDNS. If this field is omitted, the value is assumed to be 0s and the cluster uses the internal default value of 900s as a fallback. 2 The string value can be a combination of units such as 0.5h10m and is converted to its respective number of seconds by CoreDNS. If this field is omitted, the value is assumed to be 0s and the cluster uses the internal default value of 30s as a fallback. Warning Setting TTL fields to low values could lead to an increased load on the cluster, any upstream resolvers, or both.
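After changing forwarding, caching, or log levels, it can be helpful to confirm name resolution from inside the cluster and to watch the CoreDNS logs directly. The commands below are a sketch under stated assumptions: the test pod name, the lookup target, and the registry.access.redhat.com/ubi8/ubi image (assumed to provide getent and to be available to your cluster) are placeholders; substitute an image from your own registry if needed:

# Resolve a name from a throwaway pod to exercise the in-cluster DNS path
oc run -it --rm dns-test --image=registry.access.redhat.com/ubi8/ubi --restart=Never -- getent hosts kubernetes.default.svc.cluster.local
# Tail the CoreDNS container logs; with logLevel set to Debug or Trace, denied or all DNS queries appear here
oc logs -n openshift-dns daemonset/dns-default -c dns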
[ "oc get -n openshift-dns-operator deployment/dns-operator", "NAME READY UP-TO-DATE AVAILABLE AGE dns-operator 1/1 1 1 23h", "oc get clusteroperator/dns", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE dns 4.1.0-0.11 True False False 92m", "patch dns.operator.openshift.io default --type merge --patch '{\"spec\":{\"managementState\":\"Unmanaged\"}}'", "oc edit dns.operator/default", "spec: nodePlacement: nodeSelector: node-role.kubernetes.io/worker: \"\"", "oc edit dns.operator/default", "spec: nodePlacement: tolerations: - effect: NoExecute key: \"dns-only\" operators: Equal value: abc tolerationSeconds: 3600 1", "oc describe dns.operator/default", "Name: default Namespace: Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: DNS Status: Cluster Domain: cluster.local 1 Cluster IP: 172.30.0.10 2", "oc get networks.config/cluster -o jsonpath='{USD.status.serviceNetwork}'", "[172.30.0.0/16]", "oc edit dns.operator/default", "apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: servers: - name: example-server 1 zones: 2 - example.com forwardPlugin: policy: Random 3 upstreams: 4 - 1.1.1.1 - 2.2.2.2:5353 upstreamResolvers: 5 policy: Random 6 upstreams: 7 - type: SystemResolvConf 8 - type: Network address: 1.2.3.4 9 port: 53 10", "apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: servers: - name: example-server 1 zones: 2 - example.com forwardPlugin: transportConfig: transport: TLS 3 tls: caBundle: name: mycacert serverName: dnstls.example.com 4 policy: Random 5 upstreams: 6 - 1.1.1.1 - 2.2.2.2:5353 upstreamResolvers: 7 transportConfig: transport: TLS tls: caBundle: name: mycacert serverName: dnstls.example.com upstreams: - type: Network 8 address: 1.2.3.4 9 port: 53 10", "oc get configmap/dns-default -n openshift-dns -o yaml", "apiVersion: v1 data: Corefile: | example.com:5353 { forward . 1.1.1.1 2.2.2.2:5353 } bar.com:5353 example.com:5353 { forward . 3.3.3.3 4.4.4.4:5454 1 } .:5353 { errors health kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure upstream fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . /etc/resolv.conf 1.2.3.4:53 { policy Random } cache 30 reload } kind: ConfigMap metadata: labels: dns.operator.openshift.io/owning-dns: default name: dns-default namespace: openshift-dns", "oc describe clusteroperators/dns", "oc logs -n openshift-dns-operator deployment/dns-operator -c dns-operator", "oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"logLevel\":\"Debug\"}}' --type=merge", "oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"logLevel\":\"Trace\"}}' --type=merge", "oc get configmap/dns-default -n openshift-dns -o yaml", "oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"operatorLogLevel\":\"Debug\"}}' --type=merge", "oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"operatorLogLevel\":\"Trace\"}}' --type=merge", "oc edit dns.operator.openshift.io/default", "apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: cache: positiveTTL: 1h 1 negativeTTL: 0.5h10m 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/networking/dns-operator
Red Hat build of Apache Camel for Quarkus Reference
Red Hat build of Apache Camel for Quarkus Reference Red Hat build of Apache Camel 4.0 Red Hat build of Apache Camel for Quarkus provided by Red Hat Red Hat build of Apache Camel Documentation Team [email protected] Red Hat build of Apache Camel Support Team http://access.redhat.com/support
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_quarkus_reference/index
Chapter 79. tripleo
Chapter 79. tripleo This chapter describes the commands under the tripleo command. 79.1. tripleo config generate ansible Generate the default ansible.cfg for deployments. Usage: Table 79.1. Command arguments Value Summary --deployment-user DEPLOYMENT_USER User who executes the tripleo config generate command. Defaults to stack. --output-dir OUTPUT_DIR Directory to output ansible.cfg and ansible.log files. 79.2. tripleo container image build Build tripleo container images with tripleo-ansible. Usage: Table 79.2. Command arguments Value Summary -h, --help Show this help message and exit --authfile <authfile> Path of the authentication file. use REGISTRY_AUTH_FILE environment variable to override. (default: /root/containers/auth.json) --base <base-image> Base image name, with optional version. can be centos:8 , base name image will be centos but centos:8 will be pulled to build the base image. (default: ubi8) --config-file <config-file> Yaml config file specifying the images to build. (default: tripleo_containers.yaml) --config-path <config-path> Base configuration path. this is the base path for all container-image files. The defined containers must reside within a tcib folder that is in this path. If this option is set, the default path for <config-file> will be modified. (default: /usr/share/tripleo- common/container-images) --distro <distro> Distro name, if undefined the system will build using the host distro. (default: centos) --exclude <container-name> Name of one container to match against the list of containers to be built to skip. Should be specified multiple times when skipping multiple containers. (default: []) --extra-config <extra-config> Apply additional options from a given configuration YAML file. This will apply to all containers built. (default: None) --namespace <registry-namespace> Container registry namespace (default: tripleotrain) --registry <registry-url> Container registry url (default: localhost) --skip-build Skip or not the build of the images (default: false) --tag <image-tag> Image tag (default: latest) --prefix <image-prefix> Image prefix. (default: openstack) --push Enable image push to a given registry. (default: False) --label <label-data> Add labels to the containers. this option can be specified multiple times. Each label is a key=value pair. --volume <volume-path> Container bind mount used when building the image. Should be specified multiple times if multiple volumes.(default: [ /etc/yum.repos.d:/etc/distro.repos.d:z , /etc/pki/rpm-gpg:/etc/pki/rpm-gpg:z ]) --work-dir <work-directory> Tripleo container builds directory, storing configs and logs for each image and its dependencies. (default: /tmp/container-builds) --rhel-modules <rhel-modules> A comma separated list of rhel modules to enable with their version. Example: mariadb:10.3,virt:8.3 . 79.3. tripleo container image delete Delete specified image from registry. Usage: Table 79.3. Positional arguments Value Summary <image to delete> Full url of image to be deleted in the form <fqdn>:<port>/path/to/image Table 79.4. Command arguments Value Summary -h, --help Show this help message and exit --registry-url <registry url> Url of registry images are to be listed from in the form <fqdn>:<port>. --username <username> Username for image registry. --password <password> Password for image registry. -y, --yes Skip yes/no prompt (assume yes). 79.4. tripleo container image hotfix Hotfix tripleo container images with tripleo-ansible. Usage: Table 79.5. 
Command arguments Value Summary -h, --help Show this help message and exit --image <images> Fully qualified reference to the source image to be modified. Can be specified multiple times (one per image) (default: []). --rpms-path <rpms-path> Path containing rpms to install (default: none). --tag <image-tag> Image hotfix tag (default: latest) 79.5. tripleo container image list List images discovered in registry. Usage: Table 79.6. Command arguments Value Summary -h, --help Show this help message and exit --registry-url <registry url> Url of registry images are to be listed from in the form <fqdn>:<port>. --username <username> Username for image registry. --password <password> Password for image registry. Table 79.7. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 79.8. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 79.9. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.10. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.6. tripleo container image prepare default Generate a default ContainerImagePrepare parameter. Usage: Table 79.11. Command arguments Value Summary -h, --help Show this help message and exit --output-env-file <file path> File to write environment file containing default ContainerImagePrepare value. --local-push-destination Include a push_destination to trigger upload to a local registry. --enable-registry-login Use this flag to enable the flag to have systems attempt to login to a remote registry prior to pulling their containers. This flag should be used when --local-push-destination is NOT used and the target systems will have network connectivity to the remote registries. Do not use this for an overcloud that may not have network connectivity to a remote registry. 79.7. tripleo container image prepare Prepare and upload containers from a single command. Usage: Table 79.12. Command arguments Value Summary -h, --help Show this help message and exit --environment-file <file path>, -e <file path> Environment file containing the containerimageprepare parameter which specifies all prepare actions. Also, environment files specifying which services are containerized. Entries will be filtered to only contain images used by containerized services. (Can be specified more than once.) --environment-directory <HEAT ENVIRONMENT DIRECTORY> Environment file directories that are automatically added to the environment. Can be specified more than once. Files in directories are loaded in ascending sort order. --roles-file ROLES_FILE, -r ROLES_FILE Roles file, overrides the default roles_data.yaml in the t-h-t templates directory used for deployment. May be an absolute path or the path relative to the templates dir. 
--output-env-file <file path> File to write heat environment file which specifies all image parameters. Any existing file will be overwritten. --dry-run Do not perform any pull, modify, or push operations. The environment file will still be populated as if these operations were performed. --cleanup <full, partial, none> Cleanup behavior for local images left after upload. The default full will attempt to delete all local images. partial will leave images required for deployment on this host. none will do no cleanup. 79.8. tripleo container image push Push specified image to registry. Usage: Table 79.13. Positional arguments Value Summary <image to push> Container image to upload. should be in the form of <registry>/<namespace>/<name>:<tag>. If tag is not provided, then latest will be used. Table 79.14. Command arguments Value Summary -h, --help Show this help message and exit --local Use this flag if the container image is already on the current system and does not need to be pulled from a remote registry. --registry-url <registry url> Url of the destination registry in the form <fqdn>:<port>. --append-tag APPEND_TAG Tag to append to the existing tag when pushing the container. --username <username> Username for the destination image registry. --password <password> Password for the destination image registry. --source-username <source_username> Username for the source image registry. --source-password <source_password> Password for the source image registry. --dry-run Perform a dry run upload. the upload action is not performed, but the authentication process is attempted. --multi-arch Enable multi arch support for the upload. --cleanup Remove local copy of the image after uploading 79.9. tripleo container image show Show image selected from the registry. Usage: Table 79.15. Positional arguments Value Summary <image to inspect> Image to be inspected, for example: docker.io/library/centos:7 or docker://docker.io/library/centos:7 Table 79.16. Command arguments Value Summary -h, --help Show this help message and exit --username <username> Username for image registry. --password <password> Password for image registry. Table 79.17. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to json -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 79.18. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.19. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.20. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.10. tripleo deploy Deploy containerized Undercloud Usage: Table 79.21. Command arguments Value Summary --templates [TEMPLATES] The directory containing the heat templates to deploy --standalone Run deployment as a standalone deployment with no undercloud. --upgrade Upgrade an existing deployment. -y, --yes Skip yes/no prompt (assume yes). --stack STACK Name for the ephemeral (one-time create and forget) heat stack. --output-dir OUTPUT_DIR Directory to output state, processed heat templates, ansible deployment files. 
--output-only Do not execute the ansible playbooks. by default the playbooks are saved to the output-dir and then executed. --standalone-role STANDALONE_ROLE The role to use for standalone configuration when populating the deployment actions. -t <TIMEOUT>, --timeout <TIMEOUT> Deployment timeout in minutes. -e <HEAT ENVIRONMENT FILE>, --environment-file <HEAT ENVIRONMENT FILE> Environment files to be passed to the heat stack- create or heat stack-update command. (Can be specified more than once.) --roles-file ROLES_FILE, -r ROLES_FILE Roles file, overrides the default roles_data_undercloud.yaml in the t-h-t templates directory used for deployment. May be an absolute path or the path relative to the templates dir. --networks-file NETWORKS_FILE, -n NETWORKS_FILE Roles file, overrides the default /dev/null in the t-h-t templates directory used for deployment. May be an absolute path or the path relative to the templates dir. --plan-environment-file PLAN_ENVIRONMENT_FILE, -p PLAN_ENVIRONMENT_FILE Plan environment file, overrides the default plan- environment.yaml in the --templates directory --heat-api-port <HEAT_API_PORT> Heat api port to use for the installers private heat API instance. Optional. Default: 8006.) --heat-user <HEAT_USER> User to execute the non-privileged heat-all process. Defaults to heat. --deployment-user DEPLOYMENT_USER User who executes the tripleo deploy command. defaults to USDSUDO_USER. If USDSUDO_USER is unset it defaults to stack. --deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER The path to python interpreter to use for the deployment actions. If not specified the python version of the openstackclient will be used. This may need to be used if deploying on a python2 host from a python3 system or vice versa. --heat-container-image <HEAT_CONTAINER_IMAGE> The container image to use when launching the heat-all process. Defaults to: tripleomaster/centos-binary- heat-all:current-tripleo --heat-native [HEAT_NATIVE] Execute the heat-all process natively on this host. This option requires that the heat-all binaries be installed locally on this machine. This option is enabled by default which means heat-all is executed on the host OS directly. --local-ip <LOCAL_IP> Local ip/cidr for undercloud traffic. required. --control-virtual-ip <CONTROL_VIRTUAL_IP> Control plane vip. this allows the undercloud installer to configure a custom VIP on the control plane. --public-virtual-ip <PUBLIC_VIRTUAL_IP> Public nw vip. this allows the undercloud installer to configure a custom VIP on the public (external) NW. --local-domain <LOCAL_DOMAIN> Local domain for standalone cloud and its api endpoints --cleanup Cleanup temporary files. using this flag will remove the temporary files used during deployment in after the command is run. --hieradata-override [HIERADATA_OVERRIDE] Path to hieradata override file. when it points to a heat env file, it is passed in t-h-t via --environment-file. When the file contains legacy instack data, it is wrapped with <role>ExtraConfig and also passed in for t-h-t as a temp file created in --output-dir. Note, instack hiera data may be not t-h-t compatible and will highly likely require a manual revision. --keep-running Keep the ephemeral heat running after the stack operation is complete. This is for debugging purposes only. The ephemeral Heat can be used by openstackclient with: OS_AUTH_TYPE=none OS_ENDPOINT=http://127.0.0.1:8006/v1/admin openstack stack list where 8006 is the port specified by --heat- api-port. 
--inflight-validations Activate in-flight validations during the deploy. in- flight validations provide a robust way to ensure deployed services are running right after their activation. Defaults to False. --ansible-forks ANSIBLE_FORKS The number of ansible forks to use for the config- download ansible-playbook command. --force-stack-update Do a virtual update of the ephemeral heat stack (it cannot take real updates). New or failed deployments always have the stack_action=CREATE. This option enforces stack_action=UPDATE. --force-stack-create Do a virtual create of the ephemeral heat stack. new or failed deployments always have the stack_action=CREATE. This option enforces stack_action=CREATE. 79.11. tripleo upgrade Upgrade TripleO Usage: Table 79.22. Command arguments Value Summary --templates [TEMPLATES] The directory containing the heat templates to deploy --standalone Run deployment as a standalone deployment with no undercloud. --upgrade Upgrade an existing deployment. -y, --yes Skip yes/no prompt (assume yes). --stack STACK Name for the ephemeral (one-time create and forget) heat stack. --output-dir OUTPUT_DIR Directory to output state, processed heat templates, ansible deployment files. --output-only Do not execute the ansible playbooks. by default the playbooks are saved to the output-dir and then executed. --standalone-role STANDALONE_ROLE The role to use for standalone configuration when populating the deployment actions. -t <TIMEOUT>, --timeout <TIMEOUT> Deployment timeout in minutes. -e <HEAT ENVIRONMENT FILE>, --environment-file <HEAT ENVIRONMENT FILE> Environment files to be passed to the heat stack- create or heat stack-update command. (Can be specified more than once.) --roles-file ROLES_FILE, -r ROLES_FILE Roles file, overrides the default roles_data_undercloud.yaml in the t-h-t templates directory used for deployment. May be an absolute path or the path relative to the templates dir. --networks-file NETWORKS_FILE, -n NETWORKS_FILE Roles file, overrides the default /dev/null in the t-h-t templates directory used for deployment. May be an absolute path or the path relative to the templates dir. --plan-environment-file PLAN_ENVIRONMENT_FILE, -p PLAN_ENVIRONMENT_FILE Plan environment file, overrides the default plan- environment.yaml in the --templates directory --heat-api-port <HEAT_API_PORT> Heat api port to use for the installers private heat API instance. Optional. Default: 8006.) --heat-user <HEAT_USER> User to execute the non-privileged heat-all process. Defaults to heat. --deployment-user DEPLOYMENT_USER User who executes the tripleo deploy command. defaults to USDSUDO_USER. If USDSUDO_USER is unset it defaults to stack. --deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER The path to python interpreter to use for the deployment actions. If not specified the python version of the openstackclient will be used. This may need to be used if deploying on a python2 host from a python3 system or vice versa. --heat-container-image <HEAT_CONTAINER_IMAGE> The container image to use when launching the heat-all process. Defaults to: tripleomaster/centos-binary- heat-all:current-tripleo --heat-native [HEAT_NATIVE] Execute the heat-all process natively on this host. This option requires that the heat-all binaries be installed locally on this machine. This option is enabled by default which means heat-all is executed on the host OS directly. --local-ip <LOCAL_IP> Local ip/cidr for undercloud traffic. required. --control-virtual-ip <CONTROL_VIRTUAL_IP> Control plane vip. 
this allows the undercloud installer to configure a custom VIP on the control plane. --public-virtual-ip <PUBLIC_VIRTUAL_IP> Public nw vip. this allows the undercloud installer to configure a custom VIP on the public (external) NW. --local-domain <LOCAL_DOMAIN> Local domain for standalone cloud and its api endpoints --cleanup Cleanup temporary files. using this flag will remove the temporary files used during deployment in after the command is run. --hieradata-override [HIERADATA_OVERRIDE] Path to hieradata override file. when it points to a heat env file, it is passed in t-h-t via --environment-file. When the file contains legacy instack data, it is wrapped with <role>ExtraConfig and also passed in for t-h-t as a temp file created in --output-dir. Note, instack hiera data may be not t-h-t compatible and will highly likely require a manual revision. --keep-running Keep the ephemeral heat running after the stack operation is complete. This is for debugging purposes only. The ephemeral Heat can be used by openstackclient with: OS_AUTH_TYPE=none OS_ENDPOINT=http://127.0.0.1:8006/v1/admin openstack stack list where 8006 is the port specified by --heat- api-port. --inflight-validations Activate in-flight validations during the deploy. in- flight validations provide a robust way to ensure deployed services are running right after their activation. Defaults to False. --ansible-forks ANSIBLE_FORKS The number of ansible forks to use for the config- download ansible-playbook command. --force-stack-update Do a virtual update of the ephemeral heat stack (it cannot take real updates). New or failed deployments always have the stack_action=CREATE. This option enforces stack_action=UPDATE. --force-stack-create Do a virtual create of the ephemeral heat stack. new or failed deployments always have the stack_action=CREATE. This option enforces stack_action=CREATE. 79.12. tripleo validator group info Display detailed information about a Group Usage: Table 79.23. Command arguments Value Summary -h, --help Show this help message and exit -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric --noindent Whether to disable indenting the json --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --config CONFIG Config file path for validation. --validation-dir VALIDATION_DIR Path where the validation playbooks are located. 79.13. tripleo validator init Create the paths and infrastructure to create a community validation Usage: Table 79.24. Positional arguments Value Summary <validation_name> The name of the community validation: Validation name is limited to contain only lowercase alphanumeric characters, plus _ or - and starts with an alpha character. Ex: my-val, my_val2. This will generate an Ansible role and a playbook in /root/community-validations. Note that the structure of this directory will be created at the first use. 
Table 79.25. Command arguments Value Summary -h, --help Show this help message and exit --config CONFIG Config file path for validation. 79.14. tripleo validator list List the available validations Usage: Table 79.26. Command arguments Value Summary -h, --help Show this help message and exit -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric --noindent Whether to disable indenting the json --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width is greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable. --print-empty Print empty table if there is no data to show. --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --config CONFIG Config file path for validation. --group <group_id>[,<group_id>,... ], -g <group_id>[,<group_id>,... ] List a specific group of validations; if more than one group is required, separate the group names with commas. --category <category_id>[,<category_id>,... ] List a specific category of validations; if more than one category is required, separate the category names with commas. --product <product_id>[,<product_id>,... ] List a specific product of validations; if more than one product is required, separate the product names with commas. --validation-dir VALIDATION_DIR Path where the validation playbooks are located. 79.15. tripleo validator run Run the available validations Usage: Table 79.27. Command arguments Value Summary -h, --help Show this help message and exit --config CONFIG Config file path for validation. --limit <host1>[,<host2>,<host3>,... ] A string that identifies a single node or a comma-separated list of nodes to be upgraded in parallel in this upgrade run invocation. --ssh-user SSH_USER SSH user name for the ansible SSH connection. --validation-dir VALIDATION_DIR Path where the validation playbooks are located. --ansible-base-dir ANSIBLE_BASE_DIR Path where the ansible roles, library and plugins are located. --validation-log-dir VALIDATION_LOG_DIR Path where the log files and artifacts will be located. --inventory INVENTORY, -i INVENTORY Path of the ansible inventory. --output-log OUTPUT_LOG Path where the run result will be stored. --junitxml JUNITXML Path where the run result in junitxml format will be stored. --python-interpreter <PYTHON_INTERPRETER_PATH> Python interpreter for ansible execution. --extra-env-vars key1=<val1> [--extra-env-vars key2=<val2>] Add extra environment variables you may need to provide to your Ansible execution as KEY=VALUE pairs. Note that if you pass the same KEY multiple times, the last given VALUE for that same KEY will override the other(s). --skiplist SKIP_LIST Path where the skip list is stored. An example of the skiplist format can be found at the root of the validations-libs repository. --extra-vars key1=<val1> [--extra-vars key2=<val2>] Add ansible extra variables to the validation(s) execution as KEY=VALUE pair(s).
Note that if you pass the same KEY multiple times, the last given VALUE for that same KEY will override the other(s). --extra-vars-file /tmp/my_vars_file.[json|yaml] Absolute or relative path to a json/yaml file containing extra variable(s) to pass to one or multiple validation(s) execution. --validation <validation_id>[,<validation_id>,... ] Run specific validations; if more than one validation is required, separate the names with commas. --group <group_id>[,<group_id>,... ], -g <group_id>[,<group_id>,... ] Run specific group validations; if more than one group is required, separate the group names with commas. --category <category_id>[,<category_id>,... ] Run specific validations by category; if more than one category is required, separate the category names with commas. --product <product_id>[,<product_id>,... ] Run specific validations by product; if more than one product is required, separate the product names with commas. 79.16. tripleo validator show history Display Validations execution history Usage: Table 79.28. Command arguments Value Summary -h, --help Show this help message and exit -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric --noindent Whether to disable indenting the json --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width is greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable. --print-empty Print empty table if there is no data to show. --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --config CONFIG Config file path for validation. --validation <validation_id> Display execution history for a validation --limit HISTORY_LIMIT Display <n> most recent runs of the selected <validation>. <n> must be > 0. The default display limit is set to 15. --validation-log-dir VALIDATION_LOG_DIR Path where the validation log files are located. 79.17. tripleo validator show parameter Display Validations Parameters Usage: Table 79.29. Command arguments Value Summary -h, --help Show this help message and exit --config CONFIG Config file path for validation. --validation-dir VALIDATION_DIR Path where the validation playbooks are located. --validation <validation_id>[,<validation_id>,... ] List specific validations; if more than one validation is required, separate the names with commas. --group <group_id>[,<group_id>,... ], -g <group_id>[,<group_id>,... ] List specific group validations; if more than one group is required, separate the group names with commas. --category <category_id>[,<category_id>,... ] List specific validations by category; if more than one category is required, separate the category names with commas. --product <product_id>[,<product_id>,... ] List specific validations by product; if more than one product is required, separate the product names with commas. --download DOWNLOAD Create a json or a yaml file containing all the variables available for the validations: /tmp/myvars --format-output <format_output> Print representation of the validation. The choices of the output format are json and yaml. Table 79.30.
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 79.31. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.32. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.33. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width is greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable. --print-empty Print empty table if there is no data to show. 79.18. tripleo validator show run Display details about a Validation execution Usage: Table 79.34. Positional arguments Value Summary <uuid> UUID of the validation run Table 79.35. Command arguments Value Summary -h, --help Show this help message and exit --config CONFIG Config file path for validation. --full Show full details for the run --validation-log-dir VALIDATION_LOG_DIR Path where the validation log files are located. 79.19. tripleo validator show Display detailed information about a Validation Usage: Table 79.36. Positional arguments Value Summary <validation> Show a specific validation. Table 79.37. Command arguments Value Summary -h, --help Show this help message and exit --config CONFIG Config file path for validation. --validation-dir VALIDATION_DIR Path where the validation playbooks are located. Table 79.38. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 79.39. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.40. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.41. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width is greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable. --print-empty Print empty table if there is no data to show.
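As a usage illustration, the following sequence is a minimal sketch of how these validator subcommands are typically combined; the group name pre-deployment, the inventory path, and the run UUID are illustrative values that may differ in your environment:

openstack tripleo validator list --group pre-deployment
openstack tripleo validator run --group pre-deployment --inventory /home/stack/inventory.yaml
openstack tripleo validator show history --limit 5
openstack tripleo validator show run <uuid>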
[ "openstack tripleo config generate ansible [--deployment-user DEPLOYMENT_USER] [--output-dir OUTPUT_DIR]", "openstack tripleo container image build [-h] [--authfile <authfile>] [--base <base-image>] [--config-file <config-file>] [--config-path <config-path>] [--distro <distro>] [--exclude <container-name>] [--extra-config <extra-config>] [--namespace <registry-namespace>] [--registry <registry-url>] [--skip-build] [--tag <image-tag>] [--prefix <image-prefix>] [--push] [--label <label-data>] [--volume <volume-path>] [--work-dir <work-directory>] [--rhel-modules <rhel-modules>]", "openstack tripleo container image delete [-h] [--registry-url <registry url>] [--username <username>] [--password <password>] [-y] <image to delete>", "openstack tripleo container image hotfix [-h] --image <images> --rpms-path <rpms-path> [--tag <image-tag>]", "openstack tripleo container image list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--registry-url <registry url>] [--username <username>] [--password <password>]", "openstack tripleo container image prepare default [-h] [--output-env-file <file path>] [--local-push-destination] [--enable-registry-login]", "openstack tripleo container image prepare [-h] [--environment-file <file path>] [--environment-directory <HEAT ENVIRONMENT DIRECTORY>] [--roles-file ROLES_FILE] [--output-env-file <file path>] [--dry-run] [--cleanup <full, partial, none>]", "openstack tripleo container image push [-h] [--local] [--registry-url <registry url>] [--append-tag APPEND_TAG] [--username <username>] [--password <password>] [--source-username <source_username>] [--source-password <source_password>] [--dry-run] [--multi-arch] [--cleanup] <image to push>", "openstack tripleo container image show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--username <username>] [--password <password>] <image to inspect>", "openstack tripleo deploy [--templates [TEMPLATES]] [--standalone] [--upgrade] [-y] [--stack STACK] [--output-dir OUTPUT_DIR] [--output-only] [--standalone-role STANDALONE_ROLE] [-t <TIMEOUT>] [-e <HEAT ENVIRONMENT FILE>] [--roles-file ROLES_FILE] [--networks-file NETWORKS_FILE] [--plan-environment-file PLAN_ENVIRONMENT_FILE] [--heat-api-port <HEAT_API_PORT>] [--heat-user <HEAT_USER>] [--deployment-user DEPLOYMENT_USER] [--deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER] [--heat-container-image <HEAT_CONTAINER_IMAGE>] [--heat-native [HEAT_NATIVE]] [--local-ip <LOCAL_IP>] [--control-virtual-ip <CONTROL_VIRTUAL_IP>] [--public-virtual-ip <PUBLIC_VIRTUAL_IP>] [--local-domain <LOCAL_DOMAIN>] [--cleanup] [--hieradata-override [HIERADATA_OVERRIDE]] [--keep-running] [--inflight-validations] [--ansible-forks ANSIBLE_FORKS] [--force-stack-update | --force-stack-create]", "openstack tripleo upgrade [--templates [TEMPLATES]] [--standalone] [--upgrade] [-y] [--stack STACK] [--output-dir OUTPUT_DIR] [--output-only] [--standalone-role STANDALONE_ROLE] [-t <TIMEOUT>] [-e <HEAT ENVIRONMENT FILE>] [--roles-file ROLES_FILE] [--networks-file NETWORKS_FILE] [--plan-environment-file PLAN_ENVIRONMENT_FILE] [--heat-api-port <HEAT_API_PORT>] [--heat-user <HEAT_USER>] [--deployment-user DEPLOYMENT_USER] [--deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER] [--heat-container-image <HEAT_CONTAINER_IMAGE>] [--heat-native [HEAT_NATIVE]] [--local-ip 
<LOCAL_IP>] [--control-virtual-ip <CONTROL_VIRTUAL_IP>] [--public-virtual-ip <PUBLIC_VIRTUAL_IP>] [--local-domain <LOCAL_DOMAIN>] [--cleanup] [--hieradata-override [HIERADATA_OVERRIDE]] [--keep-running] [--inflight-validations] [--ansible-forks ANSIBLE_FORKS] [--force-stack-update | --force-stack-create]", "openstack tripleo validator group info [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--config CONFIG] [--validation-dir VALIDATION_DIR]", "openstack tripleo validator init [-h] [--config CONFIG] <validation_name>", "openstack tripleo validator list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--config CONFIG] [--group <group_id>[,<group_id>,...]] [--category <category_id>[,<category_id>,...]] [--product <product_id>[,<product_id>,...]] [--validation-dir VALIDATION_DIR]", "openstack tripleo validator run [-h] [--config CONFIG] [--limit <host1>[,<host2>,<host3>,...]] [--ssh-user SSH_USER] [--validation-dir VALIDATION_DIR] [--ansible-base-dir ANSIBLE_BASE_DIR] [--validation-log-dir VALIDATION_LOG_DIR] [--inventory INVENTORY] [--output-log OUTPUT_LOG] [--junitxml JUNITXML] [--python-interpreter --python-interpreter <PYTHON_INTERPRETER_PATH>] [--extra-env-vars key1=<val1> [--extra-env-vars key2=<val2>]] [--skiplist SKIP_LIST] [--extra-vars key1=<val1> [--extra-vars key2=<val2>] | --extra-vars-file /tmp/my_vars_file.[json|yaml]] (--validation <validation_id>[,<validation_id>,...] | --group <group_id>[,<group_id>,...] | --category <category_id>[,<category_id>,...] | --product <product_id>[,<product_id>,...])", "openstack tripleo validator show history [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--config CONFIG] [--validation <validation_id>] [--limit HISTORY_LIMIT] [--validation-log-dir VALIDATION_LOG_DIR]", "openstack tripleo validator show parameter [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--config CONFIG] [--validation-dir VALIDATION_DIR] [--validation <validation_id>[,<validation_id>,...] | --group <group_id>[,<group_id>,...] | --category <category_id>[,<category_id>,...] | --product <product_id>[,<product_id>,...]] [--download DOWNLOAD] [--format-output <format_output>]", "openstack tripleo validator show run [-h] [--config CONFIG] [--full] [--validation-log-dir VALIDATION_LOG_DIR] <uuid>", "openstack tripleo validator show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--config CONFIG] [--validation-dir VALIDATION_DIR] <validation>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/tripleo
Chapter 19. Cassandra CQL
Chapter 19. Cassandra CQL Both producer and consumer are supported Apache Cassandra is an open source NoSQL database designed to handle large amounts of data on commodity hardware. Like Amazon's DynamoDB, Cassandra has a peer-to-peer and master-less architecture to avoid a single point of failure and guarantee high availability. Like Google's BigTable, Cassandra data is structured using column families which can be accessed through the Thrift RPC API or a SQL-like API called CQL. Note This component aims at integrating Cassandra 2.0+ using the CQL3 API (not the Thrift API). It is based on the Cassandra Java Driver provided by DataStax. 19.1. Dependencies When using cql with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cassandraql-starter</artifactId> </dependency> 19.2. Configuring Options Camel components are configured on two levels: Component level Endpoint level 19.2.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 19.2.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 19.3. Component Options The Cassandra CQL component supports 3 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled.
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 19.4. Endpoint Options The Cassandra CQL endpoint is configured using URI syntax: with the following path and query parameters: 19.4.1. Path Parameters (4 parameters) Name Description Default Type beanRef (common) beanRef is defined using bean:id. String hosts (common) Hostname(s) Cassandra server(s). Multiple hosts can be separated by comma. String port (common) Port number of Cassandra server(s). Integer keyspace (common) Keyspace to use. String 19.4.2. Query Parameters (30 parameters) Name Description Default Type clusterName (common) Cluster name. String consistencyLevel (common) Consistency level to use. Enum values: ANY ONE TWO THREE QUORUM ALL LOCAL_ONE LOCAL_QUORUM EACH_QUORUM SERIAL LOCAL_SERIAL DefaultConsistencyLevel cql (common) CQL query to perform. Can be overridden with the message header with key CamelCqlQuery. String datacenter (common) Datacenter to use. datacenter1 String loadBalancingPolicyClass (common) To use a specific LoadBalancingPolicyClass. String password (common) Password for session authentication. String prepareStatements (common) Whether to use PreparedStatements or regular Statements. true boolean resultSetConversionStrategy (common) To use a custom class that implements logic for converting ResultSet into message body ALL, ONE, LIMIT_10, LIMIT_100... ResultSetConversionStrategy session (common) To use the Session instance (you would normally not use this option). CqlSession username (common) Username for session authentication. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern pollStrategy (consumer (advanced)) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPollStrategy lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. 1000 long repeatCount (scheduler) Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. 0 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE DEBUG INFO WARN ERROR OFF TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. none Object schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. Enum values: NANOSECONDS MICROSECONDS MILLISECONDS SECONDS MINUTES HOURS DAYS MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 19.5. Endpoint Connection Syntax The endpoint can initiate the Cassandra connection or use an existing one. URI Description cql:localhost/keyspace Single host, default port, usual for testing cql:host1,host2/keyspace Multi host, default port cql:host1,host2:9042/keyspace Multi host, custom port cql:host1,host2 Default port and keyspace cql:bean:sessionRef Provided Session reference cql:bean:clusterRef/keyspace Provided Cluster reference To fine tune the Cassandra connection (SSL options, pooling options, load balancing policy, retry policy, reconnection policy... ), create your own Cluster instance and give it to the Camel endpoint. 19.6. Messages 19.6.1. Incoming Message The Camel Cassandra endpoint expects a bunch of simple objects ( Object or Object[] or Collection<Object> ) which will be bound to the CQL statement as query parameters. 
If the message body is null or empty, the CQL query is executed without binding parameters. Headers CamelCqlQuery (optional, String or RegularStatement) CQL query either as a plain String or built using the QueryBuilder. 19.6.2. Outgoing Message The Camel Cassandra endpoint produces one or more Cassandra Row objects depending on the resultSetConversionStrategy: A List<Row> if resultSetConversionStrategy is ALL or LIMIT_[0-9]+. A single Row if resultSetConversionStrategy is ONE. Anything else, if resultSetConversionStrategy is a custom implementation of ResultSetConversionStrategy. 19.7. Repositories Cassandra can be used to store message keys or messages for the idempotent and aggregation EIPs. Cassandra might not be the best tool for queuing use cases; see Cassandra anti-patterns: queues and queue-like datasets. It is advised to use LeveledCompaction and a small GC grace setting for these tables to allow tombstoned rows to be removed quickly. 19.8. Idempotent repository The NamedCassandraIdempotentRepository stores message keys in a Cassandra table like this: CAMEL_IDEMPOTENT.cql CREATE TABLE CAMEL_IDEMPOTENT ( NAME varchar, -- Repository name KEY varchar, -- Message key PRIMARY KEY (NAME, KEY) ) WITH compaction = {'class':'LeveledCompactionStrategy'} AND gc_grace_seconds = 86400; This repository implementation uses lightweight transactions (also known as Compare and Set) and requires Cassandra 2.0.7+. Alternatively, the CassandraIdempotentRepository does not have a NAME column and can be extended to use a different data model. Option Default Description table CAMEL_IDEMPOTENT Table name pkColumns NAME, KEY Primary key columns name Repository name, value used for NAME column ttl Key time to live writeConsistencyLevel Consistency level used to insert/delete key: ANY, ONE, TWO, QUORUM, LOCAL_QUORUM... readConsistencyLevel Consistency level used to read/check key: ONE, TWO, QUORUM, LOCAL_QUORUM... 19.9. Aggregation repository The NamedCassandraAggregationRepository stores exchanges by correlation key in a Cassandra table like this: CAMEL_AGGREGATION.cql CREATE TABLE CAMEL_AGGREGATION ( NAME varchar, -- Repository name KEY varchar, -- Correlation id EXCHANGE_ID varchar, -- Exchange id EXCHANGE blob, -- Serialized exchange PRIMARY KEY (NAME, KEY) ) WITH compaction = {'class':'LeveledCompactionStrategy'} AND gc_grace_seconds = 86400; Alternatively, the CassandraAggregationRepository does not have a NAME column and can be extended to use a different data model. Option Default Description table CAMEL_AGGREGATION Table name pkColumns NAME, KEY Primary key columns exchangeIdColumn EXCHANGE_ID Exchange Id column exchangeColumn EXCHANGE Exchange content column name Repository name, value used for NAME column ttl Exchange time to live writeConsistencyLevel Consistency level used to insert/delete exchange: ANY, ONE, TWO, QUORUM, LOCAL_QUORUM... readConsistencyLevel Consistency level used to read/check exchange: ONE, TWO, QUORUM, LOCAL_QUORUM... 19.10. Examples To insert something into a table, you can use the following code: String CQL = "insert into camel_user(login, first_name, last_name) values (?, ?, ?)"; from("direct:input") .to("cql://localhost/camel_ks?cql=" + CQL); At this point you should be able to insert data by using a list as the body: Arrays.asList("davsclaus", "Claus", "Ibsen") The same approach can be used for updating or querying the table. 19.11. Spring Boot Auto-Configuration The component supports 4 options, which are listed below.
Name Description Default Type camel.component.cql.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.cql.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.cql.enabled Whether to enable auto configuration of the cql component. This is enabled by default. Boolean camel.component.cql.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
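As a usage illustration, the following route class is a minimal sketch of how the auto-configured cql component might be used from a Camel Spring Boot application, assuming the camel-cassandraql-starter dependency from Section 19.1 is on the classpath; the localhost host, camel_ks keyspace, and camel_user table are the illustrative names from the examples above, not anything required by the component.

import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

// A minimal sketch, assuming the camel_ks keyspace and camel_user table from the
// examples above; host, keyspace and table names are illustrative only.
@Component
public class CassandraUserRoutes extends RouteBuilder {

    @Override
    public void configure() {
        // Producer: a List body such as Arrays.asList("davsclaus", "Claus", "Ibsen")
        // is bound to the three positional parameters of the insert statement.
        from("direct:insertUser")
            .to("cql://localhost/camel_ks?cql=insert into camel_user(login, first_name, last_name) values (?, ?, ?)");

        // Producer: reads the table; with resultSetConversionStrategy=ALL the
        // message body becomes a List<Row>.
        from("direct:listUsers")
            .to("cql://localhost/camel_ks?cql=select login, first_name, last_name from camel_user&resultSetConversionStrategy=ALL");
    }
}

Component-level defaults can also be provided in application.properties using the camel.component.cql.* keys listed above, for example camel.component.cql.lazy-start-producer=true.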
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cassandraql-starter</artifactId> </dependency>", "cql:beanRef:hosts:port/keyspace", "CREATE TABLE CAMEL_IDEMPOTENT ( NAME varchar, -- Repository name KEY varchar, -- Message key PRIMARY KEY (NAME, KEY) ) WITH compaction = {'class':'LeveledCompactionStrategy'} AND gc_grace_seconds = 86400;", "CREATE TABLE CAMEL_AGGREGATION ( NAME varchar, -- Repository name KEY varchar, -- Correlation id EXCHANGE_ID varchar, -- Exchange id EXCHANGE blob, -- Serialized exchange PRIMARY KEY (NAME, KEY) ) WITH compaction = {'class':'LeveledCompactionStrategy'} AND gc_grace_seconds = 86400;", "String CQL = \"insert into camel_user(login, first_name, last_name) values (?, ?, ?)\"; from(\"direct:input\") .to(\"cql://localhost/camel_ks?cql=\" + CQL);", "Arrays.asList(\"davsclaus\", \"Claus\", \"Ibsen\")" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-cassandra-cql-component-starter
Chapter 20. configuration
Chapter 20. configuration This chapter describes the commands under the configuration command. 20.1. configuration show Display configuration details Usage: Table 20.1. Optional Arguments Value Summary -h, --help Show this help message and exit --mask Attempt to mask passwords (default) --unmask Show password in clear text Table 20.2. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 20.3. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 20.4. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 20.5. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width is greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable. --print-empty Print empty table if there is no data to show.
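For example, the following invocations are a minimal sketch of typical usage based on the options listed above; passwords are masked by default, so --unmask is only needed when the clear-text values are required:

openstack configuration show
openstack configuration show -f yaml
openstack configuration show --unmask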
[ "openstack configuration show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--mask | --unmask]" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/configuration
Chapter 115. KafkaMirrorMaker schema reference
Chapter 115. KafkaMirrorMaker schema reference The type KafkaMirrorMaker has been deprecated. Please use KafkaMirrorMaker2 instead. Property Description spec The specification of Kafka MirrorMaker. KafkaMirrorMakerSpec status The status of Kafka MirrorMaker. KafkaMirrorMakerStatus
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaMirrorMaker-reference
Chapter 2. ClusterRoleBinding [rbac.authorization.k8s.io/v1]
Chapter 2. ClusterRoleBinding [rbac.authorization.k8s.io/v1] Description ClusterRoleBinding references a ClusterRole, but not contain it. It can reference a ClusterRole in the global namespace, and adds who information via Subject. Type object Required roleRef 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. roleRef object RoleRef contains information that points to the role being used subjects array Subjects holds references to the objects the role applies to. subjects[] object Subject contains a reference to the object or user identities a role binding applies to. This can either hold a direct API object reference, or a value for non-objects such as user and group names. 2.1.1. .roleRef Description RoleRef contains information that points to the role being used Type object Required apiGroup kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 2.1.2. .subjects Description Subjects holds references to the objects the role applies to. Type array 2.1.3. .subjects[] Description Subject contains a reference to the object or user identities a role binding applies to. This can either hold a direct API object reference, or a value for non-objects such as user and group names. Type object Required kind name Property Type Description apiGroup string APIGroup holds the API group of the referenced subject. Defaults to "" for ServiceAccount subjects. Defaults to "rbac.authorization.k8s.io" for User and Group subjects. kind string Kind of object being referenced. Values defined by this API group are "User", "Group", and "ServiceAccount". If the Authorizer does not recognized the kind value, the Authorizer should report an error. name string Name of the object being referenced. namespace string Namespace of the referenced object. If the object kind is non-namespace, such as "User" or "Group", and this value is not empty the Authorizer should report an error. 2.2. API endpoints The following API endpoints are available: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings DELETE : delete collection of ClusterRoleBinding GET : list or watch objects of kind ClusterRoleBinding POST : create a ClusterRoleBinding /apis/rbac.authorization.k8s.io/v1/watch/clusterrolebindings GET : watch individual changes to a list of ClusterRoleBinding. deprecated: use the 'watch' parameter with a list operation instead. /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/{name} DELETE : delete a ClusterRoleBinding GET : read the specified ClusterRoleBinding PATCH : partially update the specified ClusterRoleBinding PUT : replace the specified ClusterRoleBinding /apis/rbac.authorization.k8s.io/v1/watch/clusterrolebindings/{name} GET : watch changes to an object of kind ClusterRoleBinding. 
deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 2.2.1. /apis/rbac.authorization.k8s.io/v1/clusterrolebindings Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ClusterRoleBinding Table 2.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 2.3. Body parameters Parameter Type Description body DeleteOptions schema Table 2.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ClusterRoleBinding Table 2.5. 
Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.6. HTTP responses HTTP code Reponse body 200 - OK ClusterRoleBindingList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterRoleBinding Table 2.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.8. Body parameters Parameter Type Description body ClusterRoleBinding schema Table 2.9. HTTP responses HTTP code Reponse body 200 - OK ClusterRoleBinding schema 201 - Created ClusterRoleBinding schema 202 - Accepted ClusterRoleBinding schema 401 - Unauthorized Empty 2.2.2. /apis/rbac.authorization.k8s.io/v1/watch/clusterrolebindings Table 2.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ClusterRoleBinding. deprecated: use the 'watch' parameter with a list operation instead. Table 2.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/{name} Table 2.12. Global path parameters Parameter Type Description name string name of the ClusterRoleBinding Table 2.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ClusterRoleBinding Table 2.14. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. The value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per-object value if not specified. Zero means delete immediately. orphanDependents boolean Deprecated: use PropagationPolicy instead; this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.15. Body parameters Parameter Type Description body DeleteOptions schema Table 2.16. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterRoleBinding Table 2.17. HTTP responses HTTP code Response body 200 - OK ClusterRoleBinding schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterRoleBinding Table 2.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. The force flag must be unset for non-apply patch requests. Table 2.19. Body parameters Parameter Type Description body Patch schema Table 2.20. HTTP responses HTTP code Response body 200 - OK ClusterRoleBinding schema 201 - Created ClusterRoleBinding schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterRoleBinding Table 2.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.22. Body parameters Parameter Type Description body ClusterRoleBinding schema Table 2.23. HTTP responses HTTP code Response body 200 - OK ClusterRoleBinding schema 201 - Created ClusterRoleBinding schema 401 - Unauthorized Empty 2.2.4. /apis/rbac.authorization.k8s.io/v1/watch/clusterrolebindings/{name} Table 2.24. Global path parameters Parameter Type Description name string name of the ClusterRoleBinding Table 2.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid, whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, and the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent with the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out, and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure it sees all possible objects. If objects are updated during a chunked list, the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, the resourceVersionMatch option is also required. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind ClusterRoleBinding. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.26. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
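For orientation, the following is a minimal shell sketch of calling the watch endpoints described above. The binding name cluster-admin, the 30-second timeout, and the use of oc whoami to obtain a token and API server URL are illustrative assumptions, not part of the API reference; additional TLS options (for example, a CA bundle) may be needed for your cluster.
# Deprecated single-item watch path documented above:
TOKEN=$(oc whoami -t)                        # assumes an active oc login
APISERVER=$(oc whoami --show-server)
curl -sS -H "Authorization: Bearer ${TOKEN}" \
  "${APISERVER}/apis/rbac.authorization.k8s.io/v1/watch/clusterrolebindings/cluster-admin?timeoutSeconds=30"
# Preferred equivalent: the list endpoint with watch=true and a fieldSelector:
curl -sS -H "Authorization: Bearer ${TOKEN}" \
  "${APISERVER}/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?watch=true&fieldSelector=metadata.name%3Dcluster-admin&timeoutSeconds=30"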
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/rbac_apis/clusterrolebinding-rbac-authorization-k8s-io-v1
Appendix A. Tips for Developers
Appendix A. Tips for Developers Every good programming textbook covers problems with memory allocation and the performance of specific functions. As you develop your software, be aware of issues that might increase power consumption on the systems on which the software runs. Although these considerations do not affect every line of code, you can optimize your code in areas which are frequent bottlenecks for performance. Some techniques that are often problematic include: using threads. unnecessary CPU wake-ups and not using wake-ups efficiently. If you must wake up, do everything at once (race to idle) and as quickly as possible. using [f]sync() unnecessarily. unnecessary active polling or using short, regular timeouts. (React to events instead). inefficient disk access. Use large buffers to avoid frequent disk access. Write one large block at a time. inefficient use of timers. Group timers across applications (or even across systems) if possible. excessive I/O, power consumption, or memory usage (including memory leaks) performing unnecessary computation. The following sections examine some of these areas in greater detail. A.1. Using Threads It is widely believed that using threads makes applications perform better and faster, but this is not true in every case. Python Python uses the Global Interpreter Lock (GIL) [1] , so threading is profitable only for larger I/O operations. Unladen-swallow [2] is a faster implementation of Python with which you might be able to optimize your code. Perl Perl threads were originally created for applications running on systems without forking (such as systems with 32-bit Windows operating systems). In Perl threads, the data is copied for every single thread (Copy On Write). Data is not shared by default, because users should be able to define the level of data sharing. To share data, the threads::shared module has to be included. However, the data is then not only copied (Copy On Write), but the module also creates tied variables for the data, which takes even more time and is even slower. [3] C C threads share the same memory, each thread has its own stack, and the kernel does not have to create new file descriptors and allocate new memory space. C threads can make real use of additional CPUs. Therefore, to maximize the performance of your threads, use a low-level language like C or C++. If you use a scripting language, consider writing a C binding. Use profilers to identify poorly performing parts of your code. [4] [1] http://docs.python.org/c-api/init.html#thread-state-and-the-global-interpreter-lock [2] http://code.google.com/p/unladen-swallow/ [3] http://www.perlmonks.org/?node_id=288022 [4] http://people.redhat.com/drepper/lt2009.pdf
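As a rough diagnostic sketch for spotting several of the patterns above on a running system, the commands below use powertop and strace. Both tool choices, the traced system calls, and the <PID> placeholder are assumptions for illustration, not part of the original guidance.
# List the processes causing the most CPU wake-ups and write an HTML report:
powertop --html=powertop-report.html
# Count fsync/fdatasync and polling calls made by a suspect process
# (replace <PID>; press Ctrl+C after a while, and -c prints a per-syscall summary on detach):
strace -c -p <PID> -e trace=fsync,fdatasync,poll,select,epoll_wait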
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/power_management_guide/developer_tips
Chapter 3. Distribution of content in RHEL 8
Chapter 3. Distribution of content in RHEL 8 3.1. Installation Red Hat Enterprise Linux 8 is installed using ISO images. Two types of ISO image are available for the AMD64, Intel 64-bit, 64-bit ARM, IBM Power Systems, and IBM Z architectures: Binary DVD ISO: A full installation image that contains the BaseOS and AppStream repositories and allows you to complete the installation without additional repositories. Note The installation ISO image is several GB in size and, as a result, it might not fit on optical media formats. A USB key or USB hard drive is recommended when using the installation ISO image to create bootable installation media. You can also use the Image Builder tool to create customized RHEL images. For more information about Image Builder, see the Composing a customized RHEL system image document. Boot ISO: A minimal boot ISO image that is used to boot into the installation program. This option requires access to the BaseOS and AppStream repositories to install software packages. The repositories are part of the Binary DVD ISO image. See the Interactively installing RHEL from installation media document for instructions on downloading ISO images, creating installation media, and completing a RHEL installation. For automated Kickstart installations and other advanced topics, see the Automatically installing RHEL document. For a list of users and groups created by RPMs in a base RHEL installation, and the steps to obtain this list, see the What are all of the users and groups in a base RHEL installation? Knowledgebase article. 3.2. Repositories Red Hat Enterprise Linux 8 is distributed through two main repositories: BaseOS AppStream Both repositories are required for a basic RHEL installation, and are available with all RHEL subscriptions. Content in the BaseOS repository is intended to provide the core set of underlying OS functionality that forms the foundation for all installations. This content is available in the RPM format and is subject to support terms similar to those in previous releases of RHEL. For a list of packages distributed through BaseOS, see the Package manifest . Content in the AppStream repository includes additional user space applications, runtime languages, and databases in support of varied workloads and use cases. Application Streams are available in the familiar RPM format, as an extension to the RPM format called modules , or as Software Collections. For a list of packages available in AppStream, see the Package manifest . In addition, the CodeReady Linux Builder repository is available with all RHEL subscriptions. It provides additional packages for use by developers. Packages included in the CodeReady Linux Builder repository are unsupported. For more information about RHEL 8 repositories, see the Package manifest . 3.3. Application Streams Red Hat Enterprise Linux 8 introduces the concept of Application Streams. Multiple versions of user space components are now delivered and updated more frequently than the core operating system packages. This provides greater flexibility to customize Red Hat Enterprise Linux without impacting the underlying stability of the platform or specific deployments. Components made available as Application Streams can be packaged as modules or RPM packages and are delivered through the AppStream repository in RHEL 8. Each Application Stream component has a given life cycle, either the same as RHEL 8 or shorter. For details, see Red Hat Enterprise Linux Life Cycle . 
Modules are collections of packages representing a logical unit: an application, a language stack, a database, or a set of tools. These packages are built, tested, and released together. Module streams represent versions of the Application Stream components. For example, several streams (versions) of the PostgreSQL database server are available in the postgresql module with the default postgresql:10 stream. Only one stream of a given module can be installed on the system. Different versions can be used in separate containers. Detailed module commands are described in the Installing, managing, and removing user-space components document. For a list of modules available in AppStream, see the Package manifest . 3.4. Package management with YUM/DNF On Red Hat Enterprise Linux 8, software installation is handled by the YUM tool, which is based on the DNF technology. The yum term is deliberately retained for consistency with previous major versions of RHEL. However, if you type dnf instead of yum , the command works as expected because yum is an alias to dnf for compatibility. For more details, see the following documentation: Installing, managing, and removing user-space components Considerations in adopting RHEL 8
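The following shell sketch shows one plausible module workflow built around the postgresql:10 stream mentioned above; the exact streams available depend on your RHEL 8 minor release.
yum module list postgresql        # list available streams; the default is marked [d]
yum module info postgresql:10     # show the packages and profiles in a specific stream
yum module install postgresql:10  # install the stream (here, the documented default)
dnf module list postgresql        # identical result, because yum is an alias for dnf on RHEL 8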
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.7_release_notes/Distribution-of-content-in-RHEL-8
Chapter 7. Hosted control planes Observability
Chapter 7. Hosted control planes Observability You can gather metrics for hosted control planes by configuring metrics sets. The HyperShift Operator can create or delete monitoring dashboards in the management cluster for each hosted cluster that it manages. 7.1. Configuring metrics sets for hosted control planes Hosted control planes for Red Hat OpenShift Container Platform creates ServiceMonitor resources in each control plane namespace that allow a Prometheus stack to gather metrics from the control planes. The ServiceMonitor resources use metrics relabelings to define which metrics are included or excluded from a particular component, such as etcd or the Kubernetes API server. The number of metrics that are produced by control planes directly impacts the resource requirements of the monitoring stack that gathers them. Instead of producing a fixed number of metrics that apply to all situations, you can configure a metrics set that identifies a set of metrics to produce for each control plane. The following metrics sets are supported: Telemetry : These metrics are needed for telemetry. This set is the default set and is the smallest set of metrics. SRE : This set includes the necessary metrics to produce alerts and allow the troubleshooting of control plane components. All : This set includes all of the metrics that are produced by standalone OpenShift Container Platform control plane components. To configure a metrics set, set the METRICS_SET environment variable in the HyperShift Operator deployment by entering the following command: $ oc set env -n hypershift deployment/operator METRICS_SET=All 7.1.1. Configuring the SRE metrics set When you specify the SRE metrics set, the HyperShift Operator looks for a config map named sre-metric-set with a single key: config . The value of the config key must contain a set of RelabelConfigs that are organized by control plane component. 
You can specify the following components: etcd kubeAPIServer kubeControllerManager openshiftAPIServer openshiftControllerManager openshiftRouteControllerManager cvo olm catalogOperator registryOperator nodeTuningOperator controlPlaneOperator hostedClusterConfigOperator A configuration of the SRE metrics set is illustrated in the following example: kubeAPIServer: - action: "drop" regex: "etcd_(debugging|disk|server).*" sourceLabels: ["__name__"] - action: "drop" regex: "apiserver_admission_controller_admission_latencies_seconds_.*" sourceLabels: ["__name__"] - action: "drop" regex: "apiserver_admission_step_admission_latencies_seconds_.*" sourceLabels: ["__name__"] - action: "drop" regex: "scheduler_(e2e_scheduling_latency_microseconds|scheduling_algorithm_predicate_evaluation|scheduling_algorithm_priority_evaluation|scheduling_algorithm_preemption_evaluation|scheduling_algorithm_latency_microseconds|binding_latency_microseconds|scheduling_latency_seconds)" sourceLabels: ["__name__"] - action: "drop" regex: "apiserver_(request_count|request_latencies|request_latencies_summary|dropped_requests|storage_data_key_generation_latencies_microseconds|storage_transformation_failures_total|storage_transformation_latencies_microseconds|proxy_tunnel_sync_latency_secs)" sourceLabels: ["__name__"] - action: "drop" regex: "docker_(operations|operations_latency_microseconds|operations_errors|operations_timeout)" sourceLabels: ["__name__"] - action: "drop" regex: "reflector_(items_per_list|items_per_watch|list_duration_seconds|lists_total|short_watches_total|watch_duration_seconds|watches_total)" sourceLabels: ["__name__"] - action: "drop" regex: "etcd_(helper_cache_hit_count|helper_cache_miss_count|helper_cache_entry_count|request_cache_get_latencies_summary|request_cache_add_latencies_summary|request_latencies_summary)" sourceLabels: ["__name__"] - action: "drop" regex: "transformation_(transformation_latencies_microseconds|failures_total)" sourceLabels: ["__name__"] - action: "drop" regex: "network_plugin_operations_latency_microseconds|sync_proxy_rules_latency_microseconds|rest_client_request_latency_seconds" sourceLabels: ["__name__"] - action: "drop" regex: "apiserver_request_duration_seconds_bucket;(0.15|0.25|0.3|0.35|0.4|0.45|0.6|0.7|0.8|0.9|1.25|1.5|1.75|2.5|3|3.5|4.5|6|7|8|9|15|25|30|50)" sourceLabels: ["__name__", "le"] kubeControllerManager: - action: "drop" regex: "etcd_(debugging|disk|request|server).*" sourceLabels: ["__name__"] - action: "drop" regex: "rest_client_request_latency_seconds_(bucket|count|sum)" sourceLabels: ["__name__"] - action: "drop" regex: "root_ca_cert_publisher_sync_duration_seconds_(bucket|count|sum)" sourceLabels: ["__name__"] openshiftAPIServer: - action: "drop" regex: "etcd_(debugging|disk|server).*" sourceLabels: ["__name__"] - action: "drop" regex: "apiserver_admission_controller_admission_latencies_seconds_.*" sourceLabels: ["__name__"] - action: "drop" regex: "apiserver_admission_step_admission_latencies_seconds_.*" sourceLabels: ["__name__"] - action: "drop" regex: "apiserver_request_duration_seconds_bucket;(0.15|0.25|0.3|0.35|0.4|0.45|0.6|0.7|0.8|0.9|1.25|1.5|1.75|2.5|3|3.5|4.5|6|7|8|9|15|25|30|50)" sourceLabels: ["__name__", "le"] openshiftControllerManager: - action: "drop" regex: "etcd_(debugging|disk|request|server).*" sourceLabels: ["__name__"] openshiftRouteControllerManager: - action: "drop" regex: "etcd_(debugging|disk|request|server).*" sourceLabels: ["__name__"] olm: - action: "drop" regex: "etcd_(debugging|disk|server).*" sourceLabels: ["__name__"] 
catalogOperator: - action: "drop" regex: "etcd_(debugging|disk|server).*" sourceLabels: ["__name__"] cvo: - action: drop regex: "etcd_(debugging|disk|server).*" sourceLabels: ["__name__"] 7.2. Enabling monitoring dashboards in a hosted cluster To enable monitoring dashboards in a hosted cluster, complete the following steps: Procedure Create the hypershift-operator-install-flags config map in the local-cluster namespace, being sure to specify the --monitoring-dashboards flag in the data.installFlagsToAdd section. For example: kind: ConfigMap apiVersion: v1 metadata: name: hypershift-operator-install-flags namespace: local-cluster data: installFlagsToAdd: "--monitoring-dashboards" installFlagsToRemove: "" Wait a couple of minutes for the HyperShift Operator deployment in the hypershift namespace to be updated to include the following environment variable: - name: MONITORING_DASHBOARDS value: "1" When monitoring dashboards are enabled, for each hosted cluster that the HyperShift Operator manages, the Operator creates a config map named cp-<hosted_cluster_namespace>-<hosted_cluster_name> in the openshift-config-managed namespace, where <hosted_cluster_namespace> is the namespace of the hosted cluster and <hosted_cluster_name> is the name of the hosted cluster. As a result, a new dashboard is added in the administrative console of the management cluster. To view the dashboard, log in to the management cluster's console and go to the dashboard for the hosted cluster by clicking Observe → Dashboards . Optional: To disable monitoring dashboards in a hosted cluster, remove the --monitoring-dashboards flag from the hypershift-operator-install-flags config map. When you delete a hosted cluster, its corresponding dashboard is also deleted. 7.2.1. Dashboard customization To generate dashboards for each hosted cluster, the HyperShift Operator uses a template that is stored in the monitoring-dashboard-template config map in the Operator namespace ( hypershift ). This template contains a set of Grafana panels that contain the metrics for the dashboard. You can edit the content of the config map to customize the dashboards. When a dashboard is generated, the following strings are replaced with values that correspond to a specific hosted cluster: Name Description __NAME__ The name of the hosted cluster __NAMESPACE__ The namespace of the hosted cluster __CONTROL_PLANE_NAMESPACE__ The namespace where the control plane pods of the hosted cluster are placed __CLUSTER_ID__ The UUID of the hosted cluster, which matches the _id label of the hosted cluster metrics
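As a practical sketch of wiring the SRE configuration above together, the following commands create the sre-metric-set config map from a local file and switch the Operator to the SRE metrics set. The file name sre-metric-set.yaml and the assumption that the config map belongs in the Operator namespace (hypershift) are illustrative; verify the expected namespace for your HyperShift Operator version.
# The RelabelConfigs shown earlier are assumed to be saved locally as sre-metric-set.yaml,
# keyed under "config" in the resulting config map:
oc create configmap sre-metric-set -n hypershift --from-file=config=sre-metric-set.yaml
# Switch the HyperShift Operator to the SRE metrics set (command documented above):
oc set env -n hypershift deployment/operator METRICS_SET=SRE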
[ "oc set env -n hypershift deployment/operator METRICS_SET=All", "kubeAPIServer: - action: \"drop\" regex: \"etcd_(debugging|disk|server).*\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"apiserver_admission_controller_admission_latencies_seconds_.*\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"apiserver_admission_step_admission_latencies_seconds_.*\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"scheduler_(e2e_scheduling_latency_microseconds|scheduling_algorithm_predicate_evaluation|scheduling_algorithm_priority_evaluation|scheduling_algorithm_preemption_evaluation|scheduling_algorithm_latency_microseconds|binding_latency_microseconds|scheduling_latency_seconds)\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"apiserver_(request_count|request_latencies|request_latencies_summary|dropped_requests|storage_data_key_generation_latencies_microseconds|storage_transformation_failures_total|storage_transformation_latencies_microseconds|proxy_tunnel_sync_latency_secs)\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"docker_(operations|operations_latency_microseconds|operations_errors|operations_timeout)\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"reflector_(items_per_list|items_per_watch|list_duration_seconds|lists_total|short_watches_total|watch_duration_seconds|watches_total)\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"etcd_(helper_cache_hit_count|helper_cache_miss_count|helper_cache_entry_count|request_cache_get_latencies_summary|request_cache_add_latencies_summary|request_latencies_summary)\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"transformation_(transformation_latencies_microseconds|failures_total)\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"network_plugin_operations_latency_microseconds|sync_proxy_rules_latency_microseconds|rest_client_request_latency_seconds\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"apiserver_request_duration_seconds_bucket;(0.15|0.25|0.3|0.35|0.4|0.45|0.6|0.7|0.8|0.9|1.25|1.5|1.75|2.5|3|3.5|4.5|6|7|8|9|15|25|30|50)\" sourceLabels: [\"__name__\", \"le\"] kubeControllerManager: - action: \"drop\" regex: \"etcd_(debugging|disk|request|server).*\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"rest_client_request_latency_seconds_(bucket|count|sum)\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"root_ca_cert_publisher_sync_duration_seconds_(bucket|count|sum)\" sourceLabels: [\"__name__\"] openshiftAPIServer: - action: \"drop\" regex: \"etcd_(debugging|disk|server).*\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"apiserver_admission_controller_admission_latencies_seconds_.*\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"apiserver_admission_step_admission_latencies_seconds_.*\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"apiserver_request_duration_seconds_bucket;(0.15|0.25|0.3|0.35|0.4|0.45|0.6|0.7|0.8|0.9|1.25|1.5|1.75|2.5|3|3.5|4.5|6|7|8|9|15|25|30|50)\" sourceLabels: [\"__name__\", \"le\"] openshiftControllerManager: - action: \"drop\" regex: \"etcd_(debugging|disk|request|server).*\" sourceLabels: [\"__name__\"] openshiftRouteControllerManager: - action: \"drop\" regex: \"etcd_(debugging|disk|request|server).*\" sourceLabels: [\"__name__\"] olm: - action: \"drop\" regex: \"etcd_(debugging|disk|server).*\" sourceLabels: [\"__name__\"] catalogOperator: - action: \"drop\" regex: \"etcd_(debugging|disk|server).*\" sourceLabels: [\"__name__\"] cvo: - action: drop regex: 
\"etcd_(debugging|disk|server).*\" sourceLabels: [\"__name__\"]", "kind: ConfigMap apiVersion: v1 metadata: name: hypershift-operator-install-flags namespace: local-cluster data: installFlagsToAdd: \"--monitoring-dashboards\" installFlagsToRemove: \"\"", "- name: MONITORING_DASHBOARDS value: \"1\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/hosted_control_planes/hosted-control-planes-observability
Chapter 4. Optimizing MTA performance
Chapter 4. Optimizing MTA performance MTA performance depends on a number of factors, including hardware configuration, the number and types of files in the application, the size and number of applications to be evaluated, and whether the application contains source or compiled code. For example, a file that is larger than 10 MB may take a long time to process. In general, MTA spends about 40% of the time decompiling classes, 40% of the time executing rules, and the remainder of the time processing other tasks and generating reports. This section describes what you can do to improve the performance of MTA. 4.1. Deploying and running the application Try the following suggestions before upgrading hardware. If possible, run MTA against the source code instead of the archives. This eliminates the need to decompile additional JARs and archives. Increase your ulimit when analyzing large applications. See this Red Hat Knowledgebase article for instructions on how to do this for Red Hat Enterprise Linux. If you have access to a server that has better resources than your laptop or desktop machine, you may want to consider running MTA on that server. 4.2. Upgrading hardware If the application and command-line suggestions above do not improve performance, you may need to upgrade your hardware. Very large applications that require decompilation have large memory requirements. 8 GB RAM is recommended. This allows 3 - 4 GB RAM for use by the JVM. An upgrade from a single-core or dual-core to a quad-core processor provides better performance. Disk space and fragmentation can impact performance. A fast disk, especially a solid-state drive (SSD), with greater than 4 GB of defragmented disk space should improve performance.
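A minimal sketch of the ulimit suggestion above follows; the limit value is illustrative, and the persistent configuration line assumes a pam_limits-based system such as Red Hat Enterprise Linux.
ulimit -n                 # check the current open-file limit for this shell
ulimit -n 65536           # raise it before starting the analysis of a large application
# To make the change persistent for a given user, add a line like the following
# to /etc/security/limits.conf (username and value are placeholders):
#   <username>  soft  nofile  65536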
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.2/html/cli_guide/optimize-performance_cli-guide
Chapter 7. Configuring the systems and running tests using RHCert CLI Tool
Chapter 7. Configuring the systems and running tests using RHCert CLI Tool To complete the certification process using CLI, you must prepare the host under test (HUT) and test server, run the tests, and retrieve the test results. 7.1. Using the test plan to prepare the host under test for testing Running the provision command performs a number of operations, such as setting up passwordless SSH communication with the test server, installing the required packages on your system based on the certification type, and creating a final test plan to run, which is a list of common tests taken from both the test plan provided by Red Hat and tests generated on discovering the system requirements. For instance, required hardware or software packages will be installed if the test plan is designed for certifying a hardware or a software product. Prerequisites You have the hostname or the IP address of the test server. Procedure Run the provision command in either of the following ways. The test plan is automatically downloaded to your system. If you have already downloaded the test plan: Replace <path_to_test_plan_document> with the test plan file saved on your system. Follow the on-screen instructions. If you have not downloaded the test plan: Follow the on-screen instructions and enter your Certification ID when prompted. When prompted, provide the hostname or the IP address of the test server to set up passwordless SSH. You are prompted only the first time you add a new system. 7.2. Using the test plan to prepare the test server for testing Running the provision command enables and starts the rhcertd service, which configures services specified in the test suite on the test server, such as iperf for network testing, and an nfs mount point used in kdump testing. Prerequisites You have the hostname or IP address of the host under test. Procedure Run the provision command, assigning the "test server" role to the system you are adding. This is required only for provisioning the test server. Replace <path_to_test_plan_document> with the test plan file saved on your system. 7.3. Running the certification tests using CLI Procedure Run the following command: When prompted, choose whether to run each test by typing yes or no . You can also run particular tests from the list by typing select . Note After a test reboot, rhcert runs in the background to verify the image. Use tail -f /var/log/rhcert/RedHatCertDaemon.log to see the current progress and status of the verification. 7.4. Submitting the test results file Procedure Log in to authenticate your device. Note Logging in is mandatory to submit the test results file. Open the generated URL in a new browser window or tab. Enter the login and password and click Log in . Click Grant access . A "Device log in successful" message displays. Return to the terminal and enter yes at the "Please confirm once you grant access" prompt. Submit the result file. When prompted, enter your Certification ID.
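The sequence below strings the documented commands together into one possible end-to-end session. The test plan path is a placeholder, and the interactive prompts described above still apply at each step.
# On the host under test: provision using a previously downloaded test plan (placeholder path):
rhcert-provision /root/rhcert/test_plan.xml
# On the test server: provision with the test-server role:
rhcert-provision --role test-server /root/rhcert/test_plan.xml
# Back on the host under test: run the tests and monitor post-reboot verification:
rhcert-run
tail -f /var/log/rhcert/RedHatCertDaemon.log
# Authenticate, then submit the results file:
rhcert-cli login
rhcert-submit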
[ "rhcert-provision <path_to_test_plan_document>", "rhcert-provision", "rhcert-provision --role test-server <path_to_test_plan_document>", "rhcert-run", "rhcert-cli login", "rhcert-submit" ]
https://docs.redhat.com/en/documentation/red_hat_certified_cloud_and_service_provider_certification/2025/html/red_hat_cloud_instance_type_workflow/assembly_configuring-the-hosts-and-running-tests-by-using-cli_cloud-instance-wf-configure-hosts-run-tests-use-cockpit
9.2. Before Setting a Quota on a Directory
9.2. Before Setting a Quota on a Directory There are several things you should keep in mind when you set a quota on a directory. When specifying a directory to limit with the gluster volume quota command, the directory's path is relative to the Red Hat Gluster Storage volume mount point, not the root directory of the server or client on which the volume is mounted. That is, if the Red Hat Gluster Storage volume is mounted at /mnt/glusterfs and you want to place a limit on the /mnt/glusterfs/dir directory, use /dir as the path when you run the gluster volume quota command, like so: Ensure that at least one brick is available per replica set when you run the gluster volume quota command. A brick is available if a Y appears in the Online column of the gluster volume status command output, like so:
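The following worked example ties the two points above together. The volume name myvol, the 10GB limit, and the /mnt/glusterfs mount point are assumptions for illustration; the quota enable step is a one-time prerequisite on the volume.
# Confirm at least one brick per replica set shows Y in the Online column:
gluster volume status myvol
# Enable quotas on the volume, then set the limit using the path relative to the
# volume mount point: /dir, not /mnt/glusterfs/dir:
gluster volume quota myvol enable
gluster volume quota myvol limit-usage /dir 10GB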
[ "gluster volume quota VOLNAME limit-usage /dir hard_limit", "gluster volume status VOLNAME Status of volume: VOLNAME Gluster process Port Online Pid ------------------------------------------------------------ Brick arch:/export/rep1 24010 Y 18474 Brick arch:/export/rep2 24011 Y 18479 NFS Server on localhost 38467 Y 18486 Self-heal Daemon on localhost N/A Y 18491" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/ch09s02