title | content | commands | url |
---|---|---|---|
1.2. Examples | 1.2. Examples The following examples demonstrate how SELinux increases security: The default action is deny. If an SELinux policy rule does not exist to allow access, such as for a process opening a file, access is denied. SELinux can confine Linux users. A number of confined SELinux users exist in the SELinux policy. Linux users can be mapped to confined SELinux users to take advantage of the security rules and mechanisms applied to them. For example, mapping a Linux user to the SELinux user_u user results in a Linux user that, unless configured otherwise, cannot run set user ID (setuid) applications, such as sudo and su. See Section 3.3, "Confined and Unconfined Users" for more information. Increased process and data separation. Processes run in their own domains, preventing processes from accessing files used by other processes, as well as preventing processes from accessing other processes. For example, when running SELinux, unless otherwise configured, an attacker cannot compromise a Samba server and then use that Samba server as an attack vector to read and write to files used by other processes, such as MariaDB databases. SELinux helps mitigate the damage caused by configuration mistakes. Domain Name System (DNS) servers often replicate information between each other in what is known as a zone transfer. Attackers can use zone transfers to update DNS servers with false information. When running the Berkeley Internet Name Domain (BIND) as a DNS server in Red Hat Enterprise Linux, even if an administrator forgets to limit which servers can perform a zone transfer, the default SELinux policy prevents zone files [1] from being updated through zone transfers, whether by the BIND named daemon itself or by other processes. See the NetworkWorld.com article, A seatbelt for server software: SELinux blocks real-world exploits [2], for background information about SELinux, and information about various exploits that SELinux has prevented. [1] Text files that include information, such as host name to IP address mappings, that are used by DNS servers. [2] Marti, Don. "A seatbelt for server software: SELinux blocks real-world exploits". Published 24 February 2008. Accessed 27 August 2009: http://www.networkworld.com/article/2283723/lan-wan/a-seatbelt-for-server-software--selinux-blocks-real-world-exploits.html . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-introduction-examples |
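The confined-user example above can be tried on a test system. A minimal, hedged sketch: it assumes the policycoreutils utilities that provide semanage are installed and uses a hypothetical account named example_user.
# Map the hypothetical Linux account example_user to the confined SELinux user_u user (run as root)
semanage login -a -s user_u example_user
# Verify the mapping
semanage login -l | grep example_user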
Chapter 22. Configuring the cluster-wide proxy | Chapter 22. Configuring the cluster-wide proxy Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure OpenShift Container Platform to use a proxy by modifying the Proxy object for existing clusters or by configuring the proxy settings in the install-config.yaml file for new clusters. After you enable a cluster-wide egress proxy for your cluster on a supported platform, Red Hat Enterprise Linux CoreOS (RHCOS) populates the status.noProxy parameter with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your install-config.yaml file that exists on the supported platform. Note As a postinstallation task, you can change the networking.clusterNetwork[].cidr value, but not the networking.machineNetwork[].cidr and the networking.serviceNetwork[] values. For more information, see "Configuring the cluster network range". For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the status.noProxy parameter is also populated with the instance metadata endpoint, 169.254.169.254. Example of values added to the status: segment of a Proxy object by RHCOS apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster # ... networking: clusterNetwork: 1 - cidr: <ip_address_from_cidr> hostPrefix: 23 networkType: OVNKubernetes machineNetwork: 2 - cidr: <ip_address_from_cidr> serviceNetwork: 3 - 172.30.0.0/16 # ... status: noProxy: - localhost - .cluster.local - .svc - 127.0.0.1 - <api_server_internal_url> 4 # ... 1 Specify IP address blocks from which pod IP addresses are allocated. The default value is 10.128.0.0/14 with a host prefix of /23 . 2 Specify the IP address blocks for machines. The default value is 10.0.0.0/16 . 3 Specify the IP address block for services. The default value is 172.30.0.0/16 . 4 You can find the URL of the internal API server by running the oc get infrastructures.config.openshift.io cluster -o jsonpath='{.status.etcdDiscoveryDomain}' command. Important If your installation type does not include setting the networking.machineNetwork[].cidr field, you must include the machine IP addresses manually in the .status.noProxy field to make sure that the traffic between nodes can bypass the proxy. 22.1. Prerequisites Review the sites that your cluster requires access to and determine whether any of them must bypass the proxy. By default, all cluster system egress traffic is proxied, including calls to the cloud provider API for the cloud that hosts your cluster. The system-wide proxy affects system components only, not user workloads. If necessary, add sites to the spec.noProxy parameter of the Proxy object to bypass the proxy. 22.2. Enabling the cluster-wide proxy The Proxy object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy object is still generated but it will have a nil spec. For example: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: "" status: A cluster administrator can configure the proxy for OpenShift Container Platform by modifying this cluster Proxy object. Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Warning Enabling the cluster-wide proxy causes the Machine Config Operator (MCO) to trigger a node reboot. 
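Because the MCO applies the proxy change by rolling a new machine configuration out to each node, you can optionally watch the rollout progress before proceeding. A minimal sketch using standard oc commands; pool and node names vary by cluster:
$ oc get machineconfigpool
$ oc get nodes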
Prerequisites Cluster administrator permissions OpenShift Container Platform oc CLI tool installed Procedure Create a config map that contains any additional CA certificates required for proxying HTTPS connections. Note You can skip this step if the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Create a file called user-ca-bundle.yaml with the following contents, and provide the values of your PEM-encoded certificates: apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4 1 This data key must be named ca-bundle.crt . 2 One or more PEM-encoded X.509 certificates used to sign the proxy's identity certificate. 3 The config map name that will be referenced from the Proxy object. 4 The config map must be in the openshift-config namespace. Create the config map from this file: $ oc create -f user-ca-bundle.yaml Use the oc edit command to modify the Proxy object: $ oc edit proxy/cluster Configure the necessary fields for the proxy: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. The URL scheme must be either http or https . Specify a URL for the proxy that supports the URL scheme. For example, most proxies will report an error if they are configured to use https but they only support http . This failure message may not propagate to the logs and can appear to be a network connection failure instead. If using a proxy that listens for https connections from the cluster, you may need to configure the cluster to accept the CAs and certificates that the proxy uses. 3 A comma-separated list of destination domain names, domains, IP addresses (or other network CIDRs), and port numbers to exclude from proxying. Note Port numbers are only supported when configuring IPv6 addresses. Port numbers are not supported when configuring IPv4 addresses. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. 4 One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status. 5 A reference to the config map in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. Note that the config map must already exist before referencing it here. This field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Save the file to apply the changes. 22.3. Removing the cluster-wide proxy The cluster Proxy object cannot be deleted. To remove the proxy from a cluster, remove all spec fields from the Proxy object. 
Prerequisites Cluster administrator permissions OpenShift Container Platform oc CLI tool installed Procedure Use the oc edit command to modify the proxy: $ oc edit proxy/cluster Remove all spec fields from the Proxy object. For example: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: {} Save the file to apply the changes. 22.4. Verifying the cluster-wide proxy configuration After the cluster-wide proxy configuration is deployed, you can verify that it is working as expected. Follow these steps to check the logs and validate the implementation. Prerequisites You have cluster administrator permissions. You have the OpenShift Container Platform oc CLI tool installed. Procedure Check the proxy configuration status using the oc command: $ oc get proxy/cluster -o yaml Verify the proxy fields in the output to ensure they match your configuration. Specifically, check the spec.httpProxy , spec.httpsProxy , spec.noProxy , and spec.trustedCA fields. Inspect the status of the Proxy object: $ oc get proxy/cluster -o jsonpath='{.status}' Example output { status: httpProxy: http://user:xxx@xxxx:3128 httpsProxy: http://user:xxx@xxxx:3128 noProxy: .cluster.local,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,localhost,test.no-proxy.com } Check the logs of the Machine Config Operator (MCO) to ensure that the configuration changes were applied successfully: $ oc logs -n openshift-machine-config-operator $(oc get pods -n openshift-machine-config-operator -l k8s-app=machine-config-operator -o name) Look for messages that indicate the proxy settings were applied and the nodes were rebooted if necessary. Verify that system components are using the proxy by checking the logs of a component that makes external requests, such as the Cluster Version Operator (CVO): $ oc logs -n openshift-cluster-version $(oc get pods -n openshift-cluster-version -l k8s-app=cluster-version-operator -o name) Look for log entries that show that external requests have been routed through the proxy. Additional resources Configuring the cluster network range Understanding the CA Bundle certificate Proxy certificates How is the cluster-wide proxy setting applied to OpenShift Container Platform nodes? | [
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster networking: clusterNetwork: 1 - cidr: <ip_address_from_cidr> hostPrefix: 23 network type: OVNKubernetes machineNetwork: 2 - cidr: <ip_address_from_cidr> serviceNetwork: 3 - 172.30.0.0/16 status: noProxy: - localhost - .cluster.local - .svc - 127.0.0.1 - <api_server_internal_url> 4",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:",
"apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4",
"oc create -f user-ca-bundle.yaml",
"oc edit proxy/cluster",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5",
"oc edit proxy/cluster",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: {}",
"oc get proxy/cluster -o yaml",
"oc get proxy/cluster -o jsonpath='{.status}'",
"{ status: httpProxy: http://user:xxx@xxxx:3128 httpsProxy: http://user:xxx@xxxx:3128 noProxy: .cluster.local,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,localhost,test.no-proxy.com }",
"oc logs -n openshift-machine-config-operator USD(oc get pods -n openshift-machine-config-operator -l k8s-app=machine-config-operator -o name)",
"oc logs -n openshift-cluster-version USD(oc get pods -n openshift-cluster-version -l k8s-app=machine-config-operator -o name)"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/networking/enable-cluster-wide-proxy |
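The introduction to Chapter 22 notes that new clusters can be configured through the install-config.yaml file rather than by editing the Proxy object. A hedged sketch of the relevant stanza, with placeholder values; the additionalTrustBundle field is only needed when the proxy uses a certificate authority that is not in the RHCOS trust bundle:
apiVersion: v1
baseDomain: example.com
proxy:
  httpProxy: http://<username>:<password>@<proxy_host>:<port>
  httpsProxy: https://<username>:<password>@<proxy_host>:<port>
  noProxy: .example.com,10.0.0.0/16
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <proxy_certificate_authority>
  -----END CERTIFICATE-----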
Chapter 7. Dynamic plugins | Chapter 7. Dynamic plugins 7.1. Overview of dynamic plugins 7.1.1. About dynamic plugins A dynamic plugin allows you to add custom pages and other extensions to your interface at runtime. The ConsolePlugin custom resource registers plugins with the console, and a cluster administrator enables plugins in the console-operator configuration. 7.1.2. Key features A dynamic plugin allows you to make the following customizations to the OpenShift Container Platform experience: Add custom pages. Add perspectives beyond administrator and developer. Add navigation items. Add tabs and actions to resource pages. 7.1.3. General guidelines When creating your plugin, follow these general guidelines: Node.js and yarn are required to build and run your plugin. Prefix your CSS class names with your plugin name to avoid collisions. For example, my-plugin__heading and my-plugin__icon . Maintain a consistent look, feel, and behavior with other console pages. Follow react-i18next localization guidelines when creating your plugin. You can use the useTranslation hook like the one in the following example: const Header: React.FC = () => { const { t } = useTranslation('plugin__console-demo-plugin'); return <h1>{t('Hello, World!')}</h1>; }; Avoid selectors that could affect markup outside of your plugin's components, such as element selectors. These are not APIs and are subject to change, so using them might break your plugin. PatternFly guidelines When creating your plugin, follow these guidelines for using PatternFly: Use PatternFly components and PatternFly CSS variables. Core PatternFly components are available through the SDK. Using PatternFly components and variables helps your plugin look consistent in future console versions. Make your plugin accessible by following PatternFly's accessibility fundamentals . Avoid using other CSS libraries such as Bootstrap or Tailwind. They can conflict with PatternFly and will not match the console look and feel. 7.2. Getting started with dynamic plugins To get started using the dynamic plugin, you must set up your environment to write a new OpenShift Container Platform dynamic plugin. For an example of how to write a new plugin, see Adding a tab to the pods page . 7.2.1. Dynamic plugin development You can run the plugin using a local development environment. The OpenShift Container Platform web console runs in a container connected to the cluster you have logged into. Prerequisites You must have an OpenShift cluster running. You must have the OpenShift CLI ( oc ) installed. You must have yarn installed. You must have Docker v3.2.0 or newer or Podman installed and running. Procedure In your terminal, run the following command to install the dependencies for your plugin using yarn: $ yarn install After installing, run the following command to start yarn: $ yarn run start In another terminal window, log in to OpenShift Container Platform through the CLI: $ oc login Run the OpenShift Container Platform web console in a container connected to the cluster you have logged into by running the following command: $ yarn run start-console Verification Visit localhost:9000 to view the running plugin. Inspect the value of window.SERVER_FLAGS.consolePlugins to see the list of plugins which load at runtime.
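To confirm that the locally served plugin is actually picked up, you can inspect the flag mentioned above directly in the browser developer console; the plugin names shown here are illustrative:
// Run in the browser developer console at localhost:9000
window.SERVER_FLAGS.consolePlugins
// Expected to include your plugin, for example: ["console-demo-plugin", "my-plugin"]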
7.3. Deploy your plugin on a cluster You can deploy the plugin to an OpenShift Container Platform cluster. 7.3.1. Build an image with Docker To deploy your plugin on a cluster, you need to build an image and push it to an image registry. Procedure Build the image with the following command: $ docker build -t quay.io/my-repository/my-plugin:latest . Optional: If you want to test your image, run the following command: $ docker run -it --rm -d -p 9001:80 quay.io/my-repository/my-plugin:latest Push the image by running the following command: $ docker push quay.io/my-repository/my-plugin:latest 7.3.2. Deploy your plugin on a cluster After pushing an image with your changes to a registry, you can deploy the plugin to a cluster. Procedure To deploy your plugin to a cluster, install a Helm chart with the name of the plugin as the Helm release name into a new namespace or an existing namespace as specified by the -n command-line option. Provide the location of the image within the plugin.image parameter by using the following command: $ helm upgrade -i my-plugin charts/openshift-console-plugin -n my-plugin-namespace --create-namespace --set plugin.image=my-plugin-image-location Where: -n <my-plugin-namespace> Specifies an existing namespace to deploy your plugin into. --create-namespace Optional: If deploying to a new namespace, use this parameter. --set plugin.image=my-plugin-image-location Specifies the location of the image within the plugin.image parameter. Optional: You can specify any additional parameters by using the set of supported parameters in the charts/openshift-console-plugin/values.yaml file. plugin: name: "" description: "" image: "" imagePullPolicy: IfNotPresent replicas: 2 port: 9443 securityContext: enabled: true podSecurityContext: enabled: true runAsNonRoot: true seccompProfile: type: RuntimeDefault containerSecurityContext: enabled: true allowPrivilegeEscalation: false capabilities: drop: - ALL resources: requests: cpu: 10m memory: 50Mi basePath: / certificateSecretName: "" serviceAccount: create: true annotations: {} name: "" patcherServiceAccount: create: true annotations: {} name: "" jobs: patchConsoles: enabled: true image: "registry.redhat.io/openshift4/ose-tools-rhel8@sha256:e44074f21e0cca6464e50cb6ff934747e0bd11162ea01d522433a1a1ae116103" podSecurityContext: enabled: true runAsNonRoot: true seccompProfile: type: RuntimeDefault containerSecurityContext: enabled: true allowPrivilegeEscalation: false capabilities: drop: - ALL resources: requests: cpu: 10m memory: 50Mi Verification View the list of enabled plugins by navigating from Administration Cluster Settings Configuration Console operator.openshift.io Console plugins or by visiting the Overview page. Note It can take a few minutes for the new plugin configuration to appear. If you do not see your plugin, you might need to refresh your browser if the plugin was recently enabled. If you receive any errors at runtime, check the JS console in browser developer tools to look for any errors in your plugin code. 7.3.3. Disabling your plugin in the browser Console users can use the disable-plugins query parameter to disable specific or all dynamic plugins that would normally get loaded at run-time. Procedure To disable specific plugins, add the names of the plugins you want to disable to the comma-separated list in the disable-plugins query parameter. To disable all plugins, leave an empty string in the disable-plugins query parameter. Note Cluster administrators can disable plugins in the Cluster Settings page of the web console. 7.3.4. Additional resources Understanding Helm
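As a complement to the web console verification step in section 7.3.2, the list of enabled plugins can also be read from the Console operator configuration with the CLI. A minimal sketch; cluster is the default name of the Console operator config resource:
$ oc get consoles.operator.openshift.io cluster -o jsonpath='{.spec.plugins}'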
7.4. Dynamic plugin example Before working through the example, verify that the plugin is working by following the steps in Dynamic plugin development . 7.4.1. Adding a tab to the pods page There are different customizations you can make to the OpenShift Container Platform web console. The following procedure adds a tab to the Pod details page as an example extension to your plugin. Note The OpenShift Container Platform web console runs in a container connected to the cluster you have logged into. See "Dynamic plugin development" for information to test the plugin before creating your own. Procedure Visit the console-plugin-template repository containing a template for creating plugins in a new tab. Important Custom plugin code is not supported by Red Hat. Only cooperative community support is available for your plugin. Create a GitHub repository for the template by clicking Use this template Create new repository . Rename the new repository with the name of your plugin. Clone the new repository to your local machine so you can edit the code. Edit the package.json file, adding your plugin's metadata to the consolePlugin declaration. For example: "consolePlugin": { "name": "my-plugin", 1 "version": "0.0.1", 2 "displayName": "My Plugin", 3 "description": "Enjoy this shiny, new console plugin!", 4 "exposedModules": { "ExamplePage": "./components/ExamplePage" }, "dependencies": { "@console/pluginAPI": "/*" } } 1 Update the name of your plugin. 2 Update the version. 3 Update the display name for your plugin. 4 Update the description with a synopsis about your plugin. Add the following to the console-extensions.json file: { "type": "console.tab/horizontalNav", "properties": { "page": { "name": "Example Tab", "href": "example" }, "model": { "group": "core", "version": "v1", "kind": "Pod" }, "component": { "$codeRef": "ExampleTab" } } } Edit the package.json file to include the following changes: "exposedModules": { "ExamplePage": "./components/ExamplePage", "ExampleTab": "./components/ExampleTab" } Write a message to display on a new custom tab on the Pods page by creating a new file src/components/ExampleTab.tsx and adding the following script: import * as React from 'react'; export default function ExampleTab() { return ( <p>This is a custom tab added to a resource using a dynamic plugin.</p> ); } Install a Helm chart with the name of the plugin as the Helm release name into a new namespace or an existing namespace as specified by the -n command-line option to deploy your plugin on a cluster. Provide the location of the image within the plugin.image parameter by using the following command: $ helm upgrade -i my-plugin charts/openshift-console-plugin -n my-plugin-namespace --create-namespace --set plugin.image=my-plugin-image-location Note For more information on deploying your plugin on a cluster, see "Deploy your plugin on a cluster". Verification Visit a Pod page to view the added tab. 7.5. Dynamic plugin reference You can add extensions that allow you to customize your plugin. Those extensions are then loaded into the console at run-time. 7.5.1. Dynamic plugin extension types console.action/filter ActionFilter can be used to filter an action. Name Value Type Optional Description contextId string no The context ID helps to narrow the scope of contributed actions to a particular area of the application. Examples include topology and helm . filter CodeRef<(scope: any, action: Action) ⇒ boolean> no A function that will filter actions based on some conditions. 
scope : The scope in which actions should be provided for. A hook might be required if you want to remove the ModifyCount action from a deployment with a horizontal pod autoscaler (HPA). console.action/group ActionGroup contributes an action group that can also be a submenu. Name Value Type Optional Description id string no ID used to identify the action section. label string yes The label to display in the UI. Required for submenus. submenu boolean yes Whether this group should be displayed as submenu. insertBefore string | string[] yes Insert this item before the item referenced here. For arrays, the first one found in order is used. insertAfter string | string[] yes Insert this item after the item referenced here. For arrays, the first one found in order is used. The insertBefore value takes precedence. console.action/provider ActionProvider contributes a hook that returns list of actions for specific context. Name Value Type Optional Description contextId string no The context ID helps to narrow the scope of contributed actions to a particular area of the application. Examples include topology and helm . provider CodeRef<ExtensionHook<Action[], any>> no A React hook that returns actions for the given scope. If contextId = resource , then the scope will always be a Kubernetes resource object. console.action/resource-provider ResourceActionProvider contributes a hook that returns list of actions for specific resource model. Name Value Type Optional Description model ExtensionK8sKindVersionModel no The model for which this provider provides actions for. provider CodeRef<ExtensionHook<Action[], any>> no A react hook which returns actions for the given resource model console.alert-action This extension can be used to trigger a specific action when a specific Prometheus alert is observed by the Console based on its rule.name value. Name Value Type Optional Description alert string no Alert name as defined by alert.rule.name property text string no action CodeRef<(alert: any) ⇒ void> no Function to perform side effect console.catalog/item-filter This extension can be used for plugins to contribute a handler that can filter specific catalog items. For example, the plugin can contribute a filter that filters helm charts from specific provider. Name Value Type Optional Description catalogId string | string[] no The unique identifier for the catalog this provider contributes to. type string no Type ID for the catalog item type. filter CodeRef<(item: CatalogItem) ⇒ boolean> no Filters items of a specific type. Value is a function that takes CatalogItem[] and returns a subset based on the filter criteria. console.catalog/item-metadata This extension can be used to contribute a provider that adds extra metadata to specific catalog items. Name Value Type Optional Description catalogId string | string[] no The unique identifier for the catalog this provider contributes to. type string no Type ID for the catalog item type. provider CodeRef<ExtensionHook<CatalogItemMetadataProviderFunction, CatalogExtensionHookOptions>> no A hook which returns a function that will be used to provide metadata to catalog items of a specific type. console.catalog/item-provider This extension allows plugins to contribute a provider for a catalog item type. For example, a Helm Plugin can add a provider that fetches all the Helm Charts. This extension can also be used by other plugins to add more items to a specific catalog item type. 
Name Value Type Optional Description catalogId string | string[] no The unique identifier for the catalog this provider contributes to. type string no Type ID for the catalog item type. title string no Title for the catalog item provider provider CodeRef<ExtensionHook<CatalogItem<any>[], CatalogExtensionHookOptions>> no Fetch items and normalize it for the catalog. Value is a react effect hook. priority number yes Priority for this provider. Defaults to 0 . Higher priority providers may override catalog items provided by other providers. console.catalog/item-type This extension allows plugins to contribute a new type of catalog item. For example, a Helm plugin can define a new catalog item type as HelmCharts that it wants to contribute to the Developer Catalog. Name Value Type Optional Description type string no Type for the catalog item. title string no Title for the catalog item. catalogDescription string | CodeRef<React.ReactNode> yes Description for the type specific catalog. typeDescription string yes Description for the catalog item type. filters CatalogItemAttribute[] yes Custom filters specific to the catalog item. groupings CatalogItemAttribute[] yes Custom groupings specific to the catalog item. console.catalog/item-type-metadata This extension allows plugins to contribute extra metadata like custom filters or groupings for any catalog item type. For example, a plugin can attach a custom filter for HelmCharts that can filter based on chart provider. Name Value Type Optional Description type string no Type for the catalog item. filters CatalogItemAttribute[] yes Custom filters specific to the catalog item. groupings CatalogItemAttribute[] yes Custom groupings specific to the catalog item. console.cluster-overview/inventory-item Adds a new inventory item into cluster overview page. Name Value Type Optional Description component CodeRef<React.ComponentType<{}>> no The component to be rendered. console.cluster-overview/multiline-utilization-item Adds a new cluster overview multi-line utilization item. Name Value Type Optional Description title string no The title of the utilization item. getUtilizationQueries CodeRef<GetMultilineQueries> no Prometheus utilization query. humanize CodeRef<Humanize> no Convert Prometheus data to human-readable form. TopConsumerPopovers CodeRef<React.ComponentType<TopConsumerPopoverProps>[]> yes Shows Top consumer popover instead of plain value. console.cluster-overview/utilization-item Adds a new cluster overview utilization item. Name Value Type Optional Description title string no The title of the utilization item. getUtilizationQuery CodeRef<GetQuery> no Prometheus utilization query. humanize CodeRef<Humanize> no Convert Prometheus data to human-readable form. getTotalQuery CodeRef<GetQuery> yes Prometheus total query. getRequestQuery CodeRef<GetQuery> yes Prometheus request query. getLimitQuery CodeRef<GetQuery> yes Prometheus limit query. TopConsumerPopover CodeRef<React.ComponentType<TopConsumerPopoverProps>> yes Shows Top consumer popover instead of plain value. console.context-provider Adds a new React context provider to the web console application root. Name Value Type Optional Description provider CodeRef<Provider<T>> no Context Provider component. useValueHook CodeRef<() ⇒ T> no Hook for the Context value. console.dashboards/card Adds a new dashboard card. Name Value Type Optional Description tab string no The ID of the dashboard tab to which the card will be added. 
position 'LEFT' | 'RIGHT' | 'MAIN' no The grid position of the card on the dashboard. component CodeRef<React.ComponentType<{}>> no Dashboard card component. span OverviewCardSpan yes Card's vertical span in the column. Ignored for small screens; defaults to 12 . console.dashboards/custom/overview/detail/item Adds an item to the Details card of Overview Dashboard. Name Value Type Optional Description title string no Details card title component CodeRef<React.ComponentType<{}>> no The value, rendered by the OverviewDetailItem component valueClassName string yes Value for a className isLoading CodeRef<() ⇒ boolean> yes Function returning the loading state of the component error CodeRef<() ⇒ string> yes Function returning errors to be displayed by the component console.dashboards/overview/activity/resource Adds an activity to the Activity Card of Overview Dashboard where the triggering of activity is based on watching a Kubernetes resource. Name Value Type Optional Description k8sResource CodeRef<FirehoseResource & { isList: true; }> no The utilization item to be replaced. component CodeRef<React.ComponentType<K8sActivityProps<T>>> no The action component. isActivity CodeRef<(resource: T) ⇒ boolean> yes Function which determines if the given resource represents the action. If not defined, every resource represents activity. getTimestamp CodeRef<(resource: T) ⇒ Date> yes Time stamp for the given action, which will be used for ordering. console.dashboards/overview/health/operator Adds a health subsystem to the status card of the Overview dashboard, where the source of status is a Kubernetes REST API. Name Value Type Optional Description title string no Title of Operators section in the pop-up menu. resources CodeRef<FirehoseResource[]> no Kubernetes resources which will be fetched and passed to healthHandler . getOperatorsWithStatuses CodeRef<GetOperatorsWithStatuses<T>> yes Resolves status for the Operators. operatorRowLoader CodeRef<React.ComponentType<OperatorRowProps<T>>> yes Loader for pop-up row component. viewAllLink string yes Links to all resources page. If not provided, then a list page of the first resource from resources prop is used. console.dashboards/overview/health/prometheus Adds a health subsystem to the status card of Overview dashboard where the source of status is Prometheus. Name Value Type Optional Description title string no The display name of the subsystem. queries string[] no The Prometheus queries. healthHandler CodeRef<PrometheusHealthHandler> no Resolve the subsystem's health. additionalResource CodeRef<FirehoseResource> yes Additional resource which will be fetched and passed to healthHandler . popupComponent CodeRef<React.ComponentType<PrometheusHealthPopupProps>> yes Loader for pop-up menu content. If defined, a health item is represented as a link, which opens a pop-up menu with the given content. popupTitle string yes The title of the popover. disallowedControlPlaneTopology string[] yes Control plane topology for which the subsystem should be hidden. console.dashboards/overview/health/resource Adds a health subsystem to the status card of Overview dashboard where the source of status is a Kubernetes Resource. Name Value Type Optional Description title string no The display name of the subsystem. resources CodeRef<WatchK8sResources<T>> no Kubernetes resources that will be fetched and passed to healthHandler . healthHandler CodeRef<ResourceHealthHandler<T>> no Resolve the subsystem's health. 
popupComponent CodeRef<WatchK8sResults<T>> yes Loader for pop-up menu content. If defined, a health item is represented as a link, which opens a pop-up menu with the given content. popupTitle string yes The title of the popover. console.dashboards/overview/health/url Adds a health subsystem to the status card of Overview dashboard where the source of status is a Kubernetes REST API. Name Value Type Optional Description title string no The display name of the subsystem. url string no The URL to fetch data from. It will be prefixed with base Kubernetes URL. healthHandler CodeRef<URLHealthHandler<T, K8sResourceCommon | K8sResourceCommon[]>> no Resolve the subsystem's health. additionalResource CodeRef<FirehoseResource> yes Additional resource which will be fetched and passed to healthHandler . popupComponent CodeRef<React.ComponentType<{ healthResult?: T; healthResultError?: any; k8sResult?: FirehoseResult<R>; }>> yes Loader for popup content. If defined, a health item will be represented as a link which opens popup with given content. popupTitle string yes The title of the popover. console.dashboards/overview/inventory/item Adds a resource tile to the overview inventory card. Name Value Type Optional Description model CodeRef<T> no The model for resource which will be fetched. Used to get the model's label or abbr . mapper CodeRef<StatusGroupMapper<T, R>> yes Function which maps various statuses to groups. additionalResources CodeRef<WatchK8sResources<R>> yes Additional resources which will be fetched and passed to the mapper function. console.dashboards/overview/inventory/item/group Adds an inventory status group. Name Value Type Optional Description id string no The ID of the status group. icon CodeRef<React.ReactElement<any, string | React.JSXElementConstructor<any>>> no React component representing the status group icon. console.dashboards/overview/inventory/item/replacement Replaces an overview inventory card. Name Value Type Optional Description model CodeRef<T> no The model for resource which will be fetched. Used to get the model's label or abbr . mapper CodeRef<StatusGroupMapper<T, R>> yes Function which maps various statuses to groups. additionalResources CodeRef<WatchK8sResources<R>> yes Additional resources which will be fetched and passed to the mapper function. console.dashboards/overview/prometheus/activity/resource Adds an activity to the Activity Card of Prometheus Overview Dashboard where the triggering of activity is based on watching a Kubernetes resource. Name Value Type Optional Description queries string[] no Queries to watch. component CodeRef<React.ComponentType<PrometheusActivityProps>> no The action component. isActivity CodeRef<(results: PrometheusResponse[]) ⇒ boolean> yes Function which determines if the given resource represents the action. If not defined, every resource represents activity. console.dashboards/project/overview/item Adds a resource tile to the project overview inventory card. Name Value Type Optional Description model CodeRef<T> no The model for resource which will be fetched. Used to get the model's label or abbr . mapper CodeRef<StatusGroupMapper<T, R>> yes Function which maps various statuses to groups. additionalResources CodeRef<WatchK8sResources<R>> yes Additional resources which will be fetched and passed to the mapper function. console.dashboards/tab Adds a new dashboard tab, placed after the Overview tab. Name Value Type Optional Description id string no A unique tab identifier, used as tab link href and when adding cards to this tab. 
navSection 'home' | 'storage' no Navigation section to which the tab belongs to. title string no The title of the tab. console.file-upload This extension can be used to provide a handler for the file drop action on specific file extensions. Name Value Type Optional Description fileExtensions string[] no Supported file extensions. handler CodeRef<FileUploadHandler> no Function which handles the file drop action. console.flag Gives full control over the web console feature flags. Name Value Type Optional Description handler CodeRef<FeatureFlagHandler> no Used to set or unset arbitrary feature flags. console.flag/hookProvider Gives full control over the web console feature flags with hook handlers. Name Value Type Optional Description handler CodeRef<FeatureFlagHandler> no Used to set or unset arbitrary feature flags. console.flag/model Adds a new web console feature flag driven by the presence of a CustomResourceDefinition (CRD) object on the cluster. Name Value Type Optional Description flag string no The name of the flag to set after the CRD is detected. model ExtensionK8sModel no The model which refers to a CRD. console.global-config This extension identifies a resource used to manage the configuration of the cluster. A link to the resource will be added to the Administration Cluster Settings Configuration page. Name Value Type Optional Description id string no Unique identifier for the cluster config resource instance. name string no The name of the cluster config resource instance. model ExtensionK8sModel no The model which refers to a cluster config resource. namespace string no The namespace of the cluster config resource instance. console.model-metadata Customize the display of models by overriding values retrieved and generated through API discovery. Name Value Type Optional Description model ExtensionK8sGroupModel no The model to customize. May specify only a group, or optional version and kind. badge ModelBadge yes Whether to consider this model reference as Technology Preview or Developer Preview. color string yes The color to associate to this model. label string yes Override the label. Requires kind be provided. labelPlural string yes Override the plural label. Requires kind be provided. abbr string yes Customize the abbreviation. Defaults to all uppercase characters in kind , up to 4 characters long. Requires that kind is provided. console.navigation/href This extension can be used to contribute a navigation item that points to a specific link in the UI. Name Value Type Optional Description id string no A unique identifier for this item. name string no The name of this item. href string no The link href value. perspective string yes The perspective ID to which this item belongs to. If not specified, contributes to the default perspective. section string yes Navigation section to which this item belongs to. If not specified, render this item as a top level link. dataAttributes { [key: string]: string; } yes Adds data attributes to the DOM. startsWith string[] yes Mark this item as active when the URL starts with one of these paths. insertBefore string | string[] yes Insert this item before the item referenced here. For arrays, the first one found in order is used. insertAfter string | string[] yes Insert this item after the item referenced here. For arrays, the first one found in order is used. insertBefore takes precedence. namespaced boolean yes If true , adds /ns/active-namespace to the end. 
prefixNamespaced boolean yes If true , adds /k8s/ns/active-namespace to the beginning. console.navigation/resource-cluster This extension can be used to contribute a navigation item that points to a cluster resource details page. The K8s model of that resource can be used to define the navigation item. Name Value Type Optional Description id string no A unique identifier for this item. model ExtensionK8sModel no The model for which this navigation item links to. perspective string yes The perspective ID to which this item belongs to. If not specified, contributes to the default perspective. section string yes Navigation section to which this item belongs to. If not specified, render this item as a top-level link. dataAttributes { [key: string]: string; } yes Adds data attributes to the DOM. startsWith string[] yes Mark this item as active when the URL starts with one of these paths. insertBefore string | string[] yes Insert this item before the item referenced here. For arrays, the first one found in order is used. insertAfter string | string[] yes Insert this item after the item referenced here. For arrays, the first one found in order is used. insertBefore takes precedence. name string yes Overrides the default name. If not supplied the name of the link will equal the plural value of the model. console.navigation/resource-ns This extension can be used to contribute a navigation item that points to a namespaced resource details page. The K8s model of that resource can be used to define the navigation item. Name Value Type Optional Description id string no A unique identifier for this item. model ExtensionK8sModel no The model for which this navigation item links to. perspective string yes The perspective ID to which this item belongs to. If not specified, contributes to the default perspective. section string yes Navigation section to which this item belongs to. If not specified, render this item as a top-level link. dataAttributes { [key: string]: string; } yes Adds data attributes to the DOM. startsWith string[] yes Mark this item as active when the URL starts with one of these paths. insertBefore string | string[] yes Insert this item before the item referenced here. For arrays, the first one found in order is used. insertAfter string | string[] yes Insert this item after the item referenced here. For arrays, the first one found in order is used. insertBefore takes precedence. name string yes Overrides the default name. If not supplied the name of the link will equal the plural value of the model. console.navigation/section This extension can be used to define a new section of navigation items in the navigation tab. Name Value Type Optional Description id string no A unique identifier for this item. perspective string yes The perspective ID to which this item belongs to. If not specified, contributes to the default perspective. dataAttributes { [key: string]: string; } yes Adds data attributes to the DOM. insertBefore string | string[] yes Insert this item before the item referenced here. For arrays, the first one found in order is used. insertAfter string | string[] yes Insert this item after the item referenced here. For arrays, the first one found in order is used. insertBefore takes precedence. name string yes Name of this section. If not supplied, only a separator will be shown above the section. console.navigation/separator This extension can be used to add a separator between navigation items in the navigation. 
Name Value Type Optional Description id string no A unique identifier for this item. perspective string yes The perspective ID to which this item belongs to. If not specified, contributes to the default perspective. section string yes Navigation section to which this item belongs to. If not specified, render this item as a top level link. dataAttributes { [key: string]: string; } yes Adds data attributes to the DOM. insertBefore string | string[] yes Insert this item before the item referenced here. For arrays, the first one found in order is used. insertAfter string | string[] yes Insert this item after the item referenced here. For arrays, the first one found in order is used. insertBefore takes precedence. console.page/resource/details Name Value Type Optional Description model ExtensionK8sGroupKindModel no The model for which this resource page links to. component CodeRef<React.ComponentType<{ match: match<{}>; namespace: string; model: ExtensionK8sModel; }>> no The component to be rendered when the route matches. console.page/resource/list Adds new resource list page to Console router. Name Value Type Optional Description model ExtensionK8sGroupKindModel no The model for which this resource page links to. component CodeRef<React.ComponentType<{ match: match<{}>; namespace: string; model: ExtensionK8sModel; }>> no The component to be rendered when the route matches. console.page/route Adds a new page to the web console router. See React Router . Name Value Type Optional Description component CodeRef<React.ComponentType<RouteComponentProps<{}, StaticContext, any>>> no The component to be rendered when the route matches. path string | string[] no Valid URL path or array of paths that path-to-regexp@^1.7.0 understands. perspective string yes The perspective to which this page belongs to. If not specified, contributes to all perspectives. exact boolean yes When true, will only match if the path matches the location.pathname exactly. console.page/route/standalone Adds a new standalone page, rendered outside the common page layout, to the web console router. See React Router . Name Value Type Optional Description component CodeRef<React.ComponentType<RouteComponentProps<{}, StaticContext, any>>> no The component to be rendered when the route matches. path string | string[] no Valid URL path or array of paths that path-to-regexp@^1.7.0 understands. exact boolean yes When true, will only match if the path matches the location.pathname exactly. console.perspective This extension contributes a new perspective to the console, which enables customization of the navigation menu. Name Value Type Optional Description id string no The perspective identifier. name string no The perspective display name. icon CodeRef<LazyComponent> no The perspective display icon. landingPageURL CodeRef<(flags: { [key: string]: boolean; }, isFirstVisit: boolean) ⇒ string> no The function to get perspective landing page URL. importRedirectURL CodeRef<(namespace: string) ⇒ string> no The function to get redirect URL for import flow. default boolean yes Whether the perspective is the default. There can only be one default. defaultPins ExtensionK8sModel[] yes Default pinned resources on the nav usePerspectiveDetection CodeRef<() ⇒ [boolean, boolean]> yes The hook to detect default perspective console.project-overview/inventory-item Adds a new inventory item into the Project Overview page. Name Value Type Optional Description component CodeRef<React.ComponentType<{ projectName: string; }>> no The component to be rendered. 
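The navigation and page extension points described above are declared in a plugin's console-extensions.json file. A hedged sketch with illustrative IDs, paths, and section names; ExamplePage is a hypothetical module that would also need to be listed under exposedModules in package.json:
[
  {
    "type": "console.navigation/href",
    "properties": {
      "id": "example-nav-item",
      "name": "Example",
      "href": "/example",
      "perspective": "admin",
      "section": "home"
    }
  },
  {
    "type": "console.page/route",
    "properties": {
      "exact": true,
      "path": "/example",
      "component": { "$codeRef": "ExamplePage" }
    }
  }
]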
console.project-overview/utilization-item Adds a new project overview utilization item. Name Value Type Optional Description title string no The title of the utilization item. getUtilizationQuery CodeRef<GetProjectQuery> no Prometheus utilization query. humanize CodeRef<Humanize> no Convert Prometheus data to human-readable form. getTotalQuery CodeRef<GetProjectQuery> yes Prometheus total query. getRequestQuery CodeRef<GetProjectQuery> yes Prometheus request query. getLimitQuery CodeRef<GetProjectQuery> yes Prometheus limit query. TopConsumerPopover CodeRef<React.ComponentType<TopConsumerPopoverProps>> yes Shows the top consumer popover instead of plain value. console.pvc/alert This extension can be used to contribute custom alerts on the PVC details page. Name Value Type Optional Description alert CodeRef<React.ComponentType<{ pvc: K8sResourceCommon; }>> no The alert component. console.pvc/create-prop This extension can be used to specify additional properties that will be used when creating PVC resources on the PVC list page. Name Value Type Optional Description label string no Label for the create prop action. path string no Path for the create prop action. console.pvc/delete This extension allows hooking into deleting PVC resources. It can provide an alert with additional information and custom PVC delete logic. Name Value Type Optional Description predicate CodeRef<(pvc: K8sResourceCommon) ⇒ boolean> no Predicate that tells whether to use the extension or not. onPVCKill CodeRef<(pvc: K8sResourceCommon) ⇒ Promise<void>> no Method for the PVC delete operation. alert CodeRef<React.ComponentType<{ pvc: K8sResourceCommon; }>> no Alert component to show additional information. console.pvc/status Name Value Type Optional Description priority number no Priority for the status component. A larger value means higher priority. status CodeRef<React.ComponentType<{ pvc: K8sResourceCommon; }>> no The status component. predicate CodeRef<(pvc: K8sResourceCommon) ⇒ boolean> no Predicate that tells whether to render the status component or not. console.redux-reducer Adds new reducer to Console Redux store which operates on plugins.<scope> substate. Name Value Type Optional Description scope string no The key to represent the reducer-managed substate within the Redux state object. reducer CodeRef<Reducer<any, AnyAction>> no The reducer function, operating on the reducer-managed substate. console.resource/create This extension allows plugins to provide a custom component (i.e., wizard or form) for specific resources, which will be rendered, when users try to create a new resource instance. Name Value Type Optional Description model ExtensionK8sModel no The model for which this create resource page will be rendered component CodeRef<React.ComponentType<CreateResourceComponentProps>> no The component to be rendered when the model matches console.storage-class/provisioner Adds a new storage class provisioner as an option during storage class creation. Name Value Type Optional Description CSI ProvisionerDetails yes Container Storage Interface provisioner type OTHERS ProvisionerDetails yes Other provisioner type console.storage-provider This extension can be used to contribute a new storage provider to select, when attaching storage and a provider specific component. Name Value Type Optional Description name string no Displayed name of the provider. Component CodeRef<React.ComponentType<Partial<RouteComponentProps<{}, StaticContext, any>>>> no Provider specific component to render. 
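For the console.pvc/alert extension point described above, the alert value resolves to a React component that receives the PVC as a prop. A minimal, hypothetical sketch; the component name is illustrative, and it assumes the K8sResourceCommon type exported by the dynamic plugin SDK:
import * as React from 'react';
import { K8sResourceCommon } from '@openshift-console/dynamic-plugin-sdk';

// Hypothetical alert component rendered on the PVC details page
const ExamplePVCAlert: React.FC<{ pvc: K8sResourceCommon }> = ({ pvc }) => (
  <p>Custom alert for PVC {pvc.metadata?.name}</p>
);

export default ExamplePVCAlert;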
console.tab Adds a tab to a horizontal nav matching the contextId . Name Value Type Optional Description contextId string no Context ID assigned to the horizontal nav in which the tab will be injected. Possible values: dev-console-observe name string no The display label of the tab href string no The href appended to the existing URL component CodeRef<React.ComponentType<PageComponentProps<K8sResourceCommon>>> no Tab content component. console.tab/horizontalNav This extension can be used to add a tab on the resource details page. Name Value Type Optional Description model ExtensionK8sKindVersionModel no The model for which this provider show tab. page { name: string; href: string; } no The page to be show in horizontal tab. It takes tab name as name and href of the tab component CodeRef<React.ComponentType<PageComponentProps<K8sResourceCommon>>> no The component to be rendered when the route matches. console.telemetry/listener This component can be used to register a listener function receiving telemetry events. These events include user identification, page navigation, and other application specific events. The listener may use this data for reporting and analytics purposes. Name Value Type Optional Description listener CodeRef<TelemetryEventListener> no Listen for telemetry events console.topology/adapter/build BuildAdapter contributes an adapter to adapt element to data that can be used by the Build component. Name Value Type Optional Description adapt CodeRef<(element: GraphElement) ⇒ AdapterDataType<BuildConfigData> | undefined> no Adapter to adapt element to data that can be used by Build component. console.topology/adapter/network NetworkAdapater contributes an adapter to adapt element to data that can be used by the Networking component. Name Value Type Optional Description adapt CodeRef<(element: GraphElement) ⇒ NetworkAdapterType | undefined> no Adapter to adapt element to data that can be used by Networking component. console.topology/adapter/pod PodAdapter contributes an adapter to adapt element to data that can be used by the Pod component. Name Value Type Optional Description adapt CodeRef<(element: GraphElement) ⇒ AdapterDataType<PodsAdapterDataType> | undefined> no Adapter to adapt element to data that can be used by Pod component. console.topology/component/factory Getter for a ViewComponentFactory . Name Value Type Optional Description getFactory CodeRef<ViewComponentFactory> no Getter for a ViewComponentFactory . console.topology/create/connector Getter for the create connector function. Name Value Type Optional Description getCreateConnector CodeRef<CreateConnectionGetter> no Getter for the create connector function. console.topology/data/factory Topology Data Model Factory Extension Name Value Type Optional Description id string no Unique ID for the factory. priority number no Priority for the factory resources WatchK8sResourcesGeneric yes Resources to be fetched from useK8sWatchResources hook. workloadKeys string[] yes Keys in resources containing workloads. getDataModel CodeRef<TopologyDataModelGetter> yes Getter for the data model factory. isResourceDepicted CodeRef<TopologyDataModelDepicted> yes Getter for function to determine if a resource is depicted by this model factory. getDataModelReconciler CodeRef<TopologyDataModelReconciler> yes Getter for function to reconcile data model after all extensions' models have loaded. 
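For the console.telemetry/listener extension point described above, the listener value resolves to a function that receives each telemetry event. A minimal, hypothetical sketch, assuming the TelemetryEventListener type is exported by the dynamic plugin SDK:
import { TelemetryEventListener } from '@openshift-console/dynamic-plugin-sdk';

// Hypothetical listener that simply logs events; a real listener might forward them to an analytics backend
export const exampleTelemetryListener: TelemetryEventListener = (eventType, properties) => {
  console.debug('console telemetry event', eventType, properties);
};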
console.topology/decorator/provider Topology Decorator Provider Extension Name Value Type Optional Description id string no ID for topology decorator specific to the extension priority number no Priority for topology decorator specific to the extension quadrant TopologyQuadrant no Quadrant for topology decorator specific to the extension decorator CodeRef<TopologyDecoratorGetter> no Decorator specific to the extension console.topology/details/resource-alert DetailsResourceAlert contributes an alert for specific topology context or graph element. Name Value Type Optional Description id string no The ID of this alert. Used to save state if the alert should not be shown after dismissed. contentProvider CodeRef<(element: GraphElement) ⇒ DetailsResourceAlertContent | null> no Hook to return the contents of the alert. console.topology/details/resource-link DetailsResourceLink contributes a link for specific topology context or graph element. Name Value Type Optional Description link CodeRef<(element: GraphElement) ⇒ React.Component | undefined> no Return the resource link if provided, otherwise undefined. Use the ResourceIcon and ResourceLink properties for styles. priority number yes A higher priority factory will get the first chance to create the link. console.topology/details/tab DetailsTab contributes a tab for the topology details panel. Name Value Type Optional Description id string no A unique identifier for this details tab. label string no The tab label to display in the UI. insertBefore string | string[] yes Insert this item before the item referenced here. For arrays, the first one found in order is used. insertAfter string | string[] yes Insert this item after the item referenced here. For arrays, the first one found in order is used. The insertBefore value takes precedence. console.topology/details/tab-section DetailsTabSection contributes a section for a specific tab in the topology details panel. Name Value Type Optional Description id string no A unique identifier for this details tab section. tab string no The parent tab ID that this section should contribute to. provider CodeRef<DetailsTabSectionExtensionHook> no A hook that returns a component, or if null or undefined, renders in the topology sidebar. SDK component: <Section title=\{}>... padded area section CodeRef<(element: GraphElement, renderNull?: () ⇒ null) ⇒ React.Component | undefined> no Deprecated: Fallback if no provider is defined. renderNull is a no-op already. insertBefore string | string[] yes Insert this item before the item referenced here. For arrays, the first one found in order is used. insertAfter string | string[] yes Insert this item after the item referenced here. For arrays, the first one found in order is used. The insertBefore value takes precedence. 
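An illustrative console-extensions.json entry for the console.topology/details/tab extension point described above; the id and label values are placeholders:
{
  "type": "console.topology/details/tab",
  "properties": {
    "id": "example-details-tab",
    "label": "Example"
  }
}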
console.topology/display/filters Topology Display Filters Extension Name Value Type Optional Description getTopologyFilters CodeRef<() ⇒ TopologyDisplayOption[]> no Getter for topology filters specific to the extension applyDisplayOptions CodeRef<TopologyApplyDisplayOptions> no Function to apply filters to the model console.topology/relationship/provider Topology relationship provider connector extension Name Value Type Optional Description provides CodeRef<RelationshipProviderProvides> no Use to determine if a connection can be created between the source and target node tooltip string no Tooltip to show when connector operation is hovering over the drop target, for example, "Create a Visual Connector" create CodeRef<RelationshipProviderCreate> no Callback to execute when connector is drop over target node to create a connection priority number no Priority for relationship, higher will be preferred in case of multiple console.user-preference/group This extension can be used to add a group on the console user-preferences page. It will appear as a vertical tab option on the console user-preferences page. Name Value Type Optional Description id string no ID used to identify the user preference group. label string no The label of the user preference group insertBefore string yes ID of user preference group before which this group should be placed insertAfter string yes ID of user preference group after which this group should be placed console.user-preference/item This extension can be used to add an item to the user preferences group on the console user preferences page. Name Value Type Optional Description id string no ID used to identify the user preference item and referenced in insertAfter and insertBefore to define the item order label string no The label of the user preference description string no The description of the user preference field UserPreferenceField no The input field options used to render the values to set the user preference groupId string yes IDs used to identify the user preference groups the item would belong to insertBefore string yes ID of user preference item before which this item should be placed insertAfter string yes ID of user preference item after which this item should be placed console.yaml-template YAML templates for editing resources via the yaml editor. Name Value Type Optional Description model ExtensionK8sModel no Model associated with the template. template CodeRef<string> no The YAML template. name string no The name of the template. Use the name default to mark this as the default template. dev-console.add/action This extension allows plugins to contribute an add action item to the add page of developer perspective. For example, a Serverless plugin can add a new action item for adding serverless functions to the add page of developer console. Name Value Type Optional Description id string no ID used to identify the action. label string no The label of the action. description string no The description of the action. href string no The href to navigate to. groupId string yes IDs used to identify the action groups the action would belong to. icon CodeRef<React.ReactNode> yes The perspective display icon. accessReview AccessReviewResourceAttributes[] yes Optional access review to control the visibility or enablement of the action. dev-console.add/action-group This extension allows plugins to contibute a group in the add page of developer console. 
Groups can be referenced by actions, which will be grouped together in the add action page based on their extension definition. For example, a Serverless plugin can contribute a Serverless group and together with multiple add actions. Name Value Type Optional Description id string no ID used to identify the action group name string no The title of the action group insertBefore string yes ID of action group before which this group should be placed insertAfter string yes ID of action group after which this group should be placed dev-console.import/environment This extension can be used to specify extra build environment variable fields under the builder image selector in the developer console git import form. When set, the fields will override environment variables of the same name in the build section. Name Value Type Optional Description imageStreamName string no Name of the image stream to provide custom environment variables for imageStreamTags string[] no List of supported image stream tags environments ImageEnvironment[] no List of environment variables console.dashboards/overview/detail/item Deprecated. use CustomOverviewDetailItem type instead Name Value Type Optional Description component CodeRef<React.ComponentType<{}>> no The value, based on the DetailItem component console.page/resource/tab Deprecated. Use console.tab/horizontalNav instead. Adds a new resource tab page to Console router. Name Value Type Optional Description model ExtensionK8sGroupKindModel no The model for which this resource page links to. component CodeRef<React.ComponentType<RouteComponentProps<{}, StaticContext, any>>> no The component to be rendered when the route matches. name string no The name of the tab. href string yes The optional href for the tab link. If not provided, the first path is used. exact boolean yes When true, will only match if the path matches the location.pathname exactly. 7.5.2. OpenShift Container Platform console API useActivePerspective Hook that provides the currently active perspective and a callback for setting the active perspective. It returns a tuple containing the current active perspective and setter callback. Example const Component: React.FC = (props) => { const [activePerspective, setActivePerspective] = useActivePerspective(); return <select value={activePerspective} onChange={(e) => setActivePerspective(e.target.value)} > { // ...perspective options } </select> } GreenCheckCircleIcon Component for displaying a green check mark circle icon. Example <GreenCheckCircleIcon title="Healthy" /> Parameter Name Description className (optional) additional class name for the component title (optional) icon title size (optional) icon size: ( sm , md , lg , xl ) RedExclamationCircleIcon Component for displaying a red exclamation mark circle icon. Example <RedExclamationCircleIcon title="Failed" /> Parameter Name Description className (optional) additional class name for the component title (optional) icon title size (optional) icon size: ( sm , md , lg , xl ) YellowExclamationTriangleIcon Component for displaying a yellow triangle exclamation icon. Example <YellowExclamationTriangleIcon title="Warning" /> Parameter Name Description className (optional) additional class name for the component title (optional) icon title size (optional) icon size: ( sm , md , lg , xl ) BlueInfoCircleIcon Component for displaying a blue info circle icon. 
Example <BlueInfoCircleIcon title="Info" /> Parameter Name Description className (optional) additional class name for the component title (optional) icon title size (optional) icon size: ('sm', 'md', 'lg', 'xl') ErrorStatus Component for displaying an error status popover. Example <ErrorStatus title={errorMsg} /> Parameter Name Description title (optional) status text iconOnly (optional) if true, only displays icon noTooltip (optional) if true, tooltip won't be displayed className (optional) additional class name for the component popoverTitle (optional) title for popover InfoStatus Component for displaying an information status popover. Example <InfoStatus title={infoMsg} /> Parameter Name Description title (optional) status text iconOnly (optional) if true, only displays icon noTooltip (optional) if true, tooltip won't be displayed className (optional) additional class name for the component popoverTitle (optional) title for popover ProgressStatus Component for displaying a progressing status popover. Example <ProgressStatus title={progressMsg} /> Parameter Name Description title (optional) status text iconOnly (optional) if true, only displays icon noTooltip (optional) if true, tooltip won't be displayed className (optional) additional class name for the component popoverTitle (optional) title for popover SuccessStatus Component for displaying a success status popover. Example <SuccessStatus title={successMsg} /> Parameter Name Description title (optional) status text iconOnly (optional) if true, only displays icon noTooltip (optional) if true, tooltip won't be displayed className (optional) additional class name for the component popoverTitle (optional) title for popover checkAccess Provides information about user access to a given resource. It returns an object with resource access information. Parameter Name Description resourceAttributes resource attributes for access review impersonate impersonation details useAccessReview Hook that provides information about user access to a given resource. It returns an array with isAllowed and loading values. Parameter Name Description resourceAttributes resource attributes for access review impersonate impersonation details useResolvedExtensions React hook for consuming Console extensions with resolved CodeRef properties. This hook accepts the same argument(s) as useExtensions hook and returns an adapted list of extension instances, resolving all code references within each extension's properties. Initially, the hook returns an empty array. After the resolution is complete, the React component is re-rendered with the hook returning an adapted list of extensions. When the list of matching extensions changes, the resolution is restarted. The hook will continue to return the result until the resolution completes. The hook's result elements are guaranteed to be referentially stable across re-renders. It returns a tuple containing a list of adapted extension instances with resolved code references, a boolean flag indicating whether the resolution is complete, and a list of errors detected during the resolution. Example const [navItemExtensions, navItemsResolved] = useResolvedExtensions<NavItem>(isNavItem); // process adapted extensions and render your component Parameter Name Description typeGuards A list of callbacks that each accept a dynamic plugin extension as an argument and return a boolean flag indicating whether or not the extension meets desired type constraints HorizontalNav A component that creates a Navigation bar for a page. 
Routing is handled as part of the component. console.tab/horizontalNav can be used to add additional content to any horizontal navigation. Example const HomePage: React.FC = (props) => { const page = { href: '/home', name: 'Home', component: () => <>Home</> } return <HorizontalNav match={props.match} pages={[page]} /> } Parameter Name Description resource The resource associated with this Navigation, an object of K8sResourceCommon type pages An array of page objects match match object provided by React Router VirtualizedTable A component for making virtualized tables. Example const MachineList: React.FC<MachineListProps> = (props) => { return ( <VirtualizedTable<MachineKind> {...props} aria-label='Machines' columns={getMachineColumns} Row={getMachineTableRow} /> ); } Parameter Name Description data data for table loaded flag indicating data is loaded loadError error object if issue loading data columns column setup Row row setup unfilteredData original data without filter NoDataEmptyMsg (optional) no data empty message component EmptyMsg (optional) empty message component scrollNode (optional) function to handle scroll label (optional) label for table ariaLabel (optional) aria label gridBreakPoint sizing of how to break up grid for responsiveness onSelect (optional) function for handling select of table rowData (optional) data specific to row TableData Component for displaying table data within a table row. Example const PodRow: React.FC<RowProps<K8sResourceCommon>> = ({ obj, activeColumnIDs }) => { return ( <> <TableData id={columns[0].id} activeColumnIDs={activeColumnIDs}> <ResourceLink kind="Pod" name={obj.metadata.name} namespace={obj.metadata.namespace} /> </TableData> <TableData id={columns[1].id} activeColumnIDs={activeColumnIDs}> <ResourceLink kind="Namespace" name={obj.metadata.namespace} /> </TableData> </> ); }; Parameter Name Description id unique ID for table activeColumnIDs active columns className (optional) option class name for styling useActiveColumns A hook that provides a list of user-selected active TableColumns. Example // See implementation for more details on TableColumn type const [activeColumns, userSettingsLoaded] = useActiveColumns({ columns, showNamespaceOverride: false, columnManagementID, }); return userSettingsAreLoaded ? <VirtualizedTable columns={activeColumns} {...otherProps} /> : null Parameter Name Description options Which are passed as a key-value map \{TableColumn[]} options.columns An array of all available TableColumns {boolean} [options.showNamespaceOverride] (optional) If true, a namespace column will be included, regardless of column management selections {string} [options.columnManagementID] (optional) A unique ID used to persist and retrieve column management selections to and from user settings. Usually a group/version/kind (GVK) string for a resource. A tuple containing the current user selected active columns (a subset of options.columns), and a boolean flag indicating whether user settings have been loaded. ListPageHeader Component for generating a page header. Example const exampleList: React.FC = () => { return ( <> <ListPageHeader title="Example List Page"/> </> ); }; Parameter Name Description title heading title helpText (optional) help section as react node badge (optional) badge icon as react node ListPageCreate Component for adding a create button for a specific resource kind that automatically generates a link to the create YAML for this resource. 
Example const exampleList: React.FC<MyProps> = () => { return ( <> <ListPageHeader title="Example Pod List Page"/> <ListPageCreate groupVersionKind="Pod">Create Pod</ListPageCreate> </ListPageHeader> </> ); }; Parameter Name Description groupVersionKind the resource group/version/kind to represent ListPageCreateLink Component for creating a stylized link. Example const exampleList: React.FC<MyProps> = () => { return ( <> <ListPageHeader title="Example Pod List Page"/> <ListPageCreateLink to={'/link/to/my/page'}>Create Item</ListPageCreateLink> </ListPageHeader> </> ); }; Parameter Name Description to string location where link should direct createAccessReview (optional) object with namespace and kind used to determine access children (optional) children for the component ListPageCreateButton Component for creating button. Example const exampleList: React.FC<MyProps> = () => { return ( <> <ListPageHeader title="Example Pod List Page"/> <ListPageCreateButton createAccessReview={access}>Create Pod</ListPageCreateButton> </ListPageHeader> </> ); }; Parameter Name Description createAccessReview (optional) object with namespace and kind used to determine access pfButtonProps (optional) Patternfly Button props ListPageCreateDropdown Component for creating a dropdown wrapped with permissions check. Example const exampleList: React.FC<MyProps> = () => { const items = { SAVE: 'Save', DELETE: 'Delete', } return ( <> <ListPageHeader title="Example Pod List Page"/> <ListPageCreateDropdown createAccessReview={access} items={items}>Actions</ListPageCreateDropdown> </ListPageHeader> </> ); }; Parameter Name Description items key:ReactNode pairs of items to display in dropdown component onClick callback function for click on dropdown items createAccessReview (optional) object with namespace and kind used to determine access children (optional) children for the dropdown toggle ListPageFilter Component that generates filter for list page. Example // See implementation for more details on RowFilter and FilterValue types const [staticData, filteredData, onFilterChange] = useListPageFilter( data, rowFilters, staticFilters, ); // ListPageFilter updates filter state based on user interaction and resulting filtered data can be rendered in an independent component. return ( <> <ListPageHeader .../> <ListPagBody> <ListPageFilter data={staticData} onFilterChange={onFilterChange} /> <List data={filteredData} /> </ListPageBody> </> ) Parameter Name Description data An array of data points loaded indicates that data has loaded onFilterChange callback function for when filter is updated rowFilters (optional) An array of RowFilter elements that define the available filter options nameFilterPlaceholder (optional) placeholder for name filter labelFilterPlaceholder (optional) placeholder for label filter hideLabelFilter (optional) only shows the name filter instead of both name and label filter hideNameLabelFilter (optional) hides both name and label filter columnLayout (optional) column layout object hideColumnManagement (optional) flag to hide the column management useListPageFilter A hook that manages filter state for the ListPageFilter component. It returns a tuple containing the data filtered by all static filters, the data filtered by all static and row filters, and a callback that updates rowFilters. 
Example // See implementation for more details on RowFilter and FilterValue types const [staticData, filteredData, onFilterChange] = useListPageFilter( data, rowFilters, staticFilters, ); // ListPageFilter updates filter state based on user interaction and resulting filtered data can be rendered in an independent component. return ( <> <ListPageHeader .../> <ListPagBody> <ListPageFilter data={staticData} onFilterChange={onFilterChange} /> <List data={filteredData} /> </ListPageBody> </> ) Parameter Name Description data An array of data points rowFilters (optional) An array of RowFilter elements that define the available filter options staticFilters (optional) An array of FilterValue elements that are statically applied to the data ResourceLink Component that creates a link to a specific resource type with an icon badge. Example <ResourceLink kind="Pod" name="testPod" title={metadata.uid} /> Parameter Name Description kind (optional) the kind of resource i.e. Pod, Deployment, Namespace groupVersionKind (optional) object with group, version, and kind className (optional) class style for component displayName (optional) display name for component, overwrites the resource name if set inline (optional) flag to create icon badge and name inline with children linkTo (optional) flag to create a Link object - defaults to true name (optional) name of resource namesapce (optional) specific namespace for the kind resource to link to hideIcon (optional) flag to hide the icon badge title (optional) title for the link object (not displayed) dataTest (optional) identifier for testing onClick (optional) callback function for when component is clicked truncate (optional) flag to truncate the link if too long ResourceIcon Component that creates an icon badge for a specific resource type. Example <ResourceIcon kind="Pod"/> Parameter Name Description kind (optional) the kind of resource i.e. Pod, Deployment, Namespace groupVersionKind (optional) object with group, version, and kind className (optional) class style for component useK8sModel Hook that retrieves the k8s model for provided K8sGroupVersionKind from redux. It returns an array with the first item as k8s model and second item as inFlight status. Example const Component: React.FC = () => { const [model, inFlight] = useK8sModel({ group: 'app'; version: 'v1'; kind: 'Deployment' }); return ... } Parameter Name Description groupVersionKind group, version, kind of k8s resource K8sGroupVersionKind is preferred alternatively can pass reference for group, version, kind which is deprecated, i.e, group/version/kind (GVK) K8sResourceKindReference. useK8sModels Hook that retrieves all current k8s models from redux. It returns an array with the first item as the list of k8s model and second item as inFlight status. Example const Component: React.FC = () => { const [models, inFlight] = UseK8sModels(); return ... } useK8sWatchResource Hook that retrieves the k8s resource along with status for loaded and error. It returns an array with first item as resource(s), second item as loaded status and third item as error state if any. Example const Component: React.FC = () => { const watchRes = { ... } const [data, loaded, error] = useK8sWatchResource(watchRes) return ... } Parameter Name Description initResource options needed to watch for resource. useK8sWatchResources Hook that retrieves the k8s resources along with their respective status for loaded and error. 
It returns a map where keys are as provided in initResources and value has three properties data, loaded and error. Example const Component: React.FC = () => { const watchResources = { 'deployment': {...}, 'pod': {...} ... } const {deployment, pod} = useK8sWatchResources(watchResources) return ... } Parameter Name Description initResources Resources must be watched as key-value pair, wherein key will be unique to resource and value will be options needed to watch for the respective resource. consoleFetch A custom wrapper around fetch that adds console specific headers and allows for retries and timeouts. It also validates the response status code and throws appropriate error or logs out the user if required. It returns a promise that resolves to the response. Parameter Name Description url The URL to fetch options The options to pass to fetch timeout The timeout in milliseconds consoleFetchJSON A custom wrapper around fetch that adds console specific headers and allows for retries and timeouts. It also validates the response status code and throws appropriate error or logs out the user if required. It returns the response as a JSON object. Uses consoleFetch internally. It returns a promise that resolves to the response as JSON object. Parameter Name Description url The URL to fetch method The HTTP method to use. Defaults to GET options The options to pass to fetch timeout The timeout in milliseconds cluster The name of the cluster to make the request to. Defaults to the active cluster the user has selected consoleFetchText A custom wrapper around fetch that adds console specific headers and allows for retries and timeouts. It also validates the response status code and throws appropriate error or logs out the user if required. It returns the response as text. Uses consoleFetch internally. It returns a promise that resolves to the response as text. Parameter Name Description url The URL to fetch options The options to pass to fetch timeout The timeout in milliseconds cluster The name of the cluster to make the request to. Defaults to the active cluster the user has selected getConsoleRequestHeaders A function that creates impersonation and multicluster related headers for API requests using current redux state. It returns an object containing the appropriate impersonation and cluster request headers, based on redux state. Parameter Name Description targetCluster Override the current active cluster with the provided targetCluster k8sGetResource It fetches a resource from the cluster, based on the provided options. If the name is provided it returns one resource else it returns all the resources matching the model. It returns a promise that resolves to the response as JSON object with a resource if the name is provided, else it returns all the resources matching the model. In case of failure, the promise gets rejected with HTTP error response. Parameter Name Description options Which are passed as key-value pairs in the map options.model k8s model options.name The name of the resource, if not provided then it will look for all the resources matching the model. options.ns The namespace to look into, should not be specified for cluster-scoped resources. options.path Appends as subpath if provided options.queryParams The query parameters to be included in the URL. options.requestInit The fetch init object to use. This can have request headers, method, redirect, etc. See Interface RequestInit for more. k8sCreateResource It creates a resource in the cluster, based on the provided options.
It returns a promise that resolves to the response of the resource created. In case of failure promise gets rejected with HTTP error response. Parameter Name Description options Which are passed as key-value pairs in the map options.model k8s model options.data Payload for the resource to be created options.path Appends as subpath if provided options.queryParams The query parameters to be included in the URL. k8sUpdateResource It updates the entire resource in the cluster, based on providedoptions. When a client needs to replace an existing resource entirely, they can use k8sUpdate. Alternatively can use k8sPatch to perform the partial update. It returns a promise that resolves to the response of the resource updated. In case of failure promise gets rejected with HTTP error response. Parameter Name Description options Which are passed as key-value pair in the map options.model k8s model options.data Payload for the k8s resource to be updated options.ns Namespace to look into, it should not be specified for cluster-scoped resources. options.name Resource name to be updated. options.path Appends as subpath if provided options.queryParams The query parameters to be included in the URL. k8sPatchResource It patches any resource in the cluster, based on provided options. When a client needs to perform the partial update, they can use k8sPatch. Alternatively can use k8sUpdate to replace an existing resource entirely. See Data Tracker for more. It returns a promise that resolves to the response of the resource patched. In case of failure promise gets rejected with HTTP error response. Parameter Name Description options Which are passed as key-value pairs in the map. options.model k8s model options.resource The resource to be patched. options.data Only the data to be patched on existing resource with the operation, path, and value. options.path Appends as subpath if provided. options.queryParams The query parameters to be included in the URL. k8sDeleteResource It deletes resources from the cluster, based on the provided model, resource. The garbage collection works based on Foreground | Background can be configured with propagationPolicy property in provided model or passed in json. It returns a promise that resolves to the response of kind Status. In case of failure promise gets rejected with HTTP error response. Example kind: 'DeleteOptions', apiVersion: 'v1', propagationPolicy Parameter Name Description options Which are passed as key-value pair in the map. options.model k8s model options.resource The resource to be deleted. options.path Appends as subpath if provided options.queryParams The query parameters to be included in the URL. options.requestInit The fetch init object to use. This can have request headers, method, redirect, etc. See Interface RequestInit for more. options.json Can control garbage collection of resources explicitly if provided else will default to model's "propagationPolicy". k8sListResource Lists the resources as an array in the cluster, based on provided options. It returns a promise that resolves to the response. Parameter Name Description options Which are passed as key-value pairs in the map options.model k8s model options.queryParams The query parameters to be included in the URL and can pass label selector's as well with key "labelSelector". options.requestInit The fetch init object to use. This can have request headers, method, redirect, etc. See Interface RequestInit for more. k8sListResourceItems Same interface as k8sListResource but returns the sub items. 
It returns the apiVersion for the model, i.e., group/version . getAPIVersionForModel Provides apiVersion for a k8s model. Parameter Name Description model k8s model getGroupVersionKindForResource Provides a group, version, and kind for a resource. It returns the group, version, kind for the provided resource. If the resource does not have an API group, group "core" will be returned. If the resource has an invalid apiVersion, then it will throw an Error. Parameter Name Description resource k8s resource getGroupVersionKindForModel Provides a group, version, and kind for a k8s model. This returns the group, version, kind for the provided model. If the model does not have an apiGroup, group "core" will be returned. Parameter Name Description model k8s model StatusPopupSection Component that shows the status in a popup window. Helpful component for building console.dashboards/overview/health/resource extensions. Example <StatusPopupSection firstColumn={ <> <span>{title}</span> <span className="text-secondary"> My Example Item </span> </> } secondColumn='Status' > Parameter Name Description firstColumn values for first column of popup secondColumn (optional) values for second column of popup children (optional) children for the popup StatusPopupItem Status element used in status popup; used in StatusPopupSection . Example <StatusPopupSection firstColumn='Example' secondColumn='Status' > <StatusPopupItem icon={healthStateMapping[MCGMetrics.state]?.icon}> Complete </StatusPopupItem> <StatusPopupItem icon={healthStateMapping[RGWMetrics.state]?.icon}> Pending </StatusPopupItem> </StatusPopupSection> Parameter Name Description value (optional) text value to display icon (optional) icon to display children child elements Overview Creates a wrapper component for a dashboard. Example <Overview> <OverviewGrid mainCards={mainCards} leftCards={leftCards} rightCards={rightCards} /> </Overview> Parameter Name Description className (optional) style class for div children (optional) elements of the dashboard OverviewGrid Creates a grid of card elements for a dashboard; used within Overview . Example <Overview> <OverviewGrid mainCards={mainCards} leftCards={leftCards} rightCards={rightCards} /> </Overview> Parameter Name Description mainCards cards for grid leftCards (optional) cards for left side of grid rightCards (optional) cards for right side of grid InventoryItem Creates an inventory card item. Example return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> ) Parameter Name Description children elements to render inside the item InventoryItemTitle Creates a title for an inventory card item; used within InventoryItem . Example return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> ) Parameter Name Description children elements to render inside the title InventoryItemBody Creates the body of an inventory card; used within InventoryCard and can be used with InventoryTitle . 
Example return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> ) Parameter Name Description children elements to render inside the Inventory Card or title error elements of the div InventoryItemStatus Creates a count and icon for an inventory card with optional link address; used within InventoryItemBody Example return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> ) Parameter Name Description count count for display icon icon for display linkTo (optional) link address InventoryItemLoading Creates a skeleton container for when an inventory card is loading; used with InventoryItem and related components Example if (loadError) { title = <Link to={workerNodesLink}>{t('Worker Nodes')}</Link>; } else if (!loaded) { title = <><InventoryItemLoading /><Link to={workerNodesLink}>{t('Worker Nodes')}</Link></>; } return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> </InventoryItem> ) useFlag Hook that returns the given feature flag from FLAGS redux state. It returns the boolean value of the requested feature flag or undefined. Parameter Name Description flag The feature flag to return YAMLEditor A basic lazy loaded YAML editor with hover help and completion. Example <React.Suspense fallback={<LoadingBox />}> <YAMLEditor value={code} /> </React.Suspense> Parameter Name Description value String representing the yaml code to render. options Monaco editor options. minHeight Minimum editor height in valid CSS height values. showShortcuts Boolean to show shortcuts on top of the editor. toolbarLinks Array of ReactNode rendered on the toolbar links section on top of the editor. onChange Callback for on code change event. onSave Callback called when the command CTRL / CMD + S is triggered. ref React reference to { editor?: IStandaloneCodeEditor } . Using the editor property, you are able to access to all methods to control the editor. ResourceYAMLEditor A lazy loaded YAML editor for Kubernetes resources with hover help and completion. The component use the YAMLEditor and add on top of it more functionality likeresource update handling, alerts, save, cancel and reload buttons, accessibility and more. Unless onSave callback is provided, the resource update is automatically handled.It should be wrapped in a React.Suspense component. Example <React.Suspense fallback={<LoadingBox />}> <ResourceYAMLEditor initialResource={resource} header="Create resource" onSave={(content) => updateResource(content)} /> </React.Suspense> Parameter Name Description initialResource YAML/Object representing a resource to be shown by the editor. This prop is used only during the initial render header Add a header on top of the YAML editor onSave Callback for the Save button. Passing it will override the default update performed on the resource by the editor ResourceEventStream A component to show events related to a particular resource. Example const [resource, loaded, loadError] = useK8sWatchResource(clusterResource); return <ResourceEventStream resource={resource} /> Parameter Name Description resource An object whose related events should be shown. usePrometheusPoll Sets up a poll to Prometheus for a single query. 
It returns a tuple containing the query response, a boolean flag indicating whether the response has completed, and any errors encountered during the request or post-processing of the request. Parameter Name Description {PrometheusEndpoint} props.endpoint one of the PrometheusEndpoint (label, query, range, rules, targets) {string} [props.query] (optional) Prometheus query string. If empty or undefined, polling is not started. {number} [props.delay] (optional) polling delay interval (ms) {number} [props.endTime] (optional) for QUERY_RANGE endpoint, end of the query range {number} [props.samples] (optional) for QUERY_RANGE endpoint {number} [options.timespan] (optional) for QUERY_RANGE endpoint {string} [options.namespace] (optional) a search param to append {string} [options.timeout] (optional) a search param to append Timestamp A component to render timestamp. The timestamps are synchronized between individual instances of the Timestamp component. The provided timestamp is formatted according to user locale. Parameter Name Description timestamp the timestamp to render. Format is expected to be ISO 8601 (used by Kubernetes), epoch timestamp, or an instance of a Date. simple render simple version of the component omitting icon and tooltip. omitSuffix formats the date omitting the suffix. className additional class name for the component. useModal A hook to launch Modals. Example const AppPage: React.FC = () => { const [launchModal] = useModal(); const onClick = () => launchModal(ModalComponent); return ( <Button onClick={onClick}>Launch a Modal</Button> ) } ActionServiceProvider Component that allows to receive contributions from other plugins for the console.action/provider extension type. Example const context: ActionContext = { 'a-context-id': { dataFromDynamicPlugin } }; ... <ActionServiceProvider context={context}> {({ actions, options, loaded }) => loaded && ( <ActionMenu actions={actions} options={options} variant={ActionMenuVariant.DROPDOWN} /> ) } </ActionServiceProvider> Parameter Name Description context Object with contextId and optional plugin data NamespaceBar A component that renders a horizontal toolbar with a namespace dropdown menu in the leftmost position. Additional components can be passed in as children and will be rendered to the right of the namespace dropdown. This component is designed to be used at the top of the page. It should be used on pages where the user needs to be able to change the active namespace, such as on pages with k8s resources. Example const logNamespaceChange = (namespace) => console.log(`New namespace: USD{namespace}`); ... <NamespaceBar onNamespaceChange={logNamespaceChange}> <NamespaceBarApplicationSelector /> </NamespaceBar> <Page> ... Parameter Name Description onNamespaceChange (optional) A function that is executed when a namespace option is selected. It accepts the new namespace in the form of a string as its only argument. The active namespace is updated automatically when an option is selected, but additional logic can be applied via this function. When the namespace is changed, the namespace parameter in the URL will be changed from the previous namespace to the newly selected namespace. isDisabled (optional) A boolean flag that disables the namespace dropdown if set to true. This option only applies to the namespace dropdown and has no effect on child components. children (optional) Additional elements to be rendered inside the toolbar to the right of the namespace dropdown.
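The following is a hedged usage sketch for the usePrometheusPoll hook documented earlier in this section, assuming both usePrometheusPoll and the PrometheusEndpoint enum are exported from the @openshift-console/dynamic-plugin-sdk package. The PromQL query and the component itself are illustrative only.

import * as React from 'react';
import { usePrometheusPoll, PrometheusEndpoint } from '@openshift-console/dynamic-plugin-sdk';

const ClusterCPUUsage: React.FC = () => {
  // Poll a single query every 10 seconds; the tuple gives the response, a loaded flag, and any error.
  const [response, loaded, error] = usePrometheusPoll({
    endpoint: PrometheusEndpoint.QUERY,
    query: 'sum(rate(container_cpu_usage_seconds_total[5m]))',
    delay: 10000,
  });
  if (error) {
    return <span>Query failed</span>;
  }
  return loaded ? <span>{response?.data?.result?.[0]?.value?.[1]}</span> : <span>Loading...</span>;
};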
ErrorBoundaryFallbackPage Creates full page ErrorBoundaryFallbackPage component to display the "Oh no! Something went wrong." message along with the stack trace and other helpful debugging information. This is to be used in conjunction with an ErrorBoundary component. Example //in ErrorBoundary component return ( if (this.state.hasError) { return <ErrorBoundaryFallbackPage errorMessage={errorString} componentStack={componentStackString} stack={stackTraceString} title={errorString}/>; } return this.props.children; ) Parameter Name Description errorMessage text description of the error message componentStack component trace of the exception stack stack trace of the exception title title to render as the header of the error boundary page PerspectiveContext Deprecated: Use the provided usePerspectiveContext instead. Creates the perspective context. Parameter Name Description PerspectiveContextType object with active perspective and setter useAccessReviewAllowed Deprecated: Use useAccessReview from @console/dynamic-plugin-sdk instead. Hook that provides allowed status about user access to a given resource. It returns the isAllowed boolean value. Parameter Name Description resourceAttributes resource attributes for access review impersonate impersonation details useSafetyFirst Deprecated: This hook is not related to console functionality. Hook that ensures a safe asynchronous setting of React state in case a given component could be unmounted. It returns an array with a pair of state value and its set function. Parameter Name Description initialState initial state value 7.5.3. Troubleshooting your dynamic plugin Refer to this list of troubleshooting tips if you run into issues loading your plugin. Verify that you have enabled your plugin in the console Operator configuration and your plugin name is in the output of the following command: USD oc get console.operator.openshift.io cluster -o jsonpath='{.spec.plugins}' Verify the enabled plugins on the status card of the Overview page in the Administrator perspective. You must refresh your browser if the plugin was recently enabled. Verify your plugin service is healthy by: Verifying your plugin pod status is running and your containers are ready. Verifying the service label selector matches the pod and the target port is correct. Curl the plugin-manifest.json from the service in a terminal on the console pod or another pod on the cluster. Verify your ConsolePlugin resource name ( consolePlugin.name ) matches the plugin name used in package.json . Verify your service name, namespace, port, and path are declared correctly in the ConsolePlugin resource. Verify your plugin service uses HTTPS and service serving certificates. Verify any certificates or connection errors in the console pod logs. Verify the feature flag your plugin relies on is not disabled. Verify your plugin does not have any consolePlugin.dependencies in package.json that are not met. This can include console version dependencies or dependencies on other plugins. Filter the JS console in your browser for your plugin's name to see messages that are logged. Verify there are no typos in the nav extension perspective or section IDs. Your plugin may be loaded, but nav items will be missing if the IDs are incorrect. Try navigating to a plugin page directly by editing the URL. Verify there are no network policies that are blocking traffic from the console pod to your plugin service. If necessary, adjust network policies to allow console pods in the openshift-console namespace to make requests to your service.
Verify the list of dynamic plugins to be loaded in your browser in the Console tab of the developer tools browser. Evaluate window.SERVER_FLAGS.consolePlugins to see the dynamic plugin on the Console frontend. Additional resources Understanding service serving certificates | [
"conster Header: React.FC = () => { const { t } = useTranslation('plugin__console-demo-plugin'); return <h1>{t('Hello, World!')}</h1>; };",
"yarn install",
"yarn run start",
"oc login",
"yarn run start-console",
"docker build -t quay.io/my-repositroy/my-plugin:latest .",
"docker run -it --rm -d -p 9001:80 quay.io/my-repository/my-plugin:latest",
"docker push quay.io/my-repository/my-plugin:latest",
"helm upgrade -i my-plugin charts/openshift-console-plugin -n my-plugin-namespace --create-namespace --set plugin.image=my-plugin-image-location",
"plugin: name: \"\" description: \"\" image: \"\" imagePullPolicy: IfNotPresent replicas: 2 port: 9443 securityContext: enabled: true podSecurityContext: enabled: true runAsNonRoot: true seccompProfile: type: RuntimeDefault containerSecurityContext: enabled: true allowPrivilegeEscalation: false capabilities: drop: - ALL resources: requests: cpu: 10m memory: 50Mi basePath: / certificateSecretName: \"\" serviceAccount: create: true annotations: {} name: \"\" patcherServiceAccount: create: true annotations: {} name: \"\" jobs: patchConsoles: enabled: true image: \"registry.redhat.io/openshift4/ose-tools-rhel8@sha256:e44074f21e0cca6464e50cb6ff934747e0bd11162ea01d522433a1a1ae116103\" podSecurityContext: enabled: true runAsNonRoot: true seccompProfile: type: RuntimeDefault containerSecurityContext: enabled: true allowPrivilegeEscalation: false capabilities: drop: - ALL resources: requests: cpu: 10m memory: 50Mi",
"\"consolePlugin\": { \"name\": \"my-plugin\", 1 \"version\": \"0.0.1\", 2 \"displayName\": \"My Plugin\", 3 \"description\": \"Enjoy this shiny, new console plugin!\", 4 \"exposedModules\": { \"ExamplePage\": \"./components/ExamplePage\" }, \"dependencies\": { \"@console/pluginAPI\": \"/*\" } }",
"{ \"type\": \"console.tab/horizontalNav\", \"properties\": { \"page\": { \"name\": \"Example Tab\", \"href\": \"example\" }, \"model\": { \"group\": \"core\", \"version\": \"v1\", \"kind\": \"Pod\" }, \"component\": { \"USDcodeRef\": \"ExampleTab\" } } }",
"\"exposedModules\": { \"ExamplePage\": \"./components/ExamplePage\", \"ExampleTab\": \"./components/ExampleTab\" }",
"import * as React from 'react'; export default function ExampleTab() { return ( <p>This is a custom tab added to a resource using a dynamic plugin.</p> ); }",
"helm upgrade -i my-plugin charts/openshift-console-plugin -n my-plugin-namespace --create-namespace --set plugin.image=my-plugin-image-location",
"const Component: React.FC = (props) => { const [activePerspective, setActivePerspective] = useActivePerspective(); return <select value={activePerspective} onChange={(e) => setActivePerspective(e.target.value)} > { // ...perspective options } </select> }",
"<GreenCheckCircleIcon title=\"Healthy\" />",
"<RedExclamationCircleIcon title=\"Failed\" />",
"<YellowExclamationTriangleIcon title=\"Warning\" />",
"<BlueInfoCircleIcon title=\"Info\" />",
"<ErrorStatus title={errorMsg} />",
"<InfoStatus title={infoMsg} />",
"<ProgressStatus title={progressMsg} />",
"<SuccessStatus title={successMsg} />",
"const [navItemExtensions, navItemsResolved] = useResolvedExtensions<NavItem>(isNavItem); // process adapted extensions and render your component",
"const HomePage: React.FC = (props) => { const page = { href: '/home', name: 'Home', component: () => <>Home</> } return <HorizontalNav match={props.match} pages={[page]} /> }",
"const MachineList: React.FC<MachineListProps> = (props) => { return ( <VirtualizedTable<MachineKind> {...props} aria-label='Machines' columns={getMachineColumns} Row={getMachineTableRow} /> ); }",
"const PodRow: React.FC<RowProps<K8sResourceCommon>> = ({ obj, activeColumnIDs }) => { return ( <> <TableData id={columns[0].id} activeColumnIDs={activeColumnIDs}> <ResourceLink kind=\"Pod\" name={obj.metadata.name} namespace={obj.metadata.namespace} /> </TableData> <TableData id={columns[1].id} activeColumnIDs={activeColumnIDs}> <ResourceLink kind=\"Namespace\" name={obj.metadata.namespace} /> </TableData> </> ); };",
"// See implementation for more details on TableColumn type const [activeColumns, userSettingsLoaded] = useActiveColumns({ columns, showNamespaceOverride: false, columnManagementID, }); return userSettingsAreLoaded ? <VirtualizedTable columns={activeColumns} {...otherProps} /> : null",
"const exampleList: React.FC = () => { return ( <> <ListPageHeader title=\"Example List Page\"/> </> ); };",
"const exampleList: React.FC<MyProps> = () => { return ( <> <ListPageHeader title=\"Example Pod List Page\"/> <ListPageCreate groupVersionKind=\"Pod\">Create Pod</ListPageCreate> </ListPageHeader> </> ); };",
"const exampleList: React.FC<MyProps> = () => { return ( <> <ListPageHeader title=\"Example Pod List Page\"/> <ListPageCreateLink to={'/link/to/my/page'}>Create Item</ListPageCreateLink> </ListPageHeader> </> ); };",
"const exampleList: React.FC<MyProps> = () => { return ( <> <ListPageHeader title=\"Example Pod List Page\"/> <ListPageCreateButton createAccessReview={access}>Create Pod</ListPageCreateButton> </ListPageHeader> </> ); };",
"const exampleList: React.FC<MyProps> = () => { const items = { SAVE: 'Save', DELETE: 'Delete', } return ( <> <ListPageHeader title=\"Example Pod List Page\"/> <ListPageCreateDropdown createAccessReview={access} items={items}>Actions</ListPageCreateDropdown> </ListPageHeader> </> ); };",
"// See implementation for more details on RowFilter and FilterValue types const [staticData, filteredData, onFilterChange] = useListPageFilter( data, rowFilters, staticFilters, ); // ListPageFilter updates filter state based on user interaction and resulting filtered data can be rendered in an independent component. return ( <> <ListPageHeader .../> <ListPagBody> <ListPageFilter data={staticData} onFilterChange={onFilterChange} /> <List data={filteredData} /> </ListPageBody> </> )",
"// See implementation for more details on RowFilter and FilterValue types const [staticData, filteredData, onFilterChange] = useListPageFilter( data, rowFilters, staticFilters, ); // ListPageFilter updates filter state based on user interaction and resulting filtered data can be rendered in an independent component. return ( <> <ListPageHeader .../> <ListPagBody> <ListPageFilter data={staticData} onFilterChange={onFilterChange} /> <List data={filteredData} /> </ListPageBody> </> )",
"<ResourceLink kind=\"Pod\" name=\"testPod\" title={metadata.uid} />",
"<ResourceIcon kind=\"Pod\"/>",
"const Component: React.FC = () => { const [model, inFlight] = useK8sModel({ group: 'app'; version: 'v1'; kind: 'Deployment' }); return }",
"const Component: React.FC = () => { const [models, inFlight] = UseK8sModels(); return }",
"const Component: React.FC = () => { const watchRes = { } const [data, loaded, error] = useK8sWatchResource(watchRes) return }",
"const Component: React.FC = () => { const watchResources = { 'deployment': {...}, 'pod': {...} } const {deployment, pod} = useK8sWatchResources(watchResources) return }",
"<StatusPopupSection firstColumn={ <> <span>{title}</span> <span className=\"text-secondary\"> My Example Item </span> </> } secondColumn='Status' >",
"<StatusPopupSection firstColumn='Example' secondColumn='Status' > <StatusPopupItem icon={healthStateMapping[MCGMetrics.state]?.icon}> Complete </StatusPopupItem> <StatusPopupItem icon={healthStateMapping[RGWMetrics.state]?.icon}> Pending </StatusPopupItem> </StatusPopupSection>",
"<Overview> <OverviewGrid mainCards={mainCards} leftCards={leftCards} rightCards={rightCards} /> </Overview>",
"<Overview> <OverviewGrid mainCards={mainCards} leftCards={leftCards} rightCards={rightCards} /> </Overview>",
"return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> )",
"return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> )",
"return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> )",
"return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> )",
"if (loadError) { title = <Link to={workerNodesLink}>{t('Worker Nodes')}</Link>; } else if (!loaded) { title = <><InventoryItemLoading /><Link to={workerNodesLink}>{t('Worker Nodes')}</Link></>; } return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> </InventoryItem> )",
"<React.Suspense fallback={<LoadingBox />}> <YAMLEditor value={code} /> </React.Suspense>",
"<React.Suspense fallback={<LoadingBox />}> <ResourceYAMLEditor initialResource={resource} header=\"Create resource\" onSave={(content) => updateResource(content)} /> </React.Suspense>",
"const [resource, loaded, loadError] = useK8sWatchResource(clusterResource); return <ResourceEventStream resource={resource} />",
"const context: AppPage: React.FC = () => {<br/> const [launchModal] = useModal();<br/> const onClick = () => launchModal(ModalComponent);<br/> return (<br/> <Button onClick={onClick}>Launch a Modal</Button><br/> )<br/>}<br/>`",
"const context: ActionContext = { 'a-context-id': { dataFromDynamicPlugin } }; <ActionServiceProvider context={context}> {({ actions, options, loaded }) => loaded && ( <ActionMenu actions={actions} options={options} variant={ActionMenuVariant.DROPDOWN} /> ) } </ActionServiceProvider>",
"const logNamespaceChange = (namespace) => console.log(`New namespace: USD{namespace}`); <NamespaceBar onNamespaceChange={logNamespaceChange}> <NamespaceBarApplicationSelector /> </NamespaceBar> <Page>",
"//in ErrorBoundary component return ( if (this.state.hasError) { return <ErrorBoundaryFallbackPage errorMessage={errorString} componentStack={componentStackString} stack={stackTraceString} title={errorString}/>; } return this.props.children; )",
"oc get console.operator.openshift.io cluster -o jsonpath='{.spec.plugins}'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/web_console/dynamic-plugins |
Chapter 5. Building an OSGi Bundle | Chapter 5. Building an OSGi Bundle Abstract This chapter describes how to build an OSGi bundle using Maven. For building bundles, the Maven bundle plug-in plays a key role, because it enables you to automate the generation of OSGi bundle headers (which would otherwise be a tedious task). Maven archetypes, which generate a complete sample project, can also provide a starting point for your bundle projects. 5.1. Generating a Bundle Project 5.1.1. Generating bundle projects with Maven archetypes To help you get started quickly, you can invoke a Maven archetype to generate the initial outline of a Maven project (a Maven archetype is analogous to a project wizard). The following Maven archetype generates a project for building OSGi bundles. 5.1.2. Apache Camel archetype The Apache Camel OSGi archetype creates a project for building a route that can be deployed into the OSGi container. Following example shows how to generate a camel-blueprint project using the Maven archetype command with the coordinates, GroupId : ArtifactId : Version , . After running this command, Maven prompts you to specify the GroupId , ArtifactId , and Version . 5.1.3. Building the bundle By default, the preceding archetypes create a project in a new directory, whose name is the same as the specified artifact ID, ArtifactId . To build the bundle defined by the new project, open a command prompt, go to the project directory (that is, the directory containing the pom.xml file), and enter the following Maven command: The effect of this command is to compile all of the Java source files, to generate a bundle JAR under the ArtifactId /target directory, and then to install the generated JAR in the local Maven repository. 5.2. Modifying an Existing Maven Project 5.2.1. Overview If you already have a Maven project and you want to modify it so that it generates an OSGi bundle, perform the following steps: Section 5.2.2, "Change the package type to bundle" . Section 5.2.3, "Add the bundle plug-in to your POM" . Section 5.2.4, "Customize the bundle plug-in" . Section 5.2.5, "Customize the JDK compiler version" . 5.2.2. Change the package type to bundle Configure Maven to generate an OSGi bundle by changing the package type to bundle in your project's pom.xml file. Change the contents of the packaging element to bundle , as shown in the following example: The effect of this setting is to select the Maven bundle plug-in, maven-bundle-plugin , to perform packaging for this project. This setting on its own, however, has no effect until you explicitly add the bundle plug-in to your POM. 5.2.3. Add the bundle plug-in to your POM To add the Maven bundle plug-in, copy and paste the following sample plugin element into the project/build/plugins section of your project's pom.xml file: Where the bundle plug-in is configured by the settings in the instructions element. 5.2.4. Customize the bundle plug-in For some specific recommendations on configuring the bundle plug-in for Apache CXF, see Section 5.3, "Packaging a Web Service in a Bundle" . 5.2.5. Customize the JDK compiler version It is almost always necessary to specify the JDK version in your POM file. If your code uses any modern features of the Java language-such as generics, static imports, and so on-and you have not customized the JDK version in the POM, Maven will fail to compile your source code. It is not sufficient to set the JAVA_HOME and the PATH environment variables to the correct values for your JDK, you must also modify the POM file. 
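A hedged sketch of such a POM modification follows, setting the compiler to the JDK 1.8 level discussed in the next paragraph; the plug-in version element is omitted here and would normally be pinned in your build.

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <source>1.8</source>
    <target>1.8</target>
  </configuration>
</plugin>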
To configure your POM file, so that it accepts the Java language features introduced in JDK 1.8, add the following maven-compiler-plugin plug-in settings to your POM (if they are not already present): 5.3. Packaging a Web Service in a Bundle 5.3.1. Overview This section explains how to modify an existing Maven project for a Apache CXF application, so that the project generates an OSGi bundle suitable for deployment in the Red Hat Fuse OSGi container. To convert the Maven project, you need to modify the project's POM file and the project's Blueprint file(s) (located in META-INF/spring ). 5.3.2. Modifying the POM file to generate a bundle To configure a Maven POM file to generate a bundle, there are essentially two changes you need to make: change the POM's package type to bundle ; and add the Maven bundle plug-in to your POM. For details, see Section 5.1, "Generating a Bundle Project" . 5.3.3. Mandatory import packages In order for your application to use the Apache CXF components, you need to import their packages into the application's bundle. Because of the complex nature of the dependencies in Apache CXF, you cannot rely on the Maven bundle plug-in, or the bnd tool, to automatically determine the needed imports. You will need to explicitly declare them. You need to import the following packages into your bundle: 5.3.4. Sample Maven bundle plug-in instructions Example 5.1, "Configuration of Mandatory Import Packages" shows how to configure the Maven bundle plug-in in your POM to import the mandatory packages. The mandatory import packages appear as a comma-separated list inside the Import-Package element. Note the appearance of the wildcard, * , as the last element of the list. The wildcard ensures that the Java source files from the current bundle are scanned to discover what additional packages need to be imported. Example 5.1. Configuration of Mandatory Import Packages 5.3.5. Add a code generation plug-in A Web services project typically requires code to be generated. Apache CXF provides two Maven plug-ins for the JAX-WS front-end, which enable tyou to integrate the code generation step into your build. The choice of plug-in depends on whether you develop your service using the Java-first approach or the WSDL-first approach, as follows: Java-first approach -use the cxf-java2ws-plugin plug-in. WSDL-first approach -use the cxf-codegen-plugin plug-in. 5.3.6. OSGi configuration properties The OSGi Configuration Admin service defines a mechanism for passing configuration settings to an OSGi bundle. You do not have to use this service for configuration, but it is typically the most convenient way of configuring bundle applications. Blueprint provides support for OSGi configuration, enabling you to substitute variables in a Blueprint file using values obtained from the OSGi Configuration Admin service. For details of how to use OSGi configuration properties, see Section 5.3.7, "Configuring the Bundle Plug-In" and Section 9.6, "Add OSGi configurations to the feature" . 5.3.7. Configuring the Bundle Plug-In Overview A bundle plug-in requires very little information to function. All of the required properties use default settings to generate a valid OSGi bundle. While you can create a valid bundle using just the default values, you will probably want to modify some of the values. You can specify most of the properties inside the plug-in's instructions element. 
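For orientation, a minimal, hedged sketch of such a plug-in entry with an instructions element is shown below. The symbolic name simply spells out the plug-in's default naming pattern, and any mandatory imports (such as the Apache CXF packages from Example 5.1) would be listed inside Import-Package before the trailing wildcard, which keeps automatic scanning of the bundle's source enabled.

<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <Bundle-SymbolicName>${project.groupId}.${project.artifactId}</Bundle-SymbolicName>
      <!-- Explicit imports go first; the trailing wildcard keeps automatic scanning enabled. -->
      <Import-Package>*</Import-Package>
    </instructions>
  </configuration>
</plugin>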
Configuration properties Some of the commonly used configuration properties are: Bundle-SymbolicName Bundle-Name Bundle-Version Export-Package Private-Package Import-Package Setting a bundle's symbolic name By default, the bundle plug-in sets the value for the Bundle-SymbolicName property to groupId + "." + artifactId , with the following exceptions: If groupId has only one section (no dots), the first package name with classes is returned. For example, if the group ID is commons-logging:commons-logging , the bundle's symbolic name is org.apache.commons.logging . If artifactId is equal to the last section of groupId , then groupId is used. For example, if the POM specifies the group ID and artifact ID as org.apache.maven:maven , the bundle's symbolic name is org.apache.maven . If artifactId starts with the last section of groupId , that portion is removed. For example, if the POM specifies the group ID and artifact ID as org.apache.maven:maven-core , the bundle's symbolic name is org.apache.maven.core . To specify your own value for the bundle's symbolic name, add a Bundle-SymbolicName child to the plug-in's instructions element, as shown in Example 5.2, "Setting a bundle's symbolic name" . Example 5.2. Setting a bundle's symbolic name Setting a bundle's name By default, a bundle's name is set to ${project.name} . To specify your own value for the bundle's name, add a Bundle-Name child to the plug-in's instructions element, as shown in Example 5.3, "Setting a bundle's name" . Example 5.3. Setting a bundle's name Setting a bundle's version By default, a bundle's version is set to ${project.version} . Any dashes ( - ) are replaced with dots ( . ) and the number is padded up to four digits. For example, 4.2-SNAPSHOT becomes 4.2.0.SNAPSHOT . To specify your own value for the bundle's version, add a Bundle-Version child to the plug-in's instructions element, as shown in Example 5.4, "Setting a bundle's version" . Example 5.4. Setting a bundle's version Specifying exported packages By default, the OSGi manifest's Export-Package list is populated by all of the packages in your local Java source code (under src/main/java ), except for the default package, . , and any packages containing .impl or .internal . Important If you use a Private-Package element in your plug-in configuration and you do not specify a list of packages to export, the default behavior includes only the packages listed in the Private-Package element in the bundle. No packages are exported. The default behavior can result in very large packages and in exporting packages that should be kept private. To change the list of exported packages, you can add an Export-Package child to the plug-in's instructions element. The Export-Package element specifies a list of packages that are to be included in the bundle and that are to be exported. The package names can be specified using the * wildcard symbol. For example, the entry com.fuse.demo.* includes all packages on the project's classpath that start with com.fuse.demo . You can specify packages to be excluded by prefixing the entry with ! . For example, the entry !com.fuse.demo.private excludes the package com.fuse.demo.private . When excluding packages, the order of entries in the list is important. The list is processed in order from the beginning and any subsequent contradicting entries are ignored.
For example, to include all packages starting with com.fuse.demo except the package com.fuse.demo.private , list the packages using: However, if you list the packages using com.fuse.demo.*,!com.fuse.demo.private , then com.fuse.demo.private is included in the bundle because it matches the first pattern. Specifying private packages If you want to specify a list of packages to include in a bundle without exporting them, you can add a Private-Package instruction to the bundle plug-in configuration. By default, if you do not specify a Private-Package instruction, all packages in your local Java source are included in the bundle. Important If a package matches an entry in both the Private-Package element and the Export-Package element, the Export-Package element takes precedence. The package is added to the bundle and exported. The Private-Package element works similarly to the Export-Package element in that you specify a list of packages to be included in the bundle. The bundle plug-in uses the list to find all classes on the project's classpath that are to be included in the bundle. These packages are packaged in the bundle, but not exported (unless they are also selected by the Export-Package instruction). Example 5.5, "Including a private package in a bundle" shows the configuration for including a private package in a bundle. Example 5.5. Including a private package in a bundle Specifying imported packages By default, the bundle plug-in populates the OSGi manifest's Import-Package property with a list of all the packages referred to by the contents of the bundle. While the default behavior is typically sufficient for most projects, you might find instances where you want to import packages that are not automatically added to the list. The default behavior can also result in unwanted packages being imported. To specify a list of packages to be imported by the bundle, add an Import-Package child to the plug-in's instructions element. The syntax for the package list is the same as for the Export-Package element and the Private-Package element. Important When you use the Import-Package element, the plug-in does not automatically scan the bundle's contents to determine if there are any required imports. To ensure that the contents of the bundle are scanned, you must place an * as the last entry in the package list. Example 5.6, "Specifying the packages imported by a bundle" shows the configuration for specifying the packages imported by a bundle. Example 5.6. Specifying the packages imported by a bundle More information For more information on configuring a bundle plug-in, see: Apache Felix documentation Peter Kriens' aQute Software Consultancy web site 5.3.8. OSGi ConfigAdmin file naming convention PID strings (symbolic-name syntax) allow hyphens in the OSGi specification. However, hyphens are interpreted by the Apache Felix fileinstall and config:edit shell commands to differentiate between a "managed service" and a "managed service factory". Therefore, it is recommended not to use hyphens elsewhere in a PID string. Note The Configuration file names are related to the PID and factory PID. | [
"mvn archetype:generate -DarchetypeGroupId=org.apache.camel.archetypes -DarchetypeArtifactId=camel-archetype-blueprint -DarchetypeVersion=2.23.2.fuse-7_13_0-00013-redhat-00001",
"mvn install",
"<project ... > <packaging> bundle </packaging> </project>",
"<project ... > <build> <defaultGoal>install</defaultGoal> <plugins> <plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <version>3.3.0</version> <extensions>true</extensions> <configuration> <instructions> <Bundle-SymbolicName>USD{project.groupId}.USD{project.artifactId} </Bundle-SymbolicName> <Import-Package>*</Import-Package> </instructions> </configuration> </plugin> </plugins> </build> </project>",
"<project ... > <build> <defaultGoal>install</defaultGoal> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.8</source> <target>1.8</target> </configuration> </plugin> </plugins> </build> </project>",
"javax.jws javax.wsdl javax.xml.bind javax.xml.bind.annotation javax.xml.namespace javax.xml.ws org.apache.cxf.bus org.apache.cxf.bus.spring org.apache.cxf.bus.resource org.apache.cxf.configuration.spring org.apache.cxf.resource org.apache.cxf.jaxws org.springframework.beans.factory.config",
"<project ... > <build> <plugins> <plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <extensions>true</extensions> <configuration> <instructions> <Import-Package> javax.jws, javax.wsdl, javax.xml.bind, javax.xml.bind.annotation, javax.xml.namespace, javax.xml.ws, org.apache.cxf.bus, org.apache.cxf.bus.spring, org.apache.cxf.bus.resource, org.apache.cxf.configuration.spring, org.apache.cxf.resource, org.apache.cxf.jaxws, org.springframework.beans.factory.config, * </Import-Package> </instructions> </configuration> </plugin> </plugins> </build> </project>",
"<plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <configuration> <instructions> <Bundle-SymbolicName>USD{project.artifactId}</Bundle-SymbolicName> </instructions> </configuration> </plugin>",
"<plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <configuration> <instructions> <Bundle-Name>JoeFred</Bundle-Name> </instructions> </configuration> </plugin>",
"<plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <configuration> <instructions> <Bundle-Version>1.0.3.1</Bundle-Version> </instructions> </configuration> </plugin>",
"!com.fuse.demo.private,com.fuse.demo.*",
"<plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <configuration> <instructions> <Private-Package>org.apache.cxf.wsdlFirst.impl</Private-Package> </instructions> </configuration> </plugin>",
"<plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <configuration> <instructions> <Import-Package>javax.jws, javax.wsdl, org.apache.cxf.bus, org.apache.cxf.bus.spring, org.apache.cxf.bus.resource, org.apache.cxf.configuration.spring, org.apache.cxf.resource, org.springframework.beans.factory.config, * </Import-Package> </instructions> </configuration> </plugin>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_apache_karaf/buildbundle |
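Section 5.3.5 above names the cxf-java2ws-plugin for the Java-first approach but does not show what that approach starts from. The following is a minimal sketch of a Java-first service endpoint interface that such a build could derive WSDL from; the package, interface, and operation names are illustrative assumptions and are not part of the original chapter.

package com.example.demo; // hypothetical package name

import javax.jws.WebMethod;
import javax.jws.WebService;

// Java-first SEI: annotate a plain Java interface and let the
// cxf-java2ws-plugin derive the WSDL from it at build time.
@WebService(name = "Greeter")
public interface Greeter {

    // Exposed as the WSDL operation "greet"
    @WebMethod(operationName = "greet")
    String greet(String name);
}

Because the interface only relies on javax.jws annotations, the mandatory Import-Package list shown in Example 5.1 already covers it.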
Chapter 91. Micrometer | Chapter 91. Micrometer Since Camel 2.22 Only producer is supported The Micrometer component allows you to collect various metrics directly from Camel routes. Supported metric types are counter , summary , and timer . Micrometer provides a simple way to measure the behavior of an application. Configurable reporting backends (via Micrometer registries) enable different integration options for collecting and visualizing statistics. The component also provides a MicrometerRoutePolicyFactory , which allows you to expose route statistics using Micrometer, as well as EventNotifier implementations for counting routes and timing exchanges from their creation to their completion. 91.1. Dependencies Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-micrometer</artifactId> </dependency> At the same time, update the dependencyManagement section with: <dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>camel-spring-boot-bom</artifactId> <version>${camel-spring-boot-version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> 91.2. URI format 91.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 91.3.1. Configuring component options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connections, and so forth. Some components only have a few options, and others may have many. Because components typically have preconfigured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 91.3.2. Configuring endpoint options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type-safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders , which allows you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, which gives more flexibility and reuse. The following two sections list all the options, first for the component and then for the endpoint. 91.4. Component Options The Micrometer component supports 3 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started.
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. False Boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. True Boolean metricsRegistry (advanced) To use a custom configured MetricRegistry. MeterRegistry 91.5. Endpoint Options The Micrometer endpoint is configured using URI syntax: with the following path and query parameters: 91.5.1. Path Parameters (3 parameters) Name Description Default Type metricsType (producer) Required Type of metrics. Enum values: counter summary timer Type metricsName (producer) Required Name of metrics. String tags (producer) Tags of metrics. Iterable 91.5.2. Query Parameters (6 parameters) Name Description Default Type action (producer) Action expression when using timer type. Enum values: start stop String decrement (producer) Decrement value expression when using counter type. String increment (producer) Increment value expression when using counter type. boolean metricsDescription (producer) Description of metrics. String value (producer) Value expression when using histogram type. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. False Boolean 91.6. Message Headers The Micrometer component supports 7 message header(s), which is/are listed below: Name Description Default Type CamelMetricsTimerAction (producer) Constant: HEADER_TIMER_ACTION Override timer action in URI. Enum values: start stop MicrometerTimerAction CamelMetricsHistogramValue (producer) Constant: HEADER_HISTOGRAM_VALUE Override histogram value in URI. Long CamelMetricsCounterDecrement (producer) Constant: HEADER_COUNTER_DECREMENT Override decrement value in URI. Double CamelMetricsCounterIncrement (producer) Constant: HEADER_COUNTER_INCREMENT Override increment value in URI. Double CamelMetricsName (producer) Constant: HEADER_METRIC_NAME Override name value in URI. String CamelMetricsDescription (producer) Constant: HEADER_METRIC_DESCRIPTION Override description value in URI. String CamelMetricsTags (producer) Constant: HEADER_METRIC_TAGS To augment meter tags defined as URI parameters. Iterable 91.7. Meter Registry By default the Camel Micrometer component creates a SimpleMeterRegistry instance, suitable mainly for testing. You should define a dedicated registry by providing a MeterRegistry bean. Micrometer registries primarily determine the backend monitoring system to be used. A CompositeMeterRegistry can be used to address more than one monitoring target. 
For example, using Spring Java Configuration: @Configuration public static class MyConfig extends SingleRouteCamelConfiguration { @Bean @Override public RouteBuilder route() { return new RouteBuilder() { @Override public void configure() throws Exception { // define Camel routes here } }; } @Bean(name = MicrometerConstants.METRICS_REGISTRY_NAME) public MeterRegistry getMeterRegistry() { CompositeMeterRegistry registry = ...; registry.add(...); // ... return registry; } } Or using CDI: @Override public void configure() { from("...") // Register the 'my-meter' meter in the MetricRegistry below .to("micrometer:meter:my-meter"); } @Produces // If multiple MetricRegistry beans // @Named(MicrometerConstants.METRICS_REGISTRY_NAME) MetricRegistry registry() { CompositeMeterRegistry registry = ...; registry.add(...); // ... return registry; } } 91.8. Default Camel Metrics Some Camel-specific metrics are available out of the box. Name Type Description camel.message.history Timer Sample of performance of each node in the route when message history is enabled camel.routes.added Gauge Number of routes added camel.routes.running Gauge Number of routes running camel.exchanges.inflight Gauge Route inflight messages camel.exchanges.total Counter Total number of processed exchanges camel.exchanges.succeeded Counter Number of successfully completed exchanges camel.exchanges.failed Counter Number of failed exchanges camel.exchanges.failures.handled Counter Number of failures handled camel.exchanges.external.redeliveries Counter Number of externally initiated redeliveries (such as from a JMS broker) camel.exchange.event.notifier Gauge + Summary Metrics for message created, sent, completed, and failed events camel.route.policy Gauge + Summary Route performance metrics camel.route.policy.long.task Gauge + Summary Route long task metric 91.9. Usage of producers Each meter has a type and a name. Supported types are counter , distribution summary , and timer. If no type is provided then a counter is used by default. The meter name is a string that is evaluated as a Simple expression. In addition to using the CamelMetricsName header (see below), this allows you to select the meter depending on exchange data (a short sketch of a dynamic meter name follows the command listing at the end of this chapter). The optional tags URI parameter is a comma-separated string, consisting of key=value expressions. Both key and value are strings that are also evaluated as Simple expressions. For example, the URI parameter tags=X=${header.Y} assigns the current value of header Y to the key X . 91.9.1. Headers The meter name defined in the URI can be overridden by populating a header with name CamelMetricsName. The meter tags defined as URI parameters can be augmented by populating a header with name CamelMetricsTags. For example: from("direct:in") .setHeader(MicrometerConstants.HEADER_METRIC_NAME, constant("new.name")) .setHeader(MicrometerConstants.HEADER_METRIC_TAGS, constant(Tags.of("dynamic-key", "dynamic-value"))) .to("micrometer:counter:name.not.used?tags=key=value") .to("direct:out"); will update a counter with name new.name instead of name.not.used using the tag dynamic-key with value dynamic-value in addition to the tag key with value value . All Metrics-specific headers are removed from the message once the Micrometer endpoint finishes processing the exchange. While processing the exchange, the Micrometer endpoint catches all exceptions and writes a log entry at level warn . 91.10. Counter micrometer:counter:name[?options] 91.10.1.
Options Name Default Description increment Double value to add to the counter decrement Double value to subtract from the counter If neither increment nor decrement is defined, the counter value is incremented by one. If increment and decrement are both defined, only the increment operation is called. // update counter simple.counter by 7 from("direct:in") .to("micrometer:counter:simple.counter?increment=7") .to("direct:out"); // increment counter simple.counter by 1 from("direct:in") .to("micrometer:counter:simple.counter") .to("direct:out"); Both increment and decrement values are evaluated as Simple expressions with a Double result, for example, if header X contains a value that evaluates to 3.0, the simple.counter counter is decremented by 3.0: // decrement counter simple.counter by 3 from("direct:in") .to("micrometer:counter:simple.counter?decrement=${header.X}") .to("direct:out"); 91.10.2. Headers Like in camel-metrics, specific Message headers can be used to override increment and decrement values specified in the Micrometer endpoint URI. Name Description Expected type CamelMetricsCounterIncrement Double value to add to the counter CamelMetricsCounterDecrement Double value to subtract from the counter from("direct:in") .setHeader(MicrometerConstants.HEADER_COUNTER_INCREMENT, constant(417.0D)) .to("micrometer:counter:simple.counter?increment=7") .to("direct:out"); from("direct:in") .setHeader(MicrometerConstants.HEADER_COUNTER_INCREMENT, simple("${body.length}")) .to("micrometer:counter:body.length") .to("direct:out"); 91.11. Distribution Summary micrometer:summary:metricname[?options] 91.11.1. Options Name Default Description value Value to use in histogram If no value is set, nothing is added to the histogram and a warning is logged. // adds value 9923 to simple.histogram from("direct:in") .to("micrometer:summary:simple.histogram?value=9923") .to("direct:out"); // nothing is added to simple.histogram; warning is logged from("direct:in") .to("micrometer:summary:simple.histogram") .to("direct:out"); value is evaluated as a Simple expression with a Double result, for example, if header X contains a value that evaluates to 3.0, this value is registered with the simple.histogram : from("direct:in") .to("micrometer:summary:simple.histogram?value=${header.X}") .to("direct:out"); 91.11.2. Headers Like in camel-metrics , a specific Message header can be used to override the value specified in the Micrometer endpoint URI. Name Description Expected type CamelMetricsHistogramValue Override histogram value in URI Long // adds value 992.0 to simple.histogram from("direct:in") .setHeader(MicrometerConstants.HEADER_HISTOGRAM_VALUE, constant(992.0D)) .to("micrometer:summary:simple.histogram?value=700") .to("direct:out") 91.12. Timer micrometer:timer:metricname[?options] 91.12.1. Options Name Default Description action start or stop If no action or an invalid value is provided, a warning is logged without any timer update. If action start is called on an already running timer or stop is called on an unknown timer, nothing is updated and a warning is logged. // measure time spent in route "direct:calculate" from("direct:in") .to("micrometer:timer:simple.timer?action=start") .to("direct:calculate") .to("micrometer:timer:simple.timer?action=stop"); Timer.Sample objects are stored as Exchange properties between different Metrics component calls. action is evaluated as a Simple expression returning a result of type MicrometerTimerAction . 91.12.2.
Headers Like in camel-metrics , a specific Message header can be used to override the action value specified in the Micrometer endpoint URI. Name Description Expected type CamelMetricsTimerAction Override timer action in URI org.apache.camel.component.micrometer.MicrometerTimerAction // sets timer action using header from("direct:in") .setHeader(MicrometerConstants.HEADER_TIMER_ACTION, MicrometerTimerAction.start) .to("micrometer:timer:simple.timer") .to("direct:out"); 91.13. Using Micrometer route policy factory MicrometerRoutePolicyFactory allows you to add a RoutePolicy for each route in order to expose route utilization statistics using Micrometer. This factory can be used in Java and XML as the examples below demonstrate. Note Instead of using the MicrometerRoutePolicyFactory you can define a dedicated MicrometerRoutePolicy per route you want to instrument, in case you only want to instrument a few selected routes (a sketch of a per-route policy follows the command listing at the end of this chapter). From Java you just add the factory to the CamelContext as shown below: context.addRoutePolicyFactory(new MicrometerRoutePolicyFactory()); And from XML DSL you define a <bean> as follows: <!-- use camel-micrometer route policy to gather metrics for all routes --> <bean id="metricsRoutePolicyFactory" class="org.apache.camel.component.micrometer.routepolicy.MicrometerRoutePolicyFactory"/> The MicrometerRoutePolicyFactory and MicrometerRoutePolicy support the following options: Name Default Description prettyPrint false Whether to use pretty print when outputting statistics in JSON format meterRegistry Allows using a shared MeterRegistry . If none is provided, Camel creates a shared instance used by this CamelContext. durationUnit TimeUnit.MILLISECONDS The unit to use for durations when dumping the statistics as JSON. configuration see below MicrometerRoutePolicyConfiguration.class The MicrometerRoutePolicyConfiguration supports the following options: Name Default Description additionalCounters true activates all additional counters exchangesSucceeded true activates counter for succeeded exchanges exchangesFailed true activates counter for failed exchanges exchangesTotal true activates counter for total count of exchanges externalRedeliveries true activates counter for redeliveries of exchanges failuresHandled true activates counter for handled failures longTask false activates long task timer (current processing time for micrometer) timerInitiator null Consumer<Timer.Builder> for custom initialization of the Timer longTaskInitiator null Consumer<LongTaskTimer.Builder> for custom initialization of the LongTaskTimer If JMX is enabled in the CamelContext, the MBean is registered in the type=services tree with name=MicrometerRoutePolicy . 91.14. Using Micrometer message history factory MicrometerMessageHistoryFactory allows you to use metrics to capture Message History performance statistics while routing messages. It works by using a Micrometer Timer for each node in all the routes. This factory can be used in Java and XML as the examples below demonstrate.
From Java you just set the factory to the CamelContext as shown below: context.setMessageHistoryFactory(new MicrometerMessageHistoryFactory()); And from XML DSL you define a <bean> as follows: <!-- use camel-micrometer message history to gather metrics for all messages being routed --> <bean id="metricsMessageHistoryFactory" class="org.apache.camel.component.micrometer.messagehistory.MicrometerMessageHistoryFactory"/> The following options are supported on the factory: Name Default Description prettyPrint false Whether to use pretty print when outputting statistics in JSON format meterRegistry Allows using a shared MeterRegistry . If none is provided, Camel creates a shared instance used by this CamelContext. durationUnit TimeUnit.MILLISECONDS The unit to use for durations when dumping the statistics as JSON. At runtime, the metrics can be accessed from the Java API or JMX, which allows you to gather the data as JSON output. From Java code you can get the service from the CamelContext as shown: MicrometerMessageHistoryService service = context.hasService(MicrometerMessageHistoryService.class); String json = service.dumpStatisticsAsJson(); If JMX is enabled in the CamelContext, the MBean is registered in the type=services tree with name=MicrometerMessageHistory . 91.15. Micrometer event notification There is a MicrometerRouteEventNotifier (counting added and running routes) and a MicrometerExchangeEventNotifier (timing exchanges from their creation to their completion). EventNotifiers can be added to the CamelContext, e.g.: camelContext.getManagementStrategy().addEventNotifier(new MicrometerExchangeEventNotifier()) At runtime, the metrics can be accessed from the Java API or JMX, which allows you to gather the data as JSON output. From Java code you can get the service from the CamelContext as shown: MicrometerEventNotifierService service = context.hasService(MicrometerEventNotifierService.class); String json = service.dumpStatisticsAsJson(); If JMX is enabled in the CamelContext, the MBean is registered in the type=services tree with name=MicrometerEventNotifier . 91.16. Instrumenting Camel Thread Pools InstrumentedThreadPoolFactory allows you to gather performance information about Camel Thread Pools by injecting an InstrumentedThreadPoolFactory , which collects information from inside of Camel (a sketch of this setup follows the command listing at the end of this chapter). See more details at Advanced configuration of CamelContext using Spring. 91.17. Exposing Micrometer statistics in JMX Micrometer uses MeterRegistry implementations in order to publish statistics. While in production scenarios it is advisable to select a dedicated backend like Prometheus or Graphite, it may be sufficient for test or local deployments to publish statistics to JMX. In order to achieve this, add the following dependency: <dependency> <groupId>io.micrometer</groupId> <artifactId>micrometer-registry-jmx</artifactId> <version>${micrometer-version}</version> </dependency> and add a JmxMeterRegistry instance: @Bean(name = MicrometerConstants.METRICS_REGISTRY_NAME) public MeterRegistry getMeterRegistry() { CompositeMeterRegistry meterRegistry = new CompositeMeterRegistry(); meterRegistry.add(...); meterRegistry.add(new JmxMeterRegistry( CamelJmxConfig.DEFAULT, Clock.SYSTEM, HierarchicalNameMapper.DEFAULT)); return meterRegistry; } } The HierarchicalNameMapper strategy determines how meter name and tags are assembled into an MBean name. 91.18.
Using Camel Micrometer with Spring Boot When you use camel-micrometer-starter with Spring Boot, Spring Boot auto configuration automatically enables metrics capture if an io.micrometer.core.instrument.MeterRegistry is available. For example, to capture data with Prometheus, you can add the following dependency: <dependency> <groupId>io.micrometer</groupId> <artifactId>micrometer-registry-prometheus</artifactId> </dependency> See the following table for options to specify what metrics to capture, or to turn it off. 91.19. Spring Boot Auto Configuration Compared to the plain camel-micrometer component, the micrometer component on Spring Boot provides 10 more options, which are listed below: Name Description Default Type camel.component.micrometer.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc. True Boolean camel.component.micrometer.enabled Whether to enable auto configuration of the micrometer component. This is enabled by default. Boolean camel.component.micrometer.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. False Boolean camel.component.micrometer.metrics-registry To use a custom configured MetricRegistry. The option is an io.micrometer.core.instrument.MeterRegistry type. MeterRegistry camel.metrics.enable-exchange-event-notifier Set whether to enable the MicrometerExchangeEventNotifier for capturing metrics on exchange processing times. True Boolean camel.metrics.enable-message-history Set whether to enable the MicrometerMessageHistoryFactory for capturing metrics on individual route node processing times. Depending on the number of configured route nodes, there is the potential to create a large volume of metrics. Therefore, this option is disabled by default. False Boolean camel.metrics.enable-route-event-notifier Set whether to enable the MicrometerRouteEventNotifier for capturing metrics on the total number of routes and total number of routes running. True Boolean camel.metrics.enable-route-policy Set whether to enable the MicrometerRoutePolicyFactory for capturing metrics on route processing times. True Boolean camel.metrics.uri-tag-dynamic Whether to use static or dynamic values for URI tags in captured metrics. When using dynamic tags, a REST service with base URL /users/{id} captures metrics with a uri tag holding the actual dynamic value, such as /users/123. However, this can lead to many tags as the URI is dynamic, so use this with care. False Boolean camel.metrics.uri-tag-enabled Whether HTTP uri tags should be enabled in captured metrics. If disabled, the uri tag is likely not resolvable and is marked as UNKNOWN. True Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-micrometer</artifactId> </dependency>",
"<dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>camel-spring-boot-bom</artifactId> <version>USD{camel-spring-boot-version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>",
"micrometer:[ counter | summary | timer ]:metricname[?options]",
"micrometer:metricsType:metricsName",
"@Configuration public static class MyConfig extends SingleRouteCamelConfiguration { @Bean @Override public RouteBuilder route() { return new RouteBuilder() { @Override public void configure() throws Exception { // define Camel routes here } }; } @Bean(name = MicrometerConstants.METRICS_REGISTRY_NAME) public MeterRegistry getMeterRegistry() { CompositeMeterRegistry registry = ...; registry.add(...); // return registry; } }",
"@Override public void configure() { from(\"...\") // Register the 'my-meter' meter in the MetricRegistry below .to(\"micrometer:meter:my-meter\"); } @Produces // If multiple MetricRegistry beans // @Named(MicrometerConstants.METRICS_REGISTRY_NAME) MetricRegistry registry() { CompositeMeterRegistry registry = ...; registry.add(...); // return registry; } }",
"from(\"direct:in\") .setHeader(MicrometerConstants.HEADER_METRIC_NAME, constant(\"new.name\")) .setHeader(MicrometerConstants.HEADER_METRIC_TAGS, constant(Tags.of(\"dynamic-key\", \"dynamic-value\"))) .to(\"micrometer:counter:name.not.used?tags=key=value\") .to(\"direct:out\");",
"micrometer:counter:name[?options]",
"// update counter simple.counter by 7 from(\"direct:in\") .to(\"micrometer:counter:simple.counter?increment=7\") .to(\"direct:out\"); // increment counter simple.counter by 1 from(\"direct:in\") .to(\"micrometer:counter:simple.counter\") .to(\"direct:out\");",
"// decrement counter simple.counter by 3 from(\"direct:in\") .to(\"micrometer:counter:simple.counter?decrement=USD{header.X}\") .to(\"direct:out\");",
"from(\"direct:in\") .setHeader(MicrometerConstants.HEADER_COUNTER_INCREMENT, constant(417.0D)) .to(\"micrometer:counter:simple.counter?increment=7\") .to(\"direct:out\");",
"from(\"direct:in\") .setHeader(MicrometerConstants.HEADER_COUNTER_INCREMENT, simple(\"USD{body.length}\")) .to(\"micrometer:counter:body.length\") .to(\"direct:out\");",
"micrometer:summary:metricname[?options]",
"// adds value 9923 to simple.histogram from(\"direct:in\") .to(\"micrometer:summary:simple.histogram?value=9923\") .to(\"direct:out\"); // nothing is added to simple.histogram; warning is logged from(\"direct:in\") .to(\"micrometer:summary:simple.histogram\") .to(\"direct:out\");",
"from(\"direct:in\") .to(\"micrometer:summary:simple.histogram?value=USD{header.X}\") .to(\"direct:out\");",
"// adds value 992.0 to simple.histogram from(\"direct:in\") .setHeader(MicrometerConstants.HEADER_HISTOGRAM_VALUE, constant(992.0D)) .to(\"micrometer:summary:simple.histogram?value=700\") .to(\"direct:out\")",
"micrometer:timer:metricname[?options]",
"// measure time spent in route \"direct:calculate\" from(\"direct:in\") .to(\"micrometer:timer:simple.timer?action=start\") .to(\"direct:calculate\") .to(\"micrometer:timer:simple.timer?action=stop\");",
"// sets timer action using header from(\"direct:in\") .setHeader(MicrometerConstants.HEADER_TIMER_ACTION, MicrometerTimerAction.start) .to(\"micrometer:timer:simple.timer\") .to(\"direct:out\");",
"context.addRoutePolicyFactory(new MicrometerRoutePolicyFactory());",
"<!-- use camel-micrometer route policy to gather metrics for all routes --> <bean id=\"metricsRoutePolicyFactory\" class=\"org.apache.camel.component.micrometer.routepolicy.MicrometerRoutePolicyFactory\"/>",
"context.setMessageHistoryFactory(new MicrometerMessageHistoryFactory());",
"<!-- use camel-micrometer message history to gather metrics for all messages being routed --> <bean id=\"metricsMessageHistoryFactory\" class=\"org.apache.camel.component.micrometer.messagehistory.MicrometerMessageHistoryFactory\"/>",
"MicrometerMessageHistoryService service = context.hasService(MicrometerMessageHistoryService.class); String json = service.dumpStatisticsAsJson();",
"camelContext.getManagementStrategy().addEventNotifier(new MicrometerExchangeEventNotifier())",
"MicrometerEventNotifierService service = context.hasService(MicrometerEventNotifierService.class); String json = service.dumpStatisticsAsJson();",
"<dependency> <groupId>io.micrometer</groupId> <artifactId>micrometer-registry-jmx</artifactId> <version>USD{micrometer-version}</version> </dependency>",
"@Bean(name = MicrometerConstants.METRICS_REGISTRY_NAME) public MeterRegistry getMeterRegistry() { CompositeMeterRegistry meterRegistry = new CompositeMeterRegistry(); meterRegistry.add(...); meterRegistry.add(new JmxMeterRegistry( CamelJmxConfig.DEFAULT, Clock.SYSTEM, HierarchicalNameMapper.DEFAULT)); return meterRegistry; } }",
"<dependency> <groupId>io.micrometer</groupId> <artifactId>micrometer-registry-prometheus</artifactId> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-micrometer-component-starter |
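Section 91.9 above notes that the meter name and tag values are Simple expressions, which lets a single endpoint update different meters depending on exchange data, but the chapter only shows fixed names. The following sketch illustrates a dynamic meter name; the tenant and status headers are assumed to be set earlier in the route and are illustrative only.

// Inside a RouteBuilder.configure() method
from("direct:in")
    // Both the meter name and the tag value are Simple expressions,
    // so the counter that is updated depends on the incoming exchange.
    .to("micrometer:counter:requests.${header.tenant}?tags=status=${header.status}")
    .to("direct:out");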
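The note in Section 91.13 mentions attaching a dedicated MicrometerRoutePolicy to individual routes instead of registering a factory for every route, but gives no example. A sketch of that approach follows; the endpoint URIs are placeholders, and it assumes MicrometerRoutePolicy can be constructed without arguments and picks up the meter registry from the CamelContext, which should be verified against your camel-micrometer version.

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.micrometer.routepolicy.MicrometerRoutePolicy;

public class InstrumentedOrderRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Only this route is instrumented; other routes in the context
        // are left untouched, unlike with MicrometerRoutePolicyFactory.
        from("direct:orders")
            .routePolicy(new MicrometerRoutePolicy())
            .to("log:orders");
    }
}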
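Section 91.16 describes injecting an InstrumentedThreadPoolFactory but shows no code. The sketch below shows one plausible wiring; the import package and the constructor arguments (a MeterRegistry plus the delegate ThreadPoolFactory) are assumptions and should be checked against the camel-micrometer API for your version.

import io.micrometer.core.instrument.MeterRegistry;
import org.apache.camel.CamelContext;
import org.apache.camel.component.micrometer.spi.InstrumentedThreadPoolFactory; // package name is an assumption
import org.apache.camel.spi.ThreadPoolFactory;

public final class ThreadPoolInstrumentation {

    // Wrap the default ThreadPoolFactory so that thread pools created by
    // Camel report their statistics to the given MeterRegistry.
    public static void apply(CamelContext context, MeterRegistry registry) {
        ThreadPoolFactory delegate = context.getExecutorServiceManager().getThreadPoolFactory();
        context.getExecutorServiceManager()
               .setThreadPoolFactory(new InstrumentedThreadPoolFactory(registry, delegate));
    }
}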
4.12. Prioritizing and Disabling SELinux Policy Modules | 4.12. Prioritizing and Disabling SELinux Policy Modules The SELinux module storage in /etc/selinux/ allows using a priority on SELinux modules. Enter the following command as root to show two module directories with a different priority: While the default priority used by semodule utility is 400, the priority used in selinux-policy packages is 100, so you can find most of the SELinux modules installed with the priority 100. You can override an existing module with a modified module with the same name using a higher priority. When there are more modules with the same name and different priorities, only a module with the highest priority is used when the policy is built. Example 4.1. Using SELinux Policy Modules Priority Prepare a new module with modified file context. Install the module with the semodule -i command and set the priority of the module to 400. We use sandbox.pp in the following example. To return back to the default module, enter the semodule -r command as root: Disabling a System Policy Module To disable a system policy module, enter the following command as root: Warning If you remove a system policy module using the semodule -r command, it is deleted on your system's storage and you cannot load it again. To avoid unnecessary reinstallations of the selinux-policy-targeted package for restoring all system policy modules, use the semodule -d command instead. | [
"~]# ls /etc/selinux/targeted/active/modules 100 400 disabled",
"~]# semodule -X 400 -i sandbox.pp ~]# semodule --list-modules=full | grep sandbox 400 sandbox pp 100 sandbox pp",
"~]# semodule -X 400 -r sandbox libsemanage.semanage_direct_remove_key: sandbox module at priority 100 is now active.",
"semodule -d MODULE_NAME"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/Security-Enhanced_Linux-prioritizing_selinux_modules |
Chapter 2. Non-interactively selecting a system-wide Red Hat build of OpenJDK version on RHEL | Chapter 2. Non-interactively selecting a system-wide Red Hat build of OpenJDK version on RHEL If you have multiple versions of Red Hat build of OpenJDK installed on RHEL, you can select the default Red Hat build of OpenJDK version to use system-wide in a non-interactive way. This is useful for administrators who have root privileges on a Red Hat Enterprise Linux system and need to switch the default Red Hat build of OpenJDK on many systems in an automated way. Note If you do not have root privileges, you can select an Red Hat build of OpenJDK version by configuring the JAVA_HOME environment variable . Prerequisites You must have root privileges on the system. Multiple versions of Red Hat build of OpenJDK were installed using the yum package manager. Procedure Select the major Red Hat build of OpenJDK version to switch to. For example, for Red Hat build of OpenJDK 8, use java-1.8.0-openjdk. Verify that the active Red Hat build of OpenJDK version is the one you specified. Note A similar approach can be followed for javac . | [
"PKG_NAME=java-1.8.0-openjdk JAVA_TO_SELECT=USD(alternatives --display java | grep \"family USDPKG_NAME\" | cut -d' ' -f1) alternatives --set java USDJAVA_TO_SELECT",
"java -version openjdk version \"1.8.0_242\" OpenJDK Runtime Environment (build 1.8.0_242-b08) OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode)"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/configuring_red_hat_build_of_openjdk_8_for_rhel/noninteractively-selecting-systemwide-openjdk8-version-on-rhel |
Part III. Secure Applications | Part III. Secure Applications This part provides details on how to use Pluggable Authentication Modules ( PAM ), how to use the Kerberos authentication protocol and the certmonger daemon, and, finally, how to configure applications for Single sign-on ( SSO ). | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/secure-apps |
Chapter 1. About the Multicloud Object Gateway | Chapter 1. About the Multicloud Object Gateway The Multicloud Object Gateway (MCG) is a lightweight object storage service for OpenShift, allowing users to start small and then scale as needed on-premise, in multiple clusters, and with cloud-native storage. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/managing_hybrid_and_multicloud_resources/about-the-multicloud-object-gateway |
Chapter 11. Configuring OIDC for Red Hat Quay | Chapter 11. Configuring OIDC for Red Hat Quay Configuring OpenID Connect (OIDC) for Red Hat Quay can provide several benefits to your Red Hat Quay deployment. For example, OIDC allows users to authenticate to Red Hat Quay using their existing credentials from an OIDC provider, such as Red Hat Single Sign-On , Google, Github, Microsoft, or others. Other benefits of OIDC include centralized user management, enhanced security, and single sign-on (SSO). Overall, OIDC configuration can simplify user authentication and management, enhance security, and provide a seamless user experience for Red Hat Quay users. The following procedures show you how to configure Red Hat Single Sign-On and Azure AD. Collectively, these procedures include configuring OIDC on the Red Hat Quay Operator, and on standalone deployments by using the Red Hat Quay config tool. Note By following these procedures, you will be able to add any OIDC provider to Red Hat Quay, regardless of which identity provider you choose to use. 11.1. Configuring Red Hat Single Sign-On for Red Hat Quay Based on the Keycloak project, Red Hat Single Sign-On (RH-SSO) is an open source identity and access management (IAM) solution provided by Red Hat. RH-SSO allows organizations to manage user identities, secure applications, and enforce access control policies across their systems and applications. It also provides a unified authentication and authorization framework, which allows users to log in one time and gain access to multiple applications and resources without needing to re-authenticate. For more information, see Red Hat Single Sign-On . By configuring Red Hat Single Sign-On on Red Hat Quay, you can create a seamless authentication integration between Red Hat Quay and other application platforms like OpenShift Container Platform. 11.1.1. Configuring the Red Hat Single Sign-On Operator for the Red Hat Quay Operator Use the following procedure to configure Red Hat Single Sign-On for the Red Hat Quay Operator on OpenShift Container Platform. Prerequisites You have configured Red Hat Single Sign-On for the Red Hat Quay Operator. For more information, see Red Hat Single Sign-On Operator . You have configured TLS/SSL for your Red Hat Quay deployment and for Red Hat Single Sign-On. You have generated a single Certificate Authority (CA) and uploaded it to your Red Hat Single Sign-On Operator and to your Red Hat Quay configuration. You are logged into your OpenShift Container Platform cluster. You have installed the OpenShift CLI ( oc ). Procedure Navigate to the Red Hat Single Sign-On Admin Console . On the OpenShift Container Platform Web Console , navigate to Network Route . Select the Red Hat Single Sign-On project from the drop-down list. Find the Red Hat Single Sign-On Admin Console in the Routes table. Select the Realm that you will use to configure Red Hat Quay. Click Clients under the Configure section of the navigation panel, and then click the Create button to add a new OIDC for Red Hat Quay. Enter the following information. Client ID: quay-enterprise Client Protocol: openid-connect Root URL: https://<quay endpoint>/ Click Save . This results in a redirect to the Clients setting panel. Navigate to Access Type and select Confidential . Navigate to Valid Redirect URIs . You must provide three redirect URIs. The value should be the fully qualified domain name of the Red Hat Quay registry appended with /oauth2/redhatsso/callback . 
For example: https://<quay_endpoint>/oauth2/redhatsso/callback https://<quay_endpoint>/oauth2/redhatsso/callback/attach https://<quay_endpoint>/oauth2/redhatsso/callback/cli Click Save and navigate to the new Credentials setting. Copy the value of the Secret. 11.1.2. Configuring the Red Hat Quay Operator to use Red Hat Single Sign-On Use the following procedure to configure Red Hat Single Sign-On with the Red Hat Quay Operator. Prerequisites You have configured the Red Hat Single Sign-On Operator for the Red Hat Quay Operator. Procedure Enter the Red Hat Quay config editor tool by navigating to Operators Installed Operators . Click Red Hat Quay Quay Registry . Then, click the name of your Red Hat Quay registry, and the URL listed with Config Editor Endpoint . Upload a custom SSL/TLS certificate to your OpenShift Container Platform deployment. Navigate to the Red Hat Quay config tool UI. Under Custom SSL Certificates , click Select file and upload your custom SSL/TLS certificates. Reconfigure your Red Hat Quay deployment. Scroll down to the External Authorization (OAuth) section. Click Add OIDC Provider . When prompted, enter redhatsso . Enter the following information: OIDC Server: The fully qualified domain name (FQDN) of the Red Hat Single Sign-On instance, appended with /auth/realms/ and the Realm name. You must include the forward slash at the end, for example, https://sso-redhat.example.com/auth/realms/<keycloak_realm_name>/ . Client ID: The client ID of the application that is being registered with the identity provider, for example, quay-enterprise . Client Secret: The Secret from the Credentials tab of the quay-enterprise OIDC client settings. Service Name: The name that is displayed on the Red Hat Quay login page, for example, Red Hat Single Sign-On . Verified Email Address Claim: The name of the claim that is used to verify the email address of the user. Login Scopes: The scopes to send to the OIDC provider when performing the login flow, for example, openid . After configuration, you must click Add . Scroll down and click Validate Configuration Changes . Then, click Restart Now to deploy the Red Hat Quay Operator with OIDC enabled. 11.2. Configuring Azure AD OIDC for Red Hat Quay By integrating Azure AD authentication with Red Hat Quay, your organization can take advantage of the centralized user management and security features offered by Azure AD. Some features include the ability to manage user access to Red Hat Quay repositories based on their Azure AD roles and permissions, and the ability to enable multi-factor authentication and other security features provided by Azure AD. Azure Active Directory (Azure AD) authentication for Red Hat Quay allows users to authenticate and access Red Hat Quay using their Azure AD credentials. 11.2.1. Configuring Azure AD by using the Red Hat Quay config tool The following procedure configures Azure AD for Red Hat Quay using the config tool. Procedure Enter the Red Hat Quay config editor tool. If you are running a standalone Red Hat Quay deployment, you can enter the following command: Use your browser to navigate to the user interface for the configuration tool and log in. If you are on the Red Hat Quay Operator, navigate to Operators Installed Operators . Click Red Hat Quay Quay Registry . Then, click the name of your Red Hat Quay registry, and the URL listed with Config Editor Endpoint . Scroll down to the External Authorization (OAuth) section. Click Add OIDC Provider . When prompted, enter the ID for the OIDC provider.
Note Your OIDC server must end with / . After the OIDC provider has been added, Red Hat Quay lists three callback URLs that must be registered on Azure. These addresses allow Azure to direct back to Red Hat Quay after authentication is confirmed. For example: https://QUAY_HOSTNAME/oauth2/<name_of_service>/callback https://QUAY_HOSTNAME/oauth2/<name_of_service>/callback/attach https://QUAY_HOSTNAME/oauth2/<name_of_service>/callback/cli After all required fields have been set, validate your settings by clicking Validate Configuration Changes . If any errors are reported, continue editing your configuration until the settings are valid and Red Hat Quay can connect to your database and Redis servers. 11.2.2. Configuring Azure AD by updating the Red Hat Quay config.yaml file Use the following procedure to configure Azure AD by updating the Red Hat Quay config.yaml file directly. Procedure Using the following procedure, you can add any OIDC provider to Red Hat Quay, regardless of which identity provider is being added. If your system has a firewall in use, or proxy enabled, you must whitelist all Azure API endpoints for each OAuth application that is created. Otherwise, the following error is returned: x509: certificate signed by unknown authority . Add the following information to your Red Hat Quay config.yaml file: 1 The parent key that holds the OIDC configuration settings. In this example, the parent key used is AZURE_LOGIN_CONFIG , however, the string AZURE can be replaced with any arbitrary string based on your specific needs, for example ABC123 . However, the following strings are not accepted: GOOGLE , GITHUB . These strings are reserved for their respective identity platforms and require a specific config.yaml entry contingent upon which platform you are using. 2 The client ID of the application that is being registered with the identity provider. 3 The client secret of the application that is being registered with the identity provider. 4 The address of the OIDC server that is being used for authentication. In this example, you must use sts.windows.net as the issuer identifier. Using https://login.microsoftonline.com results in the following error: Could not create provider for AzureAD. Error: oidc: issuer did not match the issuer returned by provider, expected "https://login.microsoftonline.com/73f2e714-xxxx-xxxx-xxxx-dffe1df8a5d5" got "https://sts.windows.net/73f2e714-xxxx-xxxx-xxxx-dffe1df8a5d5/" . 5 The name of the service that is being authenticated. 6 The name of the claim that is used to verify the email address of the user. Proper configuration of Azure AD results in three redirects with the following format: https://QUAY_HOSTNAME/oauth2/<name_of_service>/callback https://QUAY_HOSTNAME/oauth2/<name_of_service>/callback/attach https://QUAY_HOSTNAME/oauth2/<name_of_service>/callback/cli Restart your Red Hat Quay deployment. | [
"sudo podman run --rm -it --name quay_config -p 80:8080 -p 443:8443 registry.redhat.io/quay/quay-rhel8:v3.9.10 config secret",
"AZURE_LOGIN_CONFIG: 1 CLIENT_ID: <client_id> 2 CLIENT_SECRET: <client_secret> 3 OIDC_SERVER: <oidc_server_address_> 4 SERVICE_NAME: Azure AD 5 VERIFIED_EMAIL_CLAIM_NAME: <verified_email> 6"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/manage_red_hat_quay/configuring-oidc-authentication |
7.3. Overcommitting Virtualized CPUs | 7.3. Overcommitting Virtualized CPUs The KVM hypervisor supports overcommitting virtualized CPUs (vCPUs). Virtualized CPUs can be overcommitted as far as load limits of guest virtual machines allow. Use caution when overcommitting vCPUs, as loads near 100% may cause dropped requests or unusable response times. In Red Hat Enterprise Linux 7, it is possible to overcommit guests with more than one vCPU, known as symmetric multiprocessing (SMP) virtual machines. However, you may experience performance deterioration when running more cores on the virtual machine than are present on your physical CPU. For example, a virtual machine with four vCPUs should not be run on a host machine with a dual core processor, but on a quad core host. Overcommitting SMP virtual machines beyond the physical number of processing cores causes significant performance degradation, due to programs getting less CPU time than required. In addition, it is not recommended to have more than 10 total allocated vCPUs per physical processor core. With SMP guests, some processing overhead is inherent. CPU overcommitting can increase the SMP overhead, because using time-slicing to allocate resources to guests can make inter-CPU communication inside a guest slower. This overhead increases with guests that have a larger number of vCPUs, or a larger overcommit ratio. Virtualized CPUs are overcommitted best when a single host has multiple guests, and each guest has a small number of vCPUs, compared to the number of host CPUs. KVM should safely support guests with loads under 100% at a ratio of five vCPUs (on 5 virtual machines) to one physical CPU on a single host. The KVM hypervisor will switch between all of the virtual machines, making sure that the load is balanced. For best performance, Red Hat recommends assigning guests only as many vCPUs as are required to run the programs that are inside each guest. Important Applications that use 100% of memory or processing resources may become unstable in overcommitted environments. Do not overcommit memory or CPUs in a production environment without extensive testing, as the CPU overcommit ratio and the amount of SMP are workload-dependent. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-overcommitting_with_kvm-overcommitting_virtualized_cpus
Preface | Preface Red Hat Trusted Application Pipeline (RHTAP) is not really a single product. Instead, it is a set of products that combine to form a highly automated, customizable, and secure platform for building applications. RHTAP includes the following products: Red Hat Developer Hub : a self-service portal for developers. OpenShift GitOps : to manage Kubernetes deployments and their infrastructure. OpenShift Pipelines : to enable automation and provide visibility for continuous integration and continuous delivery (CI/CD) of software. Trusted Artifact Signer : to sign and validate the artifacts that RHTAP produces. Trusted Profile Analyzer : to deliver actionable information about your security posture. It also depends on the following products: Quay.io : a container registry, where RHTAP stores your artifacts. Advanced Cluster Security (ACS) : a security tool that RHTAP uses to scan your artifacts. Note To see exactly which versions of these products RHTAP supports, reference the compatibility and support matrix in our Release notes . Because a fully-operational instance of RHTAP involves all of the products listed above, installing RHTAP takes time and effort. However, we have automated this process where possible, and are providing instructions here that we hope are helpful and concise. Additionally, be aware that the RHTAP installer is not a manager: it does not support upgrades. The installer generates your first deployment of RHTAP. After installation, you manage each product within RHTAP individually. Before you can begin installation, you must meet six prerequisites. Then you must complete seven procedures. Prerequisites ClusterAdmin access to an OpenShift Container Platform (OCP) cluster, through both the CLI and the web console An instance of Red Hat Advanced Cluster Security, as well as the following values from that instance: ACS API token. You can follow the instructions for the prerequisites here to create an API token. ACS central endpoint URL. You can follow the instructions here to configure the endpoint. To enable ACS to access private repositories in image registries, ACS will need to be configured for your specific registry For Quay.io, under Integrations->Image Integrations select the Quay.io card Add your OAUTH tokens to access your specific Quay.io instance Validate the access via the test button. This will ensure if the RHTAP is asked to scan a private image, ACS will have access A Quay.io account The Helm CLI tool A GitHub account Procedures Creating a GitHub application for RHTAP Forking the template catalog Creating a GitOps git token Creating the Docker configuration value Creating a private-values.yaml file Installing RHTAP in your cluster Finalizing your GitHub application The following pages of this document explain each of those procedures in detail. If you have the prerequisites, you are ready to start the installation process by creating a GitHub application. | null | https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.0/html/installing_red_hat_trusted_application_pipeline/pr01 |
Chapter 2. Preparing a control node and managed nodes to use RHEL system roles | Chapter 2. Preparing a control node and managed nodes to use RHEL system roles Before you can use individual RHEL system roles to manage services and settings, you must prepare the control node and managed nodes. 2.1. Preparing a control node on RHEL 8 Before using RHEL system roles, you must configure a control node. This system then configures the managed hosts from the inventory according to the playbooks. Prerequisites RHEL 8.6 or later is installed. For more information about installing RHEL, see Interactively installing RHEL from installation media . Note In RHEL 8.5 and earlier versions, Ansible packages were provided through Ansible Engine instead of Ansible Core, and with a different level of support. Do not use Ansible Engine because the packages might not be compatible with Ansible automation content in RHEL 8.6 and later. For more information, see Scope of support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream repositories . The system is registered to the Customer Portal. A Red Hat Enterprise Linux Server subscription is attached to the system. Optional: An Ansible Automation Platform subscription is attached to the system. Procedure Create a user named ansible to manage and run playbooks: Switch to the newly created ansible user: Perform the rest of the procedure as this user. Create an SSH public and private key: Use the suggested default location for the key file. Optional: To prevent Ansible from prompting you for the SSH key password each time you establish a connection, configure an SSH agent. Create the ~/.ansible.cfg file with the following content: Note Settings in the ~/.ansible.cfg file have a higher priority and override settings from the global /etc/ansible/ansible.cfg file. With these settings, Ansible performs the following actions: Manages hosts in the specified inventory file. Uses the account set in the remote_user parameter when it establishes SSH connections to managed nodes. Uses the sudo utility to execute tasks on managed nodes as the root user. Prompts for the root password of the remote user every time you apply a playbook. This is recommended for security reasons. Create an ~/inventory file in INI or YAML format that lists the hostnames of managed hosts. You can also define groups of hosts in the inventory file. For example, the following is an inventory file in the INI format with three hosts and one host group named US : Note that the control node must be able to resolve the hostnames. If the DNS server cannot resolve certain hostnames, add the ansible_host parameter to the host entry to specify its IP address. Install RHEL system roles: On a RHEL host without Ansible Automation Platform, install the rhel-system-roles package: This command installs the collections in the /usr/share/ansible/collections/ansible_collections/redhat/rhel_system_roles/ directory, and the ansible-core package as a dependency. On Ansible Automation Platform, perform the following steps as the ansible user: Define Red Hat automation hub as the primary source for content in the ~/.ansible.cfg file. Install the redhat.rhel_system_roles collection from Red Hat automation hub: This command installs the collection in the ~/.ansible/collections/ansible_collections/redhat/rhel_system_roles/ directory. step Prepare the managed nodes. For more information, see Preparing a managed node . 
Additional resources Scope of support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream repositories How to register and subscribe a system to the Red Hat Customer Portal using subscription-manager (Red Hat Knowledgebase) The ssh-keygen(1) manual page Connecting to remote machines with SSH keys using ssh-agent Ansible configuration settings How to build your inventory Updates to using Ansible in RHEL 8.6 and 9.0 2.2. Preparing a managed node Managed nodes are the systems listed in the inventory, which will be configured by the control node according to the playbook. You do not have to install Ansible on managed hosts. Prerequisites You prepared the control node. For more information, see Preparing a control node on RHEL 8 . You have SSH access from the control node. Important Direct SSH access as the root user is a security risk. To reduce this risk, you will create a local user on this node and configure a sudo policy when preparing a managed node. Ansible on the control node can then use the local user account to log in to the managed node and run playbooks as different users, such as root . Procedure Create a user named ansible : The control node later uses this user to establish an SSH connection to this host. Set a password for the ansible user: You must enter this password when Ansible uses sudo to perform tasks as the root user. Install the ansible user's SSH public key on the managed node: Log in to the control node as the ansible user, and copy the SSH public key to the managed node: When prompted, connect by entering yes : When prompted, enter the password: Verify the SSH connection by remotely executing a command from the control node: Create a sudo configuration for the ansible user: Create and edit the /etc/sudoers.d/ansible file by using the visudo command: The benefit of using visudo over a normal editor is that this utility provides basic checks, such as for parse errors, before installing the file. Configure a sudoers policy in the /etc/sudoers.d/ansible file that meets your requirements, for example: To grant permissions to the ansible user to run all commands as any user and group on this host after entering the ansible user's password, use: To grant permissions to the ansible user to run all commands as any user and group on this host without entering the ansible user's password, use: Alternatively, configure a more fine-grained policy that matches your security requirements. For further details on sudoers policies, see the sudoers(5) manual page. Verification Verify that you can execute commands from the control node on all managed nodes: [ansible@control-node]USD ansible all -m ping BECOME password: <password> managed-node-01.example.com | SUCCESS => { "ansible_facts": { "discovered_interpreter_python": "/usr/bin/python3" }, "changed": false, "ping": "pong" } ... The hard-coded all group dynamically contains all hosts listed in the inventory file. Verify that privilege escalation works correctly by running the whoami utility on all managed nodes by using the Ansible command module: If the command returns root, you configured sudo on the managed nodes correctly. Additional resources Preparing a control node on RHEL 8 sudoers(5) manual page | [
"useradd ansible",
"su - ansible",
"[ansible@control-node]USD ssh-keygen Generating public/private rsa key pair. Enter file in which to save the key (/home/ansible/.ssh/id_rsa): Enter passphrase (empty for no passphrase): <password> Enter same passphrase again: <password>",
"[defaults] inventory = /home/ansible/inventory remote_user = ansible [privilege_escalation] become = True become_method = sudo become_user = root become_ask_pass = True",
"managed-node-01.example.com [US] managed-node-02.example.com ansible_host=192.0.2.100 managed-node-03.example.com",
"yum install rhel-system-roles",
"[ansible@control-node]USD ansible-galaxy collection install redhat.rhel_system_roles",
"useradd ansible",
"passwd ansible Changing password for user ansible. New password: <password> Retype new password: <password> passwd: all authentication tokens updated successfully.",
"[ansible@control-node]USD ssh-copy-id managed-node-01.example.com /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: \"/home/ansible/.ssh/id_rsa.pub\" The authenticity of host 'managed-node-01.example.com (192.0.2.100)' can't be established. ECDSA key fingerprint is SHA256:9bZ33GJNODK3zbNhybokN/6Mq7hu3vpBXDrCxe7NAvo.",
"Are you sure you want to continue connecting (yes/no/[fingerprint])? yes /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys",
"[email protected]'s password: <password> Number of key(s) added: 1 Now try logging into the machine, with: \"ssh 'managed-node-01.example.com'\" and check to make sure that only the key(s) you wanted were added.",
"[ansible@control-node]USD ssh managed-node-01.example.com whoami ansible",
"visudo /etc/sudoers.d/ansible",
"ansible ALL=(ALL) ALL",
"ansible ALL=(ALL) NOPASSWD: ALL",
"[ansible@control-node]USD ansible all -m ping BECOME password: <password> managed-node-01.example.com | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/bin/python3\" }, \"changed\": false, \"ping\": \"pong\" }",
"[ansible@control-node]USD ansible all -m command -a whoami BECOME password: <password> managed-node-01.example.com | CHANGED | rc=0 >> root"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/automating_system_administration_by_using_rhel_system_roles/assembly_preparing-a-control-node-and-managed-nodes-to-use-rhel-system-roles_automating-system-administration-by-using-rhel-system-roles |
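With both the control node and the managed node prepared as described above, a quick end-to-end check confirms the setup before you start applying RHEL system roles. The following is a minimal sketch run as the ansible user on the control node; the hostnames match the earlier examples and are placeholders, and the YAML inventory is simply an alternative to the INI format shown above, written to a separate file so the existing inventory is left untouched.

# Optional: the same inventory expressed in YAML instead of INI
cat > ~/inventory.yml <<'EOF'
all:
  hosts:
    managed-node-01.example.com:
  children:
    US:
      hosts:
        managed-node-02.example.com:
          ansible_host: 192.0.2.100
        managed-node-03.example.com:
EOF

# Confirm connectivity and privilege escalation for every host in that inventory
ansible -i ~/inventory.yml all -m ping
ansible -i ~/inventory.yml all -m command -a whoami

If the ping module returns pong for each host and the whoami command returns root, the control node, SSH keys, and sudo policy are working together as intended.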
8.13. User Serviceable Snapshots | 8.13. User Serviceable Snapshots User Serviceable Snapshot is a quick and easy way to access data stored in snapshotted volumes. This feature is based on the core snapshot feature in Red Hat Gluster Storage. With the User Serviceable Snapshot feature, you can access the activated snapshots of the snapshot volume. Consider a scenario where a user wants to access a file test.txt which was in the Home directory a couple of months earlier and was deleted accidentally. You can now easily go to the virtual .snaps directory that is inside the home directory and recover the test.txt file using the cp command. Note User Serviceable Snapshot is not the recommended option for bulk data access from an earlier snapshot volume. For such scenarios, it is recommended to mount the Snapshot volume and then access the data. For more information, see Chapter 8, Managing Snapshots . Each activated snapshot volume, when initialized by User Serviceable Snapshots, consumes some memory. Most of the memory is consumed by various housekeeping structures of gfapi and xlators like DHT, AFR, etc. Therefore, the total memory consumption by snapshot depends on the number of bricks as well. Each brick consumes approximately 10MB of space, for example, in a 4x3 replica setup the total memory consumed by snapshot is around 50MB and for a 6x3 setup it is roughly 90MB. Therefore, as the number of active snapshots grows, the total memory footprint of the snapshot daemon (snapd) also grows. As a result, in a low-memory system, the snapshot daemon can get OOM killed if there are too many active snapshots. 8.13.1. Enabling and Disabling User Serviceable Snapshot To enable user serviceable snapshot, run the following command: For example: Activate the snapshot to access it via the user serviceable snapshot: To disable user serviceable snapshot, run the following command: For example: 8.13.2. Viewing and Retrieving Snapshots using NFS / FUSE For every snapshot available for a volume, any user who has access to the volume will have a read-only view of the volume. You can recover the files through these read-only views of the volume from different points in time. Each snapshot of the volume will be available in the .snaps directory of every directory of the mounted volume. Note To access the snapshot you must first mount the volume. For NFS mount, refer to Section 6.3.2.2.1, "Manually Mounting Volumes Using Gluster NFS (Deprecated)" for more details. The following command is an example. For FUSE mount, refer to Section 6.2.3.2, "Mounting Volumes Manually" for more details. The following command is an example. The .snaps directory is a virtual directory which will not be listed by either the ls command, or the ls -a option. The .snaps directory will contain every snapshot taken for that given volume as individual directories. Each of these snapshot entries will in turn contain the data of the particular directory the user is accessing from when the snapshot was taken. To view or retrieve a file from a snapshot, follow these steps: Go to the folder where the file was present when the snapshot was taken. For example, if you had a test.txt file in the root directory of the mount that has to be recovered, then go to that directory. Note Since every directory has a virtual .snaps directory, you can enter the .snaps directory from here. Since .snaps is a virtual directory, the ls and ls -a commands will not list the .snaps directory.
For example: Go to the .snaps folder. Run the ls command to list all the snapshots. For example: Go to the snapshot directory from where the file has to be retrieved. For example: Copy the file/directory to the desired location. 8.13.3. Viewing and Retrieving Snapshots using CIFS for Windows Client For every snapshot available for a volume, any user who has access to the volume will have a read-only view of the volume. You can recover the files through these read-only views of the volume from different points in time. Each snapshot of the volume will be available in the .snaps folder of every folder in the root of the CIFS share. The .snaps folder is a hidden folder which will be displayed only when the following option is set to ON on the volume using the following command: After the option is set to ON , every Windows client can access the .snaps folder by following these steps: In the Folder options, enable the Show hidden files, folders, and drives option. Go to the root of the CIFS share to view the .snaps folder. Note The .snaps folder is accessible only in the root of the CIFS share and not in any subfolders. The list of snapshots is available in the .snaps folder. You can now access the required file and retrieve it. You can also access snapshots on Windows using Samba. For more information, see Section 6.4.8, "Accessing Snapshots in Windows" . | [
"gluster volume set VOLNAME features.uss enable",
"gluster volume set test_vol features.uss enable volume set: success",
"gluster snapshot activate < snapshot-name >",
"gluster volume set VOLNAME features.uss disable",
"gluster volume set test_vol features.uss disable volume set: success",
"mount -t nfs -o vers=3 server1:/test-vol /mnt/glusterfs",
"mount -t glusterfs server1:/test-vol /mnt/glusterfs",
"cd /mnt/glusterfs",
"ls -a ....Bob John test1.txt test2.txt",
"cd .snaps",
"ls -p snapshot_Dec2014/ snapshot_Nov2014/ snapshot_Oct2014/ snapshot_Sept2014/",
"cd snapshot_Nov2014",
"ls -p John/ test1.txt test2.txt",
"cp -p test2.txt USDHOME",
"gluster volume set volname features.show-snapshot-directory on"
] | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-User_Serviceable_Snapshots |
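Taken together, the commands above form a short recovery workflow. The following is a minimal sketch that strings them together for a FUSE mount; the volume name test_vol, the server name server1, the snapshot name snapshot_Nov2014, and the file test.txt are placeholders taken from the examples above.

# Enable User Serviceable Snapshots and activate the snapshot
gluster volume set test_vol features.uss enable
gluster snapshot activate snapshot_Nov2014

# Mount the volume and recover the file from the virtual .snaps directory
mount -t glusterfs server1:/test_vol /mnt/glusterfs
cd /mnt/glusterfs/.snaps/snapshot_Nov2014
cp -p test.txt "$HOME"

Because .snaps is a virtual directory, the same path exists under every directory of the mounted volume, so the cd step can start from wherever the file originally lived.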
5.3.6. Activating and Mounting the Original Logical Volume | 5.3.6. Activating and Mounting the Original Logical Volume Since you had to deactivate the logical volume mylv , you need to activate it again before you can mount it. | [
"root@tng3-1 ~]# lvchange -a y mylv mount /dev/myvg/mylv /mnt df Filesystem 1K-blocks Used Available Use% Mounted on /dev/yourvg/yourlv 24507776 32 24507744 1% /mnt /dev/myvg/mylv 24507776 32 24507744 1% /mnt"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/active_mount_ex3 |
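Because the transcript above flattens several commands into one line, the following is a minimal sketch that separates the individual steps, assuming the volume group myvg and logical volume mylv from the example; it only reactivates and mounts the volume and then confirms the result.

# Reactivate the logical volume that was previously deactivated
lvchange -a y myvg/mylv

# Mount it and confirm that it is active and mounted
mount /dev/myvg/mylv /mnt
lvs myvg
df -h /mnt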
Chapter 11. Uninstalling a cluster on RHOSP from your own infrastructure | Chapter 11. Uninstalling a cluster on RHOSP from your own infrastructure You can remove a cluster that you deployed to Red Hat OpenStack Platform (RHOSP) on user-provisioned infrastructure. 11.1. Downloading playbook dependencies The Ansible playbooks that simplify the removal process on user-provisioned infrastructure require several Python modules. On the machine where you will run the process, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: USD sudo subscription-manager register # If not done already Pull the latest subscription data: USD sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already Disable the current repositories: USD sudo subscription-manager repos --disable=* # If not done already Add the required repositories: USD sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: USD sudo yum install python3-openstackclient ansible python3-openstacksdk Ensure that the python command points to python3 : USD sudo alternatives --set python /usr/bin/python3 11.2. Removing a cluster from RHOSP that uses your own infrastructure You can remove an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP) that uses your own infrastructure. To complete the removal process quickly, run several Ansible playbooks. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies." You have the playbooks that you used to install the cluster. You modified the playbooks that are prefixed with down- to reflect any changes that you made to their corresponding installation playbooks. For example, changes to the bootstrap.yaml file are reflected in the down-bootstrap.yaml file. All of the playbooks are in a common directory. Procedure On a command line, run the playbooks that you downloaded: USD ansible-playbook -i inventory.yaml \ down-bootstrap.yaml \ down-control-plane.yaml \ down-compute-nodes.yaml \ down-load-balancers.yaml \ down-network.yaml \ down-security-groups.yaml Remove any DNS record changes you made for the OpenShift Container Platform installation. OpenShift Container Platform is removed from your infrastructure. | [
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk",
"sudo alternatives --set python /usr/bin/python3",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml down-control-plane.yaml down-compute-nodes.yaml down-load-balancers.yaml down-network.yaml down-security-groups.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_openstack/uninstalling-openstack-user |
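After the playbooks finish, it can be useful to confirm that no cluster resources remain in the RHOSP project. The following is a minimal sketch, assuming the openstack CLI installed above is configured for the same cloud and that <cluster_id> is a placeholder for the cluster ID prefix used by your installation; it only lists resources and deletes nothing.

# Look for leftover servers, networks, and security groups that carry the cluster prefix
openstack server list | grep '<cluster_id>'
openstack network list | grep '<cluster_id>'
openstack security group list | grep '<cluster_id>'

If any of these commands still return entries, rerun the corresponding down- playbook or remove the remaining resources manually before reusing the project.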
Chapter 5. Installing a cluster with RHEL KVM on IBM Z and IBM LinuxONE in a restricted network | Chapter 5. Installing a cluster with RHEL KVM on IBM Z and IBM LinuxONE in a restricted network In OpenShift Container Platform version 4.15, you can install a cluster on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision in a restricted network. Note While this document refers to only IBM Z(R), all information in it also applies to IBM(R) LinuxONE. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. You must move or remove any existing installation files, before you begin the installation process. This ensures that the required installation files are created and updated during the installation process. Important Ensure that installation steps are done from a machine with access to the installation media. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. You provisioned a RHEL Kernel Virtual Machine (KVM) system that is hosted on the logical partition (LPAR) and based on RHEL 8.6 or later. See Red Hat Enterprise Linux 8 and 9 Life Cycle . 5.2. About installations in restricted networks In OpenShift Container Platform 4.15, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 5.2.1. 
Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 5.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 5.4. Machine requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. One or more KVM host machines based on RHEL 8.6 or later. Each RHEL KVM host machine must have libvirt installed and running. The virtual machines are provisioned under each RHEL KVM host machine. 5.4.1. Required machines The smallest OpenShift Container Platform clusters require the following hosts: Table 5.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To improve high availability of your cluster, distribute the control plane machines over different RHEL instances on at least two physical machines. The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. See Red Hat Enterprise Linux technology capabilities and limits . 5.4.2. Network connectivity requirements The OpenShift Container Platform installer creates the Ignition files, which are necessary for all the Red Hat Enterprise Linux CoreOS (RHCOS) virtual machines. The automated installation of OpenShift Container Platform is performed by the bootstrap machine. It starts the installation of OpenShift Container Platform on each node, starts the Kubernetes cluster, and then finishes. During this bootstrap, the virtual machine must have an established network connection either through a Dynamic Host Configuration Protocol (DHCP) server or static IP address. 5.4.3. IBM Z network connectivity requirements To install on IBM Z(R) under RHEL KVM, you need: A RHEL KVM host configured with an OSA or RoCE network adapter. Either a RHEL KVM host that is configured to use bridged networking in libvirt or MacVTap to connect the network to the guests. See Types of virtual network connections . 5.4.4. Host machine resource requirements The RHEL KVM host in your environment must meet the following requirements to host the virtual machines that you plan for the OpenShift Container Platform environment. See Getting started with virtualization . 
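Before provisioning guest virtual machines, you can confirm that each RHEL KVM host meets the libvirt requirement described above. The following is a minimal sketch for a RHEL 8.6 or later host; the package names are a common virtualization baseline rather than a list taken from this document, and the commands only install and verify the virtualization stack.

# Install the virtualization packages and start libvirt on the RHEL KVM host
sudo dnf install -y qemu-kvm libvirt virt-install
sudo systemctl enable --now libvirtd

# Validate that the host is correctly set up for KVM virtualization
sudo virt-host-validate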
You can install OpenShift Container Platform version 4.15 on the following IBM(R) hardware: IBM(R) z16 (all models), IBM(R) z15 (all models), IBM(R) z14 (all models) IBM(R) LinuxONE 4 (all models), IBM(R) LinuxONE III (all models), IBM(R) LinuxONE Emperor II, IBM(R) LinuxONE Rockhopper II 5.4.5. Minimum IBM Z system environment Hardware requirements The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z(R). However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Important Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. Operating system requirements One LPAR running on RHEL 8.6 or later with KVM, which is managed by libvirt On your RHEL KVM host, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine 5.4.6. Minimum resource requirements Each cluster virtual machine must meet the following minimum requirements: Virtual Machine Operating System vCPU [1] Virtual RAM Storage IOPS Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. 5.4.7. Preferred IBM Z system environment Hardware requirements Three LPARS that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Operating system requirements For high availability, two or three LPARs running on RHEL 8.6 or later with KVM, which are managed by libvirt. On your RHEL KVM host, set up: Three guest virtual machines for OpenShift Container Platform control plane machines, distributed across the RHEL KVM host machines. At least six guest virtual machines for OpenShift Container Platform compute machines, distributed across the RHEL KVM host machines. One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine. To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using cpu_shares . Do the same for infrastructure nodes, if they exist. See schedinfo in IBM(R) Documentation. 5.4.8. Preferred resource requirements The preferred requirements for each cluster virtual machine are: Virtual Machine Operating System vCPU Virtual RAM Storage Bootstrap RHCOS 4 16 GB 120 GB Control plane RHCOS 8 16 GB 120 GB Compute RHCOS 6 8 GB 120 GB 5.4.9. 
Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. Additional resources Recommended host practices for IBM Z(R) & IBM(R) LinuxONE environments 5.4.10. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 5.4.10.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 5.4.10.2. 
Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Table 5.2. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 5.3. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 5.4. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 5.4.11. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 5.5. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 
api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 5.4.11.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 5.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 
2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 5.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 5.4.12. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. 
Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 5.6. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 5.7. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 5.4.12.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 
Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 5.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 5.5. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. 
This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Choose to perform either a fast track installation of Red Hat Enterprise Linux CoreOS (RHCOS) or a full installation of Red Hat Enterprise Linux CoreOS (RHCOS). For the full installation, you must set up an HTTP or HTTPS server to provide Ignition files and install images to the cluster nodes. For the fast track installation an HTTP or HTTPS server is not required, however, a DHCP server is required. See sections "Fast-track installation: Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines" and "Full installation: Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines". Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. 
Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 5.6. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. 
Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 5.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
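After the cluster machines are up, this key pair is also what authenticates you to the nodes as the core user. As a forward-looking illustration only, with the key created in this step and an illustrative bootstrap host name, you could later log in and follow the bootstrap logs; the unit names are the ones normally present on the bootstrap node:

$ ssh -i <path>/<file_name> core@bootstrap.ocp4.example.com
# On the bootstrap node, follow bootstrap progress:
$ journalctl -b -f -u release-image.service -u bootkube.service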
Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.8. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z(R) 5.8.1. Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.
apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. 
For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z(R) infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 17 Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. 18 Provide the imageContentSources section according to the output of the command that you used to mirror the repository. Important When using the oc adm release mirror command, use the output from the imageContentSources section. When using oc mirror command, use the repositoryDigestMirrors section of the ImageContentSourcePolicy file that results from running the command. ImageContentSourcePolicy is deprecated. For more information see Configuring image registry repository mirroring . 5.8.2. 
Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. 
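After the cluster is up, one convenient way to confirm that these settings were applied is to inspect that Proxy object. The following commands are a quick check only; the jsonpath expression simply prints the effective proxy endpoints from the object status:

$ oc get proxy/cluster -o yaml
$ oc get proxy/cluster -o jsonpath='{.status.httpProxy}{"\n"}{.status.httpsProxy}{"\n"}{.status.noProxy}{"\n"}'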
If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.8.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a minimal three node cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. Note The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 5.9. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 5.9.1. 
Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 5.8. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 5.9. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 5.10. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. 
ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 5.11. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 5.12. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 5.13. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. 
syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 5.14. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 5.15. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 5.16. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 5.17. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 5.18. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. 
This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 5.10. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. 
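As a point of reference, after the command completes the installation directory typically contains files along the following lines; the exact listing can vary between OpenShift Container Platform versions:

$ ls -1 <installation_directory>
auth
bootstrap.ign
master.ign
metadata.json
worker.ign
$ ls -1 <installation_directory>/auth
kubeadmin-password
kubeconfig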
Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 5.11. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z(R) infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) as Red Hat Enterprise Linux (RHEL) guest virtual machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. You can perform a fast-track installation of RHCOS that uses a prepackaged QEMU copy-on-write (QCOW2) disk image. Alternatively, you can perform a full installation on a new QCOW2 disk image. To add further security to your system, you can optionally install RHCOS using IBM(R) Secure Execution before proceeding to the fast-track installation. 5.11.1. Installing RHCOS using IBM Secure Execution Before you install RHCOS using IBM(R) Secure Execution, you must prepare the underlying infrastructure. Prerequisites IBM(R) z15 or later, or IBM(R) LinuxONE III or later. Red Hat Enterprise Linux (RHEL) 8 or later. You have a bootstrap Ignition file. The file is not protected, enabling others to view and edit it. You have verified that the boot image has not been altered after installation. You must run all your nodes as IBM(R) Secure Execution guests. Procedure Prepare your RHEL KVM host to support IBM(R) Secure Execution. By default, KVM hosts do not support guests in IBM(R) Secure Execution mode. To support guests in IBM(R) Secure Execution mode, KVM hosts must boot in LPAR mode with the kernel parameter specification prot_virt=1 . To enable prot_virt=1 on RHEL 8, follow these steps: Navigate to /boot/loader/entries/ to modify your bootloader configuration file *.conf . Add the kernel command line parameter prot_virt=1 . Run the zipl command and reboot your system. KVM hosts that successfully start with support for IBM(R) Secure Execution for Linux issue the following kernel message: prot_virt: Reserving <amount>MB as ultravisor base storage. To verify that the KVM host now supports IBM(R) Secure Execution, run the following command: # cat /sys/firmware/uv/prot_virt_host Example output 1 The value of this attribute is 1 for Linux instances that detect their environment as consistent with that of a secure host. For other instances, the value is 0. Add your host keys to the KVM guest via Ignition. During the first boot, RHCOS looks for your host keys to re-encrypt itself with them. RHCOS searches for files starting with ibm-z-hostkey- in the /etc/se-hostkeys directory. All host keys, for each machine the cluster is running on, must be loaded into the directory by the administrator. After first boot, you cannot run the VM on any other machines. Note You need to prepare your Ignition file on a safe system. For example, another IBM(R) Secure Execution guest. 
For example: { "ignition": { "version": "3.0.0" }, "storage": { "files": [ { "path": "/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt", "contents": { "source": "data:;base64,<base64 encoded hostkey document>" }, "mode": 420 }, { "path": "/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt", "contents": { "source": "data:;base64,<base64 encoded hostkey document>" }, "mode": 420 } ] } } Note You can add as many host keys as required if you want your node to be able to run on multiple IBM Z(R) machines. To generate the Base64 encoded string, run the following command: base64 <your-hostkey>.crt Compared to guests not running IBM(R) Secure Execution, the first boot of the machine is longer because the entire image is encrypted with a randomly generated LUKS passphrase before the Ignition phase. Add Ignition protection To protect the secrets that are stored in the Ignition config file from being read or even modified, you must encrypt the Ignition config file. Note To achieve the desired security, Ignition logging and local login are disabled by default when running IBM(R) Secure Execution. Fetch the public GPG key for the secex-qemu.qcow2 image and encrypt the Ignition config with the key by running the following command: gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign Follow the fast-track installation of RHCOS to install nodes by using the IBM(R) Secure Execution QCOW image. Note Before you start the VM, replace serial=ignition with serial=ignition_crypted , and add the launchSecurity parameter. Verification When you have completed the fast-track installation of RHCOS and Ignition runs at the first boot, verify if decryption is successful. If the decryption is successful, you can expect an output similar to the following example: Example output [ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup... [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key "Secure Execution (secex) 38.20230323.dev.0" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor. If the decryption fails, you can expect an output similar to the following example: Example output Starting coreos-ignition-s...reOS Ignition User Config Setup... [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key "Secure Execution (secex) 38.20230323.dev.0" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key Additional resources Introducing IBM(R) Secure Execution for Linux Linux as an IBM(R) Secure Execution host or guest Setting up IBM(R) Secure Execution on IBM Z 5.11.2. Configuring NBDE with static IP in an IBM Z or IBM LinuxONE environment Enabling NBDE disk encryption in an IBM Z(R) or IBM(R) LinuxONE environment requires additional steps, which are described in detail in this section. Prerequisites You have set up the External Tang Server. See Network-bound disk encryption for instructions. You have installed the butane utility. You have reviewed the instructions for how to create machine configs with Butane.
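Before creating the Butane configurations in the following procedure, it can help to confirm that the Tang server is reachable and to record its key thumbprint, because the thumbprint is referenced in the LUKS clevis stanza. The host name and port below are illustrative, and the tang-show-keys helper is assumed to be available on the Tang server (it ships with the tang package on RHEL):

# On the Tang server, print the advertised key thumbprint:
$ tang-show-keys 7500
# From the installation host, confirm that the advertisement endpoint responds:
$ curl -sf http://clevis.example.com:7500/adv > /dev/null && echo "Tang server reachable"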
Procedure Create Butane configuration files for the control plane and compute nodes. The following example of a Butane configuration for a control plane node creates a file named master-storage.bu for disk encryption: variant: openshift version: 4.15.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 2 1 The cipher option is only required if FIPS mode is enabled. Omit the entry if FIPS is disabled. 2 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Create a customized initramfs file to boot the machine, by running the following command: USD coreos-installer pxe customize \ /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img \ --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append \ ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none \ --dest-karg-append nameserver=<nameserver_ip> \ --dest-karg-append rd.neednet=1 -o \ /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img Note Before first boot, you must customize the initramfs for each node in the cluster, and add PXE kernel parameters. Create a parameter file that includes ignition.platform.id=metal and ignition.firstboot . rd.neednet=1 \ console=ttysclp0 \ ignition.firstboot ignition.platform.id=metal \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 1 coreos.inst.ignition_url=http://<http_server>/master.ign \ 2 ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 \ zfcp.allow_lun_scan=0 \ rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 1 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 2 Specify the location of the Ignition config file. Use master.ign or worker.ign . Only HTTP and HTTPS protocols are supported. Note Write all options in the parameter file as a single line and make sure you have no newline characters. Additional resources Creating machine configs with Butane 5.11.3. Fast-track installation by using a prepackaged QCOW2 disk image Complete the following steps to create the machines in a fast-track installation of Red Hat Enterprise Linux CoreOS (RHCOS), importing a prepackaged Red Hat Enterprise Linux CoreOS (RHCOS) QEMU copy-on-write (QCOW2) disk image. Prerequisites At least one LPAR running on RHEL 8.6 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. A domain name server (DNS) that can perform hostname and reverse lookup for the nodes. A DHCP server that provides IP addresses. Procedure Obtain the RHEL QEMU copy-on-write (QCOW2) disk image file from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. 
You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate RHCOS QCOW2 image described in the following procedure. Download the QCOW2 disk image and Ignition files to a common directory on the RHEL KVM host. For example: /var/lib/libvirt/images Note The Ignition files are generated by the OpenShift Container Platform installer. Create a new disk image with the QCOW2 disk image backing file for each KVM guest node. USD qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size} Create the new KVM guest nodes using the Ignition file and the new disk image. USD virt-install --noautoconsole \ --connect qemu:///system \ --name {vm_name} \ --memory {memory} \ --vcpus {vcpus} \ --disk {disk} \ --launchSecurity type="s390-pv" \ 1 --import \ --network network={network},mac={mac} \ --disk path={ign_file},format=raw,readonly=on,serial=ignition,startup_policy=optional 2 1 If IBM(R) Secure Execution is enabled, add the launchSecurity type="s390-pv" parameter. 2 If IBM(R) Secure Execution is enabled, replace serial=ignition with serial=ignition_crypted . 5.11.4. Full installation on a new QCOW2 disk image Complete the following steps to create the machines in a full installation on a new QEMU copy-on-write (QCOW2) disk image. Prerequisites At least one LPAR running on RHEL 8.6 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. A domain name server (DNS) that can perform hostname and reverse lookup for the nodes. An HTTP or HTTPS server is set up. Procedure Obtain the RHEL kernel, initramfs, and rootfs files from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate RHCOS QCOW2 image described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Move the downloaded RHEL live kernel, initramfs, and rootfs as well as the Ignition files to an HTTP or HTTPS server before you launch virt-install . Note The Ignition files are generated by the OpenShift Container Platform installer. Create the new KVM guest nodes using the RHEL kernel, initramfs, and Ignition files, the new disk image, and adjusted parm line arguments. For --location , specify the location of the kernel/initrd on the HTTP or HTTPS server. For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 
USD virt-install \ --connect qemu:///system \ --name {vm_name} \ --vcpus {vcpus} \ --memory {memory_mb} \ --disk {vm_name}.qcow2,size={image_size| default(10,true)} \ --network network={virt_network_parm} \ --boot hd \ --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} \ --extra-args "rd.neednet=1 coreos.inst.install_dev=/dev/vda coreos.live.rootfs_url={rhcos_liveos} ip={ip}::{default_gateway}:{subnet_mask_length}:{vm_name}:enc1:none:{MTU} nameserver={dns} coreos.inst.ignition_url={rhcos_ign}" \ --noautoconsole \ --wait 5.11.5. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 5.11.5.1. Networking options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking on your RHCOS nodes for ISO installations. The examples describe how to use the ip= and nameserver= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= and nameserver= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page. The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. 
ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 5.12. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete... 
INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 5.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 5.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). 
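A very rough starting point for such a method is a periodic loop around the bulk-approval command that appears later in this procedure. The following sketch blindly approves every pending CSR, performs none of the verification described in the rest of this note, and is therefore suitable only for short-lived test clusters:

# Lab-only sketch: approves all pending CSRs once per minute without verifying them.
while true; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 60
done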
If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 5.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
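The note in the preceding CSR section states that, on user-provisioned infrastructure, you must implement your own method of automatically approving kubelet serving certificate requests. The following is a minimal sketch of such a watcher, not the built-in machine-approver: the 60-second interval and the kubernetes.io/kubelet-serving signer filter are assumptions, and the sketch omits the node-identity verification that the note requires, so extend it before relying on it.
#!/usr/bin/env bash
# Sketch: periodically approve pending kubelet-serving CSRs on user-provisioned infrastructure.
# Assumes KUBECONFIG points at a cluster-admin kubeconfig and oc is on the PATH.
while true; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{if eq .spec.signerName "kubernetes.io/kubelet-serving"}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 60
done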
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Configure the Operators that are not available. 5.15.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 5.15.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 5.15.2.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z(R). You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. 
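Referring back to the OperatorHub step above: after the default catalog sources are disabled in a restricted network, Operators are typically made available again by creating a CatalogSource that points at your mirrored index image. The following manifest is a hedged sketch only; the resource name and the <local_registry>/<repository>:<tag> image reference are placeholders that must match the mirror registry used during installation.
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: mirrored-operator-catalog              # illustrative name
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: <local_registry>/<repository>:<tag>   # your mirrored index image
  displayName: Mirrored Operator Catalog
  updateStrategy:
    registryPoll:
      interval: 30m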
Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The storage must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run oc edit configs.imageregistry/cluster and change the managementState: Removed line to managementState: Managed . 5.15.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 5.16. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
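Returning to the registry storage configuration above: instead of leaving the claim field blank, you can point the registry at an existing PersistentVolumeClaim and make the management state and rollout strategy explicit in one patch. This is a hedged sketch; the claim name image-registry-storage is only the conventional default, and the command assumes the PVC already exists in the openshift-image-registry namespace.
oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
  --patch '{"spec":{"managementState":"Managed","rolloutStrategy":"Recreate","storage":{"pvc":{"claim":"image-registry-storage"}}}}'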
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the previous command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the previous command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. Additional resources How to generate SOSREPORT within OpenShift Container Platform version 4 nodes without SSH . 5.17. Next steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster . | [
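The verification steps in this section can also be collected into one script. The sketch below is illustrative only; it assumes the installation directory layout created earlier and that oc and openshift-install are on the PATH, so adjust the paths to your environment.
#!/usr/bin/env bash
set -euo pipefail
INSTALL_DIR="${1:-.}"                                   # assumed installation directory
export KUBECONFIG="${INSTALL_DIR}/auth/kubeconfig"
# Block until the Cluster Version Operator reports the installation as complete.
./openshift-install --dir "${INSTALL_DIR}" wait-for install-complete --log-level=info
# Print any cluster Operator that is not Available or that is Degraded.
oc get clusteroperators --no-headers | awk '$3 != "True" || $5 == "True" {print}'
# List any pod that is not Running or Succeeded, as a quick API-to-kubelet sanity check.
oc get pods --all-namespaces --field-selector=status.phase!=Running,status.phase!=Succeeded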
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"prot_virt: Reserving <amount>MB as ultravisor base storage.",
"cat /sys/firmware/uv/prot_virt_host",
"1",
"{ \"ignition\": { \"version\": \"3.0.0\" }, \"storage\": { \"files\": [ { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 }, { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 } ] } } ```",
"base64 <your-hostkey>.crt",
"gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign",
"[ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor.",
"Starting coreos-ignition-s...reOS Ignition User Config Setup [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key",
"variant: openshift version: 4.15.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 2",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"rd.neednet=1 console=ttysclp0 ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 1 coreos.inst.ignition_url=http://<http_server>/master.ign \\ 2 ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 zfcp.allow_lun_scan=0 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000",
"qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size}",
"virt-install --noautoconsole --connect qemu:///system --name {vm_name} --memory {memory} --vcpus {vcpus} --disk {disk} --launchSecurity type=\"s390-pv\" \\ 1 --import --network network={network},mac={mac} --disk path={ign_file},format=raw,readonly=on,serial=ignition,startup_policy=optional 2",
"virt-install --connect qemu:///system --name {vm_name} --vcpus {vcpus} --memory {memory_mb} --disk {vm_name}.qcow2,size={image_size| default(10,true)} --network network={virt_network_parm} --boot hd --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} --extra-args \"rd.neednet=1 coreos.inst.install_dev=/dev/vda coreos.live.rootfs_url={rhcos_liveos} ip={ip}::{default_gateway}:{subnet_mask_length}:{vm_name}:enc1:none:{MTU} nameserver={dns} coreos.inst.ignition_url={rhcos_ign}\" --noautoconsole --wait",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_ibm_z_and_ibm_linuxone/installing-restricted-networks-ibm-z-kvm |
4.6.3. EDIT MONITORING SCRIPTS Subsection | 4.6.3. EDIT MONITORING SCRIPTS Subsection Click on the MONITORING SCRIPTS link at the top of the page. The EDIT MONITORING SCRIPTS subsection allows the administrator to specify a send/expect string sequence to verify that the service for the virtual server is functional on each real server. It is also the place where the administrator can specify customized scripts to check services requiring dynamically changing data. Figure 4.9. The EDIT MONITORING SCRIPTS Subsection Sending Program For more advanced service verification, you can use this field to specify the path to a service-checking script. This functionality is especially helpful for services that require dynamically changing data, such as HTTPS or SSL. To use this functionality, you must write a script that returns a textual response, set it to be executable, and type the path to it in the Sending Program field. Note To ensure that each server in the real server pool is checked, use the special token %h after the path to the script in the Sending Program field. This token is replaced with each real server's IP address as the script is called by the nanny daemon. The following is a sample script to use as a guide when composing an external service-checking script: Note If an external program is entered in the Sending Program field, then the Send field is ignored. Send Enter a string for the nanny daemon to send to each real server in this field. By default the send field is completed for HTTP. You can alter this value depending on your needs. If you leave this field blank, the nanny daemon attempts to open the port and assumes the service is running if it succeeds. Only one send sequence is allowed in this field, and it can only contain printable, ASCII characters as well as the following escape characters: \n for new line. \r for carriage return. \t for tab. \ to escape the character which follows it. Expect Enter the textual response the server should return if it is functioning properly. If you wrote your own sending program, enter the response you told it to send if it was successful. Note To determine what to send for a given service, you can open a telnet connection to the port on a real server and see what is returned. For instance, FTP reports 220 upon connecting, so you could enter quit in the Send field and 220 in the Expect field. Warning Remember to click the ACCEPT button after making any changes in this panel to make sure you do not lose any changes when selecting a new panel. Once you have configured virtual servers using the Piranha Configuration Tool , you must copy specific configuration files to the backup LVS router. See Section 4.7, "Synchronizing Configuration Files" for details. | [
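Beyond the DNS example in the listing below, the same pattern can check encrypted services, which is where an external sending program is most useful. The following sketch assumes curl is installed on the LVS routers; the /healthcheck path and the OK and FAIL strings are illustrative and must match whatever you enter in the Expect field.
#!/bin/sh
# $1 is the real server's IP address, substituted for the %h token by the nanny daemon.
STATUS=`curl -k -s -o /dev/null -w '%{http_code}' "https://$1/healthcheck"`
if [ "$STATUS" = "200" ]; then
    echo "OK"
else
    echo "FAIL"
fi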
"#!/bin/sh TEST=`dig -t soa example.com @USD1 | grep -c dns.example.com if [ USDTEST != \"1\" ]; then echo \"OK else echo \"FAIL\" fi"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s2-piranha-virtservs-ems-VSA |
Chapter 106. Scheduler | Chapter 106. Scheduler Only consumer is supported The Scheduler component is used to generate message exchanges when a scheduler fires. This component is similar to the Timer component, but it offers more functionality in terms of scheduling. Also this component uses JDK ScheduledExecutorService . Where as the timer uses a JDK Timer . You can only consume events from this endpoint. 106.1. Dependencies When using scheduler with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-scheduler-starter</artifactId> </dependency> 106.2. URI format Where name is the name of the scheduler, which is created and shared across endpoints. So if you use the same name for all your scheduler endpoints, only one scheduler thread pool and thread will be used - but you can configure the thread pool to allow more concurrent threads. Note The IN body of the generated exchange is null . So exchange.getIn().getBody() returns null . 106.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 106.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 106.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 106.4. Component Options The Scheduler component supports 3 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true boolean poolSize (scheduler) Number of core threads in the thread pool used by the scheduling thread pool. Is by default using a single thread. 1 int 106.5. Endpoint Options The Scheduler endpoint is configured using URI syntax: with the following path and query parameters: 106.5.1. Path Parameters (1 parameters) Name Description Default Type name (consumer) Required The name of the scheduler. String 106.5.2. Query Parameters (21 parameters) Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern pollStrategy (consumer (advanced)) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPollStrategy synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. 1000 long poolSize (scheduler) Number of core threads in the thread pool used by the scheduling thread pool. Is by default using a single thread. 1 int repeatCount (scheduler) Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. 0 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. 
Enum values: TRACE DEBUG INFO WARN ERROR OFF TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. none Object schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. Enum values: NANOSECONDS MICROSECONDS MILLISECONDS SECONDS MINUTES HOURS DAYS MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 106.6. More information This component is a scheduler Polling Consumer where you can find more information about the options above, and examples at the Polling Consumer page. 106.7. Exchange Properties When the timer is fired, it adds the following information as properties to the Exchange : Name Type Description Exchange.TIMER_NAME String The value of the name option. Exchange.TIMER_FIRED_TIME Date The time when the consumer fired. 106.8. Sample To set up a route that generates an event every 60 seconds: from("scheduler://foo?delay=60000").to("bean:myBean?method=someMethodName"); The above route will generate an event and then invoke the someMethodName method on the bean called myBean in the Registry such as JNDI or Spring. And the route in Spring DSL: <route> <from uri="scheduler://foo?delay=60000"/> <to uri="bean:myBean?method=someMethodName"/> </route> 106.9. Forcing the scheduler to trigger immediately when completed To let the scheduler trigger as soon as the task is complete, you can set the option greedy=true . But beware then the scheduler will keep firing all the time. So use this with caution. 106.10. Forcing the scheduler to be idle There can be use cases where you want the scheduler to trigger and be greedy. But sometimes you want "tell the scheduler" that there was no task to poll, so the scheduler can change into idle mode using the backoff options. To do this you would need to set a property on the exchange with the key Exchange.SCHEDULER_POLLED_MESSAGES to a boolean value of false. This will cause the consumer to indicate that there was no messages polled. The consumer will otherwise as by default return 1 message polled to the scheduler, every time the consumer has completed processing the exchange. 106.11. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.component.scheduler.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.scheduler.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.scheduler.enabled Whether to enable auto configuration of the scheduler component. This is enabled by default. Boolean camel.component.scheduler.pool-size Number of core threads in the thread pool used by the scheduling thread pool. Is by default using a single thread. 1 Integer | [
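Section 106.10 describes setting the Exchange.SCHEDULER_POLLED_MESSAGES property but does not show it inside a route. The following Java DSL fragment is a minimal sketch that belongs inside a RouteBuilder configure() method; the inboxPoller and inboxProcessor bean names are illustrative assumptions.
from("scheduler://inbox?delay=5000&greedy=true&backoffIdleThreshold=3&backoffMultiplier=5")
    .to("bean:inboxPoller?method=fetchPending")        // hypothetical bean that returns a java.util.List
    .process(exchange -> {
        java.util.List<?> pending = exchange.getIn().getBody(java.util.List.class);
        if (pending == null || pending.isEmpty()) {
            // Report an idle poll so the backoff options configured above can kick in.
            exchange.setProperty(org.apache.camel.Exchange.SCHEDULER_POLLED_MESSAGES, false);
        }
    })
    .to("bean:inboxProcessor?method=handle");          // hypothetical downstream bean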
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-scheduler-starter</artifactId> </dependency>",
"scheduler:name[?options]",
"scheduler:name",
"from(\"scheduler://foo?delay=60000\").to(\"bean:myBean?method=someMethodName\");",
"<route> <from uri=\"scheduler://foo?delay=60000\"/> <to uri=\"bean:myBean?method=someMethodName\"/> </route>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-scheduler-component-starter |
Chapter 27. Designing the case definition | Chapter 27. Designing the case definition You design cases using the process designer in Business Central. Case design is the basis of case management and sets the specific goals and tasks for each case. The case flow can be modified dynamically during run time by adding dynamic tasks or processes. In this procedure, you will create this same case definition to familiarize yourself with the case definition design process. The IT_Orders sample project in Business Central includes the following orderhardware business process case definition. Figure 27.1. orderhardware business process case definition Prerequisites You have created a new case in Business Central. For more information, see Chapter 25, Creating a new IT_Orders case project . You have created the data objects. For more information, see Chapter 26, Data objects . Procedure In Business Central, go to Menu Design Projects and click IT_Orders_New . Click Add Asset Case Definition . In the Create new Case definition window, add the following required information: Case Definition : Input orderhardware . This is usually the subject of the case or project that is being case managed. Package : Select com.myspace.it_orders_new to specify the location that the case file is created in. Click Ok to open the process designer. Define values for the case file variables that are accessible to the sub-processes, subcases, and business rules used in the case. In the upper-right corner, click the Properties icon. Scroll down and expand Case Management , click in the Case File Variables section, and enter the following: Figure 27.2. orderhardware case file variables Note The following case file variables are custom data types: hwSpec : org.jbpm.document.Document (type in this value) survey : Survey [com.myspace.it_orders_new] (select this value) Click Save . Define the roles involved in the case. In the upper-right corner, click the Properties icon. Scroll down and expand Case Management , click in the Case Roles section, and enter the following: Figure 27.3. orderhardware case roles owner : The employee who is making the hardware order request. The role cardinality is set to 1 , which means that only one person or group can be assigned to this role. manager : The employee's manager; the person who will approve or deny the requested hardware. The role cardinality is set to 1 , which means that only one person or group can be assigned to this role. supplier : The available suppliers of IT hardware in the system. The role cardinality is set to 2 , which means that more than one supplier can be assigned to this role. Click Save . 27.1. Creating the Place order sub-process Create the Place order sub-process, which is a separate business process that is carried out by the supplier. This is a reusable process that occurs during the course of case execution as described in Chapter 27, Designing the case definition . Prerequisites You have created a new case in Business Central. For more information, see Chapter 25, Creating a new IT_Orders case project . You have created the data objects. For more information, see Chapter 26, Data objects . Procedure In Business Central, go to Menu Design Projects IT_Orders_New . From the project menu, click Add Asset Business Process . In the Create new Business Process wizard, enter the following values: Business Process : place-order Package : Select com.myspace.it_orders_new Click Ok . The diagram editor opens. 
Click an empty space in the canvas, and in the upper-right corner, click the Properties icon. Scroll down, expand Process Data , click in the Process Variables section, and enter the following values under Process Variables : Table 27.1. Process variables Name Data Type CaseID String Requestor String _hwSpec org.jbm.doc ordered_ Boolean info_ String caseFile_hwSpec org.jbm.doc caseFile-ordered Boolean caseFile-orderinf String Figure 27.4. Completed process variables Click Save . Drag a start event onto the canvas and create an outgoing connection from the start event to a task and convert the new task to a user task. Click the user task and in the Properties panel, input Place order in the Name field. Expand Implementation/Execution , click Add below the Groups menu, click Select New , and input supplier . click in the Assignments field and add the following data inputs and outputs in the Place order Data I/O dialog box: Table 27.2. Data inputs and assignements Name Data Type Source _hwSpec org.jbpm.document caseFile_hwSpec orderNumber String CaseId Requestor String Requestor Table 27.3. Data outputs and assignements Name Data Type Target ordered_ Boolean caseFile_ordered info_ String CaseFile_orderInfo For the first input assignment, select Custom for the Data Type and input org.jbpm.document.Document . Click OK . Select the Skippable check box and enter the following text in the Description field: Approved order #{CaseId} to be placed Create an outgoing connection from the Place order user task and connect it to an end event. Click Save to confirm your changes. You can open the sub-process in a new editor in Business Central by clicking the Place order task in the main process and then clicking the Open Sub-process task icon. 27.2. Creating the Manager approval business process The manager approval process determines whether or not the order will be placed or rejected. Procedure In Business Central, go to Menu Design Projects IT_Orders_New orderhardware Business Processes . Create and configure the Prepare hardware spec user task: Expand Tasks in the Object Library and drag a user task onto the canvas and convert the new task to a user task. Click the new user task and click the Properties icon in the upper-right corner. Input Prepare hardware spec in the Name field. Expand Implementation/Execution , click Add below the Groups menu, click Select New , and input supplier . Input PrepareHardwareSpec in the Task Name field. Select the Skippable check box and enter the following text in the Description field: Prepare hardware specification for #{initiator} (order number #{CaseId}) click in the Assignments field and add the following: Click OK . Create and configure the manager approval user task: Click the Prepare hardware spec user task and create a new user task. Click the new user task and click the Properties icon in the upper-right corner. Click the user task and in the Properties panel input Manager approval in the Name field. Expand Implementation/Execution , click Add below the Actors menu, click Select New , and input manager . Input ManagerApproval in the Task Name field. click in the Assignments field and add the following: Click OK . Select the Skippable check box and enter the following text in the Description field: Approval request for new hardware for #{initiator} (order number #{CaseId}) Enter the following Java expression in the On Exit Action field: kcontext.setVariable("caseFile_managerDecision", approved); Click Save . 
Click the Manager approval user task and create a Data-based Exclusive (XOR) gateway. Create and configure the Place order reusable sub-process: From the Object Library , expand sub-processes , click Reusable , and drag the new element to the canvas on the right side of the Data-based Exclusive (XOR) gateway. Connect the Data-based Exclusive (XOR) gateway to the sub-process. Click the new sub task and click the Properties icon in the upper-right corner. Input Place order in the Name field. Expand Data Assignments and click in the Assignments field and add the following: Click OK . Click the connection from the Data-based Exclusive (XOR) gateway to the sub-process and click the Properties icon. Expand Implementation/Execution , select Condition , and set the following condition expressions. Click the Place order user task and create an end event. Create and configure the order rejected user task: Click the Data-based Exclusive (XOR) gateway and create a new user task. Drag the new task to align it below the Place order task. Click the new user task and click the Properties icon in the upper-right corner. Input Order rejected in the Name field. Expand Implementation/Execution and input OrderRejected in the Task Name field. Click Add below the Actors menu, click Select New , and input owner . click in the Assignments field and add the following: Click OK . Select the Skippable check box and enter the following text in the Description field: Order #{CaseId} has been rejected by manager Click the Order rejected user task and create an end event. Click Save . Click the connection from the Data-based Exclusive (XOR) gateway to the Order rejected user task and click the Properties icon. Expand Implementation/Execution , select Condition , and set the following condition expressions. Click Save . Figure 27.5. Manager approval business process | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/getting_started_with_red_hat_process_automation_manager/case-management-designing-it-hardware-proc |
Chapter 6. Review the State of a Node | Chapter 6. Review the State of a Node If you have a deployment of the Uchiwa dashboard, you can use it with the Sensu server to review the state of your nodes: Log in to the Uchiwa dashboard and click the Data Center tab to confirm that the Data Center is operational. Check that all overcloud nodes are in a Connected state. At a suitable time, reboot one of the overcloud nodes and review the rebooted node's status in the Uchiwa dashboard. After the reboot completes, verify that the node successfully re-connects to the Sensu server and starts executing checks. | [
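The same verification can be done from the command line against the Sensu API instead of the dashboard. This is a hedged sketch that assumes the Sensu API is reachable on its default port 4567 on the monitoring host and that jq is installed; substitute your own server address and node name.
# List the clients that Sensu currently knows about, with their last keepalive timestamps.
curl -s http://<SENSU_SERVER_IP>:4567/clients | jq '.[] | {name, timestamp}'
# Show the most recent check results for the rebooted node.
curl -s http://<SENSU_SERVER_IP>:4567/results/<NODE_NAME> | jq '.'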
"http://<SERVER_IP_ADDRESS>/uchiwa"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/monitoring_tools_configuration_guide/sect-review-node |
Chapter 2. Switching RHEL to FIPS mode | Chapter 2. Switching RHEL to FIPS mode To enable the cryptographic module self-checks mandated by the Federal Information Processing Standard (FIPS) 140-3, you must operate RHEL 9 in FIPS mode. Starting the installation in FIPS mode is the recommended method if you aim for FIPS compliance. Note The cryptographic modules of RHEL 9 are not yet certified for the FIPS 140-3 requirements. 2.1. Federal Information Processing Standards 140 and FIPS mode The Federal Information Processing Standards (FIPS) Publication 140 is a series of computer security standards developed by the National Institute of Standards and Technology (NIST) to ensure the quality of cryptographic modules. The FIPS 140 standard ensures that cryptographic tools implement their algorithms correctly. Runtime cryptographic algorithm and integrity self-tests are some of the mechanisms to ensure a system uses cryptography that meets the requirements of the standard. RHEL in FIPS mode To ensure that your RHEL system generates and uses all cryptographic keys only with FIPS-approved algorithms, you must switch RHEL to FIPS mode. You can enable FIPS mode by using one of the following methods: Starting the installation in FIPS mode Switching the system into FIPS mode after the installation If you aim for FIPS compliance, start the installation in FIPS mode. This avoids cryptographic key material regeneration and reevaluation of the compliance of the resulting system associated with converting already deployed systems. To operate a FIPS-compliant system, create all cryptographic key material in FIPS mode. Furthermore, the cryptographic key material must never leave the FIPS environment unless it is securely wrapped and never unwrapped in non-FIPS environments. The FIPS - Federal Information Processing Standards section on the Product compliance Red Hat Customer Portal page provides an overview of the validation status of cryptographic modules for selected RHEL minor releases. Switching to FIPS mode after the installation Switching the system to FIPS mode by using the fips-mode-setup tool does not guarantee compliance with the FIPS 140 standard. Re-generating all cryptographic keys after setting the system to FIPS mode may not be possible. For example, in the case of an existing IdM realm with users' cryptographic keys you cannot re-generate all the keys. If you cannot start the installation in FIPS mode, always enable FIPS mode as the first step after the installation, before you make any post-installation configuration steps or install any workloads. The fips-mode-setup tool also uses the FIPS system-wide cryptographic policy internally. But on top of what the update-crypto-policies --set FIPS command does, fips-mode-setup ensures the installation of the FIPS dracut module by using the fips-finish-install tool, it also adds the fips=1 boot option to the kernel command line and regenerates the initial RAM disk. Furthermore, enforcement of restrictions required in FIPS mode depends on the content of the /proc/sys/crypto/fips_enabled file. If the file contains 1 , RHEL core cryptographic components switch to mode, in which they use only FIPS-approved implementations of cryptographic algorithms. If /proc/sys/crypto/fips_enabled contains 0 , the cryptographic components do not enable their FIPS mode. FIPS in crypto-policies The FIPS system-wide cryptographic policy helps to configure higher-level restrictions. 
Therefore, communication protocols supporting cryptographic agility do not announce ciphers that the system refuses when selected. For example, the ChaCha20 algorithm is not FIPS-approved, and the FIPS cryptographic policy ensures that TLS servers and clients do not announce the TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS cipher suite, because any attempt to use such a cipher fails. If you operate RHEL in FIPS mode and use an application providing its own FIPS-mode-related configuration options, ignore these options and the corresponding application guidance. The system running in FIPS mode and the system-wide cryptographic policies enforce only FIPS-compliant cryptography. For example, the Node.js configuration option --enable-fips is ignored if the system runs in FIPS mode. If you use the --enable-fips option on a system not running in FIPS mode, you do not meet the FIPS-140 compliance requirements. Warning A RHEL 9.2 and later system running in FIPS mode enforces that any TLS 1.2 connection must use the Extended Master Secret (EMS) extension (RFC 7627) as required by the FIPS 140-3 standard. Thus, legacy clients not supporting EMS or TLS 1.3 cannot connect to RHEL 9 servers running in FIPS mode, and RHEL 9 clients in FIPS mode cannot connect to servers that support only TLS 1.2 without EMS. For more information, see the Red Hat Knowledgebase solution TLS Extension "Extended Master Secret" enforced with Red Hat Enterprise Linux 9.2 . Additional resources FIPS - Federal Information Processing Standards section on the Product compliance Red Hat Customer Portal page RHEL system-wide cryptographic policies FIPS publications at NIST Computer Security Resource Center . Federal Information Processing Standards Publication: FIPS 140-3 2.2. Installing the system with FIPS mode enabled To enable the cryptographic module self-checks mandated by the Federal Information Processing Standard (FIPS) 140, enable FIPS mode during the system installation. Important Only enabling FIPS mode during the RHEL installation ensures that the system generates all keys with FIPS-approved algorithms and continuous monitoring tests in place. Warning After you complete the setup of FIPS mode, you cannot switch off FIPS mode without putting the system into an inconsistent state. If your scenario requires this change, the only correct way is a complete re-installation of the system. Procedure Add the fips=1 option to the kernel command line during the system installation. During the software selection stage, do not install any third-party software. After the installation, the system starts in FIPS mode automatically. Verification After the system starts, check that FIPS mode is enabled: Additional resources Editing boot options 2.3. Switching the system to FIPS mode The system-wide cryptographic policies contain a policy level that enables cryptographic algorithms in accordance with the requirements of the Federal Information Processing Standard (FIPS) Publication 140. The fips-mode-setup tool that enables or disables FIPS mode internally uses the FIPS system-wide cryptographic policy. Switching the system to FIPS mode by using the FIPS system-wide cryptographic policy does not guarantee compliance with the FIPS 140 standard. Re-generating all cryptographic keys after setting the system to FIPS mode may not be possible. For example, in the case of an existing IdM realm with users' cryptographic keys, you cannot re-generate all the keys. 
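As a quick check after switching (a minimal sketch that only exercises the interfaces described above, not an additional procedure from this chapter), you can confirm the kernel flag and the active system-wide policy:
cat /proc/sys/crypto/fips_enabled    # 1 means the kernel enforces FIPS mode
fips-mode-setup --check              # reports whether FIPS mode is enabled
update-crypto-policies --show        # prints FIPS when the FIPS policy is active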
Important Only enabling FIPS mode during the RHEL installation ensures that the system generates all keys with FIPS-approved algorithms and continuous monitoring tests in place. The fips-mode-setup tool uses the FIPS policy internally. But on top of what the update-crypto-policies command with the --set FIPS option does, fips-mode-setup ensures the installation of the FIPS dracut module by using the fips-finish-install tool, it also adds the fips=1 boot option to the kernel command line and regenerates the initial RAM disk. Warning After you complete the setup of FIPS mode, you cannot switch off FIPS mode without putting the system into an inconsistent state. If your scenario requires this change, the only correct way is a complete re-installation of the system. Note The cryptographic modules of RHEL 9 are not yet certified for the FIPS 140-3 requirements. Procedure To switch the system to FIPS mode: Restart your system to allow the kernel to switch to FIPS mode: Verification After the restart, you can check the current state of FIPS mode: Additional resources fips-mode-setup(8) man page on your system How to enable FIPS on instances using Red Hat Unified Kernel Images (Red Hat Knowledgebase) Security Requirements for Cryptographic Modules on the National Institute of Standards and Technology (NIST) web site. 2.4. Enabling FIPS mode in a container To enable the full set of cryptographic module self-checks mandated by the Federal Information Processing Standard Publication 140-2 (FIPS mode), the host system kernel must be running in FIPS mode. The podman utility automatically enables FIPS mode on supported containers. The fips-mode-setup command does not work correctly in containers, and it cannot be used to enable or check FIPS mode in this scenario. Note The cryptographic modules of RHEL 9 are not yet certified for the FIPS 140-3 requirements. Prerequisites The host system must be in FIPS mode. Procedure On systems with FIPS mode enabled, the podman utility automatically enables FIPS mode on supported containers. Additional resources Switching the system to FIPS mode . Installing the system in FIPS mode 2.5. List of RHEL applications using cryptography that is not compliant with FIPS 140-3 To pass all relevant cryptographic certifications, such as FIPS 140-3, use libraries from the core cryptographic components set. These libraries, except from libgcrypt , also follow the RHEL system-wide cryptographic policies. See the RHEL core cryptographic components Red Hat Knowledgebase article for an overview of the core cryptographic components, the information on how are they selected, how are they integrated into the operating system, how do they support hardware security modules and smart cards, and how do cryptographic certifications apply to them. List of RHEL 9 applications using cryptography that is not compliant with FIPS 140-3 Bacula Implements the CRAM-MD5 authentication protocol. Cyrus SASL Uses the SCRAM-SHA-1 authentication method. Dovecot Uses SCRAM-SHA-1. Emacs Uses SCRAM-SHA-1. FreeRADIUS Uses MD5 and SHA-1 for authentication protocols. Ghostscript Custom cryptography implementation (MD5, RC4, SHA-2, AES) to encrypt and decrypt documents. GRUB Supports legacy firmware protocols requiring SHA-1 and includes the libgcrypt library. iPXE Implements TLS stack. Kerberos Preserves support for SHA-1 (interoperability with Windows). Lasso The lasso_wsse_username_token_derive_key() key derivation function (KDF) uses SHA-1. 
MariaDB, MariaDB Connector The mysql_native_password authentication plugin uses SHA-1. MySQL mysql_native_password uses SHA-1. OpenIPMI The RAKP-HMAC-MD5 authentication method is not approved for FIPS usage and does not work in FIPS mode. Ovmf (UEFI firmware), Edk2, shim Full cryptographic stack (an embedded copy of the OpenSSL library). Perl Uses HMAC, HMAC-SHA1, HMAC-MD5, SHA-1, SHA-224,.... Pidgin Implements DES and RC4 ciphers. PKCS #12 file processing (OpenSSL, GnuTLS, NSS, Firefox, Java) All uses of PKCS #12 are not FIPS-compliant, because the Key Derivation Function (KDF) used for calculating the whole-file HMAC is not FIPS-approved. As such, PKCS #12 files are considered to be plain text for the purposes of FIPS compliance. For key-transport purposes, wrap PKCS #12 (.p12) files using a FIPS-approved encryption scheme. Poppler Can save PDFs with signatures, passwords, and encryption based on non-allowed algorithms if they are present in the original PDF (for example MD5, RC4, and SHA-1). PostgreSQL Implements Blowfish, DES, and MD5. A KDF uses SHA-1. QAT Engine Mixed hardware and software implementation of cryptographic primitives (RSA, EC, DH, AES,...) Ruby Provides insecure MD5 and SHA-1 library functions. Samba Preserves support for RC4 and DES (interoperability with Windows). Syslinux BIOS passwords use SHA-1. SWTPM Explicitly disables FIPS mode in its OpenSSL usage. Unbound DNS specification requires that DNSSEC resolvers use a SHA-1-based algorithm in DNSKEY records for validation. Valgrind AES, SHA hashes. [1] zip Custom cryptography implementation (insecure PKWARE encryption algorithm) to encrypt and decrypt archives using a password. Additional resources FIPS - Federal Information Processing Standards section on the Product compliance Red Hat Customer Portal page RHEL core cryptographic components (Red Hat Knowledgebase) [1] Re-implements in software hardware-offload operations, such as AES-NI or SHA-1 and SHA-2 on ARM. | [
"fips-mode-setup --check FIPS mode is enabled.",
"fips-mode-setup --enable Kernel initramdisks are being regenerated. This might take some time. Setting system policy to FIPS Note: System-wide crypto policies are applied on application start-up. It is recommended to restart the system for the change of policies to fully take place. FIPS mode will be enabled. Please reboot the system for the setting to take effect.",
"reboot",
"fips-mode-setup --check FIPS mode is enabled."
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/security_hardening/switching-rhel-to-fips-mode_security-hardening |
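A related check for the container scenario described in this chapter: because containers share the host kernel, reading the same kernel flag from inside a container shows whether it runs under a FIPS-enabled host. This is a hedged sketch and the image name is only an example:
podman run --rm registry.access.redhat.com/ubi9/ubi cat /proc/sys/crypto/fips_enabled    # prints 1 on a FIPS-enabled host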
Configuring networking services | Configuring networking services Red Hat OpenStack Services on OpenShift 18.0 Configuring the Networking service (neutron) for managing networking traffic in a Red Hat OpenStack Services on OpenShift environment OpenStack Documentation Team [email protected] | [
"ovnController: spec: ovn: template: ovnController: networkAttachment: tenant nicMappings: <network_name: nic_name>",
"oc apply -f openstack_control_plane.yaml -n openstack",
"oc get openstackcontrolplane -n openstack NAME STATUS MESSAGE openstack-control-plane Unknown Setup started",
"oc get pods -n openstack",
"oc rsh -n openstack openstackclient openstack network agent list",
"+--------------------------------------+------------------------------+---------+ | ID | agent_type | host | +--------------------------------------+----------------------------------------+ | 5335c34d-9233-47bd-92f1-fc7503270783 | OVN Controller Gateway agent | ctrl0 | | ff66288c-5a7c-41fb-ba54-6c781f95a81e | OVN Controller Gateway agent | ctrl1 | | 5335c34d-9233-47bd-92f1-fc7503270783 | OVN Controller Gateway agent | ctrl2 | +--------------------------------------+----------------------------------------+",
"oc apply -f openstack_control_plane.yaml -n openstack",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: my-data-plane-node-set spec: nodeTemplate: ansible: ansibleVars: edpm_network_config_template: | --- Network configuration options here",
"oc apply -f my_data_plane_node_set.yaml",
"oc get openstackdataplanenodeset",
"NAME STATUS MESSAGE my-data-plane-node-set False Deployment not started",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: my-data-plane-deploy",
"spec: nodeSets: - my-data-plane-node-set",
"oc create -f my_data_plane_deploy.yaml -n openstack",
"oc get pod -l app=openstackansibleee -n openstack -w oc logs -l app=openstackansibleee -n openstack -f --max-log-requests 10",
"oc get openstackdataplanedeployment -n openstack",
"NAME STATUS MESSAGE my-data-plane-node-set True Setup Complete",
"oc get openstackdataplanenodeset -n openstack",
"NAME STATUS MESSAGE my-data-plane-node-set True NodeSet Ready",
"edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: interface name: nic2",
"edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup( vars , networks_lower[network] ~ _mtu )) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: members: - type: vlan device: nic{{ loop.index + 1 }} mtu: {{ lookup( vars , networks_lower[network] ~ _mtu ) }} vlan_id: {{ lookup( vars , networks_lower[network] ~ _vlan_id ) }} addresses: - ip_netmask: {{ lookup( vars , networks_lower[network] ~ _ip ) }}/{{ lookup( vars , networks_lower[network] ~ _cidr ) }} routes: {{ lookup( vars , networks_lower[network] ~ _host_routes ) }}",
"edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup( vars , networks_lower[network] ~ _mtu )) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: br-bond dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} members: - type: ovs_bond name: bond1 mtu: {{ min_viable_mtu }} ovs_options: {{ bound_interface_ovs_options }} members: - type: interface name: nic2 mtu: {{ min_viable_mtu }} primary: true - type: interface name: nic3 mtu: {{ min_viable_mtu }}",
"edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup( vars , networks_lower[network] ~ _mtu )) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: members: - type: ovs_bond name: bond1 mtu: {{ min_viable_mtu }} ovs_options: {{ bond_interface_ovs_options }} members: - type: interface name: nic2 mtu: {{ min_viable_mtu }} primary: true - type: interface name: nic3 mtu: {{ min_viable_mtu }}",
"edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup( vars , networks_lower[network] ~ _mtu )) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: members: - type: ovs_user_bridge name: br-dpdk0 members: - type: ovs_dpdk_bond name: dpdkbond0 rx_queue: {{ num_dpdk_interface_rx_queues }} members: - type: ovs_dpdk_port name: dpdk0 members: - type: interface name: nic4 - type: ovs_dpdk_port name: dpdk1 members: - type: interface name: nic5",
"ovs-vsctl set port <bond port> other_config:lb-output-action=true",
"edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup( vars , networks_lower[network] ~ _mtu )) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: linux_bond name: bond1 mtu: {{ min_viable_mtu }} bonding_options: \"mode=802.3ad lacp_rate=fast updelay=1000 miimon=100 xmit_hash_policy=layer3+4\" members: type: interface name: ens1f0 mtu: {{ min_viable_mtu }} primary: true type: interface name: ens1f1 mtu: {{ min_viable_mtu }}",
"edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup( vars , networks_lower[network] ~ _mtu )) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: linux_bond name: bond1 members: - type: interface name: nic2 - type: interface name: nic3 bonding_options: \"mode=802.3ad lacp_rate=[fast|slow] updelay=1000 miimon=100\"",
". edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup( vars , networks_lower[network] ~ _mtu )) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: linux_bond name: bond_api bonding_options: \"mode=active-backup\" use_dhcp: false dns_servers: get_param: DnsServers members: - type: interface name: nic3 primary: true - type: interface name: nic4 - type: vlan vlan_id: get_param: InternalApiNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: InternalApiIpSubnet",
"edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup( vars , networks_lower[network] ~ _mtu )) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: br-tenant use_dhcp: false mtu: 9000 members: - type: linux_bond name: bond_tenant bonding_options: \"mode=802.3ad updelay=1000 miimon=100\" use_dhcp: false dns_servers: get_param: DnsServers members: - type: interface name: p1p1 primary: true - type: interface name: p1p2 - type: vlan device: bond_tenant vlan_id: {get_param: TenantNetworkVlanID} addresses: - ip_netmask: {get_param: TenantIpSubnet}",
"edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup( vars , networks_lower[network] ~ _mtu )) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: br-tenant routes: {{ [ctlplane_host_routes] | flatten | unique }}",
"edpm_network_config_os_net_config_mappings: edpm-compute-0: dmiString: system-serial-number id: 3V3J4V3 nic1: ec:2a:72:40:ca:2e nic2: 6c:fe:54:3f:8a:00 nic3: 6c:fe:54:3f:8a:01 nic4: 6c:fe:54:3f:8a:02 nic5: 6c:fe:54:3f:8a:03 nic6: e8:eb:d3:33:39:12 nic7: e8:eb:d3:33:39:13 edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} - type: interface name: nic1 use_dhcp: false use_dhcpv6: false type: linux_bond name: bond_api use_dhcp: false use_dhcpv6: false bonding_options: \"mode=active-backup\" dns_servers: {{ ctlplane_dns_nameservers }} addresses: ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: default: true next_hop: 192.168.122.1 members: - type: interface name: nic2 primary: true - type: interface name: nic3 {% for network in nodeset_networks if network not in ['external', 'tenant'] %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} device: bond_api addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} {% endfor %} - type: ovs_bridge name: br-access use_dhcp: false use_dhcpv6: false members: - type: linux_bond name: bond_data mtu: {{ min_viable_mtu }} bonding_options: \"mode=active-backup\" members: - type: interface name: nic4 - type: interface name: nic5 - type: vlan vlan_id: {{ lookup('vars', networks_lower['tenant'] ~ '_vlan_id') }} mtu: {{ lookup('vars', networks_lower['tenant'] ~ '_mtu') }} device: bond_data addresses: - ip_netmask: {{ lookup('vars', networks_lower['tenant'] ~ '_ip') }}/{{ lookup('vars', networks_lower['tenant'] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower['tenant'] ~ '_host_routes') }}",
"oc patch -n openstack openstackcontrolplane openstack-galera-network-isolation --type=merge --patch \" --- spec: neutron: template: customServiceConfig: | [default] service_plugins=ovn-router,port_forwarding \"",
"oc rsh -n openstack openstackclient",
"openstack extension list --network -c Name -c Alias --max-width 74 | grep -i -e 'Neutron L3 Router' -i -e floating-ip-port-forwarding --os-cloud <cloud_name>",
"| Floating IP Port Forwarding | floating-ip-port-forwarding | | Neutron L3 Router | router |",
"exit",
"DEVICE=eth0 TYPE=OVSPort DEVICETYPE=ovs OVS_BRIDGE=br-ex ONBOOT=yes",
"DEVICE=br-ex DEVICETYPE=ovs TYPE=OVSBridge BOOTPROTO=static IPADDR=192.168.120.10 NETMASK=255.255.255.0 GATEWAY=192.168.120.1 DNS1=192.168.120.1 ONBOOT=yes",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: my-data-plane-node-set spec: nodeTemplate: ansible: ansibleVars: edpm_network_config_template: | --- OvnHardwareOffloadedQos: true",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: my-data-plane-node-set spec: nodeTemplate: ansible: ansibleVars: edpm_network_config_template: | --- OvnHardwareOffloadedQos: true edpm_ovn_encap_tos: 1",
"oc apply -f my_data_plane_node_set.yaml",
"oc get openstackdataplanenodeset",
"NAME STATUS MESSAGE my-data-plane-node-set False Deployment not started",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: my-data-plane-deploy",
"spec: nodeSets: - my-data-plane-node-set",
"oc create -f my_data_plane_deploy.yaml -n openstack",
"oc get pod -l app=openstackansibleee -n openstack -w oc logs -l app=openstackansibleee -n openstack -f --max-log-requests 10",
"oc get openstackdataplanedeployment -n openstack",
"NAME STATUS MESSAGE my-data-plane-node-set True Setup Complete",
"oc get openstackdataplanenodeset -n openstack",
"NAME STATUS MESSAGE my-data-plane-node-set True NodeSet Ready",
"openstack network qos policy list",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: my-data-plane-node-set spec: nodeTemplate: ansible: ansibleVars: edpm_network_config_template: | --- NeutronSriovAgentExtensions: \"qos\"",
"oc apply -f my_data_plane_node_set.yaml",
"oc get openstackdataplanenodeset",
"NAME STATUS MESSAGE my-data-plane-node-set False Deployment not started",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: my-data-plane-deploy",
"spec: nodeSets: - my-data-plane-node-set",
"oc create -f my_data_plane_deploy.yaml -n openstack",
"oc get pod -l app=openstackansibleee -n openstack -w oc logs -l app=openstackansibleee -n openstack -f --max-log-requests 10",
"oc get openstackdataplanedeployment -n openstack",
"NAME STATUS MESSAGE my-data-plane-node-set True Setup Complete",
"oc get openstackdataplanenodeset -n openstack",
"NAME STATUS MESSAGE my-data-plane-node-set True NodeSet Ready",
"openstack network agent list",
"openstack network agent show <uuid>",
"openstack network agent show 8676ccb3-1de0-4ca6-8fb7-b814015d9e5f --max-width 70",
"------------------- ------------------------------------------------+ | Field | Value | ------------------- ------------------------------------------------+ | admin_state_up | UP | | agent_type | NIC Switch agent | | alive | :-) | | availability_zone | None | | binary | neutron-sriov-nic-agent | | configuration | { device_mappings : {}, devices : 0, extensi | | | ons : [ qos ], resource_provider_bandwidths : | | | {}, resource_provider_hypervisors : {}, reso | | | urce_provider_inventory_defaults : { allocatio | | | n_ratio : 1.0, min_unit : 1, step_size : 1, | | | reserved : 0}} | | created_at | 2024-08-08 08:22:57 | | description | None | | ha_state | None | | host | edpm-compute-0.ctlplane.example.com | | id | 8676ccb3-1de0-4ca6-8fb7-b814015d9e5f | | last_heartbeat_at | 2024-08-08 08:24:27 | | resources_synced | None | | started_at | 2024-08-08 08:22:57 | | topic | N/A | ------------------- ------------------------------------------------+",
"dnf list installed python-openstackclient",
"echo USDOS_CLOUD my_cloud",
"export OS_CLOUD=my_other_cloud",
"openstack network list",
"+--------------------------------------+-------------+-------------------------------------------------------+ | id | name | subnets | +--------------------------------------+-------------+-------------------------------------------------------+ | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | web-servers | 20512ffe-ad56-4bb4-b064-2cb18fecc923 192.168.200.0/24 | | bcc16b34-e33e-445b-9fde-dd491817a48a | private | 7fe4a05a-4b81-4a59-8c47-82c965b0e050 10.0.0.0/24 | | 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 | public | 2318dc3b-cff0-43fc-9489-7d4cf48aaab9 172.24.4.224/28 | +--------------------------------------+-------------+-------------------------------------------------------+",
"openstack project list",
"+----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors | | 519e6344f82e4c079c8e2eabb690023b | services | | 80bf5732752a41128e612fe615c886c6 | demo | | 98a2f53c20ce4d50a40dac4a38016c69 | admin | +----------------------------------+----------+",
"openstack network rbac create --type network --target-project 4b0b98f8c6c040f38ba4f7146e8680f5 --action access_as_shared web-servers",
"+----------------+--------------------------------------+ | Field | Value | +----------------+--------------------------------------+ | action | access_as_shared | | id | 314004d0-2261-4d5e-bda7-0181fcf40709 | | object_id | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | object_type | network | | target_project | 4b0b98f8c6c040f38ba4f7146e8680f5 | | project_id | 98a2f53c20ce4d50a40dac4a38016c69 | +----------------+--------------------------------------+",
"dnf list installed python-openstackclient",
"echo USDOS_CLOUD my_cloud",
"export OS_CLOUD=my_other_cloud",
"openstack network rbac list",
"+--------------------------------------+-------------+--------------------------------------+ | id | object_type | object_id | +--------------------------------------+-------------+--------------------------------------+ | 314004d0-2261-4d5e-bda7-0181fcf40709 | network | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | bbab1cf9-edc5-47f9-aee3-a413bd582c0a | network | 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 | +--------------------------------------+-------------+--------------------------------------+",
"openstack network rbac show 314004d0-2261-4d5e-bda7-0181fcf40709",
"+----------------+--------------------------------------+ | Field | Value | +----------------+--------------------------------------+ | action | access_as_shared | | id | 314004d0-2261-4d5e-bda7-0181fcf40709 | | object_id | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | object_type | network | | target_project | 4b0b98f8c6c040f38ba4f7146e8680f5 | | project_id | 98a2f53c20ce4d50a40dac4a38016c69 | +----------------+--------------------------------------+",
"dnf list installed python-openstackclient",
"echo USDOS_CLOUD my_cloud",
"export OS_CLOUD=my_other_cloud",
"openstack network rbac list +--------------------------------------+-------------+--------------------------------------+ | id | object_type | object_id | +--------------------------------------+-------------+--------------------------------------+ | 314004d0-2261-4d5e-bda7-0181fcf40709 | network | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | bbab1cf9-edc5-47f9-aee3-a413bd582c0a | network | 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 | +--------------------------------------+-------------+--------------------------------------+",
"openstack network rbac delete 314004d0-2261-4d5e-bda7-0181fcf40709 Deleted rbac_policy: 314004d0-2261-4d5e-bda7-0181fcf40709",
"oc rsh -n openstack openstackclient",
"openstack network rbac create --type network --target-project c717f263785d4679b16a122516247deb --action access_as_external web-servers",
"+----------------+--------------------------------------+ | Field | Value | +----------------+--------------------------------------+ | action | access_as_external | | id | ddef112a-c092-4ac1-8914-c714a3d3ba08 | | object_id | 6e437ff0-d20f-4483-b627-c3749399bdca | | object_type | network | | target_project | c717f263785d4679b16a122516247deb | | project_id | c717f263785d4679b16a122516247deb | +----------------+--------------------------------------+",
"openstack network list",
"+--------------------------------------+-------------+------------------------------------------------------+ | id | name | subnets | +--------------------------------------+-------------+------------------------------------------------------+ | 6e437ff0-d20f-4483-b627-c3749399bdca | web-servers | fa273245-1eff-4830-b40c-57eaeac9b904 192.168.10.0/24 | +--------------------------------------+-------------+------------------------------------------------------+",
"exit",
"openstack security group create ping_ssh",
"oc rsh -n openstack openstackclient",
"openstack project list",
"openstack security group list",
"openstack network rbac create --target-project 32016615de5d43bb88de99e7f2e26a1e --action access_as_shared --type security_group 5ba835b7-22b0-4be6-bdbe-e0722d1b5f24",
"exit",
"oc patch -n openstack openstackcontrolplane openstack-galera-network-isolation --type=merge --patch \" --- spec: neutron: template: customServiceConfig: | [ml2] extension_drivers=dns_domain_ports \"",
"oc rsh -n openstack openstackclient",
"openstack extension list --network --max-width 75 | grep dns-domain-ports --os-cloud <cloud_name>",
"| dns_domain for ports | dns-domain-ports | Allows the DNS domain to be specified for a network port.",
"openstack port create --network public --dns-name my_port new_port",
"openstack port show -c dns_assignment -c dns_domain -c dns_name -c name new_port",
"+-------------------------+----------------------------------------------+ | Field | Value | +-------------------------+----------------------------------------------+ | dns_assignment | fqdn='my_port.example.com', | | | hostname='my_port', | | | ip_address='10.65.176.113' | | dns_domain | example.com | | dns_name | my_port | | name | new_port | +-------------------------+----------------------------------------------+",
"openstack server create --image rhel --flavor m1.small --port new_port my_vm",
"openstack port show -c dns_assignment -c dns_domain -c dns_name -c name new_port",
"+-------------------------+----------------------------------------------+ | Field | Value | +-------------------------+----------------------------------------------+ | dns_assignment | fqdn='my_vm.example.com', | | | hostname='my_vm', | | | ip_address='10.65.176.113' | | dns_domain | example.com | | dns_name | my_vm | | name | new_port | +-------------------------+----------------------------------------------+",
"exit",
"oc patch -n openstack openstackcontrolplane openstack-galera-network-isolation --type=merge --patch \" --- spec: neutron: template: customServiceConfig: | [ml2] extension_drivers=port_numa_affinity_policy \"",
"oc rsh -n openstack openstackclient",
"openstack extension list --network --max-width 74 | grep port-numa-affinity-policy --os-cloud <cloud_name>",
"| Port NUMA affinity policy | port-numa-affinity-policy | Expose the port NUMA affinity policy",
"openstack port create --network public --numa-policy-legacy myNUMAAffinityPort",
"openstack port show myNUMAAffinityPort -c numa_affinity_policy",
"+----------------------+--------+ | Field | Value | +----------------------+--------+ | numa_affinity_policy | legacy | +----------------------+--------+",
"exit",
"apiVersion: v1 kind: ConfigMap metadata: name: neutron-metadata-rate-limit data: 20-neutron-metadata-rate.conf: | [metadata_rate_limiting] rate_limit_enabled = True ip_versions = 4 base_window_duration = 60 base_query_rate_limit = 6 burst_window_duration = 10 burst_query_rate_limit = 2",
"oc create -f neutron-metadata-rate-limit.yaml -n openstack",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: neutron-metadata-rate-limit",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: neutron-metadata-rate-limit spec: dataSources: - configMapRef: name: neutron-metadata-rate-limit - secretRef: name: neutron-ovn-metadata-agent-neutron-config - secretRef: name: nova-metadata-neutron-config - configMapRef: name: neutron-metadata-rate-limit tlsCerts: default: contents: - dnsnames - ips networks: - ctlplane issuer: osp-rootca-issuer-ovn keyUsages: - digital signature - key encipherment - client auth caCerts: combined-ca-bundle containerImageFields: - EdpmNeutronMetadataAgentImage",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: neutron-metadata-rate-limit spec: playbook: osp.edpm.neutron_metadata dataSources: - configMapRef: name: neutron-metadata-rate-limit - secretRef: name: neutron-ovn-metadata-agent-neutron-config - secretRef: name: nova-metadata-neutron-config tlsCerts: default: contents: - dnsnames - ips networks: - ctlplane issuer: osp-rootca-issuer-ovn keyUsages: - digital signature - key encipherment - client auth caCerts: combined-ca-bundle containerImageFields: - EdpmNeutronMetadataAgentImage",
"oc apply -f neutron-metadata-rate-limit -n openstack",
"oc get openstackdataplaneservice neutron-metadata-rate-limit -o yaml -n openstack",
"spec: neutron: template: customServiceConfig: | [ovn] localnet_learn_fdb = true 1 fdb_age_threshold = 300 2 fdb_removal_limit = 50 3",
"oc apply -f openstack_control_plane.yaml -n openstack",
"oc get openstackcontrolplane -n openstack"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html-single/configuring_networking_services/index |
Administration guide for Red Hat Developer Hub | Administration guide for Red Hat Developer Hub Red Hat Developer Hub 1.3 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/administration_guide_for_red_hat_developer_hub/index |
Chapter 2. NFV performance considerations | Chapter 2. NFV performance considerations For a network functions virtualization (NFV) solution to be useful, its virtualized functions must meet or exceed the performance of physical implementations. Red Hat's virtualization technologies are based on the high-performance Kernel-based Virtual Machine (KVM) hypervisor, common in OpenStack and cloud deployments. Red Hat OpenStack Platform director configures the Compute nodes to enforce resource partitioning and fine tuning to achieve line rate performance for the guest virtual network functions (VNFs). The key performance factors in the NFV use case are throughput, latency, and jitter. You can enable high-performance packet switching between physical NICs and virtual machines using data plane development kit (DPDK) accelerated virtual machines. OVS 2.10 embeds support for DPDK 17 and includes support for vhost-user multiqueue, allowing scalable performance. OVS-DPDK provides line-rate performance for guest VNFs. Single root I/O virtualization (SR-IOV) networking provides enhanced performance, including improved throughput for specific networks and virtual machines. Other important features for performance tuning include huge pages, NUMA alignment, host isolation, and CPU pinning. VNF flavors require huge pages and emulator thread isolation for better performance. Host isolation and CPU pinning improve NFV performance and prevent spurious packet loss. 2.1. CPUs and NUMA nodes Previously, all memory on x86 systems was equally accessible to all CPUs in the system. This resulted in memory access times that were the same regardless of which CPU in the system was performing the operation and was referred to as Uniform Memory Access (UMA). In Non-Uniform Memory Access (NUMA), system memory is divided into zones called nodes, which are allocated to particular CPUs or sockets. Access to memory that is local to a CPU is faster than memory connected to remote CPUs on that system. Normally, each socket on a NUMA system has a local memory node whose contents can be accessed faster than the memory in the node local to another CPU or the memory on a bus shared by all CPUs. Similarly, physical NICs are placed in PCI slots on the Compute node hardware. These slots connect to specific CPU sockets that are associated with a particular NUMA node. For optimum performance, connect your datapath NICs to the same NUMA nodes in your CPU configuration (SR-IOV or OVS-DPDK). The performance impact of NUMA misses is significant, generally starting at a 10% performance hit or higher. Each CPU socket can have multiple CPU cores, which are treated as individual CPUs for virtualization purposes. Tip For more information about NUMA, see What is NUMA and how does it work on Linux? 2.1.1. NUMA node example The following diagram provides an example of a two-node NUMA system and the way the CPU cores and memory pages are made available: Figure 2.1. Example: two-node NUMA system Note Remote memory available via Interconnect is accessed only if VM1 from NUMA node 0 has a CPU core in NUMA node 1. In this case, the memory of NUMA node 1 acts as local for the third CPU core of VM1 (for example, if VM1 is allocated with CPU 4 in the diagram above), but at the same time, it acts as remote memory for the other CPU cores of the same VM. 2.1.2. NUMA aware instances You can configure an OpenStack environment to use NUMA topology awareness on systems with a NUMA architecture. 
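Before aligning datapath NICs and guests to NUMA nodes, it helps to inspect the host topology first. A minimal sketch (numactl may need to be installed separately; lscpu is part of util-linux):
numactl --hardware     # lists NUMA nodes with their CPUs and memory sizes
lscpu | grep -i numa   # shows the NUMA node count and the CPU-to-node mapping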
When running a guest operating system in a virtual machine (VM) there are two NUMA topologies involved: the NUMA topology of the physical hardware of the host the NUMA topology of the virtual hardware exposed to the guest operating system You can optimize the performance of guest operating systems by aligning the virtual hardware with the physical hardware NUMA topology. 2.2. CPU pinning CPU pinning is the ability to run a specific virtual machine's virtual CPU on a specific physical CPU, in a given host. vCPU pinning provides similar advantages to task pinning on bare-metal systems. Since virtual machines run as user space tasks on the host operating system, pinning increases cache efficiency. For details on how to configure CPU pinning, see Configuring CPU pinning on Compute nodes in the Configuring the Compute service for instance creation guide. 2.3. Huge pages Physical memory is segmented into contiguous regions called pages. For efficiency, the system retrieves memory by accessing entire pages instead of individual bytes of memory. To perform this translation, the system looks in the Translation Lookaside Buffers (TLB) that contain the physical to virtual address mappings for the most recently or frequently used pages. When the system cannot find a mapping in the TLB, the processor must iterate through all of the page tables to determine the address mappings. Optimize the TLB to minimize the performance penalty that occurs during these TLB misses. The typical page size in an x86 system is 4KB, with other larger page sizes available. Larger page sizes mean that there are fewer pages overall, and therefore increases the amount of system memory that can have its virtual to physical address translation stored in the TLB. Consequently, this reduces TLB misses, which increases performance. With larger page sizes, there is an increased potential for memory to be under-utilized as processes must allocate in pages, but not all of the memory is likely required. As a result, choosing a page size is a compromise between providing faster access times with larger pages, and ensuring maximum memory utilization with smaller pages. 2.4. Port security Port security is an anti-spoofing measure that blocks any egress traffic that does not match the source IP and source MAC address of the originating network port. You cannot view or modify this behavior using security group rules. By default, the port_security_enabled parameter is set to enabled on newly created Neutron networks in OpenStack. Newly created ports copy the value of the port_security_enabled parameter from the network they are created on. For some NFV use cases, such as building a firewall or router, you must disable port security. To disable port security on a single port, run the following command: To prevent port security from being enabled on any newly created port on a network, run the following command: | [
"openstack port set --disable-port-security <port-id>",
"openstack network set --disable-port-security <network-id>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/network_functions_virtualization_planning_and_configuration_guide/nfv-perf-consider_rhosp-nfv |
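As a follow-up to the huge pages and CPU pinning sections above, a hedged sketch for confirming the configuration on a Compute node (the instance name is a placeholder):
grep -i huge /proc/meminfo      # shows HugePages_Total, HugePages_Free, and Hugepagesize
virsh vcpupin <instance_name>   # lists which host CPUs each vCPU is pinned to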
Upgrading SAP environments from RHEL 7 to RHEL 8 | Upgrading SAP environments from RHEL 7 to RHEL 8 Red Hat Enterprise Linux for SAP Solutions 8 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/upgrading_sap_environments_from_rhel_7_to_rhel_8/index |
Chapter 1. Network APIs | Chapter 1. Network APIs 1.1. CloudPrivateIPConfig [cloud.network.openshift.io/v1] Description CloudPrivateIPConfig performs an assignment of a private IP address to the primary NIC associated with cloud VMs. This is done by specifying the IP and Kubernetes node which the IP should be assigned to. This CRD is intended to be used by the network plugin which manages the cluster network. The spec side represents the desired state requested by the network plugin, and the status side represents the current state that this CRD's controller has executed. No users will have permission to modify it, and if a cluster-admin decides to edit it for some reason, their changes will be overwritten the time the network plugin reconciles the object. Note: the CR's name must specify the requested private IP address (can be IPv4 or IPv6). Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. EgressFirewall [k8s.ovn.org/v1] Description EgressFirewall describes the current egress firewall for a Namespace. Traffic from a pod to an IP address outside the cluster will be checked against each EgressFirewallRule in the pod's namespace's EgressFirewall, in order. If no rule matches (or no EgressFirewall is present) then the traffic will be allowed by default. Type object 1.3. EgressIP [k8s.ovn.org/v1] Description EgressIP is a CRD allowing the user to define a fixed source IP for all egress traffic originating from any pods which match the EgressIP resource according to its spec definition. Type object 1.4. EgressQoS [k8s.ovn.org/v1] Description EgressQoS is a CRD that allows the user to define a DSCP value for pods egress traffic on its namespace to specified CIDRs. Traffic from these pods will be checked against each EgressQoSRule in the namespace's EgressQoS, and if there is a match the traffic is marked with the relevant DSCP value. Type object 1.5. Endpoints [v1] Description Endpoints is a collection of endpoints that implement the actual service. Example: Type object 1.6. EndpointSlice [discovery.k8s.io/v1] Description EndpointSlice represents a subset of the endpoints that implement a service. For a given service there may be multiple EndpointSlice objects, selected by labels, which must be joined to produce the full set of endpoints. Type object 1.7. EgressRouter [network.operator.openshift.io/v1] Description EgressRouter is a feature allowing the user to define an egress router that acts as a bridge between pods and external systems. The egress router runs a service that redirects egress traffic originating from a pod or a group of pods to a remote external system or multiple destinations as per configuration. It is consumed by the cluster-network-operator. More specifically, given an EgressRouter CR with <name>, the CNO will create and manage: - A service called <name> - An egress pod called <name> - A NAD called <name> Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). EgressRouter is a single egressrouter pod configuration object. Type object 1.8. Ingress [networking.k8s.io/v1] Description Ingress is a collection of rules that allow inbound connections to reach the endpoints defined by a backend. An Ingress can be configured to give services externally-reachable urls, load balance traffic, terminate SSL, offer name based virtual hosting etc. Type object 1.9. 
IngressClass [networking.k8s.io/v1] Description IngressClass represents the class of the Ingress, referenced by the Ingress Spec. The ingressclass.kubernetes.io/is-default-class annotation can be used to indicate that an IngressClass should be considered default. When a single IngressClass resource has this annotation set to true, new Ingress resources without a class specified will be assigned this default class. Type object 1.10. IPPool [whereabouts.cni.cncf.io/v1alpha1] Description IPPool is the Schema for the ippools API Type object 1.11. NetworkAttachmentDefinition [k8s.cni.cncf.io/v1] Description NetworkAttachmentDefinition is a CRD schema specified by the Network Plumbing Working Group to express the intent for attaching pods to one or more logical or physical networks. More information available at: https://github.com/k8snetworkplumbingwg/multi-net-spec Type object 1.12. NetworkPolicy [networking.k8s.io/v1] Description NetworkPolicy describes what network traffic is allowed for a set of Pods Type object 1.13. OverlappingRangeIPReservation [whereabouts.cni.cncf.io/v1alpha1] Description OverlappingRangeIPReservation is the Schema for the OverlappingRangeIPReservations API Type object 1.14. PodNetworkConnectivityCheck [controlplane.operator.openshift.io/v1alpha1] Description PodNetworkConnectivityCheck Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object 1.15. Route [route.openshift.io/v1] Description A route allows developers to expose services through an HTTP(S) aware load balancing and proxy layer via a public DNS entry. The route may further specify TLS options and a certificate, or specify a public CNAME that the router should also accept for HTTP and HTTPS traffic. An administrator typically configures their router to be visible outside the cluster firewall, and may also add additional security, caching, or traffic controls on the service content. Routers usually talk directly to the service endpoints. Once a route is created, the host field may not be changed. Generally, routers use the oldest route with a given host when resolving conflicts. Routers are subject to additional customization and may support additional controls via the annotations field. Because administrators may configure multiple routers, the route status field is used to return information to clients about the names and states of the route under each router. If a client chooses a duplicate name, for instance, the route status conditions are used to indicate the route cannot be chosen. To enable HTTP/2 ALPN on a route it requires a custom (non-wildcard) certificate. This prevents connection coalescing by clients, notably web browsers. We do not support HTTP/2 ALPN on routes that use the default certificate because of the risk of connection re-use/coalescing. Routes that do not have their own custom certificate will not be HTTP/2 ALPN-enabled on either the frontend or the backend. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.16. Service [v1] Description Service is a named abstraction of software service (for example, mysql) consisting of local port (for example 3306) that the proxy listens on, and the selector that determines which pods will answer requests sent through the proxy. Type object | [
"Name: \"mysvc\", Subsets: [ { Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}], Ports: [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}] }, { Addresses: [{\"ip\": \"10.10.3.3\"}], Ports: [{\"name\": \"a\", \"port\": 93}, {\"name\": \"b\", \"port\": 76}] }, ]"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/network_apis/network-apis |
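A hedged illustration of reading some of the APIs listed above on a running cluster with the oc client (which resources exist depends on the installed network plugin and operators):
oc get cloudprivateipconfig
oc get egressip
oc get networkpolicy --all-namespaces
oc get route --all-namespaces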
function::strtol | function::strtol Name function::strtol - strtol - Convert a string to a long Synopsis Arguments str string to convert base the base to use Description This function converts the string representation of a number to an integer. The base parameter indicates the number base to assume for the string (eg. 16 for hex, 8 for octal, 2 for binary). | [
"strtol:long(str:string,base:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-strtol |
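A brief illustration of the conversion, run as a hypothetical one-liner from the shell:
stap -e 'probe begin { printf("%d\n", strtol("ff", 16)); exit() }'    # prints 255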
Chapter 35. Using Complex Types | Chapter 35. Using Complex Types Abstract Complex types can contain multiple elements and they can have attributes. They are mapped into Java classes that can hold the data represented by the type definition. Typically, the mapping is to a bean with a set of properties representing the elements and the attributes of the content model.. 35.1. Basic Complex Type Mapping Overview XML Schema complex types define constructs containing more complex information than a simple type. The most simple complex types define an empty element with an attribute. More intricate complex types are made up of a collection of elements. By default, an XML Schema complex type is mapped to a Java class, with a member variable to represent each element and attribute listed in the XML Schema definition. The class has setters and getters for each member variable. Defining in XML Schema XML Schema complex types are defined using the complexType element. The complexType element wraps the rest of elements used to define the structure of the data. It can appear either as the parent element of a named type definition, or as the child of an element element anonymously defining the structure of the information stored in the element. When the complexType element is used to define a named type, it requires the use of the name attribute. The name attribute specifies a unique identifier for referencing the type. Complex type definitions that contain one or more elements have one of the child elements described in Table 35.1, "Elements for Defining How Elements Appear in a Complex Type" . These elements determine how the specified elements appear in an instance of the type. Table 35.1. Elements for Defining How Elements Appear in a Complex Type Element Description all All of the elements defined as part of the complex type must appear in an instance of the type. However, they can appear in any order. choice Only one of the elements defined as part of the complex type can appear in an instance of the type. sequence All of the elements defined as part of the complex type must appear in an instance of the type, and they must also appear in the order specified in the type definition. Note If a complex type definition only uses attributes, you do not need one of the elements described in Table 35.1, "Elements for Defining How Elements Appear in a Complex Type" . After deciding how the elements will appear, you define the elements by adding one or more element element children to the definition. Example 35.1, "XML Schema Complex Type" shows a complex type definition in XML Schema. Example 35.1. XML Schema Complex Type Mapping to Java XML Schema complex types are mapped to Java classes. Each element in the complex type definition is mapped to a member variable in the Java class. Getter and setter methods are also generated for each element in the complex type. All generated Java classes are decorated with the @XmlType annotation. If the mapping is for a named complex type, the annotations name is set to the value of the complexType element's name attribute. If the complex type is defined as part of an element definition, the value of the @XmlType annotation's name property is the value of the element element's name attribute. Note As described in the section called "Java mapping of elements with an in-line type" , the generated class is decorated with the @XmlRootElement annotation if it is generated for a complex type defined as part of an element definition. 
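One practical way to see these generated classes is to run a JAXB-based code generator, such as xjc, over a schema. This is only an illustrative sketch; the schema file, package, and output directory names are placeholders, and Apache CXF tooling such as wsdl2java produces equivalent classes for schemas embedded in a WSDL:
xjc -d src -p com.example.widgets widgets.xsd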
To provide the runtime with guidelines indicating how the elements of the XML Schema complex type should be handled, the code generators alter the annotations used to decorate the class and its member variables. All Complex Type All complex types are defined using the all element. They are annotated as follows: The @XmlType annotation's propOrder property is empty. Each element is decorated with the @XmlElement annotation. The @XmlElement annotation's required property is set to true . Example 35.2, "Mapping of an All Complex Type" shows the mapping for an all complex type with two elements. Example 35.2. Mapping of an All Complex Type Choice Complex Type Choice complex types are defined using the choice element. They are annotated as follows: The @XmlType annotation's propOrder property lists the names of the elements in the order they appear in the XML Schema definition. None of the member variables are annotated. Example 35.3, "Mapping of a Choice Complex Type" shows the mapping for a choice complex type with two elements. Example 35.3. Mapping of a Choice Complex Type Sequence Complex Type A sequence complex type is defined using the sequence element. It is annotated as follows: The @XmlType annotation's propOrder property lists the names of the elements in the order they appear in the XML Schema definition. Each element is decorated with the @XmlElement annotation. The @XmlElement annotation's required property is set to true . Example 35.4, "Mapping of a Sequence Complex Type" shows the mapping for the complex type defined in Example 35.1, "XML Schema Complex Type" . Example 35.4. Mapping of a Sequence Complex Type 35.2. Attributes Overview Apache CXF supports the use of attribute elements and attributeGroup elements within the scope of a complexType element. When defining structures for an XML document attribute declarations provide a means of adding information that is specified within the tag, not the value that the tag contains. For example, when describing the XML element <value currency="euro">410<\value> in XML Schema the currency attribute is described using an attribute element as shown in Example 35.5, "XML Schema Defining and Attribute" . The attributeGroup element allows you to define a group of reusable attributes that can be referenced by all complex types defined by the schema. For example, if you are defining a series of elements that all use the attributes category and pubDate , you could define an attribute group with these attributes and reference them in all the elements that use them. This is shown in Example 35.7, "Attribute Group Definition" . When describing data types for use in developing application logic, attributes whose use attribute is set to either optional or required are treated as elements of a structure. For each attribute declaration contained within a complex type description, an element is generated in the class for the attribute, along with the appropriate getter and setter methods. Defining an attribute in XML Schema An XML Schema attribute element has one required attribute, name , that is used to identify the attribute. It also has four optional attributes that are described in Table 35.2, "Optional Attributes Used to Define Attributes in XML Schema" . Table 35.2. Optional Attributes Used to Define Attributes in XML Schema Attribute Description use Specifies if the attribute is required. Valid values are required , optional , or prohibited . optional is the default value. type Specifies the type of value the attribute can take. 
If it is not used the schema type of the attribute must be defined in-line. default Specifies a default value to use for the attribute. It is only used when the attribute element's use attribute is set to optional . fixed Specifies a fixed value to use for the attribute. It is only used when the attribute element's use attribute is set to optional . Example 35.5, "XML Schema Defining and Attribute" shows an attribute element defining an attribute, currency, whose value is a string. Example 35.5. XML Schema Defining and Attribute If the type attribute is omitted from the attribute element, the format of the data must be described in-line. Example 35.6, "Attribute with an In-Line Data Description" shows an attribute element for an attribute, category , that can take the values autobiography , non-fiction , or fiction . Example 35.6. Attribute with an In-Line Data Description Using an attribute group in XML Schema Using an attribute group in a complex type definition is a two step process: Define the attribute group. An attribute group is defined using an attributeGroup element with a number of attribute child elements. The attributeGroup requires a name attribute that defines the string used to refer to the attribute group. The attribute elements define the members of the attribute group and are specified as shown in the section called "Defining an attribute in XML Schema" . Example 35.7, "Attribute Group Definition" shows the description of the attribute group catalogIndecies . The attribute group has two members: category , which is optional, and pubDate , which is required. Example 35.7. Attribute Group Definition Use the attribute group in the definition of a complex type. You use attribute groups in complex type definitions by using the attributeGroup element with the ref attribute. The value of the ref attribute is the name given the attribute group that you want to use as part of the type definition. For example if you want to use the attribute group catalogIndecies in the complex type dvdType , you would use <attributeGroup ref="catalogIndecies" /> as shown in Example 35.8, "Complex Type with an Attribute Group" . Example 35.8. Complex Type with an Attribute Group Mapping attributes to Java Attributes are mapped to Java in much the same way that member elements are mapped to Java. Required attributes and optional attributes are mapped to member variables in the generated Java class. The member variables are decorated with the @XmlAttribute annotation. If the attribute is required, the @XmlAttribute annotation's required property is set to true . The complex type defined in Example 35.9, "techDoc Description" is mapped to the Java class shown in Example 35.10, "techDoc Java Class" . Example 35.9. techDoc Description Example 35.10. techDoc Java Class As shown in Example 35.10, "techDoc Java Class" , the default attribute and the fixed attribute instruct the code generators to add code to the getter method generated for the attribute. This additional code ensures that the specified value is returned if no value is set. Important The fixed attribute is treated the same as the default attribute. If you want the fixed attribute to be treated as a Java constant you can use the customization described in Section 38.5, "Customizing Fixed Value Attribute Mapping" . Mapping attribute groups to Java Attribute groups are mapped to Java as if the members of the group were explicitly used in the type definition. 
If the attribute group has three members, and it is used in a complex type, the generated class for that type will include a member variable, along with the getter and setter methods, for each member of the attribute group. For example, the complex type defined in Example 35.8, "Complex Type with an Attribute Group" , Apache CXF generates a class containing the member variables category and pubDate to support the members of the attribute group as shown in Example 35.11, "dvdType Java Class" . Example 35.11. dvdType Java Class 35.3. Deriving Complex Types from Simple Types Overview Apache CXF supports derivation of a complex type from a simple type. A simple type has, by definition, neither sub-elements nor attributes. Hence, one of the main reasons for deriving a complex type from a simple type is to add attributes to the simple type. There are two ways of deriving a complex type from a simple type: By extension By restriction Derivation by extension Example 35.12, "Deriving a Complex Type from a Simple Type by Extension" shows an example of a complex type, internationalPrice , derived by extension from the xsd:decimal primitive type to include a currency attribute. Example 35.12. Deriving a Complex Type from a Simple Type by Extension The simpleContent element indicates that the new type does not contain any sub-elements. The extension element specifies that the new type extends xsd:decimal . Derivation by restriction Example 35.13, "Deriving a Complex Type from a Simple Type by Restriction" shows an example of a complex type, idType , that is derived by restriction from xsd:string . The defined type restricts the possible values of xsd:string to values that are ten characters in length. It also adds an attribute to the type. Example 35.13. Deriving a Complex Type from a Simple Type by Restriction As in Example 35.12, "Deriving a Complex Type from a Simple Type by Extension" the simpleContent element signals that the new type does not contain any children. This example uses a restriction element to constrain the possible values used in the new type. The attribute element adds the element to the new type. Mapping to Java A complex type derived from a simple type is mapped to a Java class that is decorated with the @XmlType annotation. The generated class contains a member variable, value , of the simple type from which the complex type is derived. The member variable is decorated with the @XmlValue annotation. The class also has a getValue() method and a setValue() method. In addition, the generated class has a member variable, and the associated getter and setter methods, for each attribute that extends the simple type. Example 35.14, "idType Java Class" shows the Java class generated for the idType type defined in Example 35.13, "Deriving a Complex Type from a Simple Type by Restriction" . Example 35.14. idType Java Class 35.4. Deriving Complex Types from Complex Types Overview Using XML Schema, you can derive new complex types by either extending or restricting other complex types using the complexContent element. When generating the Java class to represent the derived complex type, Apache CXF extends the base type's class. In this way, the generated Java code preserves the inheritance hierarchy intended in the XML Schema. Schema syntax You derive complex types from other complex types by using the complexContent element, and either the extension element or the restriction element. The complexContent element specifies that the included data description includes more than one field. 
The extension element and the restriction element, which are children of the complexContent element, specify the base type being modified to create the new type. The base type is specified by the base attribute. Extending a complex type To extend a complex type use the extension element to define the additional elements and attributes that make up the new type. All elements that are allowed in a complex type description are allowable as part of the new type's definition. For example, you can add an anonymous enumeration to the new type, or you can use the choice element to specify that only one of the new fields can be valid at a time. Example 35.15, "Deriving a Complex Type by Extension" shows an XML Schema fragment that defines two complex types, widgetOrderInfo and widgetOrderBillInfo . widgetOrderBillInfo is derived by extending widgetOrderInfo to include two new elements: orderNumber and amtDue . Example 35.15. Deriving a Complex Type by Extension Restricting a complex type To restrict a complex type use the restriction element to limit the possible values of the base type's elements or attributes. When restricting a complex type you must list all of the elements and attributes of the base type. For each element you can add restrictive attributes to the definition. For example, you can add a maxOccurs attribute to an element to limit the number of times it can occur. You can also use the fixed attribute to force one or more of the elements to have predetermined values. Example 35.16, "Defining a Complex Type by Restriction" shows an example of defining a complex type by restricting another complex type. The restricted type, wallawallaAddress , can only be used for addresses in Walla Walla, Washington because the values for the city element, the state element, and the zipCode element are fixed. Example 35.16. Defining a Complex Type by Restriction Mapping to Java As it does with all complex types, Apache CXF generates a class to represent complex types derived from another complex type. The Java class generated for the derived complex type extends the Java class generated to support the base complex type. The base Java class is also modified to include the @XmlSeeAlso annotation. The base class' @XmlSeeAlso annotation lists all of the classes that extend the base class. When the new complex type is derived by extension, the generated class will include member variables for all of the added elements and attributes. The new member variables will be generated according to the same mappings as all other elements. When the new complex type is derived by restriction, the generated class will have no new member variables. The generated class will simply be a shell that does not provide any additional functionality. It is entirely up to you to ensure that the restrictions defined in the XML Schema are enforced. For example, the schema in Example 35.15, "Deriving a Complex Type by Extension" results in the generation of two Java classes: WidgetOrderInfo and WidgetBillOrderInfo . WidgetOrderBillInfo extends WidgetOrderInfo because widgetOrderBillInfo is derived by extension from widgetOrderInfo . Example 35.17, "WidgetOrderBillInfo" shows the generated class for widgetOrderBillInfo . Example 35.17. WidgetOrderBillInfo 35.5. Occurrence Constraints 35.5.1. 
Schema Elements Supporting Occurrence Constraints XML Schema allows you to specify the occurrence constraints on four of the XML Schema elements that make up a complex type definition: Section 35.5.2, "Occurrence Constraints on the All Element" Section 35.5.3, "Occurrence Constraints on the Choice Element" Section 35.5.4, "Occurrence Constraints on Elements" Section 35.5.5, "Occurrence Constraints on Sequences" 35.5.2. Occurrence Constraints on the All Element XML Schema Complex types defined with the all element do not allow for multiple occurrences of the structure defined by the all element. You can, however, make the structure defined by the all element optional by setting its minOccurs attribute to 0 . Mapping to Java Setting the all element's minOccurs attribute to 0 has no effect on the generated Java class. 35.5.3. Occurrence Constraints on the Choice Element Overview By default, the results of a choice element can only appear once in an instance of a complex type. You can change the number of times the element chosen to represent the structure defined by a choice element is allowed to appear using its minOccurs attribute and its mxOccurs attribute. Using these attributes you can specify that the choice type can occur zero to an unlimited number of times in an instance of a complex type. The element chosen for the choice type does not need to be the same for each occurrence of the type. Using in XML Schema The minOccurs attribute specifies the minimum number of times the choice type must appear. Its value can be any positive integer. Setting the minOccurs attribute to 0 specifies that the choice type does not need to appear inside an instance of the complex type. The maxOccurs attribute specifies the maximum number of times the choice type can appear. Its value can be any non-zero, positive integer or unbounded . Setting the maxOccurs attribute to unbounded specifies that the choice type can appear an infinite number of times. Example 35.18, "Choice Occurrence Constraints" shows the definition of a choice type, ClubEvent , with choice occurrence constraints. The choice type overall can be repeated 0 to unbounded times. Example 35.18. Choice Occurrence Constraints Mapping to Java Unlike single instance choice structures, XML Schema choice structures that can occur multiple times are mapped to a Java class with a single member variable. This single member variable is a List<T> object that holds all of the data for the multiple occurrences of the sequence. For example, if the sequence defined in Example 35.18, "Choice Occurrence Constraints" occurred two times, then the list would have two items. The name of the Java class' member variable is derived by concatenating the names of the member elements. The element names are separated by Or and the first letter of the variable name is converted to lower case. For example, the member variable generated from Example 35.18, "Choice Occurrence Constraints" would be named memberNameOrGuestName . The type of object stored in the list depends on the relationship between the types of the member elements. For example: If the member elements are of the same type the generated list will contain JAXBElement<T> objects. The base type of the JAXBElement<T> objects is determined by the normal mapping of the member elements' type. If the member elements are of different types and their Java representations implement a common interface, the list will contains objects of the common interface. 
If the member elements are of different types and their Java representations extend a common base class, the list will contains objects of the common base class. If none of the other conditions are met, the list will contain Object objects. The generated Java class will only have a getter method for the member variable. The getter method returns a reference to the live list. Any modifications made to the returned list will effect the actual object. The Java class is decorated with the @XmlType annotation. The annotation's name property is set to the value of the name attribute from the parent element of the XML Schema definition. The annotation's propOrder property contains the single member variable representing the elements in the sequence. The member variable representing the elements in the choice structure are decorated with the @XmlElements annotation. The @XmlElements annotation contains a comma separated list of @XmlElement annotations. The list has one @XmlElement annotation for each member element defined in the XML Schema definition of the type. The @XmlElement annotations in the list have their name property set to the value of the XML Schema element element's name attribute and their type property set to the Java class resulting from the mapping of the XML Schema element element's type. Example 35.19, "Java Representation of Choice Structure with an Occurrence Constraint" shows the Java mapping for the XML Schema choice structure defined in Example 35.18, "Choice Occurrence Constraints" . Example 35.19. Java Representation of Choice Structure with an Occurrence Constraint minOccurs set to 0 If only the minOccurs element is specified and its value is 0 , the code generators generate the Java class as if the minOccurs attribute were not set. 35.5.4. Occurrence Constraints on Elements Overview You can specify how many times a specific element in a complex type appears using the element element's minOccurs attribute and maxOccurs attribute. The default value for both attributes is 1 . minOccurs set to 0 When you set one of the complex type's member element's minOccurs attribute to 0 , the @XmlElement annotation decorating the corresponding Java member variable is changed. Instead of having its required property set to true , the @XmlElement annotation's required property is set to false . minOccurs set to a value greater than 1 In XML Schema you can specify that an element must occur more than once in an instance of the type by setting the element element's minOccurs attribute to a value greater than one. However, the generated Java class will not support the XML Schema constraint. Apache CXF generates the supporting Java member variable as if the minOccurs attribute were not set. Elements with maxOccurs set When you want a member element to appear multiple times in an instance of a complex type, you set the element's maxOccurs attribute to a value greater than 1. You can set the maxOccurs attribute's value to unbounded to specify that the member element can appear an unlimited number of times. The code generators map a member element with the maxOccurs attribute set to a value greater than 1 to a Java member variable that is a List<T> object. The base class of the list is determined by mapping the element's type to Java. For XML Schema primitive types, the wrapper classes are used as described in the section called "Wrapper classes" . For example, if the member element is of type xsd:int the generated member variable is a List<Integer> object. 35.5.5. 
Occurrence Constraints on Sequences Overview By default, the contents of a sequence element can only appear once in an instance of a complex type. You can change the number of times the sequence of elements defined by a sequence element is allowed to appear using its minOccurs attribute and its maxOccurs attribute. Using these attributes you can specify that the sequence type can occur zero to an unlimited number of times in an instance of a complex type. Using XML Schema The minOccurs attribute specifies the minimum number of times the sequence must occur in an instance of the defined complex type. Its value can be any positive integer. Setting the minOccurs attribute to 0 specifies that the sequence does not need to appear inside an instance of the complex type. The maxOccurs attribute specifies the upper limit for how many times the sequence can occur in an instance of the defined complex type. Its value can be any non-zero, positive integer or unbounded . Setting the maxOccurs attribute to unbounded specifies that the sequence can appear an infinite number of times. Example 35.20, "Sequence with Occurrence Constraints" shows the definition of a sequence type, CultureInfo , with sequence occurrence constraints. The sequence can be repeated 0 to 2 times. Example 35.20. Sequence with Occurrence Constraints Mapping to Java Unlike single instance sequences, XML Schema sequences that can occur multiple times are mapped to a Java class with a single member variable. This single member variable is a List<T> object that holds all of the data for the multiple occurrences of the sequence. For example, if the sequence defined in Example 35.20, "Sequence with Occurrence Constraints" occurred two times, then the list would have four items. The name of the Java class' member variable is derived by concatenating the names of the member elements. The element names are separated by And and the first letter of the variable name is converted to lower case. For example, the member variable generated from Example 35.20, "Sequence with Occurrence Constraints" is named nameAndLcid . The type of object stored in the list depends on the relationship between the types of the member elements. For example: If the member elements are of the same type the generated list will contain JAXBElement<T> objects. The base type of the JAXBElement<T> objects is determined by the normal mapping of the member elements' type. If the member elements are of different types and their Java representations implement a common interface, the list will contains objects of the common interface. If the member elements are of different types and their Java representations extend a common base class, the list will contain objects of the common base class. If none of the other conditions are met, the list will contain Object objects. The generated Java class only has a getter method for the member variable. The getter method returns a reference to the live list. Any modifications made to the returned list effects the actual object. The Java class is decorated with the @XmlType annotation. The annotation's name property is set to the value of the name attribute from the parent element of the XML Schema definition. The annotation's propOrder property contains the single member variable representing the elements in the sequence. The member variable representing the elements in the sequence are decorated with the @XmlElements annotation. The @XmlElements annotation contains a comma separated list of @XmlElement annotations. 
The list has one @XmlElement annotation for each member element defined in the XML Schema definition of the type. The @XmlElement annotations in the list have their name property set to the value of the XML Schema element element's name attribute and their type property set to the Java class resulting from the mapping of the XML Schema element element's type. Example 35.21, "Java Representation of Sequence with an Occurrence Constraint" shows the Java mapping for the XML Schema sequence defined in Example 35.20, "Sequence with Occurrence Constraints" . Example 35.21. Java Representation of Sequence with an Occurrence Constraint minOccurs set to 0 If only the minOccurs element is specified and its value is 0 , the code generators generate the Java class as if the minOccurs attribute is not set. 35.6. Using Model Groups Overview XML Schema model groups are convenient shortcuts that allows you to reference a group of elements from a user-defined complex type.For example, you can define a group of elements that are common to several types in your application and then reference the group repeatedly. Model groups are defined using the group element, and are similar to complex type definitions. The mapping of model groups to Java is also similar to the mapping for complex types. Defining a model group in XML Schema You define a model group in XML Schema using the group element with the name attribute. The value of the name attribute is a string that is used to refer to the group throughout the schema. The group element, like the complexType element, can have the sequence element, the all element, or the choice element as its immediate child. Inside the child element, you define the members of the group using element elements. For each member of the group, specify one element element. Group members can use any of the standard attributes for the element element including minOccurs and maxOccurs . So, if your group has three elements and one of them can occur up to three times, you define a group with three element elements, one of which uses maxOccurs="3". Example 35.22, "XML Schema Model Group" shows a model group with three elements. Example 35.22. XML Schema Model Group Using a model group in a type definition Once a model group has been defined, it can be used as part of a complex type definition. To use a model group in a complex type definition, use the group element with the ref attribute. The value of the ref attribute is the name given to the group when it was defined. For example, to use the group defined in Example 35.22, "XML Schema Model Group" you use <group ref="tns:passenger" /> as shown in Example 35.23, "Complex Type with a Model Group" . Example 35.23. Complex Type with a Model Group When a model group is used in a type definition, the group becomes a member of the type. So an instance of reservation has four member elements. The first element is the passenger element and it contains the member elements defined by the group shown in Example 35.22, "XML Schema Model Group" . An example of an instance of reservation is shown in Example 35.24, "Instance of a Type with a Model Group" . Example 35.24. Instance of a Type with a Model Group Mapping to Java By default, a model group is only mapped to Java artifacts when it is included in a complex type definition. When generating code for a complex type that includes a model group, Apache CXF simply includes the member variables for the model group into the Java class generated for the type. 
The member variables representing the model group are annotated based on the definitions of the model group. Example 35.25, "Type with a Group" shows the Java class generated for the complex type defined in Example 35.23, "Complex Type with a Model Group" . Example 35.25. Type with a Group Multiple occurrences You can specify that the model group appears more than once by setting the group element's maxOccurs attribute to a value greater than one. To allow for multiple occurrences of the model group Apache CXF maps the model group to a List<T> object. The List<T> object is generated following the rules for the group's first child: If the group is defined using a sequence element see Section 35.5.5, "Occurrence Constraints on Sequences" . If the group is defined using a choice element see Section 35.5.3, "Occurrence Constraints on the Choice Element" . | [
"<complexType name=\"sequence\"> <sequence> <element name=\"name\" type=\"xsd:string\" /> <element name=\"street\" type=\"xsd:short\" /> <element name=\"city\" type=\"xsd:string\" /> <element name=\"state\" type=\"xsd:string\" /> <element name=\"zipCode\" type=\"xsd:string\" /> </sequence> </complexType>",
"@XmlType(name = \"all\", propOrder = { }) public class All { @XmlElement(required = true) protected BigDecimal amount; @XmlElement(required = true) protected String type; public BigDecimal getAmount() { return amount; } public void setAmount(BigDecimal value) { this.amount = value; } public String getType() { return type; } public void setType(String value) { this.type = value; } }",
"@XmlType(name = \"choice\", propOrder = { \"address\", \"floater\" }) public class Choice { protected Sequence address; protected Float floater; public Sequence getAddress() { return address; } public void setAddress(Sequence value) { this.address = value; } public Float getFloater() { return floater; } public void setFloater(Float value) { this.floater = value; } }",
"@XmlType(name = \"sequence\", propOrder = { \"name\", \"street\", \"city\", \"state\", \"zipCode\" }) public class Sequence { @XmlElement(required = true) protected String name; protected short street; @XmlElement(required = true) protected String city; @XmlElement(required = true) protected String state; @XmlElement(required = true) protected String zipCode; public String getName() { return name; } public void setName(String value) { this.name = value; } public short getStreet() { return street; } public void setStreet(short value) { this.street = value; } public String getCity() { return city; } public void setCity(String value) { this.city = value; } public String getState() { return state; } public void setState(String value) { this.state = value; } public String getZipCode() { return zipCode; } public void setZipCode(String value) { this.zipCode = value; } }",
"<element name=\"value\"> <complexType> <xsd:simpleContent> <xsd:extension base=\"xsd:integer\"> <xsd:attribute name=\"currency\" type=\"xsd:string\" use=\"required\"/> </xsd:extension> </xsd:simpleContent> </xsd:complexType> </xsd:element>",
"<attribute name=\"category\" use=\"required\"> <simpleType> <restriction base=\"xsd:string\"> <enumeration value=\"autobiography\"/> <enumeration value=\"non-fiction\"/> <enumeration value=\"fiction\"/> </restriction> </simpleType> </attribute>",
"<attributeGroup name=\"catalogIndices\"> <attribute name=\"category\" type=\"catagoryType\" /> <attribute name=\"pubDate\" type=\"dateTime\" use=\"required\" /> </attributeGroup>",
"<complexType name=\"dvdType\"> <sequence> <element name=\"title\" type=\"xsd:string\" /> <element name=\"director\" type=\"xsd:string\" /> <element name=\"numCopies\" type=\"xsd:int\" /> </sequence> <attributeGroup ref=\"catalogIndices\" /> </complexType>",
"<complexType name=\"techDoc\"> <all> <element name=\"product\" type=\"xsd:string\" /> <element name=\"version\" type=\"xsd:short\" /> </all> <attribute name=\"usefullness\" type=\"xsd:float\" use=\"optional\" default=\"0.01\" /> </complexType>",
"@XmlType(name = \"techDoc\", propOrder = { }) public class TechDoc { @XmlElement(required = true) protected String product; protected short version; @XmlAttribute protected Float usefullness; public String getProduct() { return product; } public void setProduct(String value) { this.product = value; } public short getVersion() { return version; } public void setVersion(short value) { this.version = value; } public float getUsefullness() { if (usefullness == null) { return 0.01F; } else { return usefullness; } } public void setUsefullness(Float value) { this.usefullness = value; } }",
"@XmlType(name = \"dvdType\", propOrder = { \"title\", \"director\", \"numCopies\" }) public class DvdType { @XmlElement(required = true) protected String title; @XmlElement(required = true) protected String director; protected int numCopies; @XmlAttribute protected CatagoryType category; @XmlAttribute(required = true) @XmlSchemaType(name = \"dateTime\") protected XMLGregorianCalendar pubDate; public String getTitle() { return title; } public void setTitle(String value) { this.title = value; } public String getDirector() { return director; } public void setDirector(String value) { this.director = value; } public int getNumCopies() { return numCopies; } public void setNumCopies(int value) { this.numCopies = value; } public CatagoryType getCatagory() { return catagory; } public void setCatagory(CatagoryType value) { this.catagory = value; } public XMLGregorianCalendar getPubDate() { return pubDate; } public void setPubDate(XMLGregorianCalendar value) { this.pubDate = value; } }",
"<complexType name=\"internationalPrice\"> <simpleContent> <extension base=\"xsd:decimal\"> <attribute name=\"currency\" type=\"xsd:string\"/> </extension> </simpleContent> </complexType>",
"<complexType name=\"idType\"> <simpleContent> <restriction base=\"xsd:string\"> <length value=\"10\" /> <attribute name=\"expires\" type=\"xsd:dateTime\" /> </restriction> </simpleContent> </complexType>",
"@XmlType(name = \"idType\", propOrder = { \"value\" }) public class IdType { @XmlValue protected String value; @XmlAttribute @XmlSchemaType(name = \"dateTime\") protected XMLGregorianCalendar expires; public String getValue() { return value; } public void setValue(String value) { this.value = value; } public XMLGregorianCalendar getExpires() { return expires; } public void setExpires(XMLGregorianCalendar value) { this.expires = value; } }",
"<complexType name=\"widgetOrderInfo\"> <sequence> <element name=\"amount\" type=\"xsd:int\"/> <element name=\"order_date\" type=\"xsd:dateTime\"/> <element name=\"type\" type=\"xsd1:widgetSize\"/> <element name=\"shippingAddress\" type=\"xsd1:Address\"/> </sequence> <attribute name=\"rush\" type=\"xsd:boolean\" use=\"optional\" /> </complexType> <complexType name=\"widgetOrderBillInfo\"> <complexContent> <extension base=\"xsd1:widgetOrderInfo\"> <sequence> <element name=\"amtDue\" type=\"xsd:decimal\"/> <element name=\"orderNumber\" type=\"xsd:string\"/> </sequence> <attribute name=\"paid\" type=\"xsd:boolean\" default=\"false\" /> </extension> </complexContent> </complexType>",
"<complexType name=\"Address\"> <sequence> <element name=\"name\" type=\"xsd:string\"/> <element name=\"street\" type=\"xsd:short\" maxOccurs=\"3\"/> <element name=\"city\" type=\"xsd:string\"/> <element name=\"state\" type=\"xsd:string\"/> <element name=\"zipCode\" type=\"xsd:string\"/> </sequence> </complexType> <complexType name=\"wallawallaAddress\"> <complexContent> <restriction base=\"xsd1:Address\"> <sequence> <element name=\"name\" type=\"xsd:string\"/> <element name=\"street\" type=\"xsd:short\" maxOccurs=\"3\"/> <element name=\"city\" type=\"xsd:string\" fixed=\"WallaWalla\"/> <element name=\"state\" type=\"xsd:string\" fixed=\"WA\" /> <element name=\"zipCode\" type=\"xsd:string\" fixed=\"99362\" /> </sequence> </restriction> </complexContent> </complexType>",
"@XmlType(name = \"widgetOrderBillInfo\", propOrder = { \"amtDue\", \"orderNumber\" }) public class WidgetOrderBillInfo extends WidgetOrderInfo { @XmlElement(required = true) protected BigDecimal amtDue; @XmlElement(required = true) protected String orderNumber; @XmlAttribute protected Boolean paid; public BigDecimal getAmtDue() { return amtDue; } public void setAmtDue(BigDecimal value) { this.amtDue = value; } public String getOrderNumber() { return orderNumber; } public void setOrderNumber(String value) { this.orderNumber = value; } public boolean isPaid() { if (paid == null) { return false; } else { return paid; } } public void setPaid(Boolean value) { this.paid = value; } }",
"<complexType name=\"ClubEvent\"> <choice minOccurs=\"0\" maxOccurs=\"unbounded\"> <element name=\"MemberName\" type=\"xsd:string\"/> <element name=\"GuestName\" type=\"xsd:string\"/> </choice> </complexType>",
"@XmlType(name = \"ClubEvent\", propOrder = { \"memberNameOrGuestName\" }) public class ClubEvent { @XmlElementRefs({ @XmlElementRef(name = \"GuestName\", type = JAXBElement.class), @XmlElementRef(name = \"MemberName\", type = JAXBElement.class) }) protected List<JAXBElement<String>> memberNameOrGuestName; public List<JAXBElement<String>> getMemberNameOrGuestName() { if (memberNameOrGuestName == null) { memberNameOrGuestName = new ArrayList<JAXBElement<String>>(); } return this.memberNameOrGuestName; } }",
"<complexType name=\"CultureInfo\"> <sequence minOccurs=\"0\" maxOccurs=\"2\"> <element name=\"Name\" type=\"string\"/> <element name=\"Lcid\" type=\"int\"/> </sequence> </complexType>",
"@XmlType(name = \"CultureInfo\", propOrder = { \"nameAndLcid\" }) public class CultureInfo { @XmlElements({ @XmlElement(name = \"Name\", type = String.class), @XmlElement(name = \"Lcid\", type = Integer.class) }) protected List<Serializable> nameAndLcid; public List<Serializable> getNameAndLcid() { if (nameAndLcid == null) { nameAndLcid = new ArrayList<Serializable>(); } return this.nameAndLcid; } }",
"<group name=\"passenger\"> <sequence> <element name=\"name\" type=\"xsd:string\" /> <element name=\"clubNum\" type=\"xsd:long\" /> <element name=\"seatPref\" type=\"xsd:string\" maxOccurs=\"3\" /> </sequence> </group>",
"<complexType name=\"reservation\"> <sequence> <group ref=\"tns:passenger\" /> <element name=\"origin\" type=\"xsd:string\" /> <element name=\"destination\" type=\"xsd:string\" /> <element name=\"fltNum\" type=\"xsd:long\" /> </sequence> </complexType>",
"<reservation> <passenger> <name>A. Smart</name> <clubNum>99</clubNum> <seatPref>isle1</seatPref> </passenger> <origin>LAX</origin> <destination>FRA</destination> <fltNum>34567</fltNum> </reservation>",
"@XmlType(name = \"reservation\", propOrder = { \"name\", \"clubNum\", \"seatPref\", \"origin\", \"destination\", \"fltNum\" }) public class Reservation { @XmlElement(required = true) protected String name; protected long clubNum; @XmlElement(required = true) protected List<String> seatPref; @XmlElement(required = true) protected String origin; @XmlElement(required = true) protected String destination; protected long fltNum; public String getName() { return name; } public void setName(String value) { this.name = value; } public long getClubNum() { return clubNum; } public void setClubNum(long value) { this.clubNum = value; } public List<String> getSeatPref() { if (seatPref == null) { seatPref = new ArrayList<String>(); } return this.seatPref; } public String getOrigin() { return origin; } public void setOrigin(String value) { this.origin = value; } public String getDestination() { return destination; } public void setDestination(String value) { this.destination = value; } public long getFltNum() { return fltNum; } public void setFltNum(long value) { this.fltNum = value; }"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/jaxwscomplextypemapping |
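The "Multiple occurrences" discussion above explains that a model group referenced with maxOccurs greater than one collapses into a single List<T> member variable, but no generated class is shown for that case. The following sketch illustrates what such a class might look like if the passenger group were referenced with maxOccurs="unbounded"; the type name charterReservation, the member variable name, and the use of List<Serializable> are assumptions derived from the sequence-occurrence mapping rules described earlier, not output captured from the code generators.

// Hypothetical sketch only. Assumes the usual JAXB imports
// (javax.xml.bind.annotation.*), java.util.*, and java.io.Serializable.
@XmlType(name = "charterReservation", propOrder = { "nameAndClubNumAndSeatPref" })
public class CharterReservation {

    @XmlElements({
        @XmlElement(name = "name", type = String.class),
        @XmlElement(name = "clubNum", type = Long.class),
        @XmlElement(name = "seatPref", type = String.class)
    })
    protected List<Serializable> nameAndClubNumAndSeatPref;

    // Only a getter is generated; the returned list is live, so changes to it
    // modify the object directly.
    public List<Serializable> getNameAndClubNumAndSeatPref() {
        if (nameAndClubNumAndSeatPref == null) {
            nameAndClubNumAndSeatPref = new ArrayList<Serializable>();
        }
        return this.nameAndClubNumAndSeatPref;
    }
}

Because the member elements map to String and Long, whose Java representations implement the common interface Serializable, the list is typed with Serializable, mirroring the CultureInfo example.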
Chapter 82. Inference and truth maintenance in the decision engine | Chapter 82. Inference and truth maintenance in the decision engine The basic function of the decision engine is to match data to business rules and determine whether and how to execute rules. To ensure that relevant data is applied to the appropriate rules, the decision engine makes inferences based on existing knowledge and performs the actions based on the inferred information. For example, the following DRL rule determines the age requirements for adults, such as in a bus pass policy: Rule to define age requirement Based on this rule, the decision engine infers whether a person is an adult or a child and performs the specified action (the then consequence). Every person who is 18 years old or older has an instance of IsAdult inserted for them in the working memory. This inferred relation of age and bus pass can then be invoked in any rule, such as in the following rule segment: In many cases, new data in a rule system is the result of other rule executions, and this new data can affect the execution of other rules. If the decision engine asserts data as a result of executing a rule, the decision engine uses truth maintenance to justify the assertion and enforce truthfulness when applying inferred information to other rules. Truth maintenance also helps to identify inconsistencies and to handle contradictions. For example, if two rules are executed and result in a contradictory action, the decision engine chooses the action based on assumptions from previously calculated conclusions. The decision engine inserts facts using either stated or logical insertions: Stated insertions: Defined with insert() . After stated insertions, facts are generally retracted explicitly. (The term insertion , when used generically, refers to stated insertion .) Logical insertions: Defined with insertLogical() . After logical insertions, the facts that were inserted are automatically retracted when the conditions in the rules that inserted the facts are no longer true. The facts are retracted when no condition supports the logical insertion. A fact that is logically inserted is considered to be justified by the decision engine. For example, the following sample DRL rules use stated fact insertion to determine the age requirements for issuing a child bus pass or an adult bus pass: Rules to issue bus pass, stated insertion These rules are not easily maintained in the decision engine as bus riders increase in age and move from child to adult bus pass. As an alternative, these rules can be separated into rules for bus rider age and rules for bus pass type using logical fact insertion. The logical insertion of the fact makes the fact dependent on the truth of the when clause. The following DRL rules use logical insertion to determine the age requirements for children and adults: Children and adult age requirements, logical insertion Important For logical insertions, your fact objects must override the equals and hashCode methods from the java.lang.Object object according to the Java standard. Two objects are equal if their equals methods return true for each other and if their hashCode methods return the same values. For more information, see the Java API documentation for your Java version. When the condition in the rule is false, the fact is automatically retracted. This behavior is helpful in this example because the two rules are mutually exclusive. 
In this example, if the person is younger than 18 years old, the rule logically inserts an IsChild fact. After the person is 18 years old or older, the IsChild fact is automatically retracted and the IsAdult fact is inserted. The following DRL rules then determine whether to issue a child bus pass or an adult bus pass and logically insert the ChildBusPass and AdultBusPass facts. This rule configuration is possible because the truth maintenance system in the decision engine supports chaining of logical insertions for a cascading set of retracts. Rules to issue bus pass, logical insertion When a person turns 18 years old, the IsChild fact and the person's ChildBusPass fact is retracted. To these set of conditions, you can relate another rule that states that a person must return the child pass after turning 18 years old. When the decision engine automatically retracts the ChildBusPass object, the following rule is executed to send a request to the person: Rule to notify bus pass holder of new pass The following flowcharts illustrate the life cycle of stated and logical insertions: Figure 82.1. Stated insertion Figure 82.2. Logical insertion When the decision engine logically inserts an object during a rule execution, the decision engine justifies the object by executing the rule. For each logical insertion, only one equal object can exist, and each subsequent equal logical insertion increases the justification counter for that logical insertion. A justification is removed when the conditions of the rule become untrue. When no more justifications exist, the logical object is automatically retracted. 82.1. Fact equality modes in the decision engine The decision engine supports the following fact equality modes that determine how the decision engine stores and compares inserted facts: identity : (Default) The decision engine uses an IdentityHashMap to store all inserted facts. For every new fact insertion, the decision engine returns a new FactHandle object. If a fact is inserted again, the decision engine returns the original FactHandle object, ignoring repeated insertions for the same fact. In this mode, two facts are the same for the decision engine only if they are the very same object with the same identity. equality : The decision engine uses a HashMap to store all inserted facts. The decision engine returns a new FactHandle object only if the inserted fact is not equal to an existing fact, according to the equals() method of the inserted fact. In this mode, two facts are the same for the decision engine if they are composed the same way, regardless of identity. Use this mode when you want objects to be assessed based on feature equality instead of explicit identity. As an illustration of fact equality modes, consider the following example facts: Example facts In identity mode, facts p1 and p2 are different instances of a Person class and are treated as separate objects because they have separate identities. In equality mode, facts p1 and p2 are treated as the same object because they are composed the same way. This difference in behavior affects how you can interact with fact handles. For example, assume that you insert facts p1 and p2 into the decision engine and later you want to retrieve the fact handle for p1 . In identity mode, you must specify p1 to return the fact handle for that exact object, whereas in equality mode, you can specify p1 , p2 , or new Person("John", 45) to return the fact handle. 
Example code to insert a fact and return the fact handle in identity mode Example code to insert a fact and return the fact handle in equality mode To set the fact equality mode, use one of the following options: Set the system property drools.equalityBehavior to identity (default) or equality . Set the equality mode while creating the KIE base programmatically: KieServices ks = KieServices.get(); KieBaseConfiguration kieBaseConf = ks.newKieBaseConfiguration(); kieBaseConf.setOption(EqualityBehaviorOption.EQUALITY); KieBase kieBase = kieContainer.newKieBase(kieBaseConf); Set the equality mode in the KIE module descriptor file ( kmodule.xml ) for a specific Red Hat Decision Manager project: <kmodule> ... <kbase name="KBase2" default="false" equalsBehavior="equality" packages="org.domain.pkg2, org.domain.pkg3" includes="KBase1"> ... </kbase> ... </kmodule> | [
"rule \"Infer Adult\" when USDp : Person(age >= 18) then insert(new IsAdult(USDp)) end",
"USDp : Person() IsAdult(person == USDp)",
"rule \"Issue Child Bus Pass\" when USDp : Person(age < 18) then insert(new ChildBusPass(USDp)); end rule \"Issue Adult Bus Pass\" when USDp : Person(age >= 18) then insert(new AdultBusPass(USDp)); end",
"rule \"Infer Child\" when USDp : Person(age < 18) then insertLogical(new IsChild(USDp)) end rule \"Infer Adult\" when USDp : Person(age >= 18) then insertLogical(new IsAdult(USDp)) end",
"rule \"Issue Child Bus Pass\" when USDp : Person() IsChild(person == USDp) then insertLogical(new ChildBusPass(USDp)); end rule \"Issue Adult Bus Pass\" when USDp : Person() IsAdult(person =USDp) then insertLogical(new AdultBusPass(USDp)); end",
"rule \"Return ChildBusPass Request\" when USDp : Person() not(ChildBusPass(person == USDp)) then requestChildBusPass(USDp); end",
"Person p1 = new Person(\"John\", 45); Person p2 = new Person(\"John\", 45);",
"ksession.insert(p1); ksession.getFactHandle(p1);",
"ksession.insert(p1); ksession.getFactHandle(p1); // Alternate option: ksession.getFactHandle(new Person(\"John\", 45));",
"KieServices ks = KieServices.get(); KieBaseConfiguration kieBaseConf = ks.newKieBaseConfiguration(); kieBaseConf.setOption(EqualityBehaviorOption.EQUALITY); KieBase kieBase = kieContainer.newKieBase(kieBaseConf);",
"<kmodule> <kbase name=\"KBase2\" default=\"false\" equalsBehavior=\"equality\" packages=\"org.domain.pkg2, org.domain.pkg3\" includes=\"KBase1\"> </kbase> </kmodule>"
] | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/inference-and-truth-maintenance_decision-engine |
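The truth maintenance discussion above notes that fact classes must override the equals and hashCode methods from java.lang.Object for logical insertions, and the equality mode description relies on the same methods. The following sketch shows one way the Person fact used in the examples might implement them; the field set (name and age) is assumed from the examples and is not taken from a product class.

public class Person {
    private String name;
    private int age;

    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public String getName() { return name; }
    public int getAge() { return age; }

    @Override
    public boolean equals(Object o) {
        // Two Person facts are equal when they are composed the same way.
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        Person other = (Person) o;
        return age == other.age && java.util.Objects.equals(name, other.name);
    }

    @Override
    public int hashCode() {
        // Equal objects must return the same hash code.
        return java.util.Objects.hash(name, age);
    }
}

With a class like this, inserting new Person("John", 45) a second time in equality mode returns the fact handle of the existing fact, whereas identity mode treats the two instances as separate facts.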
Chapter 25. Compiler and Tools | Chapter 25. Compiler and Tools The PCRE library now correctly recognizes non-ASCII printable characters as required by Unicode When matching a Unicode string with non-ASCII printable characters using the Perl Compatible Regular Expressions (PCRE) library, the library was previously unable to correctly recognize printable non-ASCII characters. A patch has been applied, and the PCRE library now recognizes printable non-ASCII characters in UTF-8 mode. (BZ# 1400267 ) Applications using Bundler to manage dependencies can now properly load the JSON library Previously, when Bundler was used to manage Ruby application dependencies, it was sometimes impossible to load the JSON library. Consequently, the application failed with a LoadError . This caused problems especially because Ruby on Rails no longer explicitly specifies dependency on the JSON library. With this update, JSON is always available on the load path, and the described problem no longer occurs. (BZ# 1308992 ) Git can now be used with HTTP or HTTPS and SSO Since libcurl version 7.21.7, a new paramater for delegating Kerberos tickets is required because of CVE-2011-2192. Previously, Git did not provide a way to set such a parameter. As a consequence, using Git with Single Sign-On on HTTP or HTTPS connections failed. With this update, Git provides a new http.delegation configuration variable, which corresponds to the cURL --delegation parameter. Users need to set this parameter when delegation of Kerberos tickets is required. (BZ# 1369173 ) rescan-scsi-bus.sh --luns=1 now scans only LUNs numbered with 1 The sg3_utils package contains utilities that send SCSI commands to devices. In version 1.28-5 and all versions of sg3_utils , the rescan-scsi-bus.sh --luns=1 command rescaned only Logical Unit Numbers (LUNs) numbered with 1. After the update to version 1.28-6, rescan-scsi-bus.sh --luns=1 incorrectly rescaned all LUNs. With this update, the underlying source code has been fixed, and rescan-scsi-bus.sh --luns=1 now scans only LUNs numbered with 1. (BZ#1380744) ps no longer removes prefixes from wait channel names The ps utility was previously removing the sys_ and do_ prefixes from wait channel ( WCHAN ) data. This prevented the user from distinguishing functions with names intentionally containing these prefixes in a ps output. The code for prefix removing has been removed, and ps now shows full wait channel names. (BZ# 1373246 ) tcsh no longer becomes unresponsive when the .history file is located on a network file system Previously, if the .history file was located on a network file system, such as NFS or Samba, the tcsh command language interpreter sometimes became unresponsive during the login process. A patch has been applied to avoid .history file-locking if .history is located on a network file system, and tcsh no longer becomes unresponsive in the described situation. Note that having multiple instances of tcsh running can cause .history to become corrupted. To resolve this problem, enable explicit file-locking mechanism by add the lock parameter to the savehist option. For example: The lock option must be the third parameter of the savehist option to force tcsh to use file-locking when .history is located on a network file system. Red Hat does not guarantee that using the lock parameter prevents tcsh from becoming unresponsive during the login process. 
(BZ#1388426) fcoeadm --target no longer causes fcoeadm to crash Previously, executing the fcoeadm --target command sometimes caused the fcoeadm utility to terminate unexpectedly with a segmentation fault. With this update, fcoeadm has been modified to ignore sysfs paths for non-FCoE targets, and fcoeadm --target no longer causes fcoeadm to crash. (BZ#1384707) tar option --directory no longer ignored Previously, the --directory option of the tar command was ignored when used in combination with the --remove-files option. As a consequence, files in the current working directory were removed instead of the files in the directory specified by the --directory option. To fix this bug, new functions and an attribute that retrieve, store, and act upon the --directory option have been added. As a result, files are now correctly removed from the directory specified by the --directory option. (BZ# 1319820 ) tar options --xattrs-exclude and --xattrs-include no longer ignored Previously, the tar command ignored the --xattrs-exclude and --xattrs-include options. To fix this bug, tar has been modified to apply include and exclude masks when fetching extended attributes. As a result, the --xattrs-exclude and --xattrs-include options are no longer ignored. (BZ# 1341786 ) tar now restores incremental backup correctly Previously, the tar command did not restore incremental backup correctly. Consequently, a file removed in the incremental backup was not removed when restoring. The bug has been fixed, and tar now restores incremental backup correctly. (BZ# 1184697 ) The perl-homedir profile scripts now support csh Previously, the perl-homedir profile scripts were unable to handle the C shell (csh) syntax. Consequently, when the the perl-homedir package was installed and the /etc/sysconfig/perl-homedir file contained the PERL_HOMEDIR=0 line, executing the profile scripts resulted in the following error: This update adds support for the csh syntax, and the described problem no longer occurs. (BZ# 1122993 ) getaddrinfo no longer accessing uninitialised data On systems with the nscd daemon enabled, the getaddrinfo() function in the glibc library could access uninitialized data and consequently could return false address information. This update prevents uninitialized data access and ensures that correct addresses are returned. (BZ# 1324568 ) Additional security checks performed in malloc implementation in glibc Previously, because the glibc library was compiled without assertions, the functions implementing malloc did not check the heap consistency. This increased the risk that heap-based buffer overflows could be exploited. The heap consistency check has been converted from an assertion into an explicit check. As a result, the security of calls to malloc implementation in glibc is now increased. (BZ#1326739) chrpath rebased to version 0.16 The chrpath package has been upgraded to upstream version 0.16, which provides a number of bug fixes over the version. Notably, the chrpath tool could only modify the run path property of 64-bit binaries on 64-bit systems, and 32-bit binaries on 32-bit systems. This bug has been fixed, and chrpath on a 64-bit system can now modify run paths of binaries for a 32-bit system and binaries for a 32-bit system on a 64-bit system. (BZ# 1271380 ) Updated translations for the system-config-language package To resolve missing translations for system-config-language , the following 10 languages have been added; de, es, fr, it, ja, ko, pt_BR, ru, zh_CN, zh_TW. 
(BZ#1304223) Mutt no longer send emails with an incomplete From header when a host name lacks the domain part Previously, when a host name did not include the domain name, the Mutt email client sent an email with a From header that was missing the host name. As a consequence, it was impossible to reply to such an email. This bug has been fixed, and Mutt now correctly handles host names that do not contain the domain part. (BZ# 1388512 ) strace displays correctly the O_TMPFILE flag and mode for open() function Previously, the strace utility did not recognize existence of the O_TMPFILE flag for the system function open() and its requirement for presence of mode option. As a consequence, the strace output did not show name of the respective flag and lacked the mode option value. The strace utility has been extended to recognize this situation. As a result, the O_TMPFILE flag and mode are displayed correctly. (BZ# 1377847 ) ld no longer enters an infinite loop when linking large programs In large programs for the IBM Power Systems architecture the .text segment is serviced by two stub sections. Previously, the ld linker sizing termination condition was never satisfied when sizing such segments because one of the sections always had to grow. As a consequence, ld entered an infinite loop and had to be terminated. ld has been extended to recognize this situation and alter the sizing termination condition. As a result, ld terminates correctly. (BZ#1406498) gold warning messages for cross object references to hidden symbols fixed The gold linker produces a warning message when linking shared libraries where the code in one library references a hidden symbol in a second library or object file. Previously, gold produced this warning message even if another library or object file provided a visible definition of the same symbol. To fix this bug, gold has been extended with a check for this specific case and produces the warning message only if there are no visible definitions of the symbol. As a result, gold no longer displays the wrong warning message. (BZ#1326710) OProfile default event on Intel Xeon(R) C3xxx Processors with Denverton SOC fixed Previously, incorrect values were used in the default cycle counting event for OProfile on Intel Xeon(R) C3xxx Processors with Denverton SOC. As a consequence, OProfile sampling and counting using the default event did not work. The relevant OProfile setting was corrected. As a result, the default event now works on Intel Xeon(R) C3xxx Processors with Denverton SOC. (BZ#1380809) | [
"cat /etc/csh.cshrc csh configuration for all shell invocations. set savehist = (1024 merge lock)",
"PERL_HOMEDIR=0: Command not found."
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/bug_fixes_compiler_and_tools |
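The Git note above introduces the http.delegation configuration variable for Kerberos ticket delegation but does not show it being set. As an illustration only, the variable can be set with git config; the value shown here corresponds to one of the settings accepted by the underlying cURL delegation option, and the appropriate value depends on the Kerberos policy in your environment.

git config --global http.delegation always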
Part I. Preparing the RHEL installation | Part I. Preparing the RHEL installation Before installing Red Hat Enterprise Linux (RHEL), ensure that your system meets the necessary hardware and architecture requirements. Additionally, you may want to optimize your installation experience by customizing the installation media or creating a bootable medium tailored to your environment. Registration of your RHEL system to Red Hat provides access to updates and support, which can enhance the system's stability and security. Special attention may also be needed for systems using UEFI Secure Boot, particularly when installing or booting RHEL beta releases. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_from_installation_media/preparing-the-rhel-installation |
23.4. CPU allocation | 23.4. CPU allocation <domain> ... <vcpu placement='static' cpuset="1-4,^3,6" current="1">2</vcpu> ... </domain> Figure 23.6. CPU Allocation The <vcpu> element defines the maximum number of virtual CPUs allocated for the guest virtual machine operating system, which must be between 1 and the maximum number supported by the hypervisor. This element can contain an optional cpuset attribute, which is a comma-separated list of physical CPU numbers that the domain process and virtual CPUs can be pinned to by default. Note that the pinning policy of the domain process and virtual CPUs can be specified separately by using the cputune attribute. If the emulatorpin attribute is specified in <cputune> , cpuset specified by <vcpu> will be ignored. Similarly, virtual CPUs that have set a value for vcpupin cause cpuset settings to be ignored. For virtual CPUs where vcpupin is not specified, it will be pinned to the physical CPUs specified by cpuset . Each element in the cpuset list is either a single CPU number, a range of CPU numbers, or a caret (^) followed by a CPU number to be excluded from a range. The attribute current can be used to specify whether fewer than the maximum number of virtual CPUs should be enabled. The placement optional attribute can be used to indicate the CPU placement mode for domain processes. The value of placement can be set as one of the following: static - pins the vCPU to the physical CPUs defined by the cpuset attribute. If cpuset is not defined, the domain processes will be pinned to all the available physical CPUs. auto - indicates the domain process will be pinned to the advisory nodeset from the querying numad, and the value of attribute cpuset will be ignored if it is specified. Note If the cpuset attribute is used along with placement , the value of placement defaults to the value of the <numatune> element (if it is used), or to static . | [
"<domain> <vcpu placement='static' cpuset=\"1-4,^3,6\" current=\"1\">2</vcpu> </domain>"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-manipulating_the_domain_xml-cpu_allocation |
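The section states that per-vCPU and emulator pinning set through cputune take precedence over the cpuset attribute of the vcpu element, but the example XML shows only the vcpu element. The fragment below sketches how the two might be combined; the CPU numbers are arbitrary placeholders chosen for illustration.

<domain>
  ...
  <vcpu placement='static' cpuset="1-4,^3,6" current="1">2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='2'/>
    <emulatorpin cpuset='1-2'/>
  </cputune>
  ...
</domain>

In this sketch, virtual CPU 0 is pinned to physical CPU 1 and virtual CPU 1 to physical CPU 2, so the cpuset value on the vcpu element no longer applies to those virtual CPUs, and the emulatorpin setting causes that cpuset value to be ignored for the domain process.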
Chapter 18. PersistentClaimStorage schema reference | Chapter 18. PersistentClaimStorage schema reference Used in: JbodStorage , KafkaClusterSpec , KafkaNodePoolSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes use of the PersistentClaimStorage type from EphemeralStorage . It must have the value persistent-claim for the type PersistentClaimStorage . Property Property type Description id integer Storage identification number. It is mandatory only for storage volumes defined in a storage of type 'jbod'. type string Must be persistent-claim . size string When type=persistent-claim , defines the size of the persistent volume claim, such as 100Gi. Mandatory when type=persistent-claim . kraftMetadata string (one of [shared]) Specifies whether this volume should be used for storing KRaft metadata. This property is optional. When set, the only currently supported value is shared . At most one volume can have this property set. class string The storage class to use for dynamic volume allocation. selector map Specifies a specific persistent volume to use. It contains key:value pairs representing labels for selecting such a volume. deleteClaim boolean Specifies if the persistent volume claim has to be deleted when the cluster is un-deployed. overrides PersistentClaimStorageOverride array The overrides property has been deprecated. The storage overrides for individual brokers are deprecated and will be removed in the future. Please use multiple KafkaNodePool custom resources with different storage classes instead. Overrides for individual brokers. The overrides field allows you to specify a different configuration for different brokers. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-PersistentClaimStorage-reference |
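To show how the properties in this table combine in practice, the following fragment sketches a persistent-claim volume declared inside a jbod storage definition of a Kafka custom resource; the id, size, and class values are placeholders for illustration rather than recommendations.

storage:
  type: jbod
  volumes:
    - id: 0
      type: persistent-claim
      size: 100Gi
      class: standard
      deleteClaim: false

A persistent claim can also be declared directly, without jbod, by setting type: persistent-claim at the top level of the storage configuration and omitting the id property, which the table marks as mandatory only for jbod volumes.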
3.7. Statistics History Views | 3.7. Statistics History Views Statistics data is available in hourly , daily , and samples views. To query a statistics view, run SELECT * FROM view_name_ [hourly|daily|samples]; . For example: # SELECT * FROM v4_4_statistics_hosts_resources_usage_daily; To list all available views, run: # \dv 3.7.1. Enabling Debug Mode You can enable debug mode to record log sampling, hourly, and daily job times in the /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log file. This is useful for checking the ETL process. Debug mode is disabled by default. Log in to the Manager machine and create a configuration file (for example, /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/logging.conf ). Add the following line to the configuration file: DWH_AGGREGATION_DEBUG=true Restart the ovirt-engine-dwhd service: # systemctl restart ovirt-engine-dwhd.service 3.7.2. Storage Domain Statistics Views Table 3.2. Historical Statistics for Each Storage Domain in the System Name Type Description Indexed history_id bigint The unique ID of this row in the table. No history_datetime date The timestamp of this history row (rounded to minute, hour, day as per the aggregation level). Yes storage_domain_id uuid Unique ID of the storage domain in the system. Yes storage_domain_status smallint The storage domain status. No seconds_in_status integer The total number of seconds that the storage domain was in the status shown state as shown in the status column for the aggregation period. For example, if a storage domain was "Active" for 55 seconds and "Inactive" for 5 seconds within a minute, two rows will be reported in the table for the same minute. One row will have a status of Active with seconds_in_status of 55, the other will have a status of Inactive and seconds_in_status of 5. No minutes_in_status numeric(7,2) The total number of minutes that the storage domain was in the status shown state as shown in the status column for the aggregation period. For example, if a storage domain was "Active" for 55 minutes and "Inactive" for 5 minutes within an hour, two rows will be reported in the table for the same hour. One row will have a status of Active with minutes_in_status of 55, the other will have a status of Inactive and minutes_in_status of 5. No available_disk_size_gb integer The total available (unused) capacity on the disk, expressed in gigabytes (GB). No used_disk_size_gb integer The total used capacity on the disk, expressed in gigabytes (GB). No storage_configuration_version integer The storage domain configuration version at the time of sample. This is identical to the value of history_id in the v4_4_configuration_history_storage_domains view and it can be used to join them. Yes 3.7.3. Host Statistics Views Table 3.3. Historical Statistics for Each Host in the System Name Type Description Indexed history_id bigint The unique ID of this row in the table. No history_datetime date The timestamp of this history row (rounded to minute, hour, day as per the aggregation level). Yes host_id uuid Unique ID of the host in the system. Yes host_status smallint -1 - Unknown Status (used only to indicate a problem with the ETL. Please notify Red Hat Support. ) 1 - Up 2 - Maintenance 3 - Problematic No seconds_in_status integer The total number of seconds that the host was in the status shown in the status column for the aggregation period. For example, if a host was up for 55 seconds and down for 5 seconds during a minute, two rows will show for this minute. 
One will have a status of Up and seconds_in_status of 55, the other will have a status of Down and a seconds_in_status of 5. No minutes_in_status numeric(7,2) The total number of minutes that the host was in the status shown in the status column for the aggregation period. For example, if a host was up for 55 minutes and down for 5 minutes during an hour, two rows will show for this hour. One will have a status of Up and minutes_in_status of 55, the other will have a status of Down and a minutes_in_status of 5. No memory_usage_percent smallint Percentage of used memory on the host. No max_memory_usage smallint The maximum memory usage for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. No ksm_shared_memory_mb bigint The Kernel Shared Memory size, in megabytes (MB), that the host is using. No max_ksm_shared_memory_mb bigint The maximum KSM memory usage for the aggregation period expressed in megabytes (MB). For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. No cpu_usage_percent smallint Used CPU percentage on the host. No max_cpu_usage smallint The maximum CPU usage for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. No ksm_cpu_percent smallint CPU percentage ksm on the host is using. No max_ksm_cpu_percent smallint The maximum KSM usage for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. No active_vms smallint The average number of active virtual machines for this aggregation. No max_active_vms smallint The maximum active number of virtual machines for the aggregation period. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. No total_vms smallint The average number of all virtual machines on the host for this aggregation. No max_total_vms smallint The maximum total number of virtual machines for the aggregation period. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. No total_vms_vcpus integer Total number of vCPUs allocated to the host. No max_total_vms_vcpus integer The maximum total virtual machine vCPU number for the aggregation period. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. No cpu_load integer The CPU load of the host. No max_cpu_load integer The maximum CPU load for the aggregation period. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. No system_cpu_usage_percent smallint Used CPU percentage on the host. No max_system_cpu_usage_percent smallint The maximum system CPU usage for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. No user_cpu_usage_percent smallint Used user CPU percentage on the host. No max_user_cpu_usage_percent smallint The maximum user CPU usage for the aggregation period, expressed as a percentage. 
For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. No swap_used_mb integer Used swap size usage of the host in megabytes (MB). No max_swap_used_mb integer The maximum user swap size usage of the host for the aggregation period in megabytes (MB), expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. No host_configuration_version integer The host configuration version at the time of sample. The host configuration version at the time of sample. This is identical to the value of history_id in the v4_4_configuration_history_hosts view and it can be used to join them. Yes 3.7.4. Host Interface Statistics Views Table 3.4. Historical Statistics for Each Host Network Interface in the System Name Type Description Indexed history_id bigint The unique ID of this row in the table. No history_datetime date The timestamp of this history view (rounded to minute, hour, day as per the aggregation level). Yes host_interface_id uuid Unique identifier of the interface in the system. Yes receive_rate_percent smallint Used receive rate percentage on the host. No max_receive_rate_percent smallint The maximum receive rate for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. No transmit_rate_percent smallint Used transmit rate percentage on the host. No max_transmit_rate_percent smallint The maximum transmit rate for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. No received_total_byte bigint The total number of bytes received by the host. No transmitted_total_byte bigint The total number of bytes transmitted from the host. No host_interface_configuration_version integer The host interface configuration version at the time of sample. This is identical to the value of history_id in the v4_4_configuration_history_hosts_interfaces view and it can be used to join them. Yes 3.7.5. Virtual Machine Statistics Views Table 3.5. Historical Statistics for Each Virtual Machine in the System Name Type Description Indexed history_id bigint The unique ID of this row in the table. No history_datetime date The timestamp of this history row (rounded to minute, hour, day as per the aggregation level). Yes vm_id uuid Unique ID of the virtual machine in the system. Yes vm_status smallint -1 - Unknown Status (used only to indicate problems with the ETL. Please notify Red Hat Support. ) 0 - Down 1 - Up 2 - Paused 3 - Problematic No seconds_in_status integer The total number of seconds that the virtual machine was in the status shown in the status column for the aggregation period. For example, if a virtual machine was up for 55 seconds and down for 5 seconds during a minute, two rows will show for this minute. One will have a status of Up and seconds_in_status, the other will have a status of Down and a seconds_in_status of 5. No minutes_in_status numeric(7,2) The total number of minutes that the virtual machine was in the status shown in the status column for the aggregation period. For example, if a virtual machine was up for 55 minutes and down for 5 minutes during an hour, two rows will show for this hour. 
One will have a status of Up and minutes_in_status, the other will have a status of Down and a minutes_in_status of 5. No cpu_usage_percent smallint The percentage of the CPU in use by the virtual machine. No max_cpu_usage smallint The maximum CPU usage for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. No memory_usage_percent smallint Percentage of used memory in the virtual machine. The guest tools must be installed on the virtual machine for memory usage to be recorded. No max_memory_usage smallint The maximum memory usage for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. The guest tools must be installed on the virtual machine for memory usage to be recorded. No user_cpu_usage_percent smallint Used user CPU percentage on the host. No max_user_cpu_usage_percent smallint The maximum user CPU usage for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregation, it is the maximum hourly average value. No system_cpu_usage_percent smallint Used system CPU percentage on the host. No max_system_cpu_usage_percent smallint The maximum system CPU usage for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. No vm_ip text The IP address of the first NIC. Only shown if the guest agent is installed. No currently_running_on_host uuid The unique ID of the host the virtual machine is running on. No current_user_id uuid The unique ID of the user logged into the virtual machine console, if the guest agent is installed. No disks_usage text The disk description. File systems type, mount point, total size, and used size. No vm_configuration_version integer The virtual machine configuration version at the time of sample. This is identical to the value of history_id in the v4_4_configuration_history_vms view. Yes current_host_configuration_version integer The host configuration version at the time of sample. This is identical to the value of history_id in the v4_4_configuration_history_hosts view and it can be used to join them. Yes memory_buffered_kb bigint The amount of buffered memory on the virtual machine, in kilobytes (KB). No memory_cached_kb bigint The amount of cached memory on the virtual machine, in kilobytes (KB). No max_memory_buffered_kb bigint The maximum buffered memory for the aggregation period, in kilobytes (KB). For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. No max_memory_cached_kb bigint The maximum cached memory for the aggregation period, in kilobytes (KB). For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. No 3.7.6. Virtual Machine Interface Statistics Views Table 3.6. Historical Statistics for the Virtual Machine Network Interfaces in the System Name Type Description Indexed history_id integer The unique ID of this row in the table. No history_datetime date The timestamp of this history row (rounded to minute, hour, day as per the aggregation level). Yes vm_interface_id uuid Unique ID of the interface in the system. 
Yes receive_rate_percent smallint Used receive rate percentage on the host. No max_receive_rate_percent smallint The maximum receive rate for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. No transmit_rate_percent smallint Used transmit rate percentage on the host. No max_transmit_rate_percent smallint The maximum transmit rate for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average rate. No received_total_byte bigint The total number of bytes received by the virtual machine. No transmitted_total_byte bigint The total number of bytes transmitted from the virtual machine. No vm_interface_configuration_version integer The virtual machine interface configuration version at the time of sample. This is identical to the value of history_id in the v4_4_configuration_history_vms_interfaces view and it can be used to join them. Yes 3.7.7. Virtual Disk Statistics Views Table 3.7. Historical Statistics for the Virtual Disks in the System Name Type Description Indexed history_id bigint The unique ID of this row in the table. No history_datetime date The timestamp of this history row (rounded to minute, hour, day as per the aggregation level). Yes vm_disk_id uuid Unique ID of the disk in the system. Yes vm_disk_status smallint 0 - Unassigned 1 - OK 2 - Locked 3 - Invalid 4 - Illegal No seconds_in_status integer The total number of seconds that the virtual disk was in the status shown in the status column for the aggregation period. For example, if a virtual disk was locked for 55 seconds and OK for 5 seconds during a minute, two rows will show for this minute. One will have a status of Locked and seconds_in_status of 55, the other will have a status of OK and a seconds_in_status of 5. No minutes_in_status numeric(7,2) The total number of minutes that the virtual disk was in the status shown in the status column for the aggregation period. For example, if a virtual disk was locked for 55 minutes and OK for 5 minutes during an hour, two rows will show for this hour. One will have a status of Locked and minutes_in_status of 55, the other will have a status of OK and a minutes_in_status of 5. No vm_disk_actual_size_mb integer The actual size allocated to the disk. No read_rate_bytes_per_second integer Read rate to disk in bytes per second. No max_read_rate_bytes_per_second integer The maximum read rate for the aggregation period. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. No read_ops_total_count numeric(20,0) Read I/O operations to disk since vm start. No read_latency_seconds numeric(18,9) The virtual disk read latency measured in seconds. No write_rate_bytes_per_second integer Write rate to disk in bytes per second. No max_read_latency_seconds numeric(18,9) The maximum read latency for the aggregation period, measured in seconds. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. No max_write_rate_bytes_per_second integer The maximum write rate for the aggregation period. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. 
No write_ops_total_count numeric(20,0) Write I/O operations to disk since vm start. No write_latency_seconds numeric(18,9) The virtual disk write latency measured in seconds. No max_write_latency_seconds numeric(18,9) The maximum write latency for the aggregation period, measured in seconds. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. No flush_latency_seconds numeric(18,9) The virtual disk flush latency measured in seconds. No max_flush_latency_seconds numeric(18,9) The maximum flush latency for the aggregation period, measured in seconds. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. No vm_disk_configuration_version integer The virtual disk configuration version at the time of sample. This is identical to the value of history_id in the v4_4_configuration_history_vms_disks view and it can be used to join them. Yes | [
"SELECT * FROM v4_4_statistics_hosts_resources_usage_daily;",
"\\dv",
"DWH_AGGREGATION_DEBUG=true",
"systemctl restart ovirt-engine-dwhd.service",
"To disable debug mode, delete the configuration file and restart the service. // removed note"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/data_warehouse_guide/sect-statistics_history_views |
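As a worked illustration of the *_configuration_version columns described above, the following query joins daily host statistics to the host configuration view on history_id. This is a hedged sketch: the join condition is documented above, but host_name is an assumed column in v4_4_configuration_history_hosts, so adjust the SELECT list to your schema.

-- Join daily host statistics to host configuration history.
-- host_configuration_version is documented above as identical to history_id
-- in v4_4_configuration_history_hosts; host_name is an assumed column name.
SELECT c.host_name,
       s.history_datetime,
       s.cpu_usage_percent,
       s.memory_usage_percent
FROM   v4_4_statistics_hosts_resources_usage_daily s
JOIN   v4_4_configuration_history_hosts c
       ON s.host_configuration_version = c.history_id
ORDER BY s.history_datetime;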
Chapter 6. Network security | Chapter 6. Network security 6.1. Understanding network policy APIs Kubernetes offers two features that users can use to enforce network security. One feature that allows users to enforce network policy is the NetworkPolicy API that is designed mainly for application developers and namespace tenants to protect their namespaces by creating namespace-scoped policies. The second feature is AdminNetworkPolicy which consists of two APIs: the AdminNetworkPolicy (ANP) API and the BaselineAdminNetworkPolicy (BANP) API. ANP and BANP are designed for cluster and network administrators to protect their entire cluster by creating cluster-scoped policies. Cluster administrators can use ANPs to enforce non-overridable policies that take precedence over NetworkPolicy objects. Administrators can use BANP to set up and enforce optional cluster-scoped network policy rules that are overridable by users using NetworkPolicy objects when necessary. When used together, ANP, BANP, and network policy can achieve full multi-tenant isolation that administrators can use to secure their cluster. OVN-Kubernetes CNI in Red Hat OpenShift Service on AWS implements these network policies using Access Control List (ACL) Tiers to evaluate and apply them. ACLs are evaluated in descending order from Tier 1 to Tier 3. Tier 1 evaluates AdminNetworkPolicy (ANP) objects. Tier 2 evaluates NetworkPolicy objects. Tier 3 evaluates BaselineAdminNetworkPolicy (BANP) objects. ANPs are evaluated first. When the match is an ANP allow or deny rule, any existing NetworkPolicy and BaselineAdminNetworkPolicy (BANP) objects in the cluster are skipped from evaluation. When the match is an ANP pass rule, then evaluation moves from tier 1 of the ACL to tier 2 where the NetworkPolicy policy is evaluated. If no NetworkPolicy matches the traffic then evaluation moves from tier 2 ACLs to tier 3 ACLs where BANP is evaluated. 6.1.1. Key differences between AdminNetworkPolicy and NetworkPolicy custom resources The following table explains key differences between the cluster scoped AdminNetworkPolicy API and the namespace scoped NetworkPolicy API. Policy elements AdminNetworkPolicy NetworkPolicy Applicable user Cluster administrator or equivalent Namespace owners Scope Cluster Namespaced Drop traffic Supported with an explicit Deny action set as a rule. Supported via implicit Deny isolation at policy creation time. Delegate traffic Supported with an Pass action set as a rule. Not applicable Allow traffic Supported with an explicit Allow action set as a rule. The default action for all rules is to allow. Rule precedence within the policy Depends on the order in which they appear within an ANP. The higher the rule's position the higher the precedence. Rules are additive Policy precedence Among ANPs the priority field sets the order for evaluation. The lower the priority number higher the policy precedence. There is no policy ordering between policies. Feature precedence Evaluated first via tier 1 ACL and BANP is evaluated last via tier 3 ACL. Enforced after ANP and before BANP, they are evaluated in tier 2 of the ACL. Matching pod selection Can apply different rules across namespaces. Can apply different rules across pods in single namespace. Cluster egress traffic Supported via nodes and networks peers Supported through ipBlock field along with accepted CIDR syntax. 
Cluster ingress traffic Not supported Not supported Fully qualified domain names (FQDN) peer support Not supported Not supported Namespace selectors Supports advanced selection of Namespaces with the use of namespaces.matchLabels field Supports label based namespace selection with the use of namespaceSelector field 6.2. Admin network policy 6.2.1. OVN-Kubernetes AdminNetworkPolicy 6.2.1.1. AdminNetworkPolicy An AdminNetworkPolicy (ANP) is a cluster-scoped custom resource definition (CRD). As a Red Hat OpenShift Service on AWS administrator, you can use ANP to secure your network by creating network policies before creating namespaces. Additionally, you can create network policies on a cluster-scoped level that is non-overridable by NetworkPolicy objects. The key difference between AdminNetworkPolicy and NetworkPolicy objects are that the former is for administrators and is cluster scoped while the latter is for tenant owners and is namespace scoped. An ANP allows administrators to specify the following: A priority value that determines the order of its evaluation. The lower the value the higher the precedence. A set of pods that consists of a set of namespaces or namespace on which the policy is applied. A list of ingress rules to be applied for all ingress traffic towards the subject . A list of egress rules to be applied for all egress traffic from the subject . AdminNetworkPolicy example Example 6.1. Example YAML file for an ANP apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: sample-anp-deny-pass-rules 1 spec: priority: 50 2 subject: namespaces: matchLabels: kubernetes.io/metadata.name: example.name 3 ingress: 4 - name: "deny-all-ingress-tenant-1" 5 action: "Deny" from: - pods: namespaceSelector: matchLabels: custom-anp: tenant-1 podSelector: matchLabels: custom-anp: tenant-1 6 egress: 7 - name: "pass-all-egress-to-tenant-1" action: "Pass" to: - pods: namespaceSelector: matchLabels: custom-anp: tenant-1 podSelector: matchLabels: custom-anp: tenant-1 1 Specify a name for your ANP. 2 The spec.priority field supports a maximum of 100 ANPs in the range of values 0-99 in a cluster. The lower the value, the higher the precedence because the range is read in order from the lowest to highest value. Because there is no guarantee which policy takes precedence when ANPs are created at the same priority, set ANPs at different priorities so that precedence is deliberate. 3 Specify the namespace to apply the ANP resource. 4 ANP have both ingress and egress rules. ANP rules for spec.ingress field accepts values of Pass , Deny , and Allow for the action field. 5 Specify a name for the ingress.name . 6 Specify podSelector.matchLabels to select pods within the namespaces selected by namespaceSelector.matchLabels as ingress peers. 7 ANPs have both ingress and egress rules. ANP rules for spec.egress field accepts values of Pass , Deny , and Allow for the action field. Additional resources Network Policy API Working Group 6.2.1.1.1. AdminNetworkPolicy actions for rules As an administrator, you can set Allow , Deny , or Pass as the action field for your AdminNetworkPolicy rules. Because OVN-Kubernetes uses a tiered ACLs to evaluate network traffic rules, ANP allow you to set very strong policy rules that can only be changed by an administrator modifying them, deleting the rule, or overriding them by setting a higher priority rule. 
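A brief usage sketch for the sample ANP shown above; the file name is an assumption, and the status details reported for the policy depend on your OVN-Kubernetes version.

# Apply the AdminNetworkPolicy defined above (file name assumed):
$ oc apply -f sample-anp-deny-pass-rules.yaml

# List cluster-scoped ANPs and check whether OVN-Kubernetes has accepted the policy:
$ oc get adminnetworkpolicy
$ oc describe adminnetworkpolicy sample-anp-deny-pass-rules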
AdminNetworkPolicy Allow example The following ANP that is defined at priority 9 ensures all ingress traffic is allowed from the monitoring namespace towards any tenant (all other namespaces) in the cluster. Example 6.2. Example YAML file for a strong Allow ANP apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: allow-monitoring spec: priority: 9 subject: namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well. ingress: - name: "allow-ingress-from-monitoring" action: "Allow" from: - namespaces: matchLabels: kubernetes.io/metadata.name: monitoring # ... This is an example of a strong Allow ANP because it is non-overridable by all the parties involved. No tenants can block themselves from being monitored using NetworkPolicy objects and the monitoring tenant also has no say in what it can or cannot monitor. AdminNetworkPolicy Deny example The following ANP that is defined at priority 5 ensures all ingress traffic from the monitoring namespace is blocked towards restricted tenants (namespaces that have labels security: restricted ). Example 6.3. Example YAML file for a strong Deny ANP apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: block-monitoring spec: priority: 5 subject: namespaces: matchLabels: security: restricted ingress: - name: "deny-ingress-from-monitoring" action: "Deny" from: - namespaces: matchLabels: kubernetes.io/metadata.name: monitoring # ... This is a strong Deny ANP that is non-overridable by all the parties involved. The restricted tenant owners cannot authorize themselves to allow monitoring traffic, and the infrastructure's monitoring service cannot scrape anything from these sensitive namespaces. When combined with the strong Allow example, the block-monitoring ANP has a lower priority value giving it higher precedence, which ensures restricted tenants are never monitored. AdminNetworkPolicy Pass example The following ANP that is defined at priority 7 ensures all ingress traffic from the monitoring namespace towards internal infrastructure tenants (namespaces that have labels security: internal ) are passed on to tier 2 of the ACLs and evaluated by the namespaces' NetworkPolicy objects. Example 6.4. Example YAML file for a strong Pass ANP apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: pass-monitoring spec: priority: 7 subject: namespaces: matchLabels: security: internal ingress: - name: "pass-ingress-from-monitoring" action: "Pass" from: - namespaces: matchLabels: kubernetes.io/metadata.name: monitoring # ... This example is a strong Pass action ANP because it delegates the decision to NetworkPolicy objects defined by tenant owners. This pass-monitoring ANP allows all tenant owners grouped at security level internal to choose if their metrics should be scraped by the infrastructures' monitoring service using namespace scoped NetworkPolicy objects. 6.2.2. OVN-Kubernetes BaselineAdminNetworkPolicy 6.2.2.1. BaselineAdminNetworkPolicy BaselineAdminNetworkPolicy (BANP) is a cluster-scoped custom resource definition (CRD). As a Red Hat OpenShift Service on AWS administrator, you can use BANP to setup and enforce optional baseline network policy rules that are overridable by users using NetworkPolicy objects if need be. Rule actions for BANP are allow or deny . 
The BaselineAdminNetworkPolicy resource is a cluster singleton object that can be used as a guardrail policy incase a passed traffic policy does not match any NetworkPolicy objects in the cluster. A BANP can also be used as a default security model that provides guardrails that intra-cluster traffic is blocked by default and a user will need to use NetworkPolicy objects to allow known traffic. You must use default as the name when creating a BANP resource. A BANP allows administrators to specify: A subject that consists of a set of namespaces or namespace. A list of ingress rules to be applied for all ingress traffic towards the subject . A list of egress rules to be applied for all egress traffic from the subject . BaselineAdminNetworkPolicy example Example 6.5. Example YAML file for BANP apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: name: default 1 spec: subject: namespaces: matchLabels: kubernetes.io/metadata.name: example.name 2 ingress: 3 - name: "deny-all-ingress-from-tenant-1" 4 action: "Deny" from: - pods: namespaceSelector: matchLabels: custom-banp: tenant-1 5 podSelector: matchLabels: custom-banp: tenant-1 6 egress: - name: "allow-all-egress-to-tenant-1" action: "Allow" to: - pods: namespaceSelector: matchLabels: custom-banp: tenant-1 podSelector: matchLabels: custom-banp: tenant-1 1 The policy name must be default because BANP is a singleton object. 2 Specify the namespace to apply the ANP to. 3 BANP have both ingress and egress rules. BANP rules for spec.ingress and spec.egress fields accepts values of Deny and Allow for the action field. 4 Specify a name for the ingress.name 5 Specify the namespaces to select the pods from to apply the BANP resource. 6 Specify podSelector.matchLabels name of the pods to apply the BANP resource. BaselineAdminNetworkPolicy Deny example The following BANP singleton ensures that the administrator has set up a default deny policy for all ingress monitoring traffic coming into the tenants at internal security level. When combined with the "AdminNetworkPolicy Pass example", this deny policy acts as a guardrail policy for all ingress traffic that is passed by the ANP pass-monitoring policy. Example 6.6. Example YAML file for a guardrail Deny rule apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: name: default spec: subject: namespaces: matchLabels: security: internal ingress: - name: "deny-ingress-from-monitoring" action: "Deny" from: - namespaces: matchLabels: kubernetes.io/metadata.name: monitoring # ... You can use an AdminNetworkPolicy resource with a Pass value for the action field in conjunction with the BaselineAdminNetworkPolicy resource to create a multi-tenant policy. This multi-tenant policy allows one tenant to collect monitoring data on their application while simultaneously not collecting data from a second tenant. As an administrator, if you apply both the "AdminNetworkPolicy Pass action example" and the "BaselineAdminNetwork Policy Deny example", tenants are then left with the ability to choose to create a NetworkPolicy resource that will be evaluated before the BANP. For example, Tenant 1 can set up the following NetworkPolicy resource to monitor ingress traffic: Example 6.7. Example NetworkPolicy apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-monitoring namespace: tenant 1 spec: podSelector: policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring # ... 
In this scenario, Tenant 1's policy would be evaluated after the "AdminNetworkPolicy Pass action example" and before the "BaselineAdminNetwork Policy Deny example", which denies all ingress monitoring traffic coming into tenants with security level internal . With Tenant 1's NetworkPolicy object in place, they will be able to collect data on their application. Tenant 2, however, who does not have any NetworkPolicy objects in place, will not be able to collect data. As an administrator, you have not by default monitored internal tenants, but instead, you created a BANP that allows tenants to use NetworkPolicy objects to override the default behavior of your BANP. 6.3. Network policy 6.3.1. About network policy As a developer, you can define network policies that restrict traffic to pods in your cluster. 6.3.1.1. About network policy By default, all pods in a project are accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project. If a pod is matched by selectors in one or more NetworkPolicy objects, then the pod will accept only connections that are allowed by at least one of those NetworkPolicy objects. A pod that is not selected by any NetworkPolicy objects is fully accessible. A network policy applies to only the TCP, UDP, ICMP, and SCTP protocols. Other protocols are not affected. Warning Network policy does not apply to the host network namespace. Pods with host networking enabled are unaffected by network policy rules. However, pods connecting to the host-networked pods might be affected by the network policy rules. Network policies cannot block traffic from localhost or from their resident nodes. The following example NetworkPolicy objects demonstrate supporting different scenarios: Deny all traffic: To make a project deny by default, add a NetworkPolicy object that matches all pods but accepts no traffic: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: [] Only allow connections from the Red Hat OpenShift Service on AWS Ingress Controller: To make a project allow only connections from the Red Hat OpenShift Service on AWS Ingress Controller, add the following NetworkPolicy object. apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress Only accept connections from pods within a project: Important To allow ingress connections from hostNetwork pods in the same namespace, you need to apply the allow-from-hostnetwork policy together with the allow-same-namespace policy. 
To make pods accept connections from other pods in the same project, but reject all other connections from pods in other projects, add the following NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} Only allow HTTP and HTTPS traffic based on pod labels: To enable only HTTP and HTTPS access to the pods with a specific label ( role=frontend in following example), add a NetworkPolicy object similar to the following: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443 Accept connections by using both namespace and pod selectors: To match network traffic by combining namespace and pod selectors, you can use a NetworkPolicy object similar to the following: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods NetworkPolicy objects are additive, which means you can combine multiple NetworkPolicy objects together to satisfy complex network requirements. For example, for the NetworkPolicy objects defined in samples, you can define both allow-same-namespace and allow-http-and-https policies within the same project. Thus allowing the pods with the label role=frontend , to accept any connection allowed by each policy. That is, connections on any port from pods in the same namespace, and connections on ports 80 and 443 from pods in any namespace. 6.3.1.1.1. Using the allow-from-router network policy Use the following NetworkPolicy to allow external traffic regardless of the router configuration: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-router spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: "" 1 podSelector: {} policyTypes: - Ingress 1 policy-group.network.openshift.io/ingress:"" label supports OVN-Kubernetes. 6.3.1.1.2. Using the allow-from-hostnetwork network policy Add the following allow-from-hostnetwork NetworkPolicy object to direct traffic from the host network pods. apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-hostnetwork spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/host-network: "" podSelector: {} policyTypes: - Ingress 6.3.1.2. Optimizations for network policy with OVN-Kubernetes network plugin When designing your network policy, refer to the following guidelines: For network policies with the same spec.podSelector spec, it is more efficient to use one network policy with multiple ingress or egress rules, than multiple network policies with subsets of ingress or egress rules. Every ingress or egress rule based on the podSelector or namespaceSelector spec generates the number of OVS flows proportional to number of pods selected by network policy + number of pods selected by ingress or egress rule . Therefore, it is preferable to use the podSelector or namespaceSelector spec that can select as many pods as you need in one rule, instead of creating individual rules for every pod. 
For example, the following policy contains two rules: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchLabels: role: frontend - from: - podSelector: matchLabels: role: backend The following policy expresses those same two rules as one: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchExpressions: - {key: role, operator: In, values: [frontend, backend]} The same guideline applies to the spec.podSelector spec. If you have the same ingress or egress rules for different network policies, it might be more efficient to create one network policy with a common spec.podSelector spec. For example, the following two policies have different rules: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy1 spec: podSelector: matchLabels: role: db ingress: - from: - podSelector: matchLabels: role: frontend --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy2 spec: podSelector: matchLabels: role: client ingress: - from: - podSelector: matchLabels: role: frontend The following network policy expresses those same two rules as one: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy3 spec: podSelector: matchExpressions: - {key: role, operator: In, values: [db, client]} ingress: - from: - podSelector: matchLabels: role: frontend You can apply this optimization when only multiple selectors are expressed as one. In cases where selectors are based on different labels, it may not be possible to apply this optimization. In those cases, consider applying some new labels for network policy optimization specifically. 6.3.1.3. steps Creating a network policy 6.3.2. Creating a network policy As a user with the admin role, you can create a network policy for a namespace. 6.3.2.1. Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 6.3.2.2. Creating a network policy using the CLI To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a network policy. Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy rule: Create a <policy_name>.yaml file: USD touch <policy_name>.yaml where: <policy_name> Specifies the network policy file name. 
Define a network policy in the file that you just created, such as in the following examples: Deny ingress from all pods in all namespaces This is a fundamental policy, blocking all cross-pod networking other than cross-pod traffic allowed by the configuration of other Network Policies. kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} policyTypes: - Ingress ingress: [] Allow ingress from all pods in the same namespace kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} Allow ingress traffic to one pod from a particular namespace This policy allows traffic to pods labelled pod-a from pods running in namespace-y . kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-traffic-pod spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y To create the network policy object, enter the following command: USD oc apply -f <policy_name>.yaml -n <namespace> where: <policy_name> Specifies the network policy file name. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output networkpolicy.networking.k8s.io/deny-by-default created Note If you log in to the web console with cluster-admin privileges, you have a choice of creating a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 6.3.2.3. Creating a default deny all network policy This is a fundamental policy, blocking all cross-pod networking other than network traffic allowed by the configuration of other deployed network policies. This procedure enforces a default deny-by-default policy. Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create the following YAML that defines a deny-by-default policy to deny ingress from all pods in all namespaces. Save the YAML in the deny-by-default.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: default 1 spec: podSelector: {} 2 ingress: [] 3 1 namespace: default deploys this policy to the default namespace. 2 podSelector: is empty, this means it matches all the pods. Therefore, the policy applies to all pods in the default namespace. 3 There are no ingress rules specified. This causes incoming traffic to be dropped to all pods. Apply the policy by entering the following command: USD oc apply -f deny-by-default.yaml Example output networkpolicy.networking.k8s.io/deny-by-default created 6.3.2.4. Creating a network policy to allow traffic from external clients With the deny-by-default policy in place you can proceed to configure a policy that allows traffic from external clients to a pod with the label app=web . Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. 
Follow this procedure to configure a policy that allows external service from the public Internet directly or by using a Load Balancer to access the pod. Traffic is only allowed to a pod with the label app=web . Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy that allows traffic from the public Internet directly or by using a load balancer to access the pod. Save the YAML in the web-allow-external.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-external namespace: default spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {} Apply the policy by entering the following command: USD oc apply -f web-allow-external.yaml Example output networkpolicy.networking.k8s.io/web-allow-external created This policy allows traffic from all resources, including external traffic as illustrated in the following diagram: 6.3.2.5. Creating a network policy allowing traffic to an application from all namespaces Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows traffic from all pods in all namespaces to a particular application. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy that allows traffic from all pods in all namespaces to a particular application. Save the YAML in the web-allow-all-namespaces.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-all-namespaces namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2 1 Applies the policy only to app:web pods in default namespace. 2 Selects all pods in all namespaces. Note By default, if you omit specifying a namespaceSelector it does not select any namespaces, which means the policy allows traffic only from the namespace the network policy is deployed to. 
Apply the policy by entering the following command: USD oc apply -f web-allow-all-namespaces.yaml Example output networkpolicy.networking.k8s.io/web-allow-all-namespaces created Verification Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80 Run the following command to deploy an alpine image in the secondary namespace and to start a shell: USD oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is allowed: # wget -qO- --timeout=2 http://web.default Expected output <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 6.3.2.6. Creating a network policy allowing traffic to an application from a namespace Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows traffic to a pod with the label app=web from a particular namespace. You might want to do this to: Restrict traffic to a production database only to namespaces where production workloads are deployed. Enable monitoring tools deployed to a particular namespace to scrape metrics from the current namespace. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy that allows traffic from all pods in a particular namespaces with a label purpose=production . Save the YAML in the web-allow-prod.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-prod namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2 1 Applies the policy only to app:web pods in the default namespace. 2 Restricts traffic to only pods in namespaces that have the label purpose=production . 
Apply the policy by entering the following command: USD oc apply -f web-allow-prod.yaml Example output networkpolicy.networking.k8s.io/web-allow-prod created Verification Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80 Run the following command to create the prod namespace: USD oc create namespace prod Run the following command to label the prod namespace: USD oc label namespace/prod purpose=production Run the following command to create the dev namespace: USD oc create namespace dev Run the following command to label the dev namespace: USD oc label namespace/dev purpose=testing Run the following command to deploy an alpine image in the dev namespace and to start a shell: USD oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is blocked: # wget -qO- --timeout=2 http://web.default Expected output wget: download timed out Run the following command to deploy an alpine image in the prod namespace and start a shell: USD oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is allowed: # wget -qO- --timeout=2 http://web.default Expected output <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 6.3.2.7. Creating a network policy using OpenShift Cluster Manager To define granular rules describing the ingress or egress network traffic allowed for namespaces in your cluster, you can create a network policy. Prerequisites You logged in to OpenShift Cluster Manager . You created an Red Hat OpenShift Service on AWS cluster. You configured an identity provider for your cluster. You added your user account to the configured identity provider. You created a project within your Red Hat OpenShift Service on AWS cluster. Procedure From OpenShift Cluster Manager , click on the cluster you want to access. Click Open console to navigate to the OpenShift web console. Click on your identity provider and provide your credentials to log in to the cluster. From the administrator perspective, under Networking , click NetworkPolicies . Click Create NetworkPolicy . Provide a name for the policy in the Policy name field. Optional: You can provide the label and selector for a specific pod if this policy applies only to one or more specific pods. If you do not select a specific pod, then this policy will be applicable to all pods on the cluster. Optional: You can block all ingress and egress traffic by using the Deny all ingress traffic or Deny all egress traffic checkboxes. You can also add any combination of ingress and egress rules, allowing you to specify the port, namespace, or IP blocks you want to approve. Add ingress rules to your policy: Select Add ingress rule to configure a new rule. 
This action creates a new Ingress rule row with an Add allowed source drop-down menu that enables you to specify how you want to limit inbound traffic. The drop-down menu offers three options to limit your ingress traffic: Allow pods from the same namespace limits traffic to pods within the same namespace. You can specify the pods in a namespace, but leaving this option blank allows all of the traffic from pods in the namespace. Allow pods from inside the cluster limits traffic to pods within the same cluster as the policy. You can specify namespaces and pods from which you want to allow inbound traffic. Leaving this option blank allows inbound traffic from all namespaces and pods within this cluster. Allow peers by IP block limits traffic from a specified Classless Inter-Domain Routing (CIDR) IP block. You can block certain IPs with the exceptions option. Leaving the CIDR field blank allows all inbound traffic from all external sources. You can restrict all of your inbound traffic to a port. If you do not add any ports then all ports are accessible to traffic. Add egress rules to your network policy: Select Add egress rule to configure a new rule. This action creates a new Egress rule row with an Add allowed destination "* drop-down menu that enables you to specify how you want to limit outbound traffic. The drop-down menu offers three options to limit your egress traffic: Allow pods from the same namespace limits outbound traffic to pods within the same namespace. You can specify the pods in a namespace, but leaving this option blank allows all of the traffic from pods in the namespace. Allow pods from inside the cluster limits traffic to pods within the same cluster as the policy. You can specify namespaces and pods from which you want to allow outbound traffic. Leaving this option blank allows outbound traffic from all namespaces and pods within this cluster. Allow peers by IP block limits traffic from a specified CIDR IP block. You can block certain IPs with the exceptions option. Leaving the CIDR field blank allows all outbound traffic from all external sources. You can restrict all of your outbound traffic to a port. If you do not add any ports then all ports are accessible to traffic. 6.3.3. Viewing a network policy As a user with the admin role, you can view a network policy for a namespace. 6.3.3.1. Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 6.3.3.2. Viewing network policies using the CLI You can examine the network policies in a namespace. Note If you log in with a user with the cluster-admin role, then you can view any network policy in the cluster. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace where the network policy exists. 
Procedure List network policies in a namespace: To view network policy objects defined in a namespace, enter the following command: USD oc get networkpolicy Optional: To examine a specific network policy, enter the following command: USD oc describe networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy to inspect. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. For example: USD oc describe networkpolicy allow-same-namespace Output for oc describe command Name: allow-same-namespace Namespace: ns1 Created on: 2021-05-24 22:28:56 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: PodSelector: <none> Not affecting egress traffic Policy Types: Ingress Note If you log in to the web console with cluster-admin privileges, you have a choice of viewing a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 6.3.3.3. Viewing network policies using OpenShift Cluster Manager You can view the configuration details of your network policy in Red Hat OpenShift Cluster Manager. Prerequisites You logged in to OpenShift Cluster Manager . You created an Red Hat OpenShift Service on AWS cluster. You configured an identity provider for your cluster. You added your user account to the configured identity provider. You created a network policy. Procedure From the Administrator perspective in the OpenShift Cluster Manager web console, under Networking , click NetworkPolicies . Select the desired network policy to view. In the Network Policy details page, you can view all of the associated ingress and egress rules. Select YAML on the network policy details to view the policy configuration in YAML format. Note You can only view the details of these policies. You cannot edit these policies. 6.3.4. Editing a network policy As a user with the admin role, you can edit an existing network policy for a namespace. 6.3.4.1. Editing a network policy You can edit a network policy in a namespace. Note If you log in with a user with the cluster-admin role, then you can edit a network policy in any namespace in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace where the network policy exists. Procedure Optional: To list the network policy objects in a namespace, enter the following command: USD oc get networkpolicy where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Edit the network policy object. If you saved the network policy definition in a file, edit the file and make any necessary changes, and then enter the following command. USD oc apply -n <namespace> -f <policy_file>.yaml where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. <policy_file> Specifies the name of the file containing the network policy. 
If you need to update the network policy object directly, enter the following command: USD oc edit networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Confirm that the network policy object is updated. USD oc describe networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Note If you log in to the web console with cluster-admin privileges, you have a choice of editing a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 6.3.4.2. Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 6.3.4.3. Additional resources Creating a network policy 6.3.5. Deleting a network policy As a user with the admin role, you can delete a network policy from a namespace. 6.3.5.1. Deleting a network policy using the CLI You can delete a network policy in a namespace. Note If you log in with a user with the cluster-admin role, then you can delete any network policy in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace where the network policy exists. Procedure To delete a network policy object, enter the following command: USD oc delete networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output networkpolicy.networking.k8s.io/default-deny deleted Note If you log in to the web console with cluster-admin privileges, you have a choice of deleting a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 6.3.5.2. Deleting a network policy using OpenShift Cluster Manager You can delete a network policy in a namespace. Prerequisites You logged in to OpenShift Cluster Manager . You created a Red Hat OpenShift Service on AWS cluster. You configured an identity provider for your cluster. You added your user account to the configured identity provider. Procedure From the Administrator perspective in the OpenShift Cluster Manager web console, under Networking , click NetworkPolicies .
Use one of the following methods for deleting your network policy: Delete the policy from the Network Policies table: From the Network Policies table, select the stack menu on the row of the network policy you want to delete and then click Delete NetworkPolicy . Delete the policy using the Actions drop-down menu from the individual network policy details: Click the Actions drop-down menu for your network policy. Select Delete NetworkPolicy from the menu. 6.3.6. Defining a default network policy for projects As a cluster administrator, you can modify the new project template to automatically include network policies when you create a new project. If you do not yet have a customized template for new projects, you must first create one. 6.3.6.1. Modifying the template for new projects As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements. To create your own custom project template: Prerequisites You have access to a Red Hat OpenShift Service on AWS cluster using an account with dedicated-admin permissions. Procedure Log in as a user with cluster-admin privileges. Generate the default project template: USD oc adm create-bootstrap-project-template -o yaml > template.yaml Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects. The project template must be created in the openshift-config namespace. Load your modified template: USD oc create -f template.yaml -n openshift-config Edit the project configuration resource using the web console or CLI. Using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request . Project configuration resource with custom project template apiVersion: config.openshift.io/v1 kind: Project metadata: # ... spec: projectRequestTemplate: name: <template_name> # ... After you save your changes, create a new project to verify that your changes were successfully applied. 6.3.6.2. Adding network policies to the new project template As a cluster administrator, you can add network policies to the default template for new projects. Red Hat OpenShift Service on AWS will automatically create all the NetworkPolicy objects specified in the template in the project. Prerequisites Your cluster uses a default CNI network plugin that supports NetworkPolicy objects, such as OVN-Kubernetes. You installed the OpenShift CLI ( oc ). You must log in to the cluster with a user with cluster-admin privileges. You must have created a custom default project template for new projects. Procedure Edit the default template for a new project by running the following command: USD oc edit template <project_template> -n openshift-config Replace <project_template> with the name of the default template that you configured for your cluster. The default template name is project-request . In the template, add each NetworkPolicy object as an element to the objects parameter. The objects parameter accepts a collection of one or more objects. In the following example, the objects parameter collection includes several NetworkPolicy objects.
objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress ... Optional: Create a new project to confirm that your network policy objects are created successfully by running the following commands: Create a new project: USD oc new-project <project> 1 1 Replace <project> with the name for the project you are creating. Confirm that the network policy objects in the new project template exist in the new project: USD oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s 6.3.7. Configuring multitenant isolation with network policy As a cluster administrator, you can configure your network policies to provide multitenant network isolation. Note Configuring network policies as described in this section provides network isolation similar to the multitenant mode of OpenShift SDN in versions of Red Hat OpenShift Service on AWS. 6.3.7.1. Configuring multitenant isolation by using network policy You can configure your project to isolate it from pods and services in other project namespaces. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. Procedure Create the following NetworkPolicy objects: A policy named allow-from-openshift-ingress . USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: "" podSelector: {} policyTypes: - Ingress EOF Note policy-group.network.openshift.io/ingress: "" is the preferred namespace selector label for OVN-Kubernetes. A policy named allow-from-openshift-monitoring : USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF A policy named allow-same-namespace : USD cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF A policy named allow-from-kube-apiserver-operator : USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress EOF For more details, see New kube-apiserver-operator webhook controller validating health of webhook . 
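Optional: If you also want the same isolation in projects that already exist, one approach is to loop over them from the CLI. The following is a minimal sketch rather than part of the documented procedure: it assumes you have saved the four policies shown above in a local file named multitenant-policies.yaml (a hypothetical file name) and that you want to skip the cluster-managed namespaces.

# Apply the allow-from-openshift-ingress, allow-from-openshift-monitoring,
# allow-same-namespace, and allow-from-kube-apiserver-operator policies
# to every existing user project.
for ns in $(oc get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  case "$ns" in
    openshift*|kube*|default) continue ;;  # skip cluster-managed namespaces
  esac
  oc apply -n "$ns" -f multitenant-policies.yaml
done

Projects created after this point are covered by adding the same policies to the new project template, as described in the previous section.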
Optional: To confirm that the network policies exist in your current project, enter the following command: USD oc describe networkpolicy Example output Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress 6.4. Audit logging for network security The OVN-Kubernetes network plugin uses Open Virtual Network (OVN) access control lists (ACLs) to manage AdminNetworkPolicy , BaselineAdminNetworkPolicy , NetworkPolicy , and EgressFirewall objects. Audit logging exposes allow and deny ACL events for NetworkPolicy , EgressFirewall , and BaselineAdminNetworkPolicy custom resources (CRs). Logging also exposes allow , deny , and pass ACL events for AdminNetworkPolicy (ANP) CRs. Note Audit logging is available only for the OVN-Kubernetes network plugin . 6.4.1. Audit configuration The configuration for audit logging is specified as part of the OVN-Kubernetes cluster network provider configuration. The following YAML illustrates the default values for the audit logging: Audit logging configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: "null" maxFileSize: 50 rateLimit: 20 syslogFacility: local0 The following table describes the configuration fields for audit logging. Table 6.1. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . 6.4.2. Audit logging You can configure the destination for audit logs, such as a syslog server or a UNIX domain socket. Regardless of any additional configuration, an audit log is always saved to /var/log/ovn/acl-audit-log.log on each OVN-Kubernetes pod in the cluster. You can enable audit logging for each namespace by annotating each namespace configuration with a k8s.ovn.org/acl-logging section. In the k8s.ovn.org/acl-logging section, you must specify allow , deny , or both values to enable audit logging for a namespace. Note A network policy does not support setting the Pass action as a rule.
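For example, to forward the audit messages to the udp syslog target listed in Table 6.1 in addition to the local log file, you can patch the cluster Network custom resource. This is a minimal sketch: the address 203.0.113.20:514 is a placeholder for your own syslog collector, and all other policyAuditConfig fields keep their defaults.

# Forward ACL audit messages to an external syslog server over UDP
oc patch network.operator.openshift.io/cluster --type=merge \
  -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"policyAuditConfig":{"destination":"udp:203.0.113.20:514"}}}}}'

Setting destination back to "null" stops the forwarding; the local /var/log/ovn/acl-audit-log.log file on each OVN-Kubernetes pod is written regardless.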
The ACL-logging implementation logs access control list (ACL) events for a network. You can view these logs to analyze any potential security issues. Example namespace annotation kind: Namespace apiVersion: v1 metadata: name: example1 annotations: k8s.ovn.org/acl-logging: |- { "deny": "info", "allow": "info" } To view the default ACL logging configuration values, see the policyAuditConfig object in the cluster-network-03-config.yml file. If required, you can change the ACL logging configuration values for log file parameters in this file. The logging message format is compatible with syslog as defined by RFC5424. The syslog facility is configurable and defaults to local0 . The following example shows key parameters and their values output in a log message: Example logging message that outputs parameters and their values <timestamp>|<message_serial>|acl_log(ovn_pinctrl0)|<severity>|name="<acl_name>", verdict="<verdict>", severity="<severity>", direction="<direction>": <flow> Where: <timestamp> states the time and date for the creation of a log message. <message_serial> lists the serial number for a log message. acl_log(ovn_pinctrl0) is a literal string that prints the location of the log message in the OVN-Kubernetes plugin. <severity> sets the severity level for a log message. If you enable audit logging that supports allow and deny tasks, then two severity levels show in the log message output. <name> states the name of the ACL-logging implementation in the OVN Network Bridging Database ( nbdb ) that was created by the network policy. <verdict> can be either allow or drop . <direction> can be either to-lport or from-lport to indicate that the policy was applied to traffic going to or away from a pod. <flow> shows packet information in a format equivalent to the OpenFlow protocol. This parameter comprises Open vSwitch (OVS) fields. The following example shows OVS fields that the flow parameter uses to extract packet information from system memory: Example of OVS fields used by the flow parameter to extract packet information <proto>,vlan_tci=0x0000,dl_src=<src_mac>,dl_dst=<source_mac>,nw_src=<source_ip>,nw_dst=<target_ip>,nw_tos=<tos_dscp>,nw_ecn=<tos_ecn>,nw_ttl=<ip_ttl>,nw_frag=<fragment>,tp_src=<tcp_src_port>,tp_dst=<tcp_dst_port>,tcp_flags=<tcp_flags> Where: <proto> states the protocol. Valid values are tcp and udp . vlan_tci=0x0000 states the VLAN header as 0 because a VLAN ID is not set for internal pod network traffic. <src_mac> specifies the source for the Media Access Control (MAC) address. <source_mac> specifies the destination for the MAC address. <source_ip> lists the source IP address. <target_ip> lists the target IP address. <tos_dscp> states Differentiated Services Code Point (DSCP) values to classify and prioritize certain network traffic over other traffic. <tos_ecn> states Explicit Congestion Notification (ECN) values that indicate any congested traffic in your network. <ip_ttl> states the Time To Live (TTL) information for a packet. <fragment> specifies what type of IP fragments or IP non-fragments to match. <tcp_src_port> shows the source for the port for TCP and UDP protocols. <tcp_dst_port> lists the destination port for TCP and UDP protocols. <tcp_flags> supports numerous flags such as SYN , ACK , PSH , and so on. If you need to set multiple values, then each value is separated by a vertical bar ( | ). The UDP protocol does not support this parameter. Note For more information about the field descriptions, go to the OVS manual page for ovs-fields .
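Because every message follows this format, ordinary text tools are enough to filter the log. The following is a minimal sketch, not part of the documented procedure, that prints the most recent dropped connections from each node; it assumes the grep utility is available in the ovnkube-node image.

# Show the last five dropped connections recorded on each node
for pod in $(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node -o name); do
  echo "== $pod =="
  oc exec -n openshift-ovn-kubernetes "$pod" -- \
    sh -c 'grep "verdict=drop" /var/log/ovn/acl-audit-log.log | tail -5'
done

The name= field in each line then tells you which network policy, ANP, or BANP rule produced the drop.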
Example ACL deny log entry for a network policy 2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:55.187Z|00005|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:57.235Z|00006|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn The following table describes namespace annotation values: Table 6.2. Audit logging namespace annotation for k8s.ovn.org/acl-logging Field Description deny Blocks namespace access to any traffic that matches an ACL rule with the deny action. The field supports alert , warning , notice , info , or debug values. allow Permits namespace access to any traffic that matches an ACL rule with the allow action. The field supports alert , warning , notice , info , or debug values. pass A pass action applies to an admin network policy's ACL rule. A pass action allows either the network policy in the namespace or the baseline admin network policy rule to evaluate all incoming and outgoing traffic. A network policy does not support a pass action. Additional resources Understanding network policy APIs 6.4.3. AdminNetworkPolicy audit logging Audit logging is enabled per AdminNetworkPolicy CR by annotating an ANP policy with the k8s.ovn.org/acl-logging key such as in the following example: Example 6.8. Example of annotation for AdminNetworkPolicy CR apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: annotations: k8s.ovn.org/acl-logging: '{ "deny": "alert", "allow": "alert", "pass" : "warning" }' name: anp-tenant-log spec: priority: 5 subject: namespaces: matchLabels: tenant: backend-storage # Selects all pods owned by storage tenant. ingress: - name: "allow-all-ingress-product-development-and-customer" # Product development and customer tenant ingress to backend storage. action: "Allow" from: - pods: namespaceSelector: matchExpressions: - key: tenant operator: In values: - product-development - customer podSelector: {} - name: "pass-all-ingress-product-security" action: "Pass" from: - namespaces: matchLabels: tenant: product-security - name: "deny-all-ingress" # Ingress to backend from all other pods in the cluster. action: "Deny" from: - namespaces: {} egress: - name: "allow-all-egress-product-development" action: "Allow" to: - pods: namespaceSelector: matchLabels: tenant: product-development podSelector: {} - name: "pass-egress-product-security" action: "Pass" to: - namespaces: matchLabels: tenant: product-security - name: "deny-all-egress" # Egress from backend denied to all other pods. action: "Deny" to: - namespaces: {} Logs are generated whenever a specific OVN ACL is hit and meets the action criteria set in your logging annotation. 
For example, when any of the namespaces with the label tenant: product-development accesses the namespaces with the label tenant: backend-storage , a log is generated. Note ACL logging is limited to 60 characters. If your ANP name field is long, the rest of the log will be truncated. The following is a direction index for the example log entries that follow: Direction Rule Ingress Rule0 Allow from tenant product-development and customer to tenant backend-storage ; Ingress0: Allow Rule1 Pass from tenant product-security to tenant backend-storage ; Ingress1: Pass Rule2 Deny ingress from all pods; Ingress2: Deny Egress Rule0 Allow to product-development ; Egress0: Allow Rule1 Pass to product-security ; Egress1: Pass Rule2 Deny egress to all other pods; Egress2: Deny Example 6.9. Example ACL log entry for Allow action of the AdminNetworkPolicy named anp-tenant-log with Ingress:0 and Egress:0 2024-06-10T16:27:45.194Z|00052|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:1a,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.26,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=57814,tp_dst=8080,tcp_flags=syn 2024-06-10T16:28:23.130Z|00059|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:18,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.24,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=38620,tp_dst=8080,tcp_flags=ack 2024-06-10T16:28:38.293Z|00069|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Egress:0", verdict=allow, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:19,dl_dst=0a:58:0a:80:02:1a,nw_src=10.128.2.25,nw_dst=10.128.2.26,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=47566,tp_dst=8080,tcp_flags=fin|ack=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=55704,tp_dst=8080,tcp_flags=ack Example 6.10. Example ACL log entry for Pass action of the AdminNetworkPolicy named anp-tenant-log with Ingress:1 and Egress:1 2024-06-10T16:33:12.019Z|00075|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Ingress:1", verdict=pass, severity=warning, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:1b,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.27,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=37394,tp_dst=8080,tcp_flags=ack 2024-06-10T16:35:04.209Z|00081|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Egress:1", verdict=pass, severity=warning, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:19,dl_dst=0a:58:0a:80:02:1b,nw_src=10.128.2.25,nw_dst=10.128.2.27,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=34018,tp_dst=8080,tcp_flags=ack Example 6.11.
Example ACL log entry for Deny action of the AdminNetworkPolicy named anp-tenant-log with Egress:2 and Ingress:2 2024-06-10T16:43:05.287Z|00087|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Egress:2", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:19,dl_dst=0a:58:0a:80:02:18,nw_src=10.128.2.25,nw_dst=10.128.2.24,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=51598,tp_dst=8080,tcp_flags=syn 2024-06-10T16:44:43.591Z|00090|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Ingress:2", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:1c,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.28,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=33774,tp_dst=8080,tcp_flags=syn The following table describes the ANP annotation: Table 6.3. Audit logging AdminNetworkPolicy annotation Annotation Value k8s.ovn.org/acl-logging You must specify at least one of Allow , Deny , or Pass to enable audit logging for a namespace. Deny Optional: Specify alert , warning , notice , info , or debug . Allow Optional: Specify alert , warning , notice , info , or debug . Pass Optional: Specify alert , warning , notice , info , or debug . 6.4.4. BaselineAdminNetworkPolicy audit logging Audit logging is enabled in the BaselineAdminNetworkPolicy CR by annotating a BANP policy with the k8s.ovn.org/acl-logging key such as in the following example: Example 6.12. Example of annotation for BaselineAdminNetworkPolicy CR apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: annotations: k8s.ovn.org/acl-logging: '{ "deny": "alert", "allow": "alert"}' name: default spec: subject: namespaces: matchLabels: tenant: workloads # Selects all workload pods in the cluster. ingress: - name: "default-allow-dns" # This rule allows ingress from dns tenant to all workloads. action: "Allow" from: - namespaces: matchLabels: tenant: dns - name: "default-deny-dns" # This rule denies all ingress from all pods to workloads. action: "Deny" from: - namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well. egress: - name: "default-deny-dns" # This rule denies all egress from workloads. It will be applied when no ANP or network policy matches. action: "Deny" to: - namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well. In the example, when any of the namespaces with the label tenant: dns accesses the namespaces with the label tenant: workloads , a log is generated. The following is a direction index for the example log entries that follow: Direction Rule Ingress Rule0 Allow from tenant dns to tenant workloads ; Ingress0: Allow Rule1 Deny to tenant workloads from all pods; Ingress1: Deny Egress Rule0 Deny to all pods; Egress0: Deny Example 6.13.
Example ACL allow log entry for Allow action of default BANP with Ingress:0 2024-06-10T18:11:58.263Z|00022|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=syn 2024-06-10T18:11:58.264Z|00023|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=psh|ack 2024-06-10T18:11:58.264Z|00024|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=ack 2024-06-10T18:11:58.264Z|00025|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=ack 2024-06-10T18:11:58.264Z|00026|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=fin|ack 2024-06-10T18:11:58.264Z|00027|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=ack Example 6.14. 
Example ACL deny log entry for Deny action of default BANP with Egress:0 and Ingress:1 2024-06-10T18:09:57.774Z|00016|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Egress:0", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:56,dl_dst=0a:58:0a:82:02:57,nw_src=10.130.2.86,nw_dst=10.130.2.87,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=45614,tp_dst=8080,tcp_flags=syn 2024-06-10T18:09:58.809Z|00017|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Egress:0", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:56,dl_dst=0a:58:0a:82:02:57,nw_src=10.130.2.86,nw_dst=10.130.2.87,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=45614,tp_dst=8080,tcp_flags=syn 2024-06-10T18:10:00.857Z|00018|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Egress:0", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:56,dl_dst=0a:58:0a:82:02:57,nw_src=10.130.2.86,nw_dst=10.130.2.87,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=45614,tp_dst=8080,tcp_flags=syn 2024-06-10T18:10:25.414Z|00019|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:1", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:58,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.88,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=40630,tp_dst=8080,tcp_flags=syn 2024-06-10T18:10:26.457Z|00020|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:1", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:58,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.88,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=40630,tp_dst=8080,tcp_flags=syn 2024-06-10T18:10:28.505Z|00021|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:1", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:58,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.88,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=40630,tp_dst=8080,tcp_flags=syn The following table describes the BANP annotation: Table 6.4. Audit logging BaselineAdminNetworkPolicy annotation Annotation Value k8s.ovn.org/acl-logging You must specify at least one of Allow or Deny to enable audit logging for a namespace. Deny Optional: Specify alert , warning , notice , info , or debug . Allow Optional: Specify alert , warning , notice , info , or debug . 6.4.5. Configuring egress firewall and network policy auditing for a cluster As a cluster administrator, you can customize audit logging for your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges.
Procedure To customize the audit logging configuration, enter the following command: USD oc edit network.operator.openshift.io/cluster Tip You can alternatively customize and apply the following YAML to configure audit logging: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: "null" maxFileSize: 50 rateLimit: 20 syslogFacility: local0 Verification To create a namespace with network policies, complete the following steps: Create a namespace for verification: USD cat <<EOF| oc create -f - kind: Namespace apiVersion: v1 metadata: name: verify-audit-logging annotations: k8s.ovn.org/acl-logging: '{ "deny": "alert", "allow": "alert" }' EOF Example output namespace/verify-audit-logging created Create network policies for the namespace: USD cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny-all spec: podSelector: matchLabels: policyTypes: - Ingress - Egress --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace namespace: verify-audit-logging spec: podSelector: {} policyTypes: - Ingress - Egress ingress: - from: - podSelector: {} egress: - to: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: verify-audit-logging EOF Example output networkpolicy.networking.k8s.io/deny-all created networkpolicy.networking.k8s.io/allow-from-same-namespace created Create a pod for source traffic in the default namespace: USD cat <<EOF| oc create -n default -f - apiVersion: v1 kind: Pod metadata: name: client spec: containers: - name: client image: registry.access.redhat.com/rhel7/rhel-tools command: ["/bin/sh", "-c"] args: ["sleep inf"] EOF Create two pods in the verify-audit-logging namespace: USD for name in client server; do cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: v1 kind: Pod metadata: name: USD{name} spec: containers: - name: USD{name} image: registry.access.redhat.com/rhel7/rhel-tools command: ["/bin/sh", "-c"] args: ["sleep inf"] EOF done Example output pod/client created pod/server created To generate traffic and produce network policy audit log entries, complete the following steps: Obtain the IP address for the pod named server in the verify-audit-logging namespace: USD POD_IP=USD(oc get pods server -n verify-audit-logging -o jsonpath='{.status.podIP}') Ping the IP address from the pod named client in the default namespace and confirm that all packets are dropped: USD oc exec -it client -n default -- /bin/ping -c 2 USDPOD_IP Example output PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data. --- 10.128.2.55 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 2041ms Ping the IP address saved in the POD_IP shell environment variable from the pod named client in the verify-audit-logging namespace and confirm that all packets are allowed: USD oc exec -it client -n verify-audit-logging -- /bin/ping -c 2 USDPOD_IP Example output PING 10.128.0.86 (10.128.0.86) 56(84) bytes of data.
64 bytes from 10.128.0.86: icmp_seq=1 ttl=64 time=2.21 ms 64 bytes from 10.128.0.86: icmp_seq=2 ttl=64 time=0.440 ms --- 10.128.0.86 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.440/1.329/2.219/0.890 ms Display the latest entries in the network policy audit log: USD for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done Example output 2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:55.187Z|00005|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:57.235Z|00006|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:49:57.909Z|00028|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:57.909Z|00029|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00030|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00031|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 6.4.6. Enabling egress firewall and network policy audit logging for a namespace As a cluster administrator, you can enable audit logging for a namespace. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To enable audit logging for a namespace, enter the following command: USD oc annotate namespace <namespace> \ k8s.ovn.org/acl-logging='{ "deny": "alert", "allow": "notice" }' where: <namespace> Specifies the name of the namespace. 
Tip You can alternatively apply the following YAML to enable audit logging: kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: |- { "deny": "alert", "allow": "notice" } Example output namespace/verify-audit-logging annotated Verification Display the latest entries in the audit log: USD for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done Example output 2023-11-02T16:49:57.909Z|00028|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:57.909Z|00029|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00030|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00031|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 6.4.7. Disabling egress firewall and network policy audit logging for a namespace As a cluster administrator, you can disable audit logging for a namespace. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To disable audit logging for a namespace, enter the following command: USD oc annotate --overwrite namespace <namespace> k8s.ovn.org/acl-logging- where: <namespace> Specifies the name of the namespace. Tip You can alternatively apply the following YAML to disable audit logging: kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: null Example output namespace/verify-audit-logging annotated 6.4.8. Additional resources | [
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: sample-anp-deny-pass-rules 1 spec: priority: 50 2 subject: namespaces: matchLabels: kubernetes.io/metadata.name: example.name 3 ingress: 4 - name: \"deny-all-ingress-tenant-1\" 5 action: \"Deny\" from: - pods: namespaceSelector: matchLabels: custom-anp: tenant-1 podSelector: matchLabels: custom-anp: tenant-1 6 egress: 7 - name: \"pass-all-egress-to-tenant-1\" action: \"Pass\" to: - pods: namespaceSelector: matchLabels: custom-anp: tenant-1 podSelector: matchLabels: custom-anp: tenant-1",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: allow-monitoring spec: priority: 9 subject: namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well. ingress: - name: \"allow-ingress-from-monitoring\" action: \"Allow\" from: - namespaces: matchLabels: kubernetes.io/metadata.name: monitoring",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: block-monitoring spec: priority: 5 subject: namespaces: matchLabels: security: restricted ingress: - name: \"deny-ingress-from-monitoring\" action: \"Deny\" from: - namespaces: matchLabels: kubernetes.io/metadata.name: monitoring",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: pass-monitoring spec: priority: 7 subject: namespaces: matchLabels: security: internal ingress: - name: \"pass-ingress-from-monitoring\" action: \"Pass\" from: - namespaces: matchLabels: kubernetes.io/metadata.name: monitoring",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: name: default 1 spec: subject: namespaces: matchLabels: kubernetes.io/metadata.name: example.name 2 ingress: 3 - name: \"deny-all-ingress-from-tenant-1\" 4 action: \"Deny\" from: - pods: namespaceSelector: matchLabels: custom-banp: tenant-1 5 podSelector: matchLabels: custom-banp: tenant-1 6 egress: - name: \"allow-all-egress-to-tenant-1\" action: \"Allow\" to: - pods: namespaceSelector: matchLabels: custom-banp: tenant-1 podSelector: matchLabels: custom-banp: tenant-1",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: name: default spec: subject: namespaces: matchLabels: security: internal ingress: - name: \"deny-ingress-from-monitoring\" action: \"Deny\" from: - namespaces: matchLabels: kubernetes.io/metadata.name: monitoring",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-monitoring namespace: tenant 1 spec: podSelector: policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: []",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {}",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-router spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" 1 podSelector: {} policyTypes: - Ingress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-hostnetwork spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/host-network: \"\" podSelector: {} policyTypes: - Ingress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchLabels: role: frontend - from: - podSelector: matchLabels: role: backend",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchExpressions: - {key: role, operator: In, values: [frontend, backend]}",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy1 spec: podSelector: matchLabels: role: db ingress: - from: - podSelector: matchLabels: role: frontend --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy2 spec: podSelector: matchLabels: role: client ingress: - from: - podSelector: matchLabels: role: frontend",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy3 spec: podSelector: matchExpressions: - {key: role, operator: In, values: [db, client]} ingress: - from: - podSelector: matchLabels: role: frontend",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017",
"touch <policy_name>.yaml",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} policyTypes: - Ingress ingress: []",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {}",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-traffic-pod spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y",
"oc apply -f <policy_name>.yaml -n <namespace>",
"networkpolicy.networking.k8s.io/deny-by-default created",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: default 1 spec: podSelector: {} 2 ingress: [] 3",
"oc apply -f deny-by-default.yaml",
"networkpolicy.networking.k8s.io/deny-by-default created",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-external namespace: default spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {}",
"oc apply -f web-allow-external.yaml",
"networkpolicy.networking.k8s.io/web-allow-external created",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-all-namespaces namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2",
"oc apply -f web-allow-all-namespaces.yaml",
"networkpolicy.networking.k8s.io/web-allow-all-namespaces created",
"oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80",
"oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-prod namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2",
"oc apply -f web-allow-prod.yaml",
"networkpolicy.networking.k8s.io/web-allow-prod created",
"oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80",
"oc create namespace prod",
"oc label namespace/prod purpose=production",
"oc create namespace dev",
"oc label namespace/dev purpose=testing",
"oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"wget: download timed out",
"oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017",
"oc get networkpolicy",
"oc describe networkpolicy <policy_name> -n <namespace>",
"oc describe networkpolicy allow-same-namespace",
"Name: allow-same-namespace Namespace: ns1 Created on: 2021-05-24 22:28:56 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: PodSelector: <none> Not affecting egress traffic Policy Types: Ingress",
"oc get networkpolicy",
"oc apply -n <namespace> -f <policy_file>.yaml",
"oc edit networkpolicy <policy_name> -n <namespace>",
"oc describe networkpolicy <policy_name> -n <namespace>",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017",
"oc delete networkpolicy <policy_name> -n <namespace>",
"networkpolicy.networking.k8s.io/default-deny deleted",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"oc create -f template.yaml -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>",
"oc edit template <project_template> -n openshift-config",
"objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress",
"oc new-project <project> 1",
"oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s",
"cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" podSelector: {} policyTypes: - Ingress EOF",
"cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF",
"cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF",
"cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress EOF",
"oc describe networkpolicy",
"Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: \"null\" maxFileSize: 50 rateLimit: 20 syslogFacility: local0",
"kind: Namespace apiVersion: v1 metadata: name: example1 annotations: k8s.ovn.org/acl-logging: |- { \"deny\": \"info\", \"allow\": \"info\" }",
"<timestamp>|<message_serial>|acl_log(ovn_pinctrl0)|<severity>|name=\"<acl_name>\", verdict=\"<verdict>\", severity=\"<severity>\", direction=\"<direction>\": <flow>",
"<proto>,vlan_tci=0x0000,dl_src=<src_mac>,dl_dst=<source_mac>,nw_src=<source_ip>,nw_dst=<target_ip>,nw_tos=<tos_dscp>,nw_ecn=<tos_ecn>,nw_ttl=<ip_ttl>,nw_frag=<fragment>,tp_src=<tcp_src_port>,tp_dst=<tcp_dst_port>,tcp_flags=<tcp_flags>",
"2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:55.187Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:57.235Z|00006|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: annotations: k8s.ovn.org/acl-logging: '{ \"deny\": \"alert\", \"allow\": \"alert\", \"pass\" : \"warning\" }' name: anp-tenant-log spec: priority: 5 subject: namespaces: matchLabels: tenant: backend-storage # Selects all pods owned by storage tenant. ingress: - name: \"allow-all-ingress-product-development-and-customer\" # Product development and customer tenant ingress to backend storage. action: \"Allow\" from: - pods: namespaceSelector: matchExpressions: - key: tenant operator: In values: - product-development - customer podSelector: {} - name: \"pass-all-ingress-product-security\" action: \"Pass\" from: - namespaces: matchLabels: tenant: product-security - name: \"deny-all-ingress\" # Ingress to backend from all other pods in the cluster. action: \"Deny\" from: - namespaces: {} egress: - name: \"allow-all-egress-product-development\" action: \"Allow\" to: - pods: namespaceSelector: matchLabels: tenant: product-development podSelector: {} - name: \"pass-egress-product-security\" action: \"Pass\" to: - namespaces: matchLabels: tenant: product-security - name: \"deny-all-egress\" # Egress from backend denied to all other pods. action: \"Deny\" to: - namespaces: {}",
"2024-06-10T16:27:45.194Z|00052|acl_log(ovn_pinctrl0)|INFO|name=\"ANP:anp-tenant-log:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:1a,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.26,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=57814,tp_dst=8080,tcp_flags=syn 2024-06-10T16:28:23.130Z|00059|acl_log(ovn_pinctrl0)|INFO|name=\"ANP:anp-tenant-log:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:18,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.24,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=38620,tp_dst=8080,tcp_flags=ack 2024-06-10T16:28:38.293Z|00069|acl_log(ovn_pinctrl0)|INFO|name=\"ANP:anp-tenant-log:Egress:0\", verdict=allow, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:19,dl_dst=0a:58:0a:80:02:1a,nw_src=10.128.2.25,nw_dst=10.128.2.26,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=47566,tp_dst=8080,tcp_flags=fin|ack=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=55704,tp_dst=8080,tcp_flags=ack",
"2024-06-10T16:33:12.019Z|00075|acl_log(ovn_pinctrl0)|INFO|name=\"ANP:anp-tenant-log:Ingress:1\", verdict=pass, severity=warning, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:1b,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.27,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=37394,tp_dst=8080,tcp_flags=ack 2024-06-10T16:35:04.209Z|00081|acl_log(ovn_pinctrl0)|INFO|name=\"ANP:anp-tenant-log:Egress:1\", verdict=pass, severity=warning, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:19,dl_dst=0a:58:0a:80:02:1b,nw_src=10.128.2.25,nw_dst=10.128.2.27,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=34018,tp_dst=8080,tcp_flags=ack",
"2024-06-10T16:43:05.287Z|00087|acl_log(ovn_pinctrl0)|INFO|name=\"ANP:anp-tenant-log:Egress:2\", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:19,dl_dst=0a:58:0a:80:02:18,nw_src=10.128.2.25,nw_dst=10.128.2.24,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=51598,tp_dst=8080,tcp_flags=syn 2024-06-10T16:44:43.591Z|00090|acl_log(ovn_pinctrl0)|INFO|name=\"ANP:anp-tenant-log:Ingress:2\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:1c,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.28,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=33774,tp_dst=8080,tcp_flags=syn",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: annotations: k8s.ovn.org/acl-logging: '{ \"deny\": \"alert\", \"allow\": \"alert\"}' name: default spec: subject: namespaces: matchLabels: tenant: workloads # Selects all workload pods in the cluster. ingress: - name: \"default-allow-dns\" # This rule allows ingress from dns tenant to all workloads. action: \"Allow\" from: - namespaces: matchLabels: tenant: dns - name: \"default-deny-dns\" # This rule denies all ingress from all pods to workloads. action: \"Deny\" from: - namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well. egress: - name: \"default-deny-dns\" # This rule denies all egress from workloads. It will be applied when no ANP or network policy matches. action: \"Deny\" to: - namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well.",
"2024-06-10T18:11:58.263Z|00022|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=syn 2024-06-10T18:11:58.264Z|00023|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=psh|ack 2024-06-10T18:11:58.264Z|00024|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=ack 2024-06-10T18:11:58.264Z|00025|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=ack 2024-06-10T18:11:58.264Z|00026|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=fin|ack 2024-06-10T18:11:58.264Z|00027|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=ack",
"2024-06-10T18:09:57.774Z|00016|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Egress:0\", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:56,dl_dst=0a:58:0a:82:02:57,nw_src=10.130.2.86,nw_dst=10.130.2.87,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=45614,tp_dst=8080,tcp_flags=syn 2024-06-10T18:09:58.809Z|00017|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Egress:0\", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:56,dl_dst=0a:58:0a:82:02:57,nw_src=10.130.2.86,nw_dst=10.130.2.87,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=45614,tp_dst=8080,tcp_flags=syn 2024-06-10T18:10:00.857Z|00018|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Egress:0\", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:56,dl_dst=0a:58:0a:82:02:57,nw_src=10.130.2.86,nw_dst=10.130.2.87,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=45614,tp_dst=8080,tcp_flags=syn 2024-06-10T18:10:25.414Z|00019|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:1\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:58,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.88,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=40630,tp_dst=8080,tcp_flags=syn 2024-06-10T18:10:26.457Z|00020|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:1\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:58,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.88,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=40630,tp_dst=8080,tcp_flags=syn 2024-06-10T18:10:28.505Z|00021|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:1\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:58,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.88,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=40630,tp_dst=8080,tcp_flags=syn",
"oc edit network.operator.openshift.io/cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: \"null\" maxFileSize: 50 rateLimit: 20 syslogFacility: local0",
"cat <<EOF| oc create -f - kind: Namespace apiVersion: v1 metadata: name: verify-audit-logging annotations: k8s.ovn.org/acl-logging: '{ \"deny\": \"alert\", \"allow\": \"alert\" }' EOF",
"namespace/verify-audit-logging created",
"cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny-all spec: podSelector: matchLabels: policyTypes: - Ingress - Egress --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace namespace: verify-audit-logging spec: podSelector: {} policyTypes: - Ingress - Egress ingress: - from: - podSelector: {} egress: - to: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: verify-audit-logging EOF",
"networkpolicy.networking.k8s.io/deny-all created networkpolicy.networking.k8s.io/allow-from-same-namespace created",
"cat <<EOF| oc create -n default -f - apiVersion: v1 kind: Pod metadata: name: client spec: containers: - name: client image: registry.access.redhat.com/rhel7/rhel-tools command: [\"/bin/sh\", \"-c\"] args: [\"sleep inf\"] EOF",
"for name in client server; do cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: v1 kind: Pod metadata: name: USD{name} spec: containers: - name: USD{name} image: registry.access.redhat.com/rhel7/rhel-tools command: [\"/bin/sh\", \"-c\"] args: [\"sleep inf\"] EOF done",
"pod/client created pod/server created",
"POD_IP=USD(oc get pods server -n verify-audit-logging -o jsonpath='{.status.podIP}')",
"oc exec -it client -n default -- /bin/ping -c 2 USDPOD_IP",
"PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data. --- 10.128.2.55 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 2041ms",
"oc exec -it client -n verify-audit-logging -- /bin/ping -c 2 USDPOD_IP",
"PING 10.128.0.86 (10.128.0.86) 56(84) bytes of data. 64 bytes from 10.128.0.86: icmp_seq=1 ttl=64 time=2.21 ms 64 bytes from 10.128.0.86: icmp_seq=2 ttl=64 time=0.440 ms --- 10.128.0.86 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.440/1.329/2.219/0.890 ms",
"for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done",
"2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:55.187Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:57.235Z|00006|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:49:57.909Z|00028|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Egress:0\", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:57.909Z|00029|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00030|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Egress:0\", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00031|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0",
"oc annotate namespace <namespace> k8s.ovn.org/acl-logging='{ \"deny\": \"alert\", \"allow\": \"notice\" }'",
"kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: |- { \"deny\": \"alert\", \"allow\": \"notice\" }",
"namespace/verify-audit-logging annotated",
"for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done",
"2023-11-02T16:49:57.909Z|00028|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Egress:0\", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:57.909Z|00029|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00030|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Egress:0\", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00031|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0",
"oc annotate --overwrite namespace <namespace> k8s.ovn.org/acl-logging-",
"kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: null",
"namespace/verify-audit-logging annotated"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/networking/network-security |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/replacing_nodes/providing-feedback-on-red-hat-documentation_rhodf |
Chapter 6. Upgrading the Ansible plug-ins on an Operator installation on OpenShift Container Platform | Chapter 6. Upgrading the Ansible plug-ins on an Operator installation on OpenShift Container Platform To upgrade the Ansible plug-ins, you must update the plugin-registry application with the latest Ansible plug-ins files. 6.1. Downloading the Ansible plug-ins files Download the latest .tar file for the plug-ins from the Red Hat Ansible Automation Platform Product Software downloads page . The format of the filename is ansible-backstage-rhaap-bundle-x.y.z.tar.gz . Substitute the Ansible plug-ins release version, for example 1.0.0 , for x.y.z . Create a directory on your local machine to store the .tar files. USD mkdir /path/to/<ansible-backstage-plugins-local-dir-changeme> Set an environment variable ( USDDYNAMIC_PLUGIN_ROOT_DIR ) to represent the directory path. USD export DYNAMIC_PLUGIN_ROOT_DIR=/path/to/<ansible-backstage-plugins-local-dir-changeme> Extract the ansible-backstage-rhaap-bundle-<version-number>.tar.gz contents to USDDYNAMIC_PLUGIN_ROOT_DIR . USD tar --exclude='*code*' -xzf ansible-backstage-rhaap-bundle-x.y.z.tar.gz -C USDDYNAMIC_PLUGIN_ROOT_DIR Substitute the Ansible plug-ins release version, for example 1.0.0 , for x.y.z . Verification Run ls to verify that the extracted files are in the USDDYNAMIC_PLUGIN_ROOT_DIR directory: USD ls USDDYNAMIC_PLUGIN_ROOT_DIR ansible-plugin-backstage-rhaap-x.y.z.tgz ansible-plugin-backstage-rhaap-x.y.z.tgz.integrity ansible-plugin-backstage-rhaap-backend-x.y.z.tgz ansible-plugin-backstage-rhaap-backend-x.y.z.tgz.integrity ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz.integrity The files with the .integrity file type contain the plugin SHA value. The SHA value is used during the plug-in configuration. 6.2. Updating the plug-in registry Rebuild your plug-in registry application in your OpenShift cluster with the latest Ansible plug-ins files. Prerequisites You have downloaded the Ansible plug-ins files. You have set an environment variable, for example ( USDDYNAMIC_PLUGIN_ROOT_DIR ), to represent the path to the local directory where you have stored the .tar files. Procedure Log in to your OpenShift Container Platform instance with credentials to create a new application. Open your Red Hat Developer Hub OpenShift project. USD oc project <YOUR_DEVELOPER_HUB_PROJECT> Run the following commands to update your plug-in registry build in the OpenShift cluster. The commands assume that USDDYNAMIC_PLUGIN_ROOT_DIR represents the directory for your .tar files. Replace this in the command if you have chosen a different environment variable name. USD oc start-build plugin-registry --from-dir=USDDYNAMIC_PLUGIN_ROOT_DIR --wait USD oc start-build plugin-registry --from-dir=USDDYNAMIC_PLUGIN_ROOT_DIR --wait When the registry has started, the output displays the following message: Uploading directory "/path/to/dynamic_plugin_root" as binary input for the build ... Uploading finished build.build.openshift.io/plugin-registry-1 started Verification Verify that the plugin-registry has been updated. In the OpenShift UI, click Topology . Click the redhat-developer-hub icon to view the pods for the plug-in registry. Click View logs for the plug-in registry pod. Open the Terminal tab and run ls to view the .tar files in the plug-in registry . Verify that the new .tar file has been uploaded. 6.3. 
Updating the Ansible plug-ins version numbers for an Operator installation Procedure Log in to your OpenShift Container Platform instance. In the OpenShift UI, open the ConfigMap where you added the Ansible plug-ins during installation. This example uses a ConfigMap file called rhaap-dynamic-plugins-config . Update x.y.z with the version numbers for the updated Ansible plug-ins. Update the integrity values for each plug-in with the .integrity value from the corresponding extracted Ansible plug-ins .tar file. kind: ConfigMap apiVersion: v1 metadata: name: rhaap-dynamic-plugins-config data: dynamic-plugins.yaml: | ... plugins: # Update the Ansible plug-in entries below with the updated plugin versions - disabled: false package: 'http://plugin-registry:8080/ansible-plugin-backstage-rhaap-x.y.z.tgz' integrity: <SHA512 value> # Use hash in ansible-plugin-backstage-rhaap-x.y.z.tgz.integrity ... - disabled: false package: >- http://plugin-registry:8080/ansible-plugin-backstage-rhaap-backend-x.y.z.tgz integrity: <SHA512 value> # Use hash in ansible-plugin-backstage-rhaap-backend-x.y.z.tgz.integrity ... - disabled: false package: >- http://plugin-registry:8080/ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz integrity: <SHA512 value> # Use hash in ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz.integrity ... Click Save . The developer hub pods restart and the plug-ins are installed. Verification In the OpenShift UI, click Topology . Make sure that the Red Hat Developer Hub instance is available. | [
"mkdir /path/to/<ansible-backstage-plugins-local-dir-changeme>",
"export DYNAMIC_PLUGIN_ROOT_DIR=/path/to/<ansible-backstage-plugins-local-dir-changeme>",
"tar --exclude='*code*' -xzf ansible-backstage-rhaap-bundle-x.y.z.tar.gz -C USDDYNAMIC_PLUGIN_ROOT_DIR",
"ls USDDYNAMIC_PLUGIN_ROOT_DIR ansible-plugin-backstage-rhaap-x.y.z.tgz ansible-plugin-backstage-rhaap-x.y.z.tgz.integrity ansible-plugin-backstage-rhaap-backend-x.y.z.tgz ansible-plugin-backstage-rhaap-backend-x.y.z.tgz.integrity ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz.integrity",
"oc project <YOUR_DEVELOPER_HUB_PROJECT>",
"oc start-build plugin-registry --from-dir=USDDYNAMIC_PLUGIN_ROOT_DIR --wait",
"oc start-build plugin-registry --from-dir=USDDYNAMIC_PLUGIN_ROOT_DIR --wait",
"Uploading directory \"/path/to/dynamic_plugin_root\" as binary input for the build ... Uploading finished build.build.openshift.io/plugin-registry-1 started",
"kind: ConfigMap apiVersion: v1 metadata: name: rhaap-dynamic-plugins-config data: dynamic-plugins.yaml: | plugins: # Update the Ansible plug-in entries below with the updated plugin versions - disabled: false package: 'http://plugin-registry:8080/ansible-plugin-backstage-rhaap-x.y.z.tgz' integrity: <SHA512 value> # Use hash in ansible-plugin-backstage-rhaap-x.y.z.tgz.integrity - disabled: false package: >- http://plugin-registry:8080/ansible-plugin-backstage-rhaap-backend-x.y.z.tgz integrity: <SHA512 value> # Use hash in ansible-plugin-backstage-rhaap-backend-x.y.z.tgz.integrity - disabled: false package: >- http://plugin-registry:8080/ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz integrity: <SHA512 value> # Use hash in ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz.integrity"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/installing_ansible_plug-ins_for_red_hat_developer_hub/rhdh-upgrade-ocp-operator_aap-plugin-rhdh-installing |
Chapter 3. Technology Preview features | Chapter 3. Technology Preview features Important This section describes Technology Preview features in Red Hat OpenShift AI. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Guardrails Orchestrator Framework Red Hat introduces the Guardrails Orchestrator as Technology Preview, enabling flexible AI content filtering with support for multiple detection methods and scalable integration options. This initial release launches advanced response handling capabilities, including blocking and masking of inappropriate content. Distributed InstructLab training InstructLab is an open-source project for enhancing large language models (LLMs) in generative artificial intelligence (gen AI) applications. It fine-tunes models using synthetic data generation (SDG) techniques and a structured taxonomy to create diverse, high-quality training datasets. There are two ways to run the InstructLab training flow: the InstructLab pipeline, which is the recommended approach, and the standalone script: InstructLab pipeline The InstructLab pipeline is now available as a Technology Preview feature, enabling you to run the full InstructLab workflow through a data science pipeline in OpenShift AI. For prerequisites and setup instructions to run this pipeline, see Running InstructLab Pipeline with Data Science Pipelines on RHOAI . Important You must have NVIDIA GPU Operator 24.6 installed to use the InstructLab pipeline in OpenShift AI 2.18. Standalone script Distributed InstructLab training is available as a Technology Preview feature, offering enhanced performance for training tasks in distributed environments compared to single-node setups. By orchestrating multi-node training jobs, this feature improves efficiency and scalability and allows users to leverage distributed resources for more effective AI model development. To implement the full InstructLab training flow with the standalone script, including data preparation, transfer, and distributed execution, see Distributed InstructLab Training on RHOAI . Mandatory Kueue local-queue labeling policy for Ray cluster creation Cluster administrators can use the Validating Admission Policy feature to enforce the mandatory labeling of Ray cluster resources with Kueue local-queue identifiers. This labeling ensures that workloads are properly categorized and routed based on queue management policies, which prevents resource contention and enhances operational efficiency. When the local-queue labeling policy is enforced, Ray clusters are created only if they are configured to use a local queue, and the Ray cluster resources are then managed by Kueue. The local-queue labeling policy is enforced for all projects by default, but can be disabled for some or all projects. For more information about the local-queue labeling policy, see Enforcing the use of local queues . Note This feature might introduce a breaking change for users who did not previously use Kueue local queues to manage their Ray cluster resources. 
RStudio Server notebook image With the RStudio Server notebook image, you can access the RStudio IDE, an integrated development environment for R. The R programming language is used for statistical computing and graphics to support data analysis and predictions. To use the RStudio Server notebook image, you must first build it by creating a secret and triggering the BuildConfig , and then enable it in the OpenShift AI UI by editing the rstudio-rhel9 image stream. For more information, see Building the RStudio Server workbench images . Important Disclaimer: Red Hat supports managing workbenches in OpenShift AI. However, Red Hat does not provide support for the RStudio software. RStudio Server is available through rstudio.org and is subject to their licensing terms. You should review their licensing terms before you use this sample workbench. CUDA - RStudio Server notebook image With the CUDA - RStudio Server notebook image, you can access the RStudio IDE and NVIDIA CUDA Toolkit. The RStudio IDE is an integrated development environment for the R programming language for statistical computing and graphics. With the NVIDIA CUDA toolkit, you can enhance your work by using GPU-accelerated libraries and optimization tools. To use the CUDA - RStudio Server notebook image, you must first build it by creating a secret and triggering the BuildConfig , and then enable it in the OpenShift AI UI by editing the rstudio-rhel9 image stream. For more information, see Building the RStudio Server workbench images . Important Disclaimer: Red Hat supports managing workbenches in OpenShift AI. However, Red Hat does not provide support for the RStudio software. RStudio Server is available through rstudio.org and is subject to their licensing terms. You should review their licensing terms before you use this sample workbench. The CUDA - RStudio Server notebook image contains NVIDIA CUDA technology. CUDA licensing information is available in the CUDA Toolkit documentation. You should review their licensing terms before you use this sample workbench. Model Registry OpenShift AI now supports the Model Registry Operator. The Model Registry Operator is not installed by default in Technology Preview mode. The model registry is a central repository that contains metadata related to machine learning models from inception to deployment. OCI containers for model storage You can use OCI storage as an alternative to cloud storage services for model serving. First, you create an OCI container image to contain the model. The image is uploaded to an OCI-compatible registry, such as Quay. Later, when deploying a model, the model serving platform references the repository of the containerized model. Using an OCI container can provide the following advantages: Reduced startup times, because the cluster keeps a cache of downloaded images. Restarting the model pod does not download the model again. Lower disk space usage, because the model is not downloaded on each pod replica, assuming pods are scheduled on the same node. Enhanced performance when pre-fetching images or asynchronous loading. Compatibility and integration, because it can be easily integrated with KServe. No additional dependencies are required and the infrastructure might already be available. Support for multinode deployment of very large models Serving models over multiple graphical processing unit (GPU) nodes when using a single-model serving runtime is now available as a Technology Preview feature. 
Deploy your models across multiple GPU nodes to improve efficiency when deploying large models such as large language models (LLMs). For more information, see Deploying models across multiple GPU nodes . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/release_notes/technology-preview-features_relnotes |
Chapter 4. Setting up and configuring a BIND DNS server | Chapter 4. Setting up and configuring a BIND DNS server BIND is a feature-rich DNS server that is fully compliant with the Internet Engineering Task Force (IETF) DNS standards and draft standards. For example, administrators frequently use BIND as: Caching DNS server in the local network Authoritative DNS server for zones Secondary server to provide high availability for zones 4.1. Considerations about protecting BIND with SELinux or running it in a change-root environment To secure a BIND installation, you can: Run the named service without a change-root environment. In this case, SELinux in enforcing mode prevents exploitation of known BIND security vulnerabilities. By default, Red Hat Enterprise Linux uses SELinux in enforcing mode. Important Running BIND on RHEL with SELinux in enforcing mode is more secure than running BIND in a change-root environment. Run the named-chroot service in a change-root environment. Using the change-root feature, administrators can define that the root directory of a process and its sub-processes is different to the / directory. When you start the named-chroot service, BIND switches its root directory to /var/named/chroot/ . As a consequence, the service uses mount --bind commands to make the files and directories listed in /etc/named-chroot.files available in /var/named/chroot/ , and the process has no access to files outside of /var/named/chroot/ . If you decide to use BIND: In normal mode, use the named service. In a change-root environment, use the named-chroot service. This requires that you install, additionally, the named-chroot package. Additional resources The Red Hat SELinux BIND security profile section in the named(8) man page on your system 4.2. The BIND Administrator Reference Manual The comprehensive BIND Administrator Reference Manual , that is included in the bind package, provides: Configuration examples Documentation on advanced features A configuration reference Security considerations To display the BIND Administrator Reference Manual on a host that has the bind package installed, open the /usr/share/doc/bind/Bv9ARM.html file in a browser. 4.3. Configuring BIND as a caching DNS server By default, the BIND DNS server resolves and caches successful and failed lookups. The service then answers requests to the same records from its cache. This significantly improves the speed of DNS lookups. Prerequisites The IP address of the server is static. Procedure Install the bind and bind-utils packages: These packages provide BIND 9.11. If you require BIND 9.16, install the bind9.16 and bind9.16-utils packages. If you want to run BIND in a change-root environment install the bind-chroot package: Note that running BIND on a host with SELinux in enforcing mode, which is default, is more secure. Edit the /etc/named.conf file, and make the following changes in the options statement: Update the listen-on and listen-on-v6 statements to specify on which IPv4 and IPv6 interfaces BIND should listen: Update the allow-query statement to configure from which IP addresses and ranges clients can query this DNS server: Add an allow-recursion statement to define from which IP addresses and ranges BIND accepts recursive queries: Warning Do not allow recursion on public IP addresses of the server. Otherwise, the server can become part of large-scale DNS amplification attacks. By default, BIND resolves queries by recursively querying from the root servers to an authoritative DNS server. 
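For orientation only, the listen-on, allow-query, and allow-recursion statements described above could be combined in an options block similar to the following sketch; the interface and network addresses are placeholders for your own environment rather than values taken from this procedure:

options {
    # Interfaces on which BIND listens for queries (placeholder addresses)
    listen-on port 53 { 127.0.0.1; 192.0.2.1; };
    listen-on-v6 port 53 { ::1; 2001:db8:1::1; };
    # Clients that are allowed to query this server
    allow-query { localhost; 192.0.2.0/24; 2001:db8:1::/64; };
    # Clients that are allowed to send recursive queries
    allow-recursion { localhost; 192.0.2.0/24; 2001:db8:1::/64; };
};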
Alternatively, you can configure BIND to forward queries to other DNS servers, such as the ones of your provider. In this case, add a forwarders statement with the list of IP addresses of the DNS servers that BIND should forward queries to: As a fall-back behavior, BIND resolves queries recursively if the forwarder servers do not respond. To disable this behavior, add a forward only; statement. Verify the syntax of the /etc/named.conf file: If the command displays no output, the syntax is correct. Update the firewalld rules to allow incoming DNS traffic: Start and enable BIND: If you want to run BIND in a change-root environment, use the systemctl enable --now named-chroot command to enable and start the service. Verification Use the newly set up DNS server to resolve a domain: This example assumes that BIND runs on the same host and responds to queries on the localhost interface. After querying a record for the first time, BIND adds the entry to its cache. Repeat the query: Because of the cached entry, further requests for the same record are significantly faster until the entry expires. steps Configure the clients in your network to use this DNS server. If a DHCP server provides the DNS server setting to the clients, update the DHCP server's configuration accordingly. Additional resources Considerations about protecting BIND with SELinux or running it in a change-root environment named.conf(5) man page on your system /usr/share/doc/bind/sample/etc/named.conf The BIND Administrator Reference Manual 4.4. Configuring logging on a BIND DNS server The configuration in the default /etc/named.conf file, as provided by the bind package, uses the default_debug channel and logs messages to the /var/named/data/named.run file. The default_debug channel only logs entries when the server's debug level is non-zero. Using different channels and categories, you can configure BIND to write different events with a defined severity to separate files. Prerequisites BIND is already configured, for example, as a caching name server. The named or named-chroot service is running. Procedure Edit the /etc/named.conf file, and add category and channel phrases to the logging statement, for example: With this example configuration, BIND logs messages related to zone transfers to /var/named/log/transfer.log . BIND creates up to 10 versions of the log file and rotates them if they reach a maximum size of 50 MB. The category phrase defines to which channels BIND sends messages of a category. The channel phrase defines the destination of log messages including the number of versions, the maximum file size, and the severity level BIND should log to a channel. Additional settings, such as enabling logging the time stamp, category, and severity of an event are optional, but useful for debugging purposes. Create the log directory if it does not exist, and grant write permissions to the named user on this directory: Verify the syntax of the /etc/named.conf file: If the command displays no output, the syntax is correct. Restart BIND: If you run BIND in a change-root environment, use the systemctl restart named-chroot command to restart the service. Verification Display the content of the log file: Additional resources named.conf(5) man page on your system The BIND Administrator Reference Manual 4.5. Writing BIND ACLs Controlling access to certain features of BIND can prevent unauthorized access and attacks, such as denial of service (DoS). BIND access control list ( acl ) statements are lists of IP addresses and ranges. 
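For illustration, an acl statement is simply a named list of addresses. The following sketch uses the internal-networks nickname and the example addresses that appear later in this section:

acl internal-networks {
    127.0.0.1;
    192.0.2.0/24;
    2001:db8:1::/64;
};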
Each ACL has a nickname that you can use in several statements, such as allow-query , to refer to the specified IP addresses and ranges. Warning BIND uses only the first matching entry in an ACL. For example, if you define an ACL { 192.0.2/24; !192.0.2.1; } and the host with IP address 192.0.2.1 connects, access is granted even if the second entry excludes this address. BIND has the following built-in ACLs: none : Matches no hosts. any : Matches all hosts. localhost : Matches the loopback addresses 127.0.0.1 and ::1 , as well as the IP addresses of all interfaces on the server that runs BIND. localnets : Matches the loopback addresses 127.0.0.1 and ::1 , as well as all subnets the server that runs BIND is directly connected to. Prerequisites BIND is already configured, for example, as a caching name server. The named or named-chroot service is running. Procedure Edit the /etc/named.conf file and make the following changes: Add acl statements to the file. For example, to create an ACL named internal-networks for 127.0.0.1 , 192.0.2.0/24 , and 2001:db8:1::/64 , enter: Use the ACL's nickname in statements that support them, for example: Verify the syntax of the /etc/named.conf file: If the command displays no output, the syntax is correct. Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. Verification Execute an action that triggers a feature which uses the configured ACL. For example, the ACL in this procedure allows only recursive queries from the defined IP addresses. In this case, enter the following command on a host that is not within the ACL's definition to attempt resolving an external domain: If the command returns no output, BIND denied access, and the ACL works. For a verbose output on the client, use the command without +short option: Additional resources The Access control lists section in the The BIND Administrator Reference Manual . 4.6. Configuring zones on a BIND DNS server A DNS zone is a database with resource records for a specific sub-tree in the domain space. For example, if you are responsible for the example.com domain, you can set up a zone for it in BIND. As a result, clients can, resolve www.example.com to the IP address configured in this zone. 4.6.1. The SOA record in zone files The start of authority (SOA) record is a required record in a DNS zone. This record is important, for example, if multiple DNS servers are authoritative for a zone but also to DNS resolvers. A SOA record in BIND has the following syntax: For better readability, administrators typically split the record in zone files into multiple lines with comments that start with a semicolon ( ; ). Note that, if you split a SOA record, parentheses keep the record together: Important Note the trailing dot at the end of the fully-qualified domain names (FQDNs). FQDNs consist of multiple domain labels, separated by dots. Because the DNS root has an empty label, FQDNs end with a dot. Therefore, BIND appends the zone name to names without a trailing dot. A hostname without a trailing dot, for example, ns1.example.com would be expanded to ns1.example.com.example.com. , which is not the correct address of the primary name server. These are the fields in a SOA record: name : The name of the zone, the so-called origin . If you set this field to @ , BIND expands it to the zone name defined in /etc/named.conf . class : In SOA records, you must set this field always to Internet ( IN ). 
type : In SOA records, you must set this field always to SOA . mname (master name): The hostname of the primary name server of this zone. rname (responsible name): The email address of who is responsible for this zone. Note that the format is different. You must replace the at sign ( @ ) with a dot ( . ). serial : The version number of this zone file. Secondary name servers only update their copies of the zone if the serial number on the primary server is higher. The format can be any numeric value. A commonly-used format is <year><month><day><two-digit-number> . With this format, you can, theoretically, change the zone file up to a hundred times per day. refresh : The amount of time secondary servers should wait before checking the primary server if the zone was updated. retry : The amount of time after that a secondary server retries to query the primary server after a failed attempt. expire : The amount of time after that a secondary server stops querying the primary server, if all attempts failed. minimum : RFC 2308 changed the meaning of this field to the negative caching time. Compliant resolvers use it to determine how long to cache NXDOMAIN name errors. Note A numeric value in the refresh , retry , expire , and minimum fields define a time in seconds. However, for better readability, use time suffixes, such as m for minute, h for hours, and d for days. For example, 3h stands for 3 hours. Additional resources RFC 1035 : Domain names - implementation and specification RFC 1034 : Domain names - concepts and facilities RFC 2308 : Negative caching of DNS queries (DNS cache) 4.6.2. Setting up a forward zone on a BIND primary server Forward zones map names to IP addresses and other information. For example, if you are responsible for the domain example.com , you can set up a forward zone in BIND to resolve names, such as www.example.com . Prerequisites BIND is already configured, for example, as a caching name server. The named or named-chroot service is running. Procedure Add a zone definition to the /etc/named.conf file: These settings define: This server as the primary server ( type master ) for the example.com zone. The /var/named/example.com.zone file is the zone file. If you set a relative path, as in this example, this path is relative to the directory you set in directory in the options statement. Any host can query this zone. Alternatively, specify IP ranges or BIND access control list (ACL) nicknames to limit the access. No host can transfer the zone. Allow zone transfers only when you set up secondary servers and only for the IP addresses of the secondary servers. Verify the syntax of the /etc/named.conf file: If the command displays no output, the syntax is correct. Create the /var/named/example.com.zone file, for example, with the following content: This zone file: Sets the default time-to-live (TTL) value for resource records to 8 hours. Without a time suffix, such as h for hour, BIND interprets the value as seconds. Contains the required SOA resource record with details about the zone. Sets ns1.example.com as an authoritative DNS server for this zone. To be functional, a zone requires at least one name server ( NS ) record. However, to be compliant with RFC 1912, you require at least two name servers. Sets mail.example.com as the mail exchanger ( MX ) of the example.com domain. The numeric value in front of the host name is the priority of the record. Entries with a lower value have a higher priority. 
Sets the IPv4 and IPv6 addresses of www.example.com , mail.example.com , and ns1.example.com . Set secure permissions on the zone file that allow only the named group to read it: Verify the syntax of the /var/named/example.com.zone file: Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. Verification Query different records from the example.com zone, and verify that the output matches the records you have configured in the zone file: This example assumes that BIND runs on the same host and responds to queries on the localhost interface. Additional resources The SOA record in zone files Writing BIND ACLs The BIND Administrator Reference Manual RFC 1912 - Common DNS operational and configuration errors 4.6.3. Setting up a reverse zone on a BIND primary server Reverse zones map IP addresses to names. For example, if you are responsible for IP range 192.0.2.0/24 , you can set up a reverse zone in BIND to resolve IP addresses from this range to hostnames. Note If you create a reverse zone for whole classful networks, name the zone accordingly. For example, for the class C network 192.0.2.0/24 , the name of the zone is 2.0.192.in-addr.arpa . If you want to create a reverse zone for a different network size, for example 192.0.2.0/28 , the name of the zone is 28-2.0.192.in-addr.arpa . Prerequisites BIND is already configured, for example, as a caching name server. The named or named-chroot service is running. Procedure Add a zone definition to the /etc/named.conf file: These settings define: This server as the primary server ( type master ) for the 2.0.192.in-addr.arpa reverse zone. The /var/named/2.0.192.in-addr.arpa.zone file is the zone file. If you set a relative path, as in this example, this path is relative to the directory you set in directory in the options statement. Any host can query this zone. Alternatively, specify IP ranges or BIND access control list (ACL) nicknames to limit the access. No host can transfer the zone. Allow zone transfers only when you set up secondary servers and only for the IP addresses of the secondary servers. Verify the syntax of the /etc/named.conf file: If the command displays no output, the syntax is correct. Create the /var/named/2.0.192.in-addr.arpa.zone file, for example, with the following content: This zone file: Sets the default time-to-live (TTL) value for resource records to 8 hours. Without a time suffix, such as h for hour, BIND interprets the value as seconds. Contains the required SOA resource record with details about the zone. Sets ns1.example.com as an authoritative DNS server for this reverse zone. To be functional, a zone requires at least one name server ( NS ) record. However, to be compliant with RFC 1912, you require at least two name servers. Sets the pointer ( PTR ) record for the 192.0.2.1 and 192.0.2.30 addresses. Set secure permissions on the zone file that only allow the named group to read it: Verify the syntax of the /var/named/2.0.192.in-addr.arpa.zone file: Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. Verification Query different records from the reverse zone, and verify that the output matches the records you have configured in the zone file: This example assumes that BIND runs on the same host and responds to queries on the localhost interface. 
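As an illustrative check that follows the examples above, you can perform a reverse lookup for one of the configured addresses with dig; the command should return the host name that you set in the corresponding PTR record:

dig +short -x 192.0.2.1 @localhost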
Additional resources The SOA record in zone files Writing BIND ACLs The BIND Administrator Reference Manual RFC 1912 - Common DNS operational and configuration errors 4.6.4. Updating a BIND zone file In certain situations, for example if an IP address of a server changes, you must update a zone file. If multiple DNS servers are responsible for a zone, perform this procedure only on the primary server. Other DNS servers that store a copy of the zone will receive the update through a zone transfer. Prerequisites The zone is configured. The named or named-chroot service is running. Procedure Optional: Identify the path to the zone file in the /etc/named.conf file: You find the path to the zone file in the file statement in the zone's definition. A relative path is relative to the directory set in directory in the options statement. Edit the zone file: Make the required changes. Increment the serial number in the start of authority (SOA) record. Important If the serial number is equal to or lower than the value, secondary servers will not update their copy of the zone. Verify the syntax of the zone file: Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. Verification Query the record you have added, modified, or removed, for example: This example assumes that BIND runs on the same host and responds to queries on the localhost interface. Additional resources The SOA record in zone files Setting up a forward zone on a BIND primary server Setting up a reverse zone on a BIND primary server 4.6.5. DNSSEC zone signing using the automated key generation and zone maintenance features You can sign zones with domain name system security extensions (DNSSEC) to ensure authentication and data integrity. Such zones contain additional resource records. Clients can use them to verify the authenticity of the zone information. If you enable the DNSSEC policy feature for a zone, BIND performs the following actions automatically: Creates the keys Signs the zone Maintains the zone, including re-signing and periodically replacing the keys. Important To enable external DNS servers to verify the authenticity of a zone, you must add the public key of the zone to the parent zone. Contact your domain provider or registry for further details on how to accomplish this. This procedure uses the built-in default DNSSEC policy in BIND. This policy uses single ECDSAP256SHA key signatures. Alternatively, create your own policy to use custom keys, algorithms, and timings. Prerequisites BIND 9.16 or later is installed. To meet this requirement, install the bind9.16 package instead of bind . The zone for which you want to enable DNSSEC is configured. The named or named-chroot service is running. The server synchronizes the time with a time server. An accurate system time is important for DNSSEC validation. Procedure Edit the /etc/named.conf file, and add dnssec-policy default; to the zone for which you want to enable DNSSEC: Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. BIND stores the public key in the /var/named/K <zone_name> .+ <algorithm> + <key_ID> .key file. Use this file to display the public key of the zone in the format that the parent zone requires: DS record format: DNSKEY format: Request to add the public key of the zone to the parent zone. Contact your domain provider or registry for further details on how to accomplish this. 
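One possible way to produce the DS record format is the dnssec-dsfromkey utility that ships with the BIND packages; the file name below only follows the /var/named/K<zone_name>.+<algorithm>+<key_ID>.key pattern described above, assuming the default policy's ECDSAP256SHA256 algorithm (number 013) and a placeholder key ID:

dnssec-dsfromkey /var/named/Kexample.com.+013+<key_ID>.key

The DNSKEY format corresponds to the DNSKEY line that is stored in the same .key file.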
Verification Query your own DNS server for a record from the zone for which you enabled DNSSEC signing: This example assumes that BIND runs on the same host and responds to queries on the localhost interface. After the public key has been added to the parent zone and propagated to other servers, verify that the server sets the authenticated data ( ad ) flag on queries to the signed zone: Additional resources Setting up a forward zone on a BIND primary server Setting up a reverse zone on a BIND primary server 4.7. Configuring zone transfers among BIND DNS servers Zone transfers ensure that all DNS servers that have a copy of the zone use up-to-date data. Prerequisites On the future primary server, the zone for which you want to set up zone transfers is already configured. On the future secondary server, BIND is already configured, for example, as a caching name server. On both servers, the named or named-chroot service is running. Procedure On the existing primary server: Create a shared key, and append it to the /etc/named.conf file: This command displays the output of the tsig-keygen command and automatically appends it to /etc/named.conf . You will require the output of the command later on the secondary server as well. Edit the zone definition in the /etc/named.conf file: In the allow-transfer statement, define that servers must provide the key specified in the example-transfer-key statement to transfer a zone: Alternatively, use BIND access control list (ACL) nicknames in the allow-transfer statement. By default, after a zone has been updated, BIND notifies all name servers which have a name server ( NS ) record in this zone. If you do not plan to add an NS record for the secondary server to the zone, you can, configure that BIND notifies this server anyway. For that, add the also-notify statement with the IP addresses of this secondary server to the zone: Verify the syntax of the /etc/named.conf file: If the command displays no output, the syntax is correct. Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. On the future secondary server: Edit the /etc/named.conf file as follows: Add the same key definition as on the primary server: Add the zone definition to the /etc/named.conf file: These settings state: This server is a secondary server ( type slave ) for the example.com zone. The /var/named/slaves/example.com.zone file is the zone file. If you set a relative path, as in this example, this path is relative to the directory you set in directory in the options statement. To separate zone files for which this server is secondary from primary ones, you can store them, for example, in the /var/named/slaves/ directory. Any host can query this zone. Alternatively, specify IP ranges or ACL nicknames to limit the access. No host can transfer the zone from this server. The IP addresses of the primary server of this zone are 192.0.2.1 and 2001:db8:1::2 . Alternatively, you can specify ACL nicknames. This secondary server will use the key named example-transfer-key to authenticate to the primary server. Verify the syntax of the /etc/named.conf file: Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. Optional: Modify the zone file on the primary server and add an NS record for the new secondary server. 
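Purely as a sketch of how the primary-side settings described above fit together, the relevant parts of /etc/named.conf could look like the following; the secret must come from your own tsig-keygen output, and the secondary server address 192.0.2.2 is a placeholder:

key "example-transfer-key" {
    algorithm hmac-sha256;
    secret "<base64_string_from_tsig-keygen_output>";
};

zone "example.com" {
    type master;
    file "example.com.zone";
    allow-query { any; };
    # Only hosts that present the key can transfer the zone
    allow-transfer { key example-transfer-key; };
    # Notify the secondary server even if it has no NS record in the zone
    also-notify { 192.0.2.2; };
};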
Verification On the secondary server: Display the systemd journal entries of the named service: If you run BIND in a change-root environment, use the journalctl -u named-chroot command to display the journal entries. Verify that BIND created the zone file: Note that, by default, secondary servers store zone files in a binary raw format. Query a record of the transferred zone from the secondary server: This example assumes that the secondary server you set up in this procedure listens on IP address 192.0.2.2 . Additional resources Setting up a forward zone on a BIND primary server Setting up a reverse zone on a BIND primary server Writing BIND ACLs Updating a BIND zone file 4.8. Configuring response policy zones in BIND to override DNS records Using DNS blocking and filtering, administrators can rewrite a DNS response to block access to certain domains or hosts. In BIND, response policy zones (RPZs) provide this feature. You can configure different actions for blocked entries, such as returning an NXDOMAIN error or not responding to the query. If you have multiple DNS servers in your environment, use this procedure to configure the RPZ on the primary server, and later configure zone transfers to make the RPZ available on your secondary servers. Prerequisites BIND is already configured, for example, as a caching name server. The named or named-chroot service is running. Procedure Edit the /etc/named.conf file, and make the following changes: Add a response-policy definition to the options statement: You can set a custom name for the RPZ in the zone statement in response-policy . However, you must use the same name in the zone definition in the step. Add a zone definition for the RPZ you set in the step: These settings state: This server is the primary server ( type master ) for the RPZ named rpz.local . The /var/named/rpz.local file is the zone file. If you set a relative path, as in this example, this path is relative to the directory you set in directory in the options statement. Any hosts defined in allow-query can query this RPZ. Alternatively, specify IP ranges or BIND access control list (ACL) nicknames to limit the access. No host can transfer the zone. Allow zone transfers only when you set up secondary servers and only for the IP addresses of the secondary servers. Verify the syntax of the /etc/named.conf file: If the command displays no output, the syntax is correct. Create the /var/named/rpz.local file, for example, with the following content: This zone file: Sets the default time-to-live (TTL) value for resource records to 10 minutes. Without a time suffix, such as h for hour, BIND interprets the value as seconds. Contains the required start of authority (SOA) resource record with details about the zone. Sets ns1.example.com as an authoritative DNS server for this zone. To be functional, a zone requires at least one name server ( NS ) record. However, to be compliant with RFC 1912, you require at least two name servers. Return an NXDOMAIN error for queries to example.org and hosts in this domain. Drop queries to example.net and hosts in this domain. For a full list of actions and examples, see IETF draft: DNS Response Policy Zones (RPZ) . Verify the syntax of the /var/named/rpz.local file: Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. 
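For reference, the two behaviors described above are commonly expressed with records like the following in the RPZ zone file; the domain names simply mirror the examples in this section:

; Return NXDOMAIN for example.org and all hosts in this domain
example.org      CNAME .
*.example.org    CNAME .
; Drop queries for example.net and all hosts in this domain
example.net      CNAME rpz-drop.
*.example.net    CNAME rpz-drop.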
Verification Attempt to resolve a host in example.org , which is configured in the RPZ to return an NXDOMAIN error: This example assumes that BIND runs on the same host and responds to queries on the localhost interface. Attempt to resolve a host in the example.net domain, which is configured in the RPZ to drop queries: Additional resources IETF draft: DNS Response Policy Zones (RPZ) 4.9. Migrating BIND from RHEL 7 to RHEL 8 To migrate BIND from RHEL 7 to RHEL 8, adjust the BIND configuration in the following ways: Remove the dnssec-lookaside auto configuration option. BIND will listen on any configured IPv6 addresses by default because the default value for the listen-on-v6 configuration option has been changed from none to any . Multiple zones cannot share the same zone file when updates to the zone are allowed. If you need to use the same file in multiple zone definitions, ensure that allow-update is set to none , that update-policy is empty, and that inline-signing is not enabled. Otherwise, use the in-view clause to share the zone. Updated command-line options, default behavior, and output formats: The number of UDP listeners employed per interface has been changed to be a function of the number of processors. You can override it by using the -U argument to BIND . The XML format used in the statistics-channel has been changed. The rndc flushtree option now flushes DNSSEC validation failures as well as specific name records. You must use the /etc/named.root.key file instead of the /etc/named.iscdlv.key file. The /etc/named.iscdlv.key file is not available anymore. The querylog format has been changed to include a memory address of the client object, which can be helpful in debugging. The named and dig utilities now send a DNS COOKIE (RFC 7873) by default, which might be blocked by restrictive firewalls or intrusion detection systems. You can change this behavior by using the send-cookie configuration option. The dig utility can display Extended DNS Errors (EDE, RFC 8914) in text format. 4.10. Recording DNS queries by using dnstap As a network administrator, you can record Domain Name System (DNS) details to analyze DNS traffic patterns, monitor DNS server performance, and troubleshoot DNS issues. If you want an advanced way to monitor and log details of incoming name queries, use the dnstap interface, which records messages sent from the named service. You can capture and record DNS queries to collect information about websites or IP addresses. Prerequisites The bind-9.11.26-2 package or a later version is installed. Warning If you already have a BIND version installed and running, adding a new version of BIND will overwrite the existing version. Procedure Enable dnstap and the target file by editing the /etc/named.conf file in the options block: To specify which types of DNS traffic you want to log, add dnstap filters to the dnstap block in the /etc/named.conf file. You can use the following filters: auth - Authoritative zone response or answer. client - Internal client query or answer. forwarder - Forwarded query or the response to it. resolver - Iterative resolution query or response. update - Dynamic zone update requests. all - Any of the above options. query or response - If you do not specify a query or a response keyword, dnstap records both. Note The dnstap filter contains multiple definitions delimited by a ; in the dnstap {} block with the following syntax: dnstap { ( all | auth | client | forwarder | resolver | update ) [ ( query | response ) ]; ...
}; To apply your changes, restart the named service: Configure a periodic rollover of the active logs. In the following example, the cron scheduler runs the content of the user-edited script once a day. The roll option with the value 3 specifies that dnstap can create up to three backup log files. The value 3 overrides the versions parameter of the dnstap-output variable and limits the number of backup log files to three. Additionally, the binary log file is moved to another directory and renamed, and it never reaches the .2 suffix, even if three backup log files already exist. You can skip this step if automatic rolling of binary logs based on the size limit is sufficient. Handle and analyze logs in a human-readable format by using the dnstap-read utility: In the following example, the dnstap-read utility prints the output in the YAML file format. A short filter configuration sketch follows the command listing below. | [
"yum install bind bind-utils",
"yum install bind-chroot",
"listen-on port 53 { 127.0.0.1; 192.0.2.1; }; listen-on-v6 port 53 { ::1; 2001:db8:1::1; };",
"allow-query { localhost; 192.0.2.0/24; 2001:db8:1::/64; };",
"allow-recursion { localhost; 192.0.2.0/24; 2001:db8:1::/64; };",
"forwarders { 198.51.100.1; 203.0.113.5; };",
"named-checkconf",
"firewall-cmd --permanent --add-service=dns firewall-cmd --reload",
"systemctl enable --now named",
"dig @ localhost www.example.org www.example.org. 86400 IN A 198.51.100.34 ;; Query time: 917 msec",
"dig @ localhost www.example.org www.example.org. 85332 IN A 198.51.100.34 ;; Query time: 1 msec",
"logging { category notify { zone_transfer_log; }; category xfer-in { zone_transfer_log; }; category xfer-out { zone_transfer_log; }; channel zone_transfer_log { file \" /var/named/log/transfer.log \" versions 10 size 50m ; print-time yes; print-category yes; print-severity yes; severity info; }; };",
"mkdir /var/named/log/ chown named:named /var/named/log/ chmod 700 /var/named/log/",
"named-checkconf",
"systemctl restart named",
"cat /var/named/log/transfer.log 06-Jul-2022 15:08:51.261 xfer-out: info: client @0x7fecbc0b0700 192.0.2.2#36121/key example-transfer-key (example.com): transfer of 'example.com/IN': AXFR started: TSIG example-transfer-key (serial 2022070603) 06-Jul-2022 15:08:51.261 xfer-out: info: client @0x7fecbc0b0700 192.0.2.2#36121/key example-transfer-key (example.com): transfer of 'example.com/IN': AXFR ended",
"acl internal-networks { 127.0.0.1; 192.0.2.0/24; 2001:db8:1::/64; }; acl dmz-networks { 198.51.100.0/24; 2001:db8:2::/64; };",
"allow-query { internal-networks; dmz-networks; }; allow-recursion { internal-networks; };",
"named-checkconf",
"systemctl reload named",
"dig +short @ 192.0.2.1 www.example.com",
"dig @ 192.0.2.1 www.example.com ;; WARNING: recursion requested but not available",
"name class type mname rname serial refresh retry expire minimum",
"@ IN SOA ns1.example.com. hostmaster.example.com. ( 2022070601 ; serial number 1d ; refresh period 3h ; retry period 3d ; expire time 3h ) ; minimum TTL",
"zone \" example.com \" { type master; file \" example.com.zone \"; allow-query { any; }; allow-transfer { none; }; };",
"named-checkconf",
"USDTTL 8h @ IN SOA ns1.example.com. hostmaster.example.com. ( 2022070601 ; serial number 1d ; refresh period 3h ; retry period 3d ; expire time 3h ) ; minimum TTL IN NS ns1.example.com. IN MX 10 mail.example.com. www IN A 192.0.2.30 www IN AAAA 2001:db8:1::30 ns1 IN A 192.0.2.1 ns1 IN AAAA 2001:db8:1::1 mail IN A 192.0.2.20 mail IN AAAA 2001:db8:1::20",
"chown root:named /var/named/ example.com.zone chmod 640 /var/named/ example.com.zone",
"named-checkzone example.com /var/named/example.com.zone zone example.com/IN : loaded serial 2022070601 OK",
"systemctl reload named",
"dig +short @ localhost AAAA www.example.com 2001:db8:1::30 dig +short @ localhost NS example.com ns1.example.com. dig +short @ localhost A ns1.example.com 192.0.2.1",
"zone \" 2.0.192.in-addr.arpa \" { type master; file \" 2.0.192.in-addr.arpa.zone \"; allow-query { any; }; allow-transfer { none; }; };",
"named-checkconf",
"USDTTL 8h @ IN SOA ns1.example.com. hostmaster.example.com. ( 2022070601 ; serial number 1d ; refresh period 3h ; retry period 3d ; expire time 3h ) ; minimum TTL IN NS ns1.example.com. 1 IN PTR ns1.example.com. 30 IN PTR www.example.com.",
"chown root:named /var/named/ 2.0.192.in-addr.arpa.zone chmod 640 /var/named/ 2.0.192.in-addr.arpa.zone",
"named-checkzone 2.0.192.in-addr.arpa /var/named/2.0.192.in-addr.arpa.zone zone 2.0.192.in-addr.arpa/IN : loaded serial 2022070601 OK",
"systemctl reload named",
"dig +short @ localhost -x 192.0.2.1 ns1.example.com. dig +short @ localhost -x 192.0.2.30 www.example.com.",
"options { directory \" /var/named \"; } zone \" example.com \" { file \" example.com.zone \"; };",
"named-checkzone example.com /var/named/example.com.zone zone example.com/IN : loaded serial 2022062802 OK",
"systemctl reload named",
"dig +short @ localhost A ns2.example.com 192.0.2.2",
"zone \" example.com \" { dnssec-policy default; };",
"systemctl reload named",
"dnssec-dsfromkey /var/named/K example.com.+013+61141 .key example.com. IN DS 61141 13 2 3E184188CF6D2521EDFDC3F07CFEE8D0195AACBD85E68BAE0620F638B4B1B027",
"grep DNSKEY /var/named/K example.com.+013+61141.key example.com. 3600 IN DNSKEY 257 3 13 sjzT3jNEp120aSO4mPEHHSkReHUf7AABNnT8hNRTzD5cKMQSjDJin2I3 5CaKVcWO1pm+HltxUEt+X9dfp8OZkg==",
"dig +dnssec +short @ localhost A www.example.com 192.0.2.30 A 13 3 28800 20220718081258 20220705120353 61141 example.com. e7Cfh6GuOBMAWsgsHSVTPh+JJSOI/Y6zctzIuqIU1JqEgOOAfL/Qz474 M0sgi54m1Kmnr2ANBKJN9uvOs5eXYw==",
"dig @ localhost example.com +dnssec ;; flags: qr rd ra ad ; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1",
"tsig-keygen example-transfer-key | tee -a /etc/named.conf key \" example-transfer-key \" { algorithm hmac-sha256; secret \" q7ANbnyliDMuvWgnKOxMLi313JGcTZB5ydMW5CyUGXQ= \"; };",
"zone \" example.com \" { allow-transfer { key example-transfer-key; }; };",
"zone \" example.com \" { also-notify { 192.0.2.2; 2001:db8:1::2; }; };",
"named-checkconf",
"systemctl reload named",
"key \" example-transfer-key \" { algorithm hmac-sha256; secret \" q7ANbnyliDMuvWgnKOxMLi313JGcTZB5ydMW5CyUGXQ= \"; };",
"zone \" example.com \" { type slave; file \" slaves/example.com.zone \"; allow-query { any; }; allow-transfer { none; }; masters { 192.0.2.1 key example-transfer-key; 2001:db8:1::1 key example-transfer-key; }; };",
"named-checkconf",
"systemctl reload named",
"journalctl -u named Jul 06 15:08:51 ns2.example.com named[2024]: zone example.com/IN: Transfer started. Jul 06 15:08:51 ns2.example.com named[2024]: transfer of 'example.com/IN' from 192.0.2.1#53: connected using 192.0.2.2#45803 Jul 06 15:08:51 ns2.example.com named[2024]: zone example.com/IN: transferred serial 2022070101 Jul 06 15:08:51 ns2.example.com named[2024]: transfer of 'example.com/IN' from 192.0.2.1#53: Transfer status: success Jul 06 15:08:51 ns2.example.com named[2024]: transfer of 'example.com/IN' from 192.0.2.1#53: Transfer completed: 1 messages, 29 records, 2002 bytes, 0.003 secs (667333 bytes/sec)",
"ls -l /var/named/slaves/ total 4 -rw-r--r--. 1 named named 2736 Jul 6 15:08 example.com.zone",
"dig +short @ 192.0.2.2 AAAA www.example.com 2001:db8:1::30",
"options { response-policy { zone \" rpz.local \"; }; }",
"zone \"rpz.local\" { type master; file \"rpz.local\"; allow-query { localhost; 192.0.2.0/24; 2001:db8:1::/64; }; allow-transfer { none; }; };",
"named-checkconf",
"USDTTL 10m @ IN SOA ns1.example.com. hostmaster.example.com. ( 2022070601 ; serial number 1h ; refresh period 1m ; retry period 3d ; expire time 1m ) ; minimum TTL IN NS ns1.example.com. example.org IN CNAME . *.example.org IN CNAME . example.net IN CNAME rpz-drop. *.example.net IN CNAME rpz-drop.",
"named-checkzone rpz.local /var/named/rpz.local zone rpz.local/IN : loaded serial 2022070601 OK",
"systemctl reload named",
"dig @localhost www.example.org ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN , id: 30286",
"dig @localhost www.example.net ;; connection timed out; no servers could be reached",
"options { dnstap { all; }; # Configure filter dnstap-output file \"/var/named/data/dnstap.bin\"; }; end of options",
"systemctl restart named.service",
"Example: sudoedit /etc/cron.daily/dnstap #!/bin/sh rndc dnstap -roll 3 mv /var/named/data/dnstap.bin.1 /var/log/named/dnstap/dnstap-USD(date -I).bin use dnstap-read to analyze saved logs sudo chmod a+x /etc/cron.daily/dnstap",
"Example: dnstap-read -y [file-name]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deploying_different_types_of_servers/assembly_setting-up-and-configuring-a-bind-dns-server_deploying-different-types-of-servers |
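As a brief illustration of the dnstap filters described in the procedure above, the following options excerpt is a sketch, not part of the documented procedure, that records only internal client queries and dynamic zone update requests instead of all traffic. The output file path matches the one used earlier; everything else is an assumption to adapt to your environment:

options {
    dnstap { client query; update; };
    dnstap-output file "/var/named/data/dnstap.bin";
};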
function::asmlinkage | function::asmlinkage Name function::asmlinkage - Mark function as declared asmlinkage Synopsis Arguments None Description Call this function before accessing arguments using the *_arg functions if the probed kernel function was declared asmlinkage in the source. | [
"asmlinkage()"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-asmlinkage |
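A minimal usage sketch: the one-liner below is illustrative only and assumes an older kernel where sys_close is declared asmlinkage and is directly probeable (on newer kernels the syscall entry points are wrapped and named differently). It calls asmlinkage() before reading the first argument with int_arg(), one of the *_arg accessors mentioned above:

stap -e 'probe kernel.function("sys_close") { asmlinkage(); printf("%s closed fd %d\n", execname(), int_arg(1)) }'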
Chapter 2. API Reference | Chapter 2. API Reference The full API reference is available on your Satellite Server at https:// satellite.example.com /apidoc/v2.html . Be aware that even though versions 1 and 2 of the Satellite 6 API are available, Red Hat only supports version 2. 2.1. Understanding the API Syntax Note The example requests below use python3 to format the response from the Satellite Server. On RHEL 7 and some older systems, you must use python instead of python3 . The built-in API reference shows the API route, or path, preceded by an HTTP verb: To work with the API, construct a command using the curl command syntax and the API route from the reference document: 1 To use curl for the API call, specify an HTTP verb with the --request option. For example, --request POST . 2 Add the --insecure option to skip the SSL peer certificate verification check. 3 Provide user credentials with the --user option. 4 For POST and PUT requests, use the --data option to pass JSON formatted data. For more information, see Section 4.1.1, "Passing JSON Data to the API Request" . 5 6 When passing the JSON data with the --data option, you must specify the following headers with the --header option. For more information, see Section 4.1.1, "Passing JSON Data to the API Request" . 7 When downloading content from Satellite Server, specify the output file with the --output option. 8 Use the API route in the following format: https:// satellite.example.com /katello/api/activation_keys . In Satellite 6, version 2 of the API is the default. Therefore, it is not necessary to use v2 in the URL for API calls. 9 Redirect the output to the Python json.tool module to make the output easier to read. 2.1.1. Using the GET HTTP Verb Use the GET HTTP verb to get data from the API about an existing entry or resource. Example This example requests a list of Satellite hosts: Example request: Example response: The response from the API indicates that there are two results in total, this is the first page of the results, and the maximum results per page is set to 20. For more information, see Section 2.2, "Understanding the JSON Response Format" . 2.1.2. Using the POST HTTP Verb Use the POST HTTP verb to submit data to the API to create an entry or resource. You must submit the data in JSON format. For more information, see Section 4.1.1, "Passing JSON Data to the API Request" . Example This example creates an activation key. Create a test file, for example, activation-key.json , with the following content: Create an activation key by applying the data in the activation-key.json file: Example request: Example response: Verify that the new activation key is present. In the Satellite web UI, navigate to Content > Activation keys to view your activation keys. 2.1.3. Using the PUT HTTP Verb Use the PUT HTTP verb to change an existing value or append to an existing resource. You must submit the data in JSON format. For more information, see Section 4.1.1, "Passing JSON Data to the API Request" . Example This example updates the TestKey activation key created in the previous example. Edit the activation-key.json file created previously as follows: Apply the changes in the JSON file: Example request: Example response: In the Satellite web UI, verify the changes by navigating to Content > Activation keys . 2.1.4. Using the DELETE HTTP Verb To delete a resource, use the DELETE verb with an API route that includes the ID of the resource you want to delete.
Example This example deletes the TestKey activation key whose ID is 2: Example request: Example response: 2.1.5. Relating API Error Messages to the API Reference The API uses a Rails format to indicate an error: This translates to the following format used in the API reference: 2.2. Understanding the JSON Response Format Calls to the API return results in JSON format. The API call returns the result either as a single-object response or as a collection of objects. Note The example requests below use python3 to format the response from the Satellite Server. On RHEL 7 and some older systems, you must use python instead of python3 . JSON Response Format for Single Objects You can use single-object JSON responses to work with a single object. API requests to a single object require the object's unique identifier :id . This is an example of the format for a single-object request for the Satellite domain whose ID is 23: Example request: Example response: JSON Response Format for Collections Collections are a list of objects such as hosts and domains. The format for a collection JSON response consists of a metadata fields section and a results section. This is an example of the format for a collection request for a list of Satellite domains: Example request: Example response: The response metadata fields The API response uses the following metadata fields: total - The total number of objects without any search parameters. subtotal - The number of objects returned with the given search parameters. If there is no search, then subtotal is equal to total. page - The page number. per_page - The maximum number of objects returned per page. limit - The specified number of objects to return in a collection response. offset - The number of objects skipped before returning a collection. search - The search string based on scoped_search syntax. sort by - Specifies by which field the API sorts the collection. order - The sort order, either ASC for ascending or DESC for descending. results - The collection of objects. A request sketch that combines several of these parameters follows the command listing below. | [
"HTTP_VERB API_ROUTE",
"curl --request HTTP_VERB \\ 1 --insecure \\ 2 --user sat_username:sat_password \\ 3 --data @ file .json \\ 4 --header \"Accept:application/json\" \\ 5 --header \"Content-Type:application/json\" \\ 6 --output file 7 API_ROUTE \\ 8 | python3 -m json.tool 9",
"curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/hosts | python3 -m json.tool",
"{ \"total\": 2, \"subtotal\": 2, \"page\": 1, \"per_page\": 20, \"search\": null, \"sort\": { \"by\": null, \"order\": null }, \"results\": output truncated",
"{\"organization_id\":1, \"name\":\"TestKey\", \"description\":\"Just for testing\"}",
"curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request POST --user sat_username:sat_password --insecure --data @activation-key.json https:// satellite.example.com /katello/api/activation_keys | python3 -m json.tool",
"{ \"id\": 2, \"name\": \"TestKey\", \"description\": \"Just for testing\", \"unlimited_hosts\": true, \"auto_attach\": true, \"content_view_id\": null, \"environment_id\": null, \"usage_count\": 0, \"user_id\": 3, \"max_hosts\": null, \"release_version\": null, \"service_level\": null, \"content_overrides\": [ ], \"organization\": { \"name\": \"Default Organization\", \"label\": \"Default_Organization\", \"id\": 1 }, \"created_at\": \"2017-02-16 12:37:47 UTC\", \"updated_at\": \"2017-02-16 12:37:48 UTC\", \"content_view\": null, \"environment\": null, \"products\": null, \"host_collections\": [ ], \"permissions\": { \"view_activation_keys\": true, \"edit_activation_keys\": true, \"destroy_activation_keys\": true } }",
"{\"organization_id\":1, \"name\":\"TestKey\", \"description\":\"Just for testing\",\"max_hosts\":\"10\" }",
"curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request PUT --user sat_username:sat_password --insecure --data @activation-key.json https:// satellite.example.com /katello/api/activation_keys/2 | python3 -m json.tool",
"{ \"id\": 2, \"name\": \"TestKey\", \"description\": \"Just for testing\", \"unlimited_hosts\": false, \"auto_attach\": true, \"content_view_id\": null, \"environment_id\": null, \"usage_count\": 0, \"user_id\": 3, \"max_hosts\": 10, \"release_version\": null, \"service_level\": null, \"content_overrides\": [ ], \"organization\": { \"name\": \"Default Organization\", \"label\": \"Default_Organization\", \"id\": 1 }, \"created_at\": \"2017-02-16 12:37:47 UTC\", \"updated_at\": \"2017-02-16 12:46:17 UTC\", \"content_view\": null, \"environment\": null, \"products\": null, \"host_collections\": [ ], \"permissions\": { \"view_activation_keys\": true, \"edit_activation_keys\": true, \"destroy_activation_keys\": true } }",
"curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request DELETE --user sat_username:sat_password --insecure https:// satellite.example.com /katello/api/activation_keys/2 | python3 -m json.tool",
"output omitted \"started_at\": \"2017-02-16 12:58:17 UTC\", \"ended_at\": \"2017-02-16 12:58:18 UTC\", \"state\": \"stopped\", \"result\": \"success\", \"progress\": 1.0, \"input\": { \"activation_key\": { \"id\": 2, \"name\": \"TestKey\" output truncated",
"Nested_Resource . Attribute_Name",
"Resource [ Nested_Resource_attributes ][ Attribute_Name_id ]",
"curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/domains/23 | python3 -m json.tool",
"{ \"id\": 23, \"name\": \"qa.lab.example.com\", \"fullname\": \"QA\", \"dns_id\": 10, \"created_at\": \"2013-08-13T09:02:31Z\", \"updated_at\": \"2013-08-13T09:02:31Z\" }",
"curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/domains | python3 -m json.tool",
"{ \"total\": 3, \"subtotal\": 3, \"page\": 1, \"per_page\": 20, \"search\": null, \"sort\": { \"by\": null, \"order\": null }, \"results\": [ { \"id\": 23, \"name\": \"qa.lab.example.com\", \"fullname\": \"QA\", \"dns_id\": 10, \"created_at\": \"2013-08-13T09:02:31Z\", \"updated_at\": \"2013-08-13T09:02:31Z\" }, { \"id\": 25, \"name\": \"sat.lab.example.com\", \"fullname\": \"SATLAB\", \"dns_id\": 8, \"created_at\": \"2013-08-13T08:32:48Z\", \"updated_at\": \"2013-08-14T07:04:03Z\" }, { \"id\": 32, \"name\": \"hr.lab.example.com\", \"fullname\": \"HR\", \"dns_id\": 8, \"created_at\": \"2013-08-16T08:32:48Z\", \"updated_at\": \"2013-08-16T07:04:03Z\" } ] }"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/api_guide/chap-red_hat_satellite-api_guide-api_reference |
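To see the collection metadata fields from Section 2.2 in practice, the following request is a sketch that combines the per_page, page, and search parameters. The search expression and paging values are illustrative and assume that several domains already exist on your Satellite Server:

curl --request GET --user sat_username:sat_password --insecure \
"https://satellite.example.com/api/domains?search=name~example&per_page=5&page=2" \
| python3 -m json.tool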
Chapter 3. PodTemplate [v1] | Chapter 3. PodTemplate [v1] Description PodTemplate describes a template for creating copies of a predefined pod. Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata template object PodTemplateSpec describes the data a pod should have when created from a template 3.1.1. .template Description PodTemplateSpec describes the data a pod should have when created from a template Type object Property Type Description metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PodSpec is a description of a pod. 3.1.2. .template.spec Description PodSpec is a description of a pod. Type object Required containers Property Type Description activeDeadlineSeconds integer Optional duration in seconds the pod may be active on the node relative to StartTime before the system will actively try to mark it failed and kill associated containers. Value must be a positive integer. affinity object Affinity is a group of affinity scheduling rules. automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted. containers array List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated. containers[] object A single application container that you want to run within a pod. dnsConfig object PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy. dnsPolicy string Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'. Possible enum values: - "ClusterFirst" indicates that the pod should use cluster DNS first unless hostNetwork is true, if it is available, then fall back on the default (as determined by kubelet) DNS settings. - "ClusterFirstWithHostNet" indicates that the pod should use cluster DNS first, if it is available, then fall back on the default (as determined by kubelet) DNS settings. - "Default" indicates that the pod should use the default (as determined by kubelet) DNS settings. - "None" indicates that the pod should use empty DNS settings. DNS parameters such as nameservers and search paths should be defined via DNSConfig. enableServiceLinks boolean EnableServiceLinks indicates whether information about services should be injected into pod's environment variables, matching the syntax of Docker links. Optional: Defaults to true. 
ephemeralContainers array List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. ephemeralContainers[] object An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. hostAliases array HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. This is only valid for non-hostNetwork pods. hostAliases[] object HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. hostIPC boolean Use the host's ipc namespace. Optional: Default to false. hostNetwork boolean Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false. hostPID boolean Use the host's pid namespace. Optional: Default to false. hostUsers boolean Use the host's user namespace. Optional: Default to true. If set to true or not present, the pod will be run in the host user namespace, useful for when the pod needs a feature only available to the host user namespace, such as loading a kernel module with CAP_SYS_MODULE. When set to false, a new userns is created for the pod. Setting false is useful for mitigating container breakout vulnerabilities even allowing users to run their containers as root without actually having root privileges on the host. This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature. hostname string Specifies the hostname of the Pod If not specified, the pod's hostname will be set to a system-defined value. imagePullSecrets array ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. initContainers array List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. 
Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ initContainers[] object A single application container that you want to run within a pod. nodeName string NodeName is a request to schedule this pod onto a specific node. If it is non-empty, the scheduler simply schedules this pod onto that node, assuming that it fits resource requirements. nodeSelector object (string) NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ os object PodOS defines the OS parameters of a pod. overhead object (Quantity) Overhead represents the resource overhead associated with running a pod for a given RuntimeClass. This field will be autopopulated at admission time by the RuntimeClass admission controller. If the RuntimeClass admission controller is enabled, overhead must not be set in Pod create requests. The RuntimeClass admission controller will reject Pod create requests which have the overhead already set. If RuntimeClass is configured and selected in the PodSpec, Overhead will be set to the value defined in the corresponding RuntimeClass, otherwise it will remain unset and treated as zero. More info: https://git.k8s.io/enhancements/keps/sig-node/688-pod-overhead/README.md preemptionPolicy string PreemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset. priority integer The priority value. Various system components use this field to find the priority of the pod. When Priority Admission Controller is enabled, it prevents users from setting this field. The admission controller populates this field from PriorityClassName. The higher the value, the higher the priority. priorityClassName string If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. readinessGates array If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True" More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates readinessGates[] object PodReadinessGate contains the reference to a pod condition restartPolicy string Restart policy for all containers within the pod. One of Always, OnFailure, Never. Default to Always. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy Possible enum values: - "Always" - "Never" - "OnFailure" runtimeClassName string RuntimeClassName refers to a RuntimeClass object in the node.k8s.io group, which should be used to run this pod. If no RuntimeClass resource matches the named class, the pod will not be run. If unset or empty, the "legacy" RuntimeClass will be used, which is an implicit class with an empty definition that uses the default runtime handler. 
More info: https://git.k8s.io/enhancements/keps/sig-node/585-runtime-class schedulerName string If specified, the pod will be dispatched by specified scheduler. If not specified, the pod will be dispatched by default scheduler. securityContext object PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext. serviceAccount string DeprecatedServiceAccount is a depreciated alias for ServiceAccountName. Deprecated: Use serviceAccountName instead. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to use to run this pod. More info: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ setHostnameAsFQDN boolean If true the pod's hostname will be configured as the pod's FQDN, rather than the leaf name (the default). In Linux containers, this means setting the FQDN in the hostname field of the kernel (the nodename field of struct utsname). In Windows containers, this means setting the registry value of hostname for the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters to FQDN. If a pod does not have FQDN, this has no effect. Default to false. shareProcessNamespace boolean Share a single process namespace between all of the containers in a pod. When this is set containers will be able to view and signal processes from other containers in the same pod, and the first process in each container will not be assigned PID 1. HostPID and ShareProcessNamespace cannot both be set. Optional: Default to false. subdomain string If specified, the fully qualified Pod hostname will be "<hostname>.<subdomain>.<pod namespace>.svc.<cluster domain>". If not specified, the pod will not have a domainname at all. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully. May be decreased in delete request. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). If this value is nil, the default grace period will be used instead. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds. tolerations array If specified, the pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. topologySpreadConstraints array TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. topologySpreadConstraints[] object TopologySpreadConstraint specifies how to spread matching pods among the given topology. volumes array List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod. 3.1.3. .template.spec.affinity Description Affinity is a group of affinity scheduling rules. 
Type object Property Type Description nodeAffinity object Node affinity is a group of node affinity scheduling rules. podAffinity object Pod affinity is a group of inter pod affinity scheduling rules. podAntiAffinity object Pod anti affinity is a group of inter pod anti affinity scheduling rules. 3.1.4. .template.spec.affinity.nodeAffinity Description Node affinity is a group of node affinity scheduling rules. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. 3.1.5. .template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 3.1.6. .template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required weight preference Property Type Description preference object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 3.1.7. .template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 3.1.8. .template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 3.1.9. .template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 3.1.10. .template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 3.1.11. .template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 3.1.12. .template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 3.1.13. .template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 3.1.14. .template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. 
The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 3.1.15. .template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 3.1.16. .template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 3.1.17. .template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 3.1.18. .template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 3.1.19. .template.spec.affinity.podAffinity Description Pod affinity is a group of inter pod affinity scheduling rules. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 3.1.20. .template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 3.1.21. .template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required weight podAffinityTerm Property Type Description podAffinityTerm object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 3.1.22. 
.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.23. .template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 3.1.24. .template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". 
topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.25. .template.spec.affinity.podAntiAffinity Description Pod anti affinity is a group of inter pod anti affinity scheduling rules. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 3.1.26. .template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 3.1.27. 
.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required weight podAffinityTerm Property Type Description podAffinityTerm object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 3.1.28. .template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.29. .template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 3.1.30. 
.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.31. .template.spec.containers Description List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated. Type array 3.1.32. .template.spec.containers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. 
envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - "Always" means that kubelet always attempts to pull the latest image. Container will fail If the pull fails. - "IfNotPresent" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - "Never" means that kubelet never pulls an image, but only uses a local image. Container will fail if the image isn't present lifecycle object Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. livenessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. resources object ResourceRequirements describes the compute resource requirements. securityContext object SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. startupProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. 
If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - "FallbackToLogsOnError" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - "File" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 3.1.33. .template.spec.containers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 3.1.34. .template.spec.containers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object EnvVarSource represents a source for the value of an EnvVar. 3.1.35. .template.spec.containers[].env[].valueFrom Description EnvVarSource represents a source for the value of an EnvVar. Type object Property Type Description configMapKeyRef object Selects a key from a ConfigMap. 
fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format secretKeyRef object SecretKeySelector selects a key of a Secret. 3.1.36. .template.spec.containers[].env[].valueFrom.configMapKeyRef Description Selects a key from a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.37. .template.spec.containers[].env[].valueFrom.fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.38. .template.spec.containers[].env[].valueFrom.resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.39. .template.spec.containers[].env[].valueFrom.secretKeyRef Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.40. .template.spec.containers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 3.1.41. .template.spec.containers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. 3.1.42. .template.spec.containers[].envFrom[].configMapRef Description ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 3.1.43. 
.template.spec.containers[].envFrom[].secretRef Description SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined 3.1.44. .template.spec.containers[].lifecycle Description Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Type object Property Type Description postStart object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. preStop object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. 3.1.45. .template.spec.containers[].lifecycle.postStart Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 3.1.46. .template.spec.containers[].lifecycle.postStart.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.47. .template.spec.containers[].lifecycle.postStart.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.48. .template.spec.containers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.49. .template.spec.containers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. 
This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.50. .template.spec.containers[].lifecycle.postStart.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.51. .template.spec.containers[].lifecycle.preStop Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 3.1.52. .template.spec.containers[].lifecycle.preStop.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.53. .template.spec.containers[].lifecycle.preStop.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.54. .template.spec.containers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.55. .template.spec.containers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.56. .template.spec.containers[].lifecycle.preStop.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.57. 
.template.spec.containers[].livenessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.58. .template.spec.containers[].livenessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.59. .template.spec.containers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.60. .template.spec.containers[].livenessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. 
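For illustration only, a minimal sketch of how the httpGet fields described here might be combined into a liveness probe on a pod template. The pod name, container name, image, path, port, and header below are hypothetical values chosen for the example, not defaults defined by this reference:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-example   # hypothetical name
spec:
  containers:
  - name: app                                  # hypothetical container name
    image: registry.example.com/app:1.0        # hypothetical image
    livenessProbe:
      httpGet:
        path: /healthz          # hypothetical path on the HTTP server
        port: 8080              # number in the range 1 to 65535, or an IANA_SVC_NAME
        scheme: HTTP            # defaults to HTTP if omitted
        httpHeaders:
        - name: X-Probe         # hypothetical custom header
          value: "1"
      initialDelaySeconds: 5
      periodSeconds: 10
      failureThreshold: 3

In this sketch the request is sent to the pod IP because host is not set, and the probe is considered failed after failureThreshold consecutive failures, as described in the livenessProbe properties above.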
Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.61. .template.spec.containers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.62. .template.spec.containers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.63. .template.spec.containers[].livenessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.64. .template.spec.containers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 3.1.65. .template.spec.containers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 3.1.66. .template.spec.containers[].readinessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. 
grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.67. .template.spec.containers[].readinessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.68. .template.spec.containers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.69. .template.spec.containers[].readinessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. 
Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.70. .template.spec.containers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.71. .template.spec.containers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.72. .template.spec.containers[].readinessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.73. .template.spec.containers[].resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.74. .template.spec.containers[].securityContext Description SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object Adds and removes POSIX capabilities from running containers. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. 
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials. 3.1.75. .template.spec.containers[].securityContext.capabilities Description Adds and removes POSIX capabilities from running containers. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 3.1.76. .template.spec.containers[].securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.77. .template.spec.containers[].securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 3.1.78. .template.spec.containers[].securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. 
gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 3.1.79. .template.spec.containers[].startupProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.80. .template.spec.containers[].startupProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. 
The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.81. .template.spec.containers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.82. .template.spec.containers[].startupProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.83. .template.spec.containers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.84. .template.spec.containers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.85. .template.spec.containers[].startupProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.86. .template.spec.containers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 3.1.87. .template.spec.containers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required name devicePath Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 3.1.88. .template.spec.containers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 3.1.89. .template.spec.containers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required name mountPath Property Type Description mountPath string Path within the container at which the volume should be mounted. 
Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 3.1.90. .template.spec.dnsConfig Description PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy. Type object Property Type Description nameservers array (string) A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. options array A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. options[] object PodDNSConfigOption defines DNS resolver options of a pod. searches array (string) A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed. 3.1.91. .template.spec.dnsConfig.options Description A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. Type array 3.1.92. .template.spec.dnsConfig.options[] Description PodDNSConfigOption defines DNS resolver options of a pod. Type object Property Type Description name string Required. value string 3.1.93. .template.spec.ephemeralContainers Description List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. Type array 3.1.94. .template.spec.ephemeralContainers[] Description An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. 
Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - "Always" means that kubelet always attempts to pull the latest image. Container will fail If the pull fails. - "IfNotPresent" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - "Never" means that kubelet never pulls an image, but only uses a local image. Container will fail if the image isn't present lifecycle object Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. livenessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. name string Name of the ephemeral container specified as a DNS_LABEL. This name must be unique among all containers, init containers and ephemeral containers. ports array Ports are not allowed for ephemeral containers. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. resources object ResourceRequirements describes the compute resource requirements. 
securityContext object SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. startupProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false targetContainerName string If set, the name of the container from PodSpec that this ephemeral container targets. The ephemeral container will be run in the namespaces (IPC, PID, etc) of this container. If not set then the ephemeral container uses the namespaces configured in the Pod spec. The container runtime must implement support for this feature. If the runtime does not support namespace targeting then the result of setting this field is undefined. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - "FallbackToLogsOnError" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - "File" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. 
If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 3.1.95. .template.spec.ephemeralContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 3.1.96. .template.spec.ephemeralContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object EnvVarSource represents a source for the value of an EnvVar. 3.1.97. .template.spec.ephemeralContainers[].env[].valueFrom Description EnvVarSource represents a source for the value of an EnvVar. Type object Property Type Description configMapKeyRef object Selects a key from a ConfigMap. fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format secretKeyRef object SecretKeySelector selects a key of a Secret. 3.1.98. .template.spec.ephemeralContainers[].env[].valueFrom.configMapKeyRef Description Selects a key from a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.99. .template.spec.ephemeralContainers[].env[].valueFrom.fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.100. .template.spec.ephemeralContainers[].env[].valueFrom.resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.101. .template.spec.ephemeralContainers[].env[].valueFrom.secretKeyRef Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.102. .template.spec.ephemeralContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. 
3.1.102. .template.spec.ephemeralContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 3.1.103. .template.spec.ephemeralContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. 3.1.104. .template.spec.ephemeralContainers[].envFrom[].configMapRef Description ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 3.1.105. .template.spec.ephemeralContainers[].envFrom[].secretRef Description SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined 3.1.106. .template.spec.ephemeralContainers[].lifecycle Description Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Type object Property Type Description postStart object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. preStop object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. 3.1.107. .template.spec.ephemeralContainers[].lifecycle.postStart Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 3.1.108. .template.spec.ephemeralContainers[].lifecycle.postStart.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem.
The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.109. .template.spec.ephemeralContainers[].lifecycle.postStart.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.110. .template.spec.ephemeralContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.111. .template.spec.ephemeralContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.112. .template.spec.ephemeralContainers[].lifecycle.postStart.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.113. .template.spec.ephemeralContainers[].lifecycle.preStop Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 3.1.114. .template.spec.ephemeralContainers[].lifecycle.preStop.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.115. .template.spec.ephemeralContainers[].lifecycle.preStop.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. 
httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.116. .template.spec.ephemeralContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.117. .template.spec.ephemeralContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.118. .template.spec.ephemeralContainers[].lifecycle.preStop.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.119. .template.spec.ephemeralContainers[].livenessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. 
spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.120. .template.spec.ephemeralContainers[].livenessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.121. .template.spec.ephemeralContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.122. .template.spec.ephemeralContainers[].livenessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.123. .template.spec.ephemeralContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.124. .template.spec.ephemeralContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.125. .template.spec.ephemeralContainers[].livenessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.126. .template.spec.ephemeralContainers[].ports Description Ports are not allowed for ephemeral containers. Type array 3.1.127. .template.spec.ephemeralContainers[].ports[] Description ContainerPort represents a network port in a single container. 
Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 3.1.128. .template.spec.ephemeralContainers[].readinessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.129. .template.spec.ephemeralContainers[].readinessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. 
To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.130. .template.spec.ephemeralContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.131. .template.spec.ephemeralContainers[].readinessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.132. .template.spec.ephemeralContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.133. .template.spec.ephemeralContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.134. .template.spec.ephemeralContainers[].readinessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.135. .template.spec.ephemeralContainers[].resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
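Both limits and requests use the Quantity syntax referenced in the table above. The following is a brief, hypothetical sketch of a container resources stanza; the values are arbitrary examples, not defaults:

resources:
  requests:
    cpu: "100m"        # amount the scheduler reserves for the container
    memory: "128Mi"
  limits:
    cpu: "500m"        # hard ceiling enforced by the runtime
    memory: "256Mi"

If requests is omitted, it defaults to limits when limits is explicitly set, as noted in the table above.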
3.1.136. .template.spec.ephemeralContainers[].securityContext Description SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object Adds and removes POSIX capabilities from running containers. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials. 3.1.137. .template.spec.ephemeralContainers[].securityContext.capabilities Description Adds and removes POSIX capabilities from running containers. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 3.1.138. .template.spec.ephemeralContainers[].securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.139. .template.spec.ephemeralContainers[].securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 3.1.140. .template.spec.ephemeralContainers[].securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
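Taken together, these fields are usually combined into a small hardening stanza on the container. The following is a hedged, minimal sketch of a container-level securityContext; the values are illustrative only, and whether they are admitted in practice depends on the security context constraints applied to the pod:

securityContext:
  runAsNonRoot: true
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop:
    - ALL
  seccompProfile:
    type: RuntimeDefault

Fields set here override the same fields in the pod-level securityContext, as noted in the description above.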
3.1.141. .template.spec.ephemeralContainers[].startupProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.
tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.142. .template.spec.ephemeralContainers[].startupProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.143. .template.spec.ephemeralContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.144. .template.spec.ephemeralContainers[].startupProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.145. .template.spec.ephemeralContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.146. .template.spec.ephemeralContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. 
This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.147. .template.spec.ephemeralContainers[].startupProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.148. .template.spec.ephemeralContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 3.1.149. .template.spec.ephemeralContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required name devicePath Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 3.1.150. .template.spec.ephemeralContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated. Type array 3.1.151. .template.spec.ephemeralContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required name mountPath Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 3.1.152. .template.spec.hostAliases Description HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. This is only valid for non-hostNetwork pods. Type array 3.1.153. .template.spec.hostAliases[] Description HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Type object Property Type Description hostnames array (string) Hostnames for the above IP address. ip string IP address of the host file entry.
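A short, hypothetical hostAliases entry illustrating the mapping described above; the IP address and host names are placeholders:

hostAliases:
- ip: "10.0.0.10"
  hostnames:
  - "db.internal.example.com"
  - "db"

Each entry becomes one line in the pod's hosts file, and, as noted above, the field is only valid for non-hostNetwork pods.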
3.1.154. .template.spec.imagePullSecrets Description ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod Type array 3.1.155. .template.spec.imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 3.1.156. .template.spec.initContainers Description List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ Type array 3.1.157. .template.spec.initContainers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets.
imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - "Always" means that kubelet always attempts to pull the latest image. Container will fail If the pull fails. - "IfNotPresent" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - "Never" means that kubelet never pulls an image, but only uses a local image. Container will fail if the image isn't present lifecycle object Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. livenessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. resources object ResourceRequirements describes the compute resource requirements. securityContext object SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. startupProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. 
The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - "FallbackToLogsOnError" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - "File" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 3.1.158. .template.spec.initContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 3.1.159. .template.spec.initContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object EnvVarSource represents a source for the value of an EnvVar. 3.1.160. .template.spec.initContainers[].env[].valueFrom Description EnvVarSource represents a source for the value of an EnvVar. Type object Property Type Description configMapKeyRef object Selects a key from a ConfigMap. fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format secretKeyRef object SecretKeySelector selects a key of a Secret. 3.1.161. .template.spec.initContainers[].env[].valueFrom.configMapKeyRef Description Selects a key from a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.162. 
.template.spec.initContainers[].env[].valueFrom.fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.163. .template.spec.initContainers[].env[].valueFrom.resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.164. .template.spec.initContainers[].env[].valueFrom.secretKeyRef Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.165. .template.spec.initContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 3.1.166. .template.spec.initContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. 3.1.167. .template.spec.initContainers[].envFrom[].configMapRef Description ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 3.1.168. .template.spec.initContainers[].envFrom[].secretRef Description SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined 3.1.169. .template.spec.initContainers[].lifecycle Description Lifecycle describes actions that the management system should take in response to container lifecycle events. 
For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Type object Property Type Description postStart object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. preStop object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. 3.1.170. .template.spec.initContainers[].lifecycle.postStart Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 3.1.171. .template.spec.initContainers[].lifecycle.postStart.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.172. .template.spec.initContainers[].lifecycle.postStart.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.173. .template.spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.174. .template.spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.175. .template.spec.initContainers[].lifecycle.postStart.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.176. 
.template.spec.initContainers[].lifecycle.preStop Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 3.1.177. .template.spec.initContainers[].lifecycle.preStop.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.178. .template.spec.initContainers[].lifecycle.preStop.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.179. .template.spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.180. .template.spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.181. .template.spec.initContainers[].lifecycle.preStop.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.182. .template.spec.initContainers[].livenessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGetAction describes an action based on HTTP Get requests. 
initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.183. .template.spec.initContainers[].livenessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.184. .template.spec.initContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.185. .template.spec.initContainers[].livenessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 
Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.186. .template.spec.initContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.187. .template.spec.initContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.188. .template.spec.initContainers[].livenessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.189. .template.spec.initContainers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 3.1.190. .template.spec.initContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 3.1.191. .template.spec.initContainers[].readinessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. 
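Example of the container port and HTTP liveness probe fields described in the preceding sections. The container name, image, probe path, and header are hypothetical placeholders; the numeric values simply restate the documented defaults.

template:
  spec:
    containers:
    - name: api                          # hypothetical container
      image: example.com/api:1.0         # placeholder image
      ports:
      - name: http                       # must be an IANA_SVC_NAME, unique within the pod
        containerPort: 8080
        protocol: TCP
      livenessProbe:
        httpGet:
          path: /healthz                 # hypothetical health endpoint
          port: http                     # the port can be referenced by name or number
          scheme: HTTP
          httpHeaders:
          - name: X-Probe                # hypothetical custom header
            value: liveness
        initialDelaySeconds: 10
        periodSeconds: 10                # default is 10 seconds
        timeoutSeconds: 1                # default is 1 second
        failureThreshold: 3              # default is 3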
successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.192. .template.spec.initContainers[].readinessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.193. .template.spec.initContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.194. .template.spec.initContainers[].readinessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.195. .template.spec.initContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.196. 
.template.spec.initContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.197. .template.spec.initContainers[].readinessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.198. .template.spec.initContainers[].resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.199. .template.spec.initContainers[].securityContext Description SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object Adds and removes POSIX capabilities from running containers. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. 
If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials. 3.1.200. .template.spec.initContainers[].securityContext.capabilities Description Adds and removes POSIX capabilities from running containers. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 3.1.201. .template.spec.initContainers[].securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.202. .template.spec.initContainers[].securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 3.1.203. .template.spec.initContainers[].securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. 
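Example combining the resources and container-level securityContext fields described above. This is a minimal sketch; the container name, image, and resource values are hypothetical and must be adjusted to the workload.

template:
  spec:
    containers:
    - name: app                          # hypothetical container
      image: example.com/app:1.0         # placeholder image
      resources:
        requests:
          cpu: 100m                      # minimum compute resources required
          memory: 128Mi
        limits:
          cpu: 500m                      # maximum compute resources allowed
          memory: 256Mi
      securityContext:
        allowPrivilegeEscalation: false
        runAsNonRoot: true
        readOnlyRootFilesystem: true
        capabilities:
          drop:
          - ALL                          # remove all POSIX capabilities
        seccompProfile:
          type: RuntimeDefault           # use the container runtime default seccomp profile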
Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 3.1.204. .template.spec.initContainers[].startupProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.205. .template.spec.initContainers[].startupProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.206. 
.template.spec.initContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.207. .template.spec.initContainers[].startupProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.208. .template.spec.initContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.209. .template.spec.initContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.210. .template.spec.initContainers[].startupProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.211. .template.spec.initContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 3.1.212. .template.spec.initContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required name devicePath Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 3.1.213. .template.spec.initContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 3.1.214. .template.spec.initContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required name mountPath Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. 
name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 3.1.215. .template.spec.os Description PodOS defines the OS parameters of a pod. Type object Required name Property Type Description name string Name is the name of the operating system. The currently supported values are linux and windows. Additional value may be defined in future and can be one of: https://github.com/opencontainers/runtime-spec/blob/master/config.md#platform-specific-configuration Clients should expect to handle additional values and treat unrecognized values in this field as os: null 3.1.216. .template.spec.readinessGates Description If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True" More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates Type array 3.1.217. .template.spec.readinessGates[] Description PodReadinessGate contains the reference to a pod condition Type object Required conditionType Property Type Description conditionType string ConditionType refers to a condition in the pod's condition list with matching type. 3.1.218. .template.spec.securityContext Description PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext. Type object Property Type Description fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user.
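Example of the startupProbe and volumeMounts fields described in the preceding sections. The container name, image, port, and paths are hypothetical; a minimal emptyDir volume (documented later in this reference) is included only so that the mount resolves to a defined volume.

template:
  spec:
    containers:
    - name: db                           # hypothetical container
      image: example.com/db:1.0          # placeholder image
      startupProbe:
        tcpSocket:
          port: 5432                     # hypothetical database port
        periodSeconds: 10
        failureThreshold: 30             # allow up to 30 * 10s for a slow start before liveness takes over
      volumeMounts:
      - name: data                       # must match the name of a volume defined below
        mountPath: /var/lib/data
        readOnly: false
    volumes:
    - name: data
      emptyDir: {}                       # minimal volume so the mount is valid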
If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. supplementalGroups array (integer) A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows. sysctls array Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. sysctls[] object Sysctl defines a kernel parameter to be set windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials. 3.1.219. .template.spec.securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.220. .template.spec.securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 3.1.221. .template.spec.securityContext.sysctls Description Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. Type array 3.1.222. 
.template.spec.securityContext.sysctls[] Description Sysctl defines a kernel parameter to be set Type object Required name value Property Type Description name string Name of a property to set value string Value of a property to set 3.1.223. .template.spec.securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 3.1.224. .template.spec.tolerations Description If specified, the pod's tolerations. Type array 3.1.225. .template.spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. Possible enum values: - "NoExecute" Evict any already-running pods that do not tolerate the taint. Currently enforced by NodeController. - "NoSchedule" Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler. - "PreferNoSchedule" Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. Possible enum values: - "Equal" - "Exists" tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. 
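Example of the pod-level securityContext fields described above, including the sysctls list. This is an illustrative sketch only; the UID, GID, group, and sysctl values are hypothetical, and namespaced sysctls must be permitted by the cluster configuration.

template:
  spec:
    securityContext:
      runAsNonRoot: true
      runAsUser: 1000820000              # hypothetical UID
      fsGroup: 1000820000                # supported volumes are made group-owned by this GID
      fsGroupChangePolicy: OnRootMismatch
      supplementalGroups:
      - 5555                             # hypothetical additional group
      seccompProfile:
        type: RuntimeDefault
      sysctls:
      - name: net.ipv4.ip_local_port_range   # namespaced sysctl; unsupported sysctls can prevent launch
        value: "32768 60999"
    containers:
    - name: app                          # hypothetical container
      image: example.com/app:1.0         # placeholder image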
value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 3.1.226. .template.spec.topologySpreadConstraints Description TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. Type array 3.1.227. .template.spec.topologySpreadConstraints[] Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description labelSelector LabelSelector LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule , it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway , it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. 
This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default). nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is a alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is a alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it more imbalanced. It's a required field. Possible enum values: - "DoNotSchedule" instructs the scheduler not to schedule the pod when constraints are not satisfied. - "ScheduleAnyway" instructs the scheduler to schedule the pod even if constraints are not satisfied. 3.1.228. .template.spec.volumes Description List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes Type array 3.1.229. .template.spec.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod. Type object Required name Property Type Description awsElasticBlockStore object Represents a Persistent Disk resource in AWS. An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. 
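Example of the tolerations and topologySpreadConstraints fields described in the preceding sections. The taint key, label, and values are hypothetical placeholders chosen only to illustrate the field shapes.

template:
  spec:
    tolerations:
    - key: "dedicated"                   # hypothetical taint key
      operator: "Equal"
      value: "batch"
      effect: "NoSchedule"
    - key: "node.kubernetes.io/unreachable"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 300             # evict 5 minutes after the taint appears
    topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone   # each zone is one topology domain
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: web                       # hypothetical pod label used to count matching pods
    containers:
    - name: web                          # hypothetical container
      image: example.com/web:1.0         # placeholder image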
An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. azureDisk object AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object AzureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs object Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. cinder object Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. configMap object Adapts a ConfigMap into a volume. The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. csi object Represents a source location of a volume to mount, managed by an external CSI driver downwardAPI object DownwardAPIVolumeSource represents a volume containing downward API info. Downward API volumes support ownership management and SELinux relabeling. emptyDir object Represents an empty directory for a pod. Empty directory volumes support ownership management and SELinux relabeling. ephemeral object Represents an ephemeral volume that is handled by a normal storage driver. fc object Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. flexVolume object FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. gcePersistentDisk object Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. gitRepo object Represents a volume that is populated with the contents of a git repository. Git repo volumes do not support ownership management. Git repo volumes support SELinux relabeling. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling. hostPath object Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling. iscsi object Represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. name string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object Represents an NFS mount that lasts the lifetime of a pod. 
NFS volumes do not support ownership management or SELinux relabeling. persistentVolumeClaim object PersistentVolumeClaimVolumeSource references the user's PVC in the same namespace. This volume finds the bound PV and mounts that volume for the pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system). photonPersistentDisk object Represents a Photon Controller persistent disk resource. portworxVolume object PortworxVolumeSource represents a Portworx volume resource. projected object Represents a projected volume source quobyte object Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling. rbd object Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. scaleIO object ScaleIOVolumeSource represents a persistent ScaleIO volume secret object Adapts a Secret into a volume. The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. storageos object Represents a StorageOS persistent volume resource. vsphereVolume object Represents a vSphere volume resource. 3.1.230. .template.spec.volumes[].awsElasticBlockStore Description Represents a Persistent Disk resource in AWS. An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 3.1.231. .template.spec.volumes[].azureDisk Description AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. 
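Example of a volume backed by the persistentVolumeClaim source described above, mounted into a container. This is a sketch only; it assumes a PVC named app-data already exists in the same namespace, and the container name, image, and paths are hypothetical.

template:
  spec:
    containers:
    - name: app                          # hypothetical container
      image: example.com/app:1.0         # placeholder image
      volumeMounts:
      - name: data
        mountPath: /data                 # hypothetical mount point
    volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data              # hypothetical PVC in the same namespace
        readOnly: false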
kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 3.1.232. .template.spec.volumes[].azureFile Description AzureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 3.1.233. .template.spec.volumes[].cephfs Description Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 3.1.234. .template.spec.volumes[].cephfs.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 3.1.235. .template.spec.volumes[].cinder Description Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 3.1.236. 
.template.spec.volumes[].cinder.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 3.1.237. .template.spec.volumes[].configMap Description Adapts a ConfigMap into a volume. The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional specify whether the ConfigMap or its keys must be defined 3.1.238. .template.spec.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.239. .template.spec.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.240. 
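Example of the configMap volume source described above, projecting a single key with explicit file permissions. The ConfigMap name, key, and mount path are hypothetical placeholders.

template:
  spec:
    containers:
    - name: app                          # hypothetical container
      image: example.com/app:1.0         # placeholder image
      volumeMounts:
      - name: config
        mountPath: /etc/app              # hypothetical mount point
        readOnly: true
    volumes:
    - name: config
      configMap:
        name: app-config                 # hypothetical ConfigMap name
        defaultMode: 0440                # octal mode bits applied to projected files
        optional: false                  # volume setup fails if the ConfigMap is missing
        items:
        - key: app.properties            # only this key is projected
          path: app.properties           # relative path within the volume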
.template.spec.volumes[].csi Description Represents a source location of a volume to mount, managed by an external CSI driver Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 3.1.241. .template.spec.volumes[].csi.nodePublishSecretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 3.1.242. .template.spec.volumes[].downwardAPI Description DownwardAPIVolumeSource represents a volume containing downward API info. Downward API volumes support ownership management and SELinux relabeling. Type object Property Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume files items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 3.1.243. .template.spec.volumes[].downwardAPI.items Description Items is a list of downward API volume files Type array 3.1.244. .template.spec.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format 3.1.245. .template.spec.volumes[].downwardAPI.items[].fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object.
Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.246. .template.spec.volumes[].downwardAPI.items[].resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.247. .template.spec.volumes[].emptyDir Description Represents an empty directory for a pod. Empty directory volumes support ownership management and SELinux relabeling. Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit Quantity sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: http://kubernetes.io/docs/user-guide/volumes#emptydir 3.1.248. .template.spec.volumes[].ephemeral Description Represents an ephemeral volume that is handled by a normal storage driver. Type object Property Type Description volumeClaimTemplate object PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource. 3.1.249. .template.spec.volumes[].ephemeral.volumeClaimTemplate Description PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource. Type object Required spec Property Type Description metadata ObjectMeta May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes 3.1.250. .template.spec.volumes[].ephemeral.volumeClaimTemplate.spec Description PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. dataSourceRef object TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. resources object ResourceRequirements describes the compute resource requirements. selector LabelSelector selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. 
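Example of the downwardAPI and emptyDir volume sources described in the preceding sections, using fieldRef, resourceFieldRef, and a memory-backed scratch directory. The container name, image, paths, and size limit are hypothetical placeholders.

template:
  spec:
    containers:
    - name: app                              # hypothetical container
      image: example.com/app:1.0             # placeholder image
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo              # hypothetical mount point
      - name: scratch
        mountPath: /tmp/scratch              # hypothetical mount point
    volumes:
    - name: podinfo
      downwardAPI:
        defaultMode: 0644
        items:
        - path: labels                       # file created under /etc/podinfo
          fieldRef:
            fieldPath: metadata.labels
        - path: cpu_limit
          resourceFieldRef:
            containerName: app               # required for volumes
            resource: limits.cpu
            divisor: 1m                      # output format for the exposed value
    - name: scratch
      emptyDir:
        medium: Memory                       # back the directory with node memory
        sizeLimit: 64Mi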
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 3.1.251. .template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 3.1.252. .template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 3.1.253. .template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.254. .template.spec.volumes[].fc Description Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 3.1.255. .template.spec.volumes[].flexVolume Description FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. 
"ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. 3.1.256. .template.spec.volumes[].flexVolume.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 3.1.257. .template.spec.volumes[].flocker Description Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 3.1.258. .template.spec.volumes[].gcePersistentDisk Description Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 3.1.259. .template.spec.volumes[].gitRepo Description Represents a volume that is populated with the contents of a git repository. Git repo volumes do not support ownership management. Git repo volumes support SELinux relabeling. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. 
Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 3.1.260. .template.spec.volumes[].glusterfs Description Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling. Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 3.1.261. .template.spec.volumes[].hostPath Description Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath 3.1.262. .template.spec.volumes[].iscsi Description Represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. Type object Required targetPortal iqn lun Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 3.1.263. .template.spec.volumes[].iscsi.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 3.1.264. .template.spec.volumes[].nfs Description Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling. Type object Required server path Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 3.1.265. .template.spec.volumes[].persistentVolumeClaim Description PersistentVolumeClaimVolumeSource references the user's PVC in the same namespace. This volume finds the bound PV and mounts that volume for the pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system). Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 3.1.266. .template.spec.volumes[].photonPersistentDisk Description Represents a Photon Controller persistent disk resource. Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 3.1.267. .template.spec.volumes[].portworxVolume Description PortworxVolumeSource represents a Portworx volume resource. Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 3.1.268. .template.spec.volumes[].projected Description Represents a projected volume source Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array sources is the list of volume projections sources[] object Projection that may be projected along with other supported volume types 3.1.269. .template.spec.volumes[].projected.sources Description sources is the list of volume projections Type array 3.1.270. 
.template.spec.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types Type object Property Type Description configMap object Adapts a ConfigMap into a projected volume. The contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. Note that this is identical to a configmap volume source without the default mode. downwardAPI object Represents downward API info for projecting into a projected volume. Note that this is identical to a downwardAPI volume source without the default mode. secret object Adapts a secret into a projected volume. The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode. serviceAccountToken object ServiceAccountTokenProjection represents a projected service account token volume. This projection can be used to insert a service account token into the pods runtime filesystem for use against APIs (Kubernetes API Server or otherwise). 3.1.271. .template.spec.volumes[].projected.sources[].configMap Description Adapts a ConfigMap into a projected volume. The contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. Note that this is identical to a configmap volume source without the default mode. Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional specify whether the ConfigMap or its keys must be defined 3.1.272. .template.spec.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.273. .template.spec.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. 
YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.274. .template.spec.volumes[].projected.sources[].downwardAPI Description Represents downward API info for projecting into a projected volume. Note that this is identical to a downwardAPI volume source without the default mode. Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 3.1.275. .template.spec.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 3.1.276. .template.spec.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format 3.1.277. .template.spec.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.278. .template.spec.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.279. .template.spec.volumes[].projected.sources[].secret Description Adapts a secret into a projected volume. The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode. Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. 
If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional field specify whether the Secret or its key must be defined 3.1.280. .template.spec.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.281. .template.spec.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.282. .template.spec.volumes[].projected.sources[].serviceAccountToken Description ServiceAccountTokenProjection represents a projected service account token volume. This projection can be used to insert a service account token into the pods runtime filesystem for use against APIs (Kubernetes API Server or otherwise). Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 3.1.283. .template.spec.volumes[].quobyte Description Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling. Type object Required registry volume Property Type Description group string group to map volume access to Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. 
Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to Defaults to serviceaccount user volume string volume is a string that references an already created Quobyte volume by name. 3.1.284. .template.spec.volumes[].rbd Description Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. Type object Required monitors image Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 3.1.285. .template.spec.volumes[].rbd.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 3.1.286. .template.spec.volumes[].scaleIO Description ScaleIOVolumeSource represents a persistent ScaleIO volume Type object Required gateway system secretRef Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain.
system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 3.1.287. .template.spec.volumes[].scaleIO.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 3.1.288. .template.spec.volumes[].secret Description Adapts a Secret into a volume. The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 3.1.289. .template.spec.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.290. .template.spec.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.291. 
.template.spec.volumes[].storageos Description Represents a StorageOS persistent volume resource. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 3.1.292. .template.spec.volumes[].storageos.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 3.1.293. .template.spec.volumes[].vsphereVolume Description Represents a vSphere volume resource. Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 3.2. API endpoints The following API endpoints are available: /api/v1/podtemplates GET : list or watch objects of kind PodTemplate /api/v1/watch/podtemplates GET : watch individual changes to a list of PodTemplate. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/podtemplates DELETE : delete collection of PodTemplate GET : list or watch objects of kind PodTemplate POST : create a PodTemplate /api/v1/watch/namespaces/{namespace}/podtemplates GET : watch individual changes to a list of PodTemplate. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/podtemplates/{name} DELETE : delete a PodTemplate GET : read the specified PodTemplate PATCH : partially update the specified PodTemplate PUT : replace the specified PodTemplate /api/v1/watch/namespaces/{namespace}/podtemplates/{name} GET : watch changes to an object of kind PodTemplate. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 3.2.1. /api/v1/podtemplates Table 3.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind PodTemplate Table 3.2. HTTP responses HTTP code Reponse body 200 - OK PodTemplateList schema 401 - Unauthorized Empty 3.2.2. /api/v1/watch/podtemplates Table 3.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of PodTemplate. deprecated: use the 'watch' parameter with a list operation instead. Table 3.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /api/v1/namespaces/{namespace}/podtemplates Table 3.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 3.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of PodTemplate Table 3.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. 
This limits the duration of the call, regardless of any activity or inactivity. Table 3.8. Body parameters Parameter Type Description body DeleteOptions schema Table 3.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PodTemplate Table 3.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.11. HTTP responses HTTP code Reponse body 200 - OK PodTemplateList schema 401 - Unauthorized Empty HTTP method POST Description create a PodTemplate Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. Body parameters Parameter Type Description body PodTemplate schema Table 3.14. HTTP responses HTTP code Reponse body 200 - OK PodTemplate schema 201 - Created PodTemplate schema 202 - Accepted PodTemplate schema 401 - Unauthorized Empty 3.2.4. /api/v1/watch/namespaces/{namespace}/podtemplates Table 3.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 3.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of PodTemplate. deprecated: use the 'watch' parameter with a list operation instead. Table 3.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.5. /api/v1/namespaces/{namespace}/podtemplates/{name} Table 3.18. Global path parameters Parameter Type Description name string name of the PodTemplate namespace string object name and auth scope, such as for teams and projects Table 3.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a PodTemplate Table 3.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.21. Body parameters Parameter Type Description body DeleteOptions schema Table 3.22. HTTP responses HTTP code Reponse body 200 - OK PodTemplate schema 202 - Accepted PodTemplate schema 401 - Unauthorized Empty HTTP method GET Description read the specified PodTemplate Table 3.23. HTTP responses HTTP code Reponse body 200 - OK PodTemplate schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PodTemplate Table 3.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.25. Body parameters Parameter Type Description body Patch schema Table 3.26. HTTP responses HTTP code Reponse body 200 - OK PodTemplate schema 201 - Created PodTemplate schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PodTemplate Table 3.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.28. Body parameters Parameter Type Description body PodTemplate schema Table 3.29. HTTP responses HTTP code Reponse body 200 - OK PodTemplate schema 201 - Created PodTemplate schema 401 - Unauthorized Empty 3.2.6. /api/v1/watch/namespaces/{namespace}/podtemplates/{name} Table 3.30. Global path parameters Parameter Type Description name string name of the PodTemplate namespace string object name and auth scope, such as for teams and projects Table 3.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list, the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind PodTemplate. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.32. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/template_apis/podtemplate-v1 |
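The tables above describe the query parameters for the PodTemplate list and watch endpoints. As a rough sketch of how the limit, continue, and watch parameters can be exercised from the command line, the following commands use oc; the namespace my-project is a placeholder and is not part of this reference:

# List PodTemplate objects in pages; oc drives the limit/continue parameters described above.
oc get podtemplates -n my-project --chunk-size=1

# The same list endpoint called directly; limit=1 maps to the limit query parameter.
oc get --raw "/api/v1/namespaces/my-project/podtemplates?limit=1"

# Stream add, update, and delete notifications, as described for the watch parameter.
oc get podtemplates -n my-project --watch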
Chapter 1. Configuring Jenkins images | Chapter 1. Configuring Jenkins images OpenShift Container Platform provides a container image for running Jenkins. This image provides a Jenkins server instance, which can be used to set up a basic flow for continuous testing, integration, and delivery. The image is based on the Red Hat Universal Base Images (UBI). OpenShift Container Platform follows the LTS release of Jenkins. OpenShift Container Platform provides an image that contains Jenkins 2.x. The OpenShift Container Platform Jenkins images are available on Quay.io or registry.redhat.io . For example: USD podman pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag> To use these images, you can either access them directly from these registries or push them into your OpenShift Container Platform container image registry. Additionally, you can create an image stream that points to the image, either in your container image registry or at the external location. Your OpenShift Container Platform resources can then reference the image stream. But for convenience, OpenShift Container Platform provides image streams in the openshift namespace for the core Jenkins image as well as the example Agent images provided for OpenShift Container Platform integration with Jenkins. 1.1. Configuration and customization You can manage Jenkins authentication in two ways: OpenShift Container Platform OAuth authentication provided by the OpenShift Container Platform Login plugin. Standard authentication provided by Jenkins. 1.1.1. OpenShift Container Platform OAuth authentication OAuth authentication is activated by configuring options on the Configure Global Security panel in the Jenkins UI, or by setting the OPENSHIFT_ENABLE_OAUTH environment variable on the Jenkins Deployment configuration to anything other than false . This activates the OpenShift Container Platform Login plugin, which retrieves the configuration information from pod data or by interacting with the OpenShift Container Platform API server. Valid credentials are controlled by the OpenShift Container Platform identity provider. Jenkins supports both browser and non-browser access. Valid users are automatically added to the Jenkins authorization matrix at log in, where OpenShift Container Platform roles dictate the specific Jenkins permissions that users have. The roles used by default are the predefined admin , edit , and view . The login plugin executes self-SAR requests against those roles in the project or namespace that Jenkins is running in. Users with the admin role have the traditional Jenkins administrative user permissions. Users with the edit or view role have progressively fewer permissions. The default OpenShift Container Platform admin , edit , and view roles and the Jenkins permissions those roles are assigned in the Jenkins instance are configurable. When running Jenkins in an OpenShift Container Platform pod, the login plugin looks for a config map named openshift-jenkins-login-plugin-config in the namespace that Jenkins is running in. If this plugin finds and can read in that config map, you can define the role to Jenkins Permission mappings. Specifically: The login plugin treats the key and value pairs in the config map as Jenkins permission to OpenShift Container Platform role mappings. The key is the Jenkins permission group short ID and the Jenkins permission short ID, with those two separated by a hyphen character. 
If you want to add the Overall Jenkins Administer permission to an OpenShift Container Platform role, the key should be Overall-Administer . To get a sense of which permission groups and permissions IDs are available, go to the matrix authorization page in the Jenkins console and IDs for the groups and individual permissions in the table they provide. The value of the key and value pair is the list of OpenShift Container Platform roles the permission should apply to, with each role separated by a comma. If you want to add the Overall Jenkins Administer permission to both the default admin and edit roles, as well as a new Jenkins role you have created, the value for the key Overall-Administer would be admin,edit,jenkins . Note The admin user that is pre-populated in the OpenShift Container Platform Jenkins image with administrative privileges is not given those privileges when OpenShift Container Platform OAuth is used. To grant these permissions the OpenShift Container Platform cluster administrator must explicitly define that user in the OpenShift Container Platform identity provider and assign the admin role to the user. Jenkins users' permissions that are stored can be changed after the users are initially established. The OpenShift Container Platform Login plugin polls the OpenShift Container Platform API server for permissions and updates the permissions stored in Jenkins for each user with the permissions retrieved from OpenShift Container Platform. If the Jenkins UI is used to update permissions for a Jenkins user, the permission changes are overwritten the time the plugin polls OpenShift Container Platform. You can control how often the polling occurs with the OPENSHIFT_PERMISSIONS_POLL_INTERVAL environment variable. The default polling interval is five minutes. The easiest way to create a new Jenkins service using OAuth authentication is to use a template. 1.1.2. Jenkins authentication Jenkins authentication is used by default if the image is run directly, without using a template. The first time Jenkins starts, the configuration is created along with the administrator user and password. The default user credentials are admin and password . Configure the default password by setting the JENKINS_PASSWORD environment variable when using, and only when using, standard Jenkins authentication. Procedure Create a Jenkins application that uses standard Jenkins authentication by entering the following command: USD oc new-app -e \ JENKINS_PASSWORD=<password> \ ocp-tools-4/jenkins-rhel8 1.2. Jenkins environment variables The Jenkins server can be configured with the following environment variables: Variable Definition Example values and settings OPENSHIFT_ENABLE_OAUTH Determines whether the OpenShift Container Platform Login plugin manages authentication when logging in to Jenkins. To enable, set to true . Default: false JENKINS_PASSWORD The password for the admin user when using standard Jenkins authentication. Not applicable when OPENSHIFT_ENABLE_OAUTH is set to true . Default: password JAVA_MAX_HEAP_PARAM , CONTAINER_HEAP_PERCENT , JENKINS_MAX_HEAP_UPPER_BOUND_MB These values control the maximum heap size of the Jenkins JVM. If JAVA_MAX_HEAP_PARAM is set, its value takes precedence. Otherwise, the maximum heap size is dynamically calculated as CONTAINER_HEAP_PERCENT of the container memory limit, optionally capped at JENKINS_MAX_HEAP_UPPER_BOUND_MB MiB. By default, the maximum heap size of the Jenkins JVM is set to 50% of the container memory limit with no cap. 
JAVA_MAX_HEAP_PARAM example setting: -Xmx512m CONTAINER_HEAP_PERCENT default: 0.5 , or 50% JENKINS_MAX_HEAP_UPPER_BOUND_MB example setting: 512 MiB JAVA_INITIAL_HEAP_PARAM , CONTAINER_INITIAL_PERCENT These values control the initial heap size of the Jenkins JVM. If JAVA_INITIAL_HEAP_PARAM is set, its value takes precedence. Otherwise, the initial heap size is dynamically calculated as CONTAINER_INITIAL_PERCENT of the dynamically calculated maximum heap size. By default, the JVM sets the initial heap size. JAVA_INITIAL_HEAP_PARAM example setting: -Xms32m CONTAINER_INITIAL_PERCENT example setting: 0.1 , or 10% CONTAINER_CORE_LIMIT If set, specifies an integer number of cores used for sizing numbers of internal JVM threads. Example setting: 2 JAVA_TOOL_OPTIONS Specifies options to apply to all JVMs running in this container. It is not recommended to override this value. Default: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true JAVA_GC_OPTS Specifies Jenkins JVM garbage collection parameters. It is not recommended to override this value. Default: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 JENKINS_JAVA_OVERRIDES Specifies additional options for the Jenkins JVM. These options are appended to all other options, including the Java options above, and may be used to override any of them if necessary. Separate each additional option with a space; if any option contains space characters, escape them with a backslash. Example settings: -Dfoo -Dbar ; -Dfoo=first\ value -Dbar=second\ value . JENKINS_OPTS Specifies arguments to Jenkins. INSTALL_PLUGINS Specifies additional Jenkins plugins to install when the container is first run or when OVERRIDE_PV_PLUGINS_WITH_IMAGE_PLUGINS is set to true . Plugins are specified as a comma-delimited list of name:version pairs. Example setting: git:3.7.0,subversion:2.10.2 . OPENSHIFT_PERMISSIONS_POLL_INTERVAL Specifies the interval in milliseconds that the OpenShift Container Platform Login plugin polls OpenShift Container Platform for the permissions that are associated with each user that is defined in Jenkins. Default: 300000 - 5 minutes OVERRIDE_PV_CONFIG_WITH_IMAGE_CONFIG When running this image with an OpenShift Container Platform persistent volume (PV) for the Jenkins configuration directory, the transfer of configuration from the image to the PV is performed only the first time the image starts because the PV is assigned when the persistent volume claim (PVC) is created. If you create a custom image that extends this image and updates the configuration in the custom image after the initial startup, the configuration is not copied over unless you set this environment variable to true . Default: false OVERRIDE_PV_PLUGINS_WITH_IMAGE_PLUGINS When running this image with an OpenShift Container Platform PV for the Jenkins configuration directory, the transfer of plugins from the image to the PV is performed only the first time the image starts because the PV is assigned when the PVC is created. If you create a custom image that extends this image and updates plugins in the custom image after the initial startup, the plugins are not copied over unless you set this environment variable to true . Default: false ENABLE_FATAL_ERROR_LOG_FILE When running this image with an OpenShift Container Platform PVC for the Jenkins configuration directory, this environment variable allows the fatal error log file to persist when a fatal error occurs. 
The fatal error file is saved at /var/lib/jenkins/logs . Default: false AGENT_BASE_IMAGE Setting this value overrides the image used for the jnlp container in the sample Kubernetes plugin pod templates provided with this image. Otherwise, the image from the jenkins-agent-base-rhel8:latest image stream tag in the openshift namespace is used. Default: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest JAVA_BUILDER_IMAGE Setting this value overrides the image used for the java-builder container in the java-builder sample Kubernetes plugin pod templates provided with this image. Otherwise, the image from the java:latest image stream tag in the openshift namespace is used. Default: image-registry.openshift-image-registry.svc:5000/openshift/java:latest JAVA_FIPS_OPTIONS Setting this value controls how the JVM operates when running on a FIPS node. For more information, see Configure Red Hat build of OpenJDK 11 in FIPS mode . Default: -Dcom.redhat.fips=false 1.3. Providing Jenkins cross project access If you are going to run Jenkins somewhere other than your same project, you must provide an access token to Jenkins to access your project. Procedure Identify the secret for the service account that has appropriate permissions to access the project that Jenkins must access by entering the following command: USD oc describe serviceaccount jenkins Example output Name: default Labels: <none> Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d } Tokens: jenkins-token-izv1u jenkins-token-uyswp In this case the secret is named jenkins-token-uyswp . Retrieve the token from the secret by entering the following command: USD oc describe secret <secret name from above> Example output Name: jenkins-token-uyswp Labels: <none> Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes token: eyJhbGc..<content cut>....wRA The token parameter contains the token value Jenkins requires to access the project. 1.4. Jenkins cross volume mount points The Jenkins image can be run with mounted volumes to enable persistent storage for the configuration: /var/lib/jenkins is the data directory where Jenkins stores configuration files, including job definitions. 1.5. Customizing the Jenkins image through source-to-image To customize the official OpenShift Container Platform Jenkins image, you can use the image as a source-to-image (S2I) builder. You can use S2I to copy your custom Jenkins jobs definitions, add additional plugins, or replace the provided config.xml file with your own, custom, configuration. To include your modifications in the Jenkins image, you must have a Git repository with the following directory structure: plugins This directory contains those binary Jenkins plugins you want to copy into Jenkins. plugins.txt This file lists the plugins you want to install using the following syntax: configuration/jobs This directory contains the Jenkins job definitions. configuration/config.xml This file contains your custom Jenkins configuration. The contents of the configuration/ directory is copied to the /var/lib/jenkins/ directory, so you can also include additional files, such as credentials.xml , there. 
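To make the repository layout for source-to-image customization easier to picture, the following sketch shows one possible tree; the plugin names and versions are examples only, and the plugins.txt entries follow the pluginId:pluginVersion syntax listed with the commands for this chapter:

# Hypothetical layout of the customization Git repository:
#   plugins/                   binary Jenkins plugins copied into Jenkins
#   plugins.txt                plugins to install, one pluginId:pluginVersion per line
#   configuration/config.xml   custom Jenkins configuration
#   configuration/jobs/        Jenkins job definitions
# Example plugins.txt contents (plugin names and versions are illustrative):
cat plugins.txt
git:3.7.0
subversion:2.10.2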
Sample build configuration to customize the Jenkins image in OpenShift Container Platform apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: custom-jenkins-build spec: source: 1 git: uri: https://github.com/custom/repository type: Git strategy: 2 sourceStrategy: from: kind: ImageStreamTag name: jenkins:2 namespace: openshift type: Source output: 3 to: kind: ImageStreamTag name: custom-jenkins:latest 1 The source parameter defines the source Git repository with the layout described above. 2 The strategy parameter defines the original Jenkins image to use as a source image for the build. 3 The output parameter defines the resulting, customized Jenkins image that you can use in deployment configurations instead of the official Jenkins image. 1.6. Configuring the Jenkins Kubernetes plugin The OpenShift Jenkins image includes the preinstalled Kubernetes plugin for Jenkins so that Jenkins agents can be dynamically provisioned on multiple container hosts using Kubernetes and OpenShift Container Platform. To use the Kubernetes plugin, OpenShift Container Platform provides an OpenShift Agent Base image that is suitable for use as a Jenkins agent. Important OpenShift Container Platform 4.11 moves the OpenShift Jenkins and OpenShift Agent Base images to the ocp-tools-4 repository at registry.redhat.io so that Red Hat can produce and update the images outside the OpenShift Container Platform lifecycle. Previously, these images were in the OpenShift Container Platform install payload and the openshift4 repository at registry.redhat.io . The OpenShift Jenkins Maven and NodeJS Agent images were removed from the OpenShift Container Platform 4.11 payload. Red Hat no longer produces these images, and they are not available from the ocp-tools-4 repository at registry.redhat.io . Red Hat maintains the 4.10 and earlier versions of these images for any significant bug fixes or security CVEs, following the OpenShift Container Platform lifecycle policy . For more information, see the "Important changes to OpenShift Jenkins images" link in the following "Additional resources" section. The Maven and Node.js agent images are automatically configured as Kubernetes pod template images within the OpenShift Container Platform Jenkins image configuration for the Kubernetes plugin. That configuration includes labels for each image that you can apply to any of your Jenkins jobs under their Restrict where this project can be run setting. If the label is applied, jobs run under an OpenShift Container Platform pod running the respective agent image. Important In OpenShift Container Platform 4.10 and later, the recommended pattern for running Jenkins agents using the Kubernetes plugin is to use pod templates with both jnlp and sidecar containers. The jnlp container uses the OpenShift Container Platform Jenkins Base agent image to facilitate launching a separate pod for your build. The sidecar container image has the tools needed to build in a particular language within the separate pod that was launched. Many container images from the Red Hat Container Catalog are referenced in the sample image streams in the openshift namespace. The OpenShift Container Platform Jenkins image has a pod template named java-build with sidecar containers that demonstrate this approach. This pod template uses the latest Java version provided by the java image stream in the openshift namespace. The Jenkins image also provides auto-discovery and auto-configuration of additional agent images for the Kubernetes plugin. 
With the OpenShift Container Platform sync plugin, on Jenkins startup, the Jenkins image searches within the project it is running, or the projects listed in the plugin's configuration, for the following items: Image streams with the role label set to jenkins-agent . Image stream tags with the role annotation set to jenkins-agent . Config maps with the role label set to jenkins-agent . When the Jenkins image finds an image stream with the appropriate label, or an image stream tag with the appropriate annotation, it generates the corresponding Kubernetes plugin configuration. This way, you can assign your Jenkins jobs to run in a pod running the container image provided by the image stream. The name and image references of the image stream, or image stream tag, are mapped to the name and image fields in the Kubernetes plugin pod template. You can control the label field of the Kubernetes plugin pod template by setting an annotation on the image stream, or image stream tag object, with the key agent-label . Otherwise, the name is used as the label. Note Do not log in to the Jenkins console and change the pod template configuration. If you do so after the pod template is created, and the OpenShift Container Platform Sync plugin detects that the image associated with the image stream or image stream tag has changed, it replaces the pod template and overwrites those configuration changes. You cannot merge a new configuration with the existing configuration. Consider the config map approach if you have more complex configuration needs. When it finds a config map with the appropriate label, the Jenkins image assumes that any values in the key-value data payload of the config map contain Extensible Markup Language (XML) consistent with the configuration format for Jenkins and the Kubernetes plugin pod templates. One key advantage of config maps over image streams and image stream tags is that you can control all the Kubernetes plugin pod template parameters. Sample config map for jenkins-agent kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template1: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template1</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template1</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>openshift/jenkins-agent-maven-35-centos7:v3.10</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/tmp</workingDir> <command></command> <args>USD{computer.jnlpmac} USD{computer.name}</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate> The following example shows two containers that reference image streams in the openshift namespace. One container handles the JNLP contract for launching Pods as Jenkins Agents. 
The other container uses an image with tools for building code in a particular coding language: kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template2: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template2</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template2</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command></command> <args>\USD(JENKINS_SECRET) \USD(JENKINS_NAME)</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>java</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/java:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command>cat</command> <args></args> <ttyEnabled>true</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate> Note Do not log in to the Jenkins console and change the pod template configuration. If you do so after the pod template is created, and the OpenShift Container Platform Sync plugin detects that the image associated with the image stream or image stream tag has changed, it replaces the pod template and overwrites those configuration changes. You cannot merge a new configuration with the existing configuration. Consider the config map approach if you have more complex configuration needs. After it is installed, the OpenShift Container Platform Sync plugin monitors the API server of OpenShift Container Platform for updates to image streams, image stream tags, and config maps and adjusts the configuration of the Kubernetes plugin. The following rules apply: Removing the label or annotation from the config map, image stream, or image stream tag deletes any existing PodTemplate from the configuration of the Kubernetes plugin. If those objects are removed, the corresponding configuration is removed from the Kubernetes plugin. If you create appropriately labeled or annotated ConfigMap , ImageStream , or ImageStreamTag objects, or add labels after their initial creation, this results in the creation of a PodTemplate in the Kubernetes-plugin configuration. In the case of the PodTemplate by config map form, changes to the config map data for the PodTemplate are applied to the PodTemplate settings in the Kubernetes plugin configuration. The changes also override any changes that were made to the PodTemplate through the Jenkins UI between changes to the config map. To use a container image as a Jenkins agent, the image must run the agent as an entry point. 
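To make the auto-discovery rules above concrete, the following sketch labels and annotates an image stream so that the sync plugin generates a pod template for it; the image stream name my-agent and the project my-project are placeholders, not objects defined in this document:

# Mark the image stream for discovery by the OpenShift Container Platform Sync plugin.
oc label imagestream my-agent role=jenkins-agent -n my-project

# Optionally control the generated pod template's label field through the agent-label
# annotation described above; otherwise the image stream name is used as the label.
oc annotate imagestream my-agent agent-label=my-agent -n my-project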
For more details, see the official Jenkins documentation . Additional resources Important changes to OpenShift Jenkins images 1.7. Jenkins permissions If in the config map the <serviceAccount> element of the pod template XML is the OpenShift Container Platform service account used for the resulting pod, the service account credentials are mounted into the pod. The permissions are associated with the service account and control which operations against the OpenShift Container Platform master are allowed from the pod. Consider the following scenario with service accounts used for the pod, which is launched by the Kubernetes Plugin that runs in the OpenShift Container Platform Jenkins image. If you use the example template for Jenkins that is provided by OpenShift Container Platform, the jenkins service account is defined with the edit role for the project Jenkins runs in, and the master Jenkins pod has that service account mounted. The two default Maven and NodeJS pod templates that are injected into the Jenkins configuration are also set to use the same service account as the Jenkins master. Any pod templates that are automatically discovered by the OpenShift Container Platform sync plugin because their image streams or image stream tags have the required label or annotations are configured to use the Jenkins master service account as their service account. For the other ways you can provide a pod template definition into Jenkins and the Kubernetes plugin, you have to explicitly specify the service account to use. Those other ways include the Jenkins console, the podTemplate pipeline DSL that is provided by the Kubernetes plugin, or labeling a config map whose data is the XML configuration for a pod template. If you do not specify a value for the service account, the default service account is used. Ensure that whatever service account is used has the necessary permissions, roles, and so on defined within OpenShift Container Platform to manipulate whatever projects you choose to manipulate from the within the pod. 1.8. Creating a Jenkins service from a template Templates provide parameter fields to define all the environment variables with predefined default values. OpenShift Container Platform provides templates to make creating a new Jenkins service easy. The Jenkins templates should be registered in the default openshift project by your cluster administrator during the initial cluster setup. The two available templates both define deployment configuration and a service. The templates differ in their storage strategy, which affects whether the Jenkins content persists across a pod restart. Note A pod might be restarted when it is moved to another node or when an update of the deployment configuration triggers a redeployment. jenkins-ephemeral uses ephemeral storage. On pod restart, all data is lost. This template is only useful for development or testing. jenkins-persistent uses a Persistent Volume (PV) store. Data survives a pod restart. To use a PV store, the cluster administrator must define a PV pool in the OpenShift Container Platform deployment. After you select which template you want, you must instantiate the template to be able to use Jenkins. Procedure Create a new Jenkins application using one of the following methods: A PV: USD oc new-app jenkins-persistent Or an emptyDir type volume where configuration does not persist across pod restarts: USD oc new-app jenkins-ephemeral With both templates, you can run oc describe on them to see all the parameters available for overriding. 
For example: USD oc describe jenkins-ephemeral 1.9. Using the Jenkins Kubernetes plugin In the following example, the openshift-jee-sample BuildConfig object causes a Jenkins Maven agent pod to be dynamically provisioned. The pod clones some Java source code, builds a WAR file, and causes a second BuildConfig , openshift-jee-sample-docker to run. The second BuildConfig layers the new WAR file into a container image. Important OpenShift Container Platform 4.11 removed the OpenShift Jenkins Maven and NodeJS Agent images from its payload. Red Hat no longer produces these images, and they are not available from the ocp-tools-4 repository at registry.redhat.io . Red Hat maintains the 4.10 and earlier versions of these images for any significant bug fixes or security CVEs, following the OpenShift Container Platform lifecycle policy . For more information, see the "Important changes to OpenShift Jenkins images" link in the following "Additional resources" section. Sample BuildConfig that uses the Jenkins Kubernetes plugin kind: List apiVersion: v1 items: - kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: openshift-jee-sample - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample-docker spec: strategy: type: Docker source: type: Docker dockerfile: |- FROM openshift/wildfly-101-centos7:latest COPY ROOT.war /wildfly/standalone/deployments/ROOT.war CMD USDSTI_SCRIPTS_PATH/run binary: asFile: ROOT.war output: to: kind: ImageStreamTag name: openshift-jee-sample:latest - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- node("maven") { sh "git clone https://github.com/openshift/openshift-jee-sample.git ." sh "mvn -B -Popenshift package" sh "oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war" } triggers: - type: ConfigChange It is also possible to override the specification of the dynamically created Jenkins agent pod. The following is a modification to the preceding example, which overrides the container memory and specifies an environment variable. Sample BuildConfig that uses the Jenkins Kubernetes plugin, specifying memory limit and environment variable kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- podTemplate(label: "mypod", 1 cloud: "openshift", 2 inheritFrom: "maven", 3 containers: [ containerTemplate(name: "jnlp", 4 image: "openshift/jenkins-agent-maven-35-centos7:v3.10", 5 resourceRequestMemory: "512Mi", 6 resourceLimitMemory: "512Mi", 7 envVars: [ envVar(key: "CONTAINER_HEAP_PERCENT", value: "0.25") 8 ]) ]) { node("mypod") { 9 sh "git clone https://github.com/openshift/openshift-jee-sample.git ." sh "mvn -B -Popenshift package" sh "oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war" } } triggers: - type: ConfigChange 1 A new pod template called mypod is defined dynamically. The new pod template name is referenced in the node stanza. 2 The cloud value must be set to openshift . 3 The new pod template can inherit its configuration from an existing pod template. In this case, inherited from the Maven pod template that is pre-defined by OpenShift Container Platform. 4 This example overrides values in the pre-existing container, and must be specified by name. All Jenkins agent images shipped with OpenShift Container Platform use the Container name jnlp . 
5 Specify the container image name again. This is a known issue. 6 A memory request of 512 Mi is specified. 7 A memory limit of 512 Mi is specified. 8 An environment variable CONTAINER_HEAP_PERCENT , with value 0.25 , is specified. 9 The node stanza references the name of the defined pod template. By default, the pod is deleted when the build completes. This behavior can be modified with the plugin or within a pipeline Jenkinsfile. Upstream Jenkins has more recently introduced a YAML declarative format for defining a podTemplate pipeline DSL in-line with your pipelines. An example of this format, using the sample java-builder pod template that is defined in the OpenShift Container Platform Jenkins image: def nodeLabel = 'java-buidler' pipeline { agent { kubernetes { cloud 'openshift' label nodeLabel yaml """ apiVersion: v1 kind: Pod metadata: labels: worker: USD{nodeLabel} spec: containers: - name: jnlp image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest args: ['\USD(JENKINS_SECRET)', '\USD(JENKINS_NAME)'] - name: java image: image-registry.openshift-image-registry.svc:5000/openshift/java:latest command: - cat tty: true """ } } options { timeout(time: 20, unit: 'MINUTES') } stages { stage('Build App') { steps { container("java") { sh "mvn --version" } } } } } Additional resources Important changes to OpenShift Jenkins images 1.10. Jenkins memory requirements When deployed by the provided Jenkins Ephemeral or Jenkins Persistent templates, the default memory limit is 1 Gi . By default, all other processes that run in the Jenkins container cannot use more than a total of 512 MiB of memory. If they require more memory, the container halts. It is therefore highly recommended that pipelines run external commands in an agent container wherever possible. If project quotas allow for it, see the recommendations in the Jenkins documentation on how much memory a Jenkins master should have. Those recommendations prescribe allocating even more memory for the Jenkins master. It is recommended to specify memory request and limit values on agent containers created by the Jenkins Kubernetes plugin. Admin users can set default values on a per-agent image basis through the Jenkins configuration. The memory request and limit parameters can also be overridden on a per-container basis. You can increase the amount of memory available to Jenkins by overriding the MEMORY_LIMIT parameter when instantiating the Jenkins Ephemeral or Jenkins Persistent template. 1.11. Additional resources See Base image options for more information about the Red Hat Universal Base Images (UBI). Important changes to OpenShift Jenkins images | [
"podman pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag>",
"oc new-app -e JENKINS_PASSWORD=<password> ocp-tools-4/jenkins-rhel8",
"oc describe serviceaccount jenkins",
"Name: default Labels: <none> Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d } Tokens: jenkins-token-izv1u jenkins-token-uyswp",
"oc describe secret <secret name from above>",
"Name: jenkins-token-uyswp Labels: <none> Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes token: eyJhbGc..<content cut>....wRA",
"pluginId:pluginVersion",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: custom-jenkins-build spec: source: 1 git: uri: https://github.com/custom/repository type: Git strategy: 2 sourceStrategy: from: kind: ImageStreamTag name: jenkins:2 namespace: openshift type: Source output: 3 to: kind: ImageStreamTag name: custom-jenkins:latest",
"kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template1: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template1</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template1</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>openshift/jenkins-agent-maven-35-centos7:v3.10</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/tmp</workingDir> <command></command> <args>USD{computer.jnlpmac} USD{computer.name}</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>",
"kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template2: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template2</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template2</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command></command> <args>\\USD(JENKINS_SECRET) \\USD(JENKINS_NAME)</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>java</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/java:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command>cat</command> <args></args> <ttyEnabled>true</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>",
"oc new-app jenkins-persistent",
"oc new-app jenkins-ephemeral",
"oc describe jenkins-ephemeral",
"kind: List apiVersion: v1 items: - kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: openshift-jee-sample - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample-docker spec: strategy: type: Docker source: type: Docker dockerfile: |- FROM openshift/wildfly-101-centos7:latest COPY ROOT.war /wildfly/standalone/deployments/ROOT.war CMD USDSTI_SCRIPTS_PATH/run binary: asFile: ROOT.war output: to: kind: ImageStreamTag name: openshift-jee-sample:latest - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- node(\"maven\") { sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } triggers: - type: ConfigChange",
"kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- podTemplate(label: \"mypod\", 1 cloud: \"openshift\", 2 inheritFrom: \"maven\", 3 containers: [ containerTemplate(name: \"jnlp\", 4 image: \"openshift/jenkins-agent-maven-35-centos7:v3.10\", 5 resourceRequestMemory: \"512Mi\", 6 resourceLimitMemory: \"512Mi\", 7 envVars: [ envVar(key: \"CONTAINER_HEAP_PERCENT\", value: \"0.25\") 8 ]) ]) { node(\"mypod\") { 9 sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } } triggers: - type: ConfigChange",
"def nodeLabel = 'java-buidler' pipeline { agent { kubernetes { cloud 'openshift' label nodeLabel yaml \"\"\" apiVersion: v1 kind: Pod metadata: labels: worker: USD{nodeLabel} spec: containers: - name: jnlp image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest args: ['\\USD(JENKINS_SECRET)', '\\USD(JENKINS_NAME)'] - name: java image: image-registry.openshift-image-registry.svc:5000/openshift/java:latest command: - cat tty: true \"\"\" } } options { timeout(time: 20, unit: 'MINUTES') } stages { stage('Build App') { steps { container(\"java\") { sh \"mvn --version\" } } } } }"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/jenkins/images-other-jenkins |
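Section 1.10 above notes that the MEMORY_LIMIT template parameter can be overridden when instantiating the Jenkins templates, and section 1.2 lists the environment variables that the running server honors. A minimal sketch of both follows, assuming the templates registered in the openshift project and a deployment configuration named jenkins created by the template; the values shown are examples only:

# Instantiate the persistent Jenkins template with a larger memory limit.
oc new-app jenkins-persistent -p MEMORY_LIMIT=2Gi

# Adjust an environment variable on the resulting deployment, for example the
# OpenShift Login plugin polling interval described in section 1.2.
oc set env dc/jenkins OPENSHIFT_PERMISSIONS_POLL_INTERVAL=600000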
Chapter 6. Known Issues | Chapter 6. Known Issues The following subsections describe the known issues in version 7.13. 6.1. CVE Security Vulnerabilities As a middleware integration platform, Fuse can potentially be integrated with a large number of third-party components. It is not always possible to exclude the possibility that some third-party dependencies of Fuse could have security vulnerabilities. This section documents known common vulnerabilities and exposures (CVEs) related to security that affect third-party dependencies of Fuse 7.13. CVE-2020-13936 CVE-2020-13936 velocity: arbitrary code execution when attacker is able to modify templates An attacker that is able to modify Velocity templates may execute arbitrary Java code or run arbitrary system commands with the same privileges as the account running the Servlet container. This applies to applications that allow untrusted users to upload/modify velocity templates running Apache Velocity Engine versions up to 2.2. Dependencies for Fuse 7.9 (and later) ensure that it uses only the fixed Velocity version (2.3) that protects against this security vulnerability. If your application code has any explicit dependencies on the Apache Velocity component, we recommend that you upgrade these dependencies to use the fixed version. CVE-2018-10237 CVE-2018-10237 guava: Unbounded memory allocation in AtomicDoubleArray and CompoundOrdering classes allow remote attackers to cause a denial of service [fuse-7.0.0] Google Guava versions 11.0 through 24.1 are vulnerable to unbounded memory allocation in the AtomicDoubleArray class (when serialized with Java serialization) and the CompoundOrdering class (when serialized with GWT serialization). An attacker could exploit applications that use Guava and deserialize untrusted data to cause a denial of service - for more details, see CVE-2018-10237 . To avoid this security vulnerability, we recommend that you: Never deserialize an AtomicDoubleArray instance or a CompoundOrdering instance from an unknown source. Avoid using Guava versions 24 and earlier (although in some cases it is not possible to avoid the earlier versions). To make it easier to avoid the earlier (vulnerable) versions of Guava, Fuse 7.7 (and later) has configured its Maven Bill of Materials (BOM) files for all containers to select Guava 27 by default. This means that if you incorporate a Fuse BOM into your Maven project (by adding a dependency on the BOM to the dependencyManagement section of your POM file) and then specify a dependency on the Guava artifact without specifying an explicit version, the Guava version will default to the version specified in the BOM, which is version 27 for the Fuse 7.7 BOMs. But there is at least one common use case involving the Apache Karaf (OSGi) container, where it is not possible to avoid using a vulnerable version of Guava: if your OSGi application uses Guava and Swagger together, you are obliged to use Guava 20, because that is the version required by Swagger. Here we explain why this is the case and how to configure your POM file to revert the earlier (vulnerable) Guava 20 library. First, you need to understand the concept of a double OSGi chain . Double OSGi chain Bundles in the OSGi runtime are wired together using package constraints (package name + optional version/range) - imports and exports. Each bundle can have multiple imports and usually those imports wire a given bundle with multiple bundles. 
For example: Where BundleA depends on BundleB and BundleCb , while BundleB depends on BundleCa . BundleCa and BundleCb should be the same bundle, if the export the same packages, but due to version (range) constraints, BundleB uses ( wires to ) a different revision/version of BundleC than BundleA . Rewriting the preceding diagram to reflect what happens when you include dependencies on both Guava and Swagger in an application: If you try to deploy this bundle configuration, you get the error, org.osgi.framework.BundleException: Uses constraint violation . Reverting to Guava 20 If your project uses both Guava and Swagger libraries (directly or indirectly), you should configure the maven-bundle-plugin to use an explicit version range (or no range at all) for the Guava bundle import, as follows: <Import-Package> com.google.common.base;version="[20.0,21.0)", com.google.common.collect;version="[20.0,21.0)", com.google.common.io;version="[20.0,21.0)" </Import-Package> This configuration forces your OSGi application to revert to the (vulnerable) Guava 20 library. It is therefore particularly important to avoid deserializing AtomicDoubleArray instances in this case. CVE-2017-12629 Solr/Lucene -security bypass to access sensitive data - CVE-2017-12629 Apache Solr is a popular open source search platform that uses the Apache Lucene search engine. If your application uses a combination of Apache Solr with Apache Lucene (for example, when using the Camel Solr component), it could be affected by this security vulnerability. Please consult the linked security advisory for more details of this vulnerability and the mitigation steps to take. Note The Fuse runtime does not use Apache Solr or Apache Lucene directly. The security risk only arises, if you are using Apache Solr and Apache Lucene together in the context of an integration application (for example, when using the Camel Solr component). CVE-2021-30129 mina-sshd-core: Memory leak denial of service in Apache Mina SSHD Server A vulnerability in sshd-core of Apache Mina SSHD allows an attacker to overflow the server causing an OutOfMemory error. This issue affects the SFTP and port forwarding features of Apache Mina SSHD version 2.0.0 and later versions. It was addressed in Apache Mina SSHD 2.7.0 This vulnerability in Apache Mina SSHD was addressed by SSHD-1004 , which deprecates certain cryptographic algorithms that have this vulnerability. In Fuse 7.10 on Karaf and Fuse 7.10 on JBoss EAP, these deprecated algorithms are still supported (for reasons of backwards compatibility). However, if you are using one of these deprecated algorithms, it is strongly recommended that you refactor your application code to use a different algorithm instead. In Fuse 7.10, the default cipher algorithms have changed as follows. Fuse 7.9 Fuse 7.10 Deprecated in Fuse 7.10? aes128-ctr aes128-ctr aes192-ctr aes256-ctr [email protected] [email protected] arcfour128 arcfour128 yes aes128-cbc aes128-cbc aes192-cbc aes256-cbc 3des-cbc 3des-cbc yes blowfish-cbc blowfish-cbc yes In Fuse 7.10, the default key exchange algorithms have changed as follows. Fuse 7.9 Fuse 7.10 deprecated in 7.10? 
Fuse 7.9 | Fuse 7.10 | Deprecated in Fuse 7.10?
diffie-hellman-group-exchange-sha256 | diffie-hellman-group-exchange-sha256 |
ecdh-sha2-nistp521 | ecdh-sha2-nistp521 |
ecdh-sha2-nistp384 | ecdh-sha2-nistp384 |
ecdh-sha2-nistp256 | ecdh-sha2-nistp256 |
 | diffie-hellman-group18-sha512 |
 | diffie-hellman-group17-sha512 |
 | diffie-hellman-group16-sha512 |
 | diffie-hellman-group15-sha512 |
 | diffie-hellman-group14-sha256 |
diffie-hellman-group-exchange-sha1 | diffie-hellman-group-exchange-sha1 | yes
diffie-hellman-group1-sha1 | diffie-hellman-group1-sha1 | yes
6.2. Fuse on OpenShift This section lists issues that affect the deployment of Fuse applications on OpenShift. For details of issues affecting specific containers, see also the sections for Spring Boot, Fuse on Apache Karaf, and Fuse on JBoss EAP. The Fuse on OpenShift distribution has the following known issues: ENTESB-21281 Update FoO images with add-opens Without add-opens , Fuse on OpenShift does not work properly with JDK 17. These flags cannot be delivered automatically, so you have to specify them yourself, by adding the flags to a script that defines add-opens . Since Java 17, the Java Platform Module System is mandatory. It implements strong encapsulation, which restricts access. You can use the --add-opens option to allow access, providing deep reflection and allowing a specified module to open the named package: --add-opens module/package=target-module(,target-module)* ENTESB-21281 [Fuse on Openshift] QS karaf-cxf-rest - JavaDoc no longer supported on jdk17 The cxf java2wadl-plugin in Red Hat Fuse 7.x does not work with JDK 17. ENTESB-17895 [ Fuse Console ] Upgrade subscription does not update Hawtio In Fuse 7.10, if you update the Fuse Console by changing the Operator subscription channel to version 7.10, the Fuse Console remains on version 7.9. Even if the Fuse Console containers and pods have the label 7.10, they are still using the 7.9 images. To work around this problem, perform the upgrade by removing the older version of Fuse Console and then making a fresh installation of Fuse Console version 7.10. ENTESB-17861 Apicurito generator cannot generate Fuse Camel Project In Fuse 7.10, the API Designer (Apicurito) does not work properly if it is installed via the Apicurito Operator (giving an Invalid Cert Error). To work around this problem: Open a new tab to https://apicurito-service-generator-apicurito.apps.cluster-name.openshift.com (Replace cluster-name.openshift.com with your cluster name.) Accept the certificates. Switch to the application and click on the generate button again. ENTESB-17836 [ Fuse Console ] A newly added route is not displayed in the Camel tree In Fuse 7.10, after deploying an application, the route (or routes) is not displayed in the Camel tree on the Fuse Console. You can work around this issue by refreshing the page, which should make the route appear. ENTESB-19351 FIPS on OCP - Jolokia agent doesn't start due to unsupported security encoding In Fuse 7.11, on FIPS-enabled OCP, the Jolokia agent becomes unavailable due to an unsupported security encoding. ENTESB-19352 FIPS on OCP - karaf-maven-plugin assembly goal fails to unsupported security provider In Fuse 7.11, the binary stream deploy strategy fails on FIPS-enabled OCP for Karaf applications if you use the karaf-maven-plugin with the assembly goal. 6.3.
Fuse on Apache Karaf Fuse on Apache Karaf has the following known issues: ENTESB-16417 Credential store is using PBEWithSHA1AndDESede by default The security API in OpenJDK 8u292 and in OracleJDK 1.8.0_291 returns an incomplete list of security providers, which causes the credential store in Apache Karaf to fail (because the required security provider appears to be unavailable). The underlying issue that causes this problem is https://bugs.openjdk.java.net/browse/JDK-8249906 . We recommend that you use the earlier OpenJDK version, OpenJDK 8u282, or the later OpenJDK version, OpenJDK 8u302, which do not have this bug. ENTESB-16526 fuse-karaf on Windows cannot restart during patch:install While running patch:install in the Apache Karaf container on the Windows platform, under certain circumstances you might encounter the following error when the patch:install command attempts an automatic restart of the container: If you encounter this error, simply restart the Karaf container manually. ENTESB-8140 Start level of hot deploy bundles is 80 by default Starting in the Fuse 7.0 GA release, in the Apache Karaf container the start level of hot deployed bundles is 80 by default. This can cause problems for the hot deployed bundles, because there are many system bundles and features that have the same start level. To work around this problem and ensure that hot deployed bundles start reliably, edit the etc/org.apache.felix.fileinstall-deploy.cfg file and change the felix.fileinstall.start.level setting as follows: ENTESB-7664 Installing framework-security feature kills karaf The framework-security OSGi feature must be installed using the --no-auto-refresh option, otherwise this feature will shut down the Apache Karaf container. For example: 6.4. Fuse on JBoss EAP Fuse on JBoss EAP has the following known issues: ENTESB-21314 [Fuse on EAP] Support jdk17 modularity Without add-opens , Fuse on EAP does not work properly with JDK 17. These flags cannot be delivered automatically, so you have to specify them yourself, by adding the flags to a script that defines add-opens . Since Java 17, the Java Platform Module System is mandatory. It implements strong encapsulation, which restricts access. You can use the --add-opens option to allow access, providing deep reflection and allowing a specified module to open the named package: --add-opens module/package=target-module(,target-module)* ENTESB-20833 java.security.acl.Group was removed for jdk17 java.security.acl.Group is removed in JDK 14 and later versions. ENTESB-13168 Camel deployment on EAP domain mode is not working on Windows Starting in Fuse 7.6.0, for Fuse on JBoss EAP, the Camel subsystem cannot be deployed on JBoss EAP in domain mode on Windows OS. 6.5. Fuse on Spring Boot Fuse on Spring Boot has the following known issues: ENTESB-21315 [Fuse on Spring-boot] Support jdk17 modularity Without add-opens , Fuse does not work properly with JDK 17. These flags cannot be delivered automatically, so you have to specify them yourself, by adding the flags to a script that defines add-opens . Since Java 17, the Java Platform Module System is mandatory. It implements strong encapsulation, which restricts access. You can use the --add-opens option to allow access, providing deep reflection and allowing a specified module to open the named package: --add-opens module/package=target-module(,target-module)* ENTESB-21421 / ENTESB-20842 Spring Boot 2.6 does not allow circular dependencies Spring Boot 2.6 may be unable to resolve circular dependencies. If you use XML DSL in Spring Boot to instantiate a customized HealthCheckRegistry in your beans file, the build fails. As a workaround, you can add the property spring.main.allow-circular-references=true to application.properties , as shown in the example below.
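For illustration only, a minimal application.properties entry for this workaround looks like the following (the property is the one named above; the comment is added here for context):

# Workaround for ENTESB-21421 / ENTESB-20842 on Spring Boot 2.6
spring.main.allow-circular-references=true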
6.6. Fuse Tooling Fuse Tooling has the following known issues: ENTESB-20965 [Hawtio] Login failed due to: No LoginModules configured for hawtio-domain Hawtio can only work with the old security system with WildFly. If you attempt to log in to Hawtio with Elytron security, the console displays the following error message. ENTESB-19668 The Hawtio management console does not display a message on the UI when client certificate authentication is rejected The Hawtio component does not show any message on the login page after rejecting authentication from a client certificate. Hawtio only redirects the web browser to the login page, without showing any message. ENTESB-17705 [ Hawtio ] Logout button disappears In Fuse 7.10, after logging in and logging out several times in a row, the Logout button is not shown. To work around this issue, you can refresh the page one or more times and the Logout button should reappear. ENTESB-17839 Fuse + AtlasMap: Unrecognized field "dataSourceType" In Fuse 7.11, if you want to use the AtlasMap VS Code extension, you must use version 0.0.9, because Fuse 7.11 ships with AtlasMap 2.3.x. Otherwise, use standalone AtlasMap 2.3.x, not the VS Code extension. 6.7. Apache Camel Apache Camel has the following known issues: ENTESB-19361 / UNDERTOW-2206 Access logging support by cxf with embedded undertow server on karaf does not log URI If the DECODE_URL option is true (the default value for the Fuse 7.11.1 Karaf runtime) and HttpServerExchange is used to decode relativePath and requestPath , the requestURI parameter remains encoded. The dispatch methods ( forward , include , async , and error ) assign the path without decoding it for requestPath and relativeURL , which causes dispatching to a path such as /some%20thing . ENTESB-15343 XSLT component not working properly with IBM1.8 JDK In Fuse 7.8, the Camel XSLT component does not work properly with the IBM 1.8 JDK. The problem occurs because the underlying Apache Xerces implementation of XSLT does not support the javax.xml.XMLConstants#FEATURE_SECURE_PROCESSING property (see XERCESJ-1654 ). ENTESB-11060 [ camel-linkedin ] V1 API is no longer supported Since Fuse 7.4.0, the Camel LinkedIn component is no longer able to communicate with the LinkedIn server, because it is implemented using the LinkedIn Version 1.0 API, which is no longer supported by LinkedIn. The Camel LinkedIn component will be updated to use the Version 2 API in a future release of Fuse. ENTESB-7469 Camel Docker component cannot use Unix socket connections on EAP Since Fuse 7.0, the camel-docker component can connect to Docker only through its REST API, not through UNIX sockets. ENTESB-5231 PHP script language does not work The PHP scripting language is not supported in Camel applications on the Apache Karaf container, because there is no OSGi bundle available for PHP. ENTESB-5232 Python language does not work The Python scripting language is not supported in Camel applications on the Apache Karaf container, because there is no OSGi bundle available for Python. ENTESB-2443 Google Mail API - Sending of messages and drafts is not synchronous When you send a message or draft, the response contains a Message object with an ID. It may not be possible to immediately get this message via another call to the API.
You may have to wait and retry the call. ENTESB-2332 Google Drive API JSON response for changes returns bad count of items for the first page The Google Drive API JSON response for changes returns an incorrect count of items for the first page. Setting maxResults for a list operation may not return all the results in the first page. You may have to go through several pages to get the complete list (that is, by setting pageToken on new requests). | [
"BundleA +-- BundleB | +-- BundleCa +-- BundleCb",
"org.jboss.qe.cxf.rs.swagger-deployment +-- Guava 27 +-- Swagger 1.5 +-- reflections 0.9.11 +-- Guava 20",
"<Import-Package> com.google.common.base;version=\"[20.0,21.0)\", com.google.common.collect;version=\"[20.0,21.0)\", com.google.common.io;version=\"[20.0,21.0)\" </Import-Package>",
"--add-opens module/package=target-module(,target-module)*",
"Red Hat Fuse starting up. Press Enter to open the shell now 100% [========================================================================] Karaf started in 18s. Bundle stats: 235 active, 235 total '.tmpdir' is not recognized as an internal or external command, operable program or batch file. There is a Root instance already running with name ~14 and pid ~13. If you know what you are doing and want to force the run anyway, SET CHECK_ROOT_INSTANCE_RUNNING=false and re run the command.",
"felix.fileinstall.start.level = 90",
"feature:install -v --no-auto-refresh framework-security",
"--add-opens module/package=target-module(,target-module)*",
"--add-opens module/package=target-module(,target-module)*",
"11:30:21,039 WARN [io.hawt.system.Authenticator] (default task-2) Login failed due to: No LoginModules configured for hawtio-domain"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/release_notes_for_red_hat_fuse_7.13/known-issues |
3.13. Additional Resources | 3.13. Additional Resources Subsystem-Specific Kernel Documentation All of the following files are located under the /usr/share/doc/kernel-doc- <kernel_version> /Documentation/cgroups/ directory (provided by the kernel-doc package). blkio subsystem - blkio-controller.txt cpuacct subsystem - cpuacct.txt cpuset subsystem - cpusets.txt devices subsystem - devices.txt freezer subsystem - freezer-subsystem.txt memory subsystem - memory.txt net_prio subsystem - net_prio.txt Additionally, refer to the following files for further information about the cpu subsystem: Real-Time scheduling - /usr/share/doc/kernel-doc- <kernel_version> /Documentation/scheduler/sched-rt-group.txt CFS scheduling - /usr/share/doc/kernel-doc- <kernel_version> /Documentation/scheduler/sched-bwc.txt | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/sec-subsystems_and_tunable_parameter-additional_resources |
31.2. Displaying Information About a Module | 31.2. Displaying Information About a Module You can display detailed information about a kernel module by running the modinfo <module_name> command. Note When entering the name of a kernel module as an argument to one of the module-init-tools utilities, do not append a .ko extension to the end of the name. Kernel module names do not have extensions: their corresponding files do. For example, to display information about the e1000e module, which is the Intel PRO/1000 network driver, run: Example 31.1. Listing information about a kernel module with modinfo Here are descriptions of a few of the fields in modinfo output: filename The absolute path to the .ko kernel object file. You can use modinfo -n as a shortcut command for printing only the filename field. description A short description of the module. You can use modinfo -d as a shortcut command for printing only the description field. alias The alias field appears as many times as there are aliases for a module, or is omitted entirely if there are none. depends This field contains a comma-separated list of all the modules this module depends on. Note If a module has no dependencies, the depends field may be omitted from the output. parm Each parm field presents one module parameter in the form parameter_name : description , where: parameter_name is the exact syntax you should use when using it as a module parameter on the command line, or in an option line in a .conf file in the /etc/modprobe.d/ directory; and, description is a brief explanation of what the parameter does, along with an expectation for the type of value the parameter accepts (such as int , uint , or array of int ) in parentheses. You can list all parameters that the module supports by using the -p option. However, because useful value type information is omitted from modinfo -p output, it is more useful to run: Example 31.2. Listing module parameters | [
"~]# modinfo e1000e filename: /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/net/e1000e/e1000e.ko version: 1.2.7-k2 license: GPL description: Intel(R) PRO/1000 Network Driver author: Intel Corporation, <[email protected]> srcversion: 93CB73D3995B501872B2982 alias: pci:v00008086d00001503sv*sd*bc*sc*i* alias: pci:v00008086d00001502sv*sd*bc*sc*i* [some alias lines omitted] alias: pci:v00008086d0000105Esv*sd*bc*sc*i* depends: vermagic: 2.6.32-71.el6.x86_64 SMP mod_unload modversions parm: copybreak:Maximum size of packet that is copied to a new buffer on receive (uint) parm: TxIntDelay:Transmit Interrupt Delay (array of int) parm: TxAbsIntDelay:Transmit Absolute Interrupt Delay (array of int) parm: RxIntDelay:Receive Interrupt Delay (array of int) parm: RxAbsIntDelay:Receive Absolute Interrupt Delay (array of int) parm: InterruptThrottleRate:Interrupt Throttling Rate (array of int) parm: IntMode:Interrupt Mode (array of int) parm: SmartPowerDownEnable:Enable PHY smart power down (array of int) parm: KumeranLockLoss:Enable Kumeran lock loss workaround (array of int) parm: WriteProtectNVM:Write-protect NVM [WARNING: disabling this can lead to corrupted NVM] (array of int) parm: CrcStripping:Enable CRC Stripping, disable if your BMC needs the CRC (array of int) parm: EEE:Enable/disable on parts that support the feature (array of int)",
"~]# modinfo e1000e | grep \"^parm\" | sort parm: copybreak:Maximum size of packet that is copied to a new buffer on receive (uint) parm: CrcStripping:Enable CRC Stripping, disable if your BMC needs the CRC (array of int) parm: EEE:Enable/disable on parts that support the feature (array of int) parm: InterruptThrottleRate:Interrupt Throttling Rate (array of int) parm: IntMode:Interrupt Mode (array of int) parm: KumeranLockLoss:Enable Kumeran lock loss workaround (array of int) parm: RxAbsIntDelay:Receive Absolute Interrupt Delay (array of int) parm: RxIntDelay:Receive Interrupt Delay (array of int) parm: SmartPowerDownEnable:Enable PHY smart power down (array of int) parm: TxAbsIntDelay:Transmit Absolute Interrupt Delay (array of int) parm: TxIntDelay:Transmit Interrupt Delay (array of int) parm: WriteProtectNVM:Write-protect NVM [WARNING: disabling this can lead to corrupted NVM] (array of int)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Displaying_Information_About_a_Module |
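As a brief illustration of the modinfo -n and modinfo -d shortcuts mentioned above (the output shown here is representative only; the exact module path depends on your kernel version):

~]# modinfo -n e1000e
/lib/modules/<kernel_version>/kernel/drivers/net/e1000e/e1000e.ko
~]# modinfo -d e1000e
Intel(R) PRO/1000 Network Driver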
Chapter 3. Scope of coverage | Chapter 3. Scope of coverage Support will be provided for use according to the published Scope of Coverage in Appendix 1 of the Red Hat Enterprise Agreement . To encourage the rapid adoption of new technologies while keeping the high standard of stability inherent in Red Hat enterprise products, the product life cycle for private automation hub is divided into three phases of maintenance, described below. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/private_automation_hub_life_cycle/scope_of_coverage
Chapter 22. Enabling High Availability | Chapter 22. Enabling High Availability Abstract This chapter explains how to enable and configure high availability in the Apache CXF runtime. 22.1. Introduction to High Availability Overview Scalable and reliable applications require high availability to avoid any single point of failure in a distributed system. You can protect your system from single points of failure using replicated services . A replicated service consists of multiple instances, or replicas , of the same service. Together these act as a single logical service. Clients invoke requests on the replicated service, and Apache CXF delivers the requests to one of the member replicas. The routing to a replica is transparent to the client. HA with static failover Apache CXF supports high availability (HA) with static failover in which replica details are encoded in the service WSDL file. The WSDL file contains multiple ports, and can contain multiple hosts, for the same service. The number of replicas in the cluster remains static as long as the WSDL file remains unchanged. Changing the cluster size involves editing the WSDL file. 22.2. Enabling HA with Static Failover Overview To enable HA with static failover, you must do the following: the section called "Encode replica details in your service WSDL file" the section called "Add the clustering feature to your client configuration" Encode replica details in your service WSDL file You must encode the details of the replicas in your cluster in your service WSDL file. Example 22.1, "Enabling HA with Static Failover: WSDL File" shows a WSDL file extract that defines a service cluster of three replicas. Example 22.1. Enabling HA with Static Failover: WSDL File The WSDL extract shown in Example 22.1, "Enabling HA with Static Failover: WSDL File" can be explained as follows: Defines a service, ClusteredService , which is exposed on three ports: Replica1 Replica2 Replica3 Defines Replica1 to expose the ClusteredService as a SOAP over HTTP endpoint on port 9001 . Defines Replica2 to expose the ClusteredService as a SOAP over HTTP endpoint on port 9002 . Defines Replica3 to expose the ClusteredService as a SOAP over HTTP endpoint on port 9003 . Add the clustering feature to your client configuration In your client configuration file, add the clustering feature as shown in Example 22.2, "Enabling HA with Static Failover: Client Configuration" . Example 22.2. Enabling HA with Static Failover: Client Configuration 22.3. Configuring HA with Static Failover Overview By default, HA with static failover uses a sequential strategy when selecting a replica service if the original service with which a client is communicating becomes unavailable, or fails. The sequential strategy selects a replica service in the same sequential order every time it is used. Selection is determined by Apache CXF's internal service model and results in a deterministic failover pattern. Configuring a random strategy You can configure HA with static failover to use a random strategy instead of the sequential strategy when selecting a replica. The random strategy selects a random replica service each time a service becomes unavailable, or fails. The choice of failover target from the surviving members in a cluster is entirely random. To configure the random strategy, add the configuration shown in Example 22.3, "Configuring a Random Strategy for Static Failover" to your client configuration file. Example 22.3.
Configuring a Random Strategy for Static Failover The configuration shown in Example 22.3, "Configuring a Random Strategy for Static Failover" can be explained as follows: Defines a Random bean and implementation class that implements the random strategy. Specifies that the random strategy is used when selecting a replica. | [
"<wsdl:service name=\"ClusteredService\"> <wsdl:port binding=\"tns:Greeter_SOAPBinding\" name=\"Replica1\"> <soap:address location=\"http://localhost:9001/SoapContext/Replica1\"/> </wsdl:port> <wsdl:port binding=\"tns:Greeter_SOAPBinding\" name=\"Replica2\"> <soap:address location=\"http://localhost:9002/SoapContext/Replica2\"/> </wsdl:port> <wsdl:port binding=\"tns:Greeter_SOAPBinding\" name=\"Replica3\"> <soap:address location=\"http://localhost:9003/SoapContext/Replica3\"/> </wsdl:port> </wsdl:service>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:jaxws=\"http://cxf.apache.org/jaxws\" xmlns:clustering=\"http://cxf.apache.org/clustering\" xsi:schemaLocation=\"http://cxf.apache.org/jaxws http://cxf.apache.org/schemas/jaxws.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd\"> <jaxws:client name=\"{http://apache.org/hello_world_soap_http}Replica1\" createdFromAPI=\"true\"> <jaxws:features> <clustering:failover/> </jaxws:features> </jaxws:client> <jaxws:client name=\"{http://apache.org/hello_world_soap_http}Replica2\" createdFromAPI=\"true\"> <jaxws:features> <clustering:failover/> </jaxws:features> </jaxws:client> <jaxws:client name=\"{http://apache.org/hello_world_soap_http}Replica3\" createdFromAPI=\"true\"> <jaxws:features> <clustering:failover/> </jaxws:features> </jaxws:client> </beans>",
"<beans ...> <bean id=\"Random\" class=\"org.apache.cxf.clustering.RandomStrategy\"/> <jaxws:client name=\"{http://apache.org/hello_world_soap_http}Replica3\" createdFromAPI=\"true\"> <jaxws:features> <clustering:failover> <clustering:strategy> <ref bean=\"Random\"/> </clustering:strategy> </clustering:failover> </jaxws:features> </jaxws:client> </beans>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/CXFDeployHA |
Observability in OpenShift Pipelines | Observability in OpenShift Pipelines Red Hat OpenShift Pipelines 1.18 Observability features of OpenShift Pipelines Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.18/html/observability_in_openshift_pipelines/index |
2.14. pqos | 2.14. pqos The pqos utility, which is available from the intel-cmt-cat package, enables you to both monitor and control CPU cache and memory bandwidth on recent Intel processors. You can use it for workload isolation and improving performance determinism in multitenant deployments. It exposes the following processor capabilities from the Resource Director Technology (RDT) feature set: Monitoring Last Level Cache (LLC) usage and contention monitoring using the Cache Monitoring Technology (CMT) Per-thread memory bandwidth monitoring using the Memory Bandwidth Monitoring (MBM) technology Allocation Controlling the amount of LLC space that is available for specific threads and processes using the Cache Allocation Technology (CAT) Controlling code and data placement in the LLC using the Code and Data Prioritization (CDP) technology Use the following command to list the RDT capabilities supported on your system and to display the current RDT configuration: Additional Resources For more information about using pqos , see the pqos (8) man page. For detailed information on the CMT, MBM, CAT, and CDP processor features, see the official Intel documentation: Intel(R) Resource Director Technology (Intel(R) RDT) . | [
"pqos --show --verbose"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Performance_Monitoring_Tools-pqos |
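Beyond --show, pqos can also monitor and partition the cache at run time. The following commands are an illustrative sketch only (the core numbers and bitmask are arbitrary examples, and option syntax can differ between intel-cmt-cat versions, so verify against the pqos(8) man page before use):

pqos -m "llc:0-3"        # monitor LLC occupancy on cores 0-3 (CMT)
pqos -e "llc:1=0x000f"   # define class of service 1 with a 4-way LLC bitmask (CAT)
pqos -a "llc:1=0,1"      # associate cores 0 and 1 with class of service 1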
What's new in cost management | What's new in cost management Cost Management Service 1-latest Learn about new features and updates Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/whats_new_in_cost_management/index |
3.3.5. Thinly-Provisioned Logical Volumes (Thin Volumes) | 3.3.5. Thinly-Provisioned Logical Volumes (Thin Volumes) As of the Red Hat Enterprise Linux 6.4 release, logical volumes can be thinly provisioned. This allows you to create logical volumes that are larger than the available extents. Using thin provisioning, you can manage a storage pool of free space, known as a thin pool, which can be allocated to an arbitrary number of devices when needed by applications. You can then create devices that can be bound to the thin pool for later allocation when an application actually writes to the logical volume. The thin pool can be expanded dynamically when needed for cost-effective allocation of storage space. Note Thin volumes are not supported across the nodes in a cluster. The thin pool and all its thin volumes must be exclusively activated on only one cluster node. By using thin provisioning, a storage administrator can over-commit the physical storage, often avoiding the need to purchase additional storage. For example, if ten users each request a 100GB file system for their application, the storage administrator can create what appears to be a 100GB file system for each user but which is backed by less actual storage that is used only when needed. When using thin provisioning, it is important that the storage administrator monitor the storage pool and add more capacity if it starts to become full. To make sure that all available space can be used, LVM supports data discard. This allows for re-use of the space that was formerly used by a discarded file or other block range. For information on creating thin volumes, see Section 5.4.4, "Creating Thinly-Provisioned Logical Volumes" . Thin volumes provide support for a new implementation of copy-on-write (COW) snapshot logical volumes, which allow many virtual devices to share the same data in the thin pool. For information on thin snapshot volumes, see Section 3.3.7, "Thinly-Provisioned Snapshot Volumes" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/thinprovisioned_volumes |
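As an illustrative sketch only (the volume group, pool, and volume names and sizes below are invented, not taken from this guide), a thin pool and an over-committed thin volume are created with commands of the following form:

# Create a 100GB thin pool in volume group vg001
lvcreate -L 100G -T vg001/mythinpool
# Create a 1TB thin volume backed by that pool
lvcreate -V 1T -T vg001/mythinpool -n thinvolume

See Section 5.4.4, "Creating Thinly-Provisioned Logical Volumes" for the supported options and details.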
function::proc_mem_rss | function::proc_mem_rss Name function::proc_mem_rss - Program resident set size in pages Synopsis Arguments None Description Returns the resident set size in pages of the current process, or zero when there is no current process or the number of pages couldn't be retrieved. | [
"proc_mem_rss:long()"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-proc-mem-rss |
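A minimal usage sketch (not part of the reference entry itself; the probe point and output format are illustrative): because the function needs a current process context, it is typically called from a process-related probe, for example:

stap -e 'probe syscall.brk { printf("%s rss=%d pages\n", execname(), proc_mem_rss()) }'

This prints the resident set size, in pages, of whichever process invokes the brk system call.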
6.9. Connectors | 6.9. Connectors Connectors define how the TPS communicates with other subsystems - namely CA, KRA, and TKS. In general, these parameters are set up during TPS installation. The following is an example of connector configuration: TPS profiles refer to these connectors by their IDs. For example Multiple connector of the same kind (for example, multiple CA connectors) can be defined. This may be useful when one TPS instance serves multiple backend Certificate System servers for different groups of tokens. Note Automatic failover for connectors in TPS is currently not supported. A manual failover procedure must be performed to point the TPS to alternate CA, KRA, or TKS, as long as they are clones of the original systems. | [
"tps.connector.ca1.enable=true tps.connector.ca1.host=host1.EXAMPLE.com tps.connector.ca1.maxHttpConns=15 tps.connector.ca1.minHttpConns=1 tps.connector.ca1.nickName=subsystemCert cert-pki-tomcat tps.connector.ca1.port=8443 tps.connector.ca1.timeout=30 tps.connector.ca1.uri.enrollment=/ca/ee/ca/profileSubmitSSLClient tps.connector.ca1.uri.getcert=/ca/ee/ca/displayBySerial tps.connector.ca1.uri.renewal=/ca/ee/ca/profileSubmitSSLClient tps.connector.ca1.uri.revoke=/ca/ee/subsystem/ca/doRevoke tps.connector.ca1.uri.unrevoke=/ca/ee/subsystem/ca/doUnrevoke tps.connector.kra1.enable=true tps.connector.kra1.host=host1.EXAMPLE.com tps.connector.kra1.maxHttpConns=15 tps.connector.kra1.minHttpConns=1 tps.connector.kra1.nickName=subsystemCert cert-pki-tomcat tps.connector.kra1.port=8443 tps.connector.kra1.timeout=30 tps.connector.kra1.uri.GenerateKeyPair=/kra/agent/kra/GenerateKeyPair tps.connector.kra1.uri.TokenKeyRecovery=/kra/agent/kra/TokenKeyRecovery tps.connector.tks1.enable=true tps.connector.tks1.generateHostChallenge=true tps.connector.tks1.host=host1.EXAMPLE.com tps.connector.tks1.keySet=defKeySet tps.connector.tks1.maxHttpConns=15 tps.connector.tks1.minHttpConns=1 tps.connector.tks1.nickName=subsystemCert cert-pki-tomcat tps.connector.tks1.port=8443 tps.connector.tks1.serverKeygen=true tps.connector.tks1.timeout=30 tps.connector.tks1.tksSharedSymKeyName=sharedSecret tps.connector.tks1.uri.computeRandomData=/tks/agent/tks/computeRandomData tps.connector.tks1.uri.computeSessionKey=/tks/agent/tks/computeSessionKey tps.connector.tks1.uri.createKeySetData=/tks/agent/tks/createKeySetData tps.connector.tks1.uri.encryptData=/tks/agent/tks/encryptData",
"op.enroll.userKey.keyGen.signing.ca.conn= ca1"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/sect-connectors |
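For example, a manual failover to a cloned CA amounts to editing the relevant connector entries in the TPS CS.cfg file and restarting the TPS instance; the host name below is a placeholder, not a value from this guide:

tps.connector.ca1.host=clone-ca1.EXAMPLE.com
tps.connector.ca1.port=8443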
B.85. scsi-target-utils | B.85. scsi-target-utils B.85.1. RHSA-2011:0332 - Important: scsi-target-utils security update An updated scsi-target-utils package that fixes one security issue is now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link(s) associated with each description below. The scsi-target-utils package contains the daemon and tools to set up and monitor SCSI targets. Currently, iSCSI software and iSER targets are supported. CVE-2011-0001 A double-free flaw was found in scsi-target-utils' tgtd daemon. A remote attacker could trigger this flaw by sending carefully-crafted network traffic, causing the tgtd daemon to crash. Red Hat would like to thank Emmanuel Bouillon of NATO C3 Agency for reporting this issue. All scsi-target-utils users should upgrade to this updated package, which contains a backported patch to correct this issue. All running scsi-target-utils services must be restarted for the update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/scsi-target-utils |
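On an affected Red Hat Enterprise Linux 6 host, applying the update and restarting the service would typically look like the following (illustrative commands; the erratum lists the exact package versions):

~]# yum update scsi-target-utils
~]# service tgtd restart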
7.2. Caching | 7.2. Caching Caching options can be configured with virt-manager during guest installation, or on an existing guest virtual machine by editing the guest XML configuration. Table 7.1. Caching options Caching Option Description Cache=none I/O from the guest is not cached on the host, but may be kept in a writeback disk cache. Use this option for guests with large I/O requirements. This option is generally the best choice, and is the only option to support migration. Cache=writethrough I/O from the guest is cached on the host but written through to the physical medium. This mode is slower and prone to scaling problems. Best used for small number of guests with lower I/O requirements. Suggested for guests that do not support a writeback cache (such as Red Hat Enterprise Linux 5.5 and earlier), where migration is not needed. Cache=writeback I/O from the guest is cached on the host. Cache=directsync Similar to writethrough , but I/O from the guest bypasses the host page cache. Cache=unsafe The host may cache all disk I/O, and sync requests from guest are ignored. Cache=default If no cache mode is specified, the system's default settings are chosen. In virt-manager , the caching mode can be specified under Virtual Disk . For information on using virt-manager to change the cache mode, see Section 3.3, "Virtual Disk Performance Options" To configure the cache mode in the guest XML, edit the cache setting inside the driver tag to specify a caching option. For example, to set the cache as writeback : <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writeback'/> | [
"<disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writeback'/>"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_tuning_and_optimization_guide/sect-Virtualization_Tuning_Optimization_Guide-BlockIO-Caching |
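For reference, a complete disk element using the writeback cache mode might look like the following sketch (the source file path and target device are placeholders, not values from this guide):

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/var/lib/libvirt/images/guest1.img'/>
  <target dev='vda' bus='virtio'/>
</disk>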
Chapter 4. Secure storage for credentials | Chapter 4. Secure storage for credentials JBoss EAP allows the encryption of sensitive strings outside of configuration files. These strings can be stored in a keystore, and subsequently decrypted for applications and verifications systems. Sensitive strings can be stored in either of the following: Credential Store - Introduced in JBoss EAP 7.1, a credential store can safely secure sensitive and plain text strings by encrypting them in a storage file. Each JBoss EAP server can contain multiple credential stores. Password Vault - Primarily used in legacy configurations, a password vault uses a Java Keystore to store sensitive strings outside of the configuration files. Each JBoss EAP server can only contain a single password vault. All of the configuration files in EAP_HOME /standalone/configuration/ and EAP_HOME /domain/configuration/ are world readable by default. It is strongly recommended to not store plaintext passwords in the configuration files, and instead place these credentials in either a credential Store or password vault . If you decide to place plaintext passwords in the configuration files, then these files should only be accessible by limited users. At a minimum, the user account under which JBoss EAP 7 is running requires read-write access. 4.1. Credential stores in Elytron 4.1.1. Credential stores provided by Elytron Elytron provides two default credential store types you can use to save your credentials: KeyStoreCredentialStore and PropertiesCredentialStore. You can manage credential stores with the JBoss EAP management CLI, or you can use the WildFly Elytron tool to manage them offline. In addition to the two default store types, you can also create, use, and manage your own custom credential stores. 4.1.1.1. KeyStoreCredentialStore/credential-store You can store all the Elytron credential types in a KeyStoreCredentialStore. The resource name for KeyStoreCredentialStore in the elytron subsystem is credential-store . The KeyStoreCredentialStore protects your credentials using the mechanisms provided by the KeyStore implementations in the Java Development Kit (JDK). Access a KeyStoreCredentialStore in the management CLI as follows: Additional resources Credential types in Elytron Credential store operations using the JBoss EAP management CLI Credential store operations using the WildFly Elytron tool KeyStore Javadoc credential-store Attributes 4.1.1.2. PropertiesCredentialStore/secret-key-credential-store To start properly, JBoss EAP requires an initial key to unlock certain secure resources. Use the secret-key-credential-store to provide this master secret key to unlock these necessary server resources. You can also use the PropertiesCredentialStore to store SecretKeyCredential, which supports storing Advanced Encryption Standard (AES) secret keys. Use file system permissions to restrict access to the credential store. Ideally, you should give access only to your application server to restrict access to this credential store. The resource name in the elytron subsystem for PropertiesCredentialStore is secret-key-credential-store , and you can access it in the management CLI as follows: For information on creating and providing the initial key, see Providing an initial key to JBoss EAP to unlock secured resources . Alternately, you can get the master key or password from an external source. 
For information about obtaining the password from an external source, see Obtain the password for the credential store from an external source . Additional resources Credential types in Elytron Credential store operations using the JBoss EAP management CLI Credential store operations using the WildFly Elytron tool secret-key-credential-store Attributes 4.1.2. Credential types in Elytron Elytron provides the following three credential types to suit your various security needs, and you can store these credentials in one of Elytron's credential stores. PasswordCredential With this credential type, you can securely store plain text, or unencrypted, passwords. For the JBoss EAP resources that require a password, use a reference to the PasswordCredential instead of the plain text password to maintain the secrecy of the password. Example of connecting to a database In this example database connection command, you can see the password: StrongPassword . This means that others can also see it in the server configuration file. Example of connecting to a database using a PasswordCredential When you use a credential reference instead of a password to connect to a database, others can only see the credential reference in the configuration file, not your password KeyPairCredential You can use both Secure Shell (SSH) and Public-Key Cryptography Standards (PKCS) key pairs as KeyPairCredential. A key pair includes both a shared public key and a private key that only a given user knows. You can manage KeyPairCredential using only the WildFly Elytron tool. SecretKeyCredential A SecretKeyCredential is an Advanced Encryption Standard (AES) key that you can use to create encrypted expressions in Elytron. For information about encrypted expressions, see Encrypted expressions in Elytron . Additional resources Credential stores provided by Elytron Credential types supported by credential stores Encrypted expressions in Elytron 4.1.3. Credential types supported by Elytron credential stores The following table illustrates which credential type is supported by which credential store: Credential type KeyStoreCredentialStore/credential-store PropertiesCredentialStore/secret-key-credential-store PasswordCredential Yes No KeyPairCredential Yes No SecretKeyCredential Yes Yes Additional resources Credential types in Elytron Credential stores provided by Elytron 4.1.4. Credential store operations using the JBoss EAP management CLI To manage JBoss EAP credentials in a running JBoss EAP server, use the provided management CLI operations. You can manage PasswordCredential and SecretKeyCredential using the JBoss EAP management CLI. Note You can do these operation only on modifiable credential stores. All credential store types are modifiable by default. 4.1.4.1. Creating a KeyStoreCredentialStore/credential-store for a standalone server Create a KeyStoreCredentialStore for a JBoss EAP running as a standalone server in any directory on the file system. For security, the directory containing the store should be accessible to only limited users. Prerequisites You have provided at least read/write access to the directory containing the KeyStoreCredentialStore for the user account under which JBoss EAP is running. Note You cannot have the same name for a credential-store and a secret-key-credential-store because they implement the same Elytron capability: org.wildfly.security.credential-store . 
Procedure Create a KeyStoreCredentialStore using the following management CLI command: Syntax Example Additional resources Credential store operations using the JBoss EAP management CLI credential-store Attributes 4.1.4.2. Creating a KeyStoreCredentialStore/credential-store for a managed domain You can create a KeyStoreCredentialStore in a managed domain, but you must first use the WildFly Elytron tool to prepare your KeyStoreCredentialStore. If you have multiple host controllers in a single managed domain, choose one of the following options: Create a KeyStoreCredentialStore in each host controller and add credentials to each KeyStoreCredentialStore. Copy a populated KeyStoreCredentialStore from one host controller to all the other host controllers. Save your KeyStoreCredentialStore file in your Network File System (NFS), then use that file for all the KeyStoreCredentialStore resources you create. Alternatively, you can create a KeyStoreCredentialStore file with credentials on a host controller without using the WildFly Elytron tool. Note You don't have to define a KeyStoreCredentialStore resource on every server, because every server on the same profile contains your KeyStoreCredentialStore file. You can find the KeyStoreCredentialStore file in the server data directory, relative-to=jboss.server.data.dir . Important You cannot have the same name for a credential-store and a secret-key-credential-store because they implement the same Elytron capability: org.wildfly.security.credential-store . The following procedure describes how to use the NFS to provide the KeyStoreCredentialStore file to all host controllers. Procedure Use the WildFly Elytron tool to create a KeyStoreCredentialStore storage file. For more information on this, see Credential store operations using the WildFly Elytron tool . Distribute the storage file. For example, allocate it to each host controller by using the scp command, or store it in your NFS and use it for all of your KeyStoreCredentialStore resources. Note To maintain consistency, for a KeyStoreCredentialStore file that multiple resources and host controllers use and which you stored in your NFS, you must use the KeyStoreCredentialStore in read-only mode. Additionally, make sure you provide an absolute path for your KeyStoreCredentialStore file. Syntax Example Optional: If you need to define the credential-store resource in a profile, use the storage file to create the resource. Syntax Example Optional: Create the KeyStoreCredentialStore resource for a host controller. Syntax Example Additional resources KeyStoreCredentialStore/credential-store Credential store operations using the WlidFly Elytron tool credential-store Attributes 4.1.4.3. Creating a PropertiesCredentialStore/secret-key-credential-store for a standalone server Create a PropertiesCredentialStore using the management CLI. When you create a PropertiesCredentialStore, JBoss EAP generates a secret key by default. The name of the generated key is key and its size is 256-bit. Prerequisites You have provided at least read/write access to the directory containing the PropertiesCredentialStore for the user account under which JBoss EAP is running. Procedure Use the following command to create a PropertiesCredentialStore using the management CLI: Syntax Example Additional resources PropertiesCredentialStore/secret-key-credential-store Credential store operations using the JBoss EAP management CLI secret-key-credential-store Attributes 4.1.4.4. 
Adding a PasswordCredential to a KeyStoreCredentialStore/credential-store Add a plain text password for those resources that require one as a PasswordCredential to the KeyStoreCredentialStore to hide that password in the configuration file. You can then reference this stored credential to access those resources, without ever exposing your password. Prerequisites You have created a KeyStoreCredentialStore. For information about creating a KeyStoreCredentialStore, see Creating a KeyStoreCredentialStore/credential-store for a standalone server . Procedure Add a new PasswordCredential to a KeyStoreCredentialStore: Syntax Example Verification Issue the following command to verify that the PasswordCredential was added to the KeyStoreCredentialStore: Syntax Example Additional resources KeyStoreCredentialStore/credential-store Using a PasswordCredential in your JBoss EAP configuration credential-store Attributes 4.1.4.5. Generating a SecretKeyCredential in a KeyStoreCredentialStore/credential-store Generate a SecretKeyCredential in a KeyStoreCredentialStore. By default, Elytron creates a 256-bit key. If you want a different size, you can specify either a 128-bit or 192-bit key in the key-size attribute. Prerequisites You have created a KeyStoreCredentialStore. For information about creating a KeyStoreCredentialStore, see Creating a KeyStoreCredentialStore/credential-store for a standalone server . Procedure Generate a SecretKeyCredential in a KeyStoreCredentialStore using the following management CLI command: Syntax Example Verification Issue the following command to verify that Elytron stored your SecretKeyCredential in the KeyStoreCredentialStore: Syntax Example Additional resources KeyStoreCredentialStore/credential-store Creating an encrypted expression in Elytron credential-store Attributes 4.1.4.6. Generating a SecretKeyCredential in a PropertiesCredentialStore/secret-key-credential-store Generate a SecretKeyCredential in a PropertiesCredentialStore. By default, Elytron creates a 256-bit key. If you want a different size, you can specify either a 128-bit or 192-bit key in the key-size attribute. When you generate a SecretKeyCredential, Elytron generates a new random secret key and stores it as the SecretKeyCredential. You can view the contents of the credential by using the export operation on the PropertiesCredentialStore. Important Make sure that you create a backup of either PropertiesCredentialStore, SecretKeyCredential, or both, because JBoss EAP cannot decrypt or retrieve lost Elytron credentials. You can use the export operation on the PropertiesCredentialStore to get the value of the SecretKeyCredential. You can then save this value as a backup. For information, see Exporting a SecretKeyCredential from a PropertiesCredentialStore/secret-key-credential-store . Prerequisites You have created a PropertiesCredentialStore. For information about creating a PropertiesCredentialStore, see Creating a PropertiesCredentialStore/secret-key-credential-store for a standalone server . Procedure Generate a SecretKeyCredential in a PropertiesCredentialStore using the following management CLI command: Syntax Example Verification Issue the following command to verify that Elytron created a SecretKeyCredential: Syntax Example Additional resources PropertiesCredentialStore/secret-key-credential-store Creating an encrypted expression in Elytron secret-key-credential-store Attributes 4.1.4.7. 
Importing a SecretKeyCredential to PropertiesCredentialStore/secret-key-credential-store You can import a SecretKeyCredential created outside of the PropertiesCredentialStore into an Elytron PropertiesCredentialStore. Suppose you exported a SecretKeyCredential from another credential store - a KeyStoreCredentialStore, for example - you can import it to the PropertiesCredentialStore. Prerequisites You have created a PropertiesCredentialStore. For information about creating a PropertiesCredentialStore, see Creating a PropertiesCredentialStore/secret-key-credential-store for a standalone server . You have exported a SecretKeyCredential. For information about exporting a SecretKeyCredential, see Exporting a SecretKeyCredential from a PropertiesCredentialStore/secret-key-credential-store . Procedure Disable caching of commands in the management CLI using the following command: Important If you do not disable caching, the secret key is visible to anyone who can access the management CLI history file. Import the secret key using the following management CLI command: Syntax Example Re-enable the caching of commands using the following management CLI command: Additional resources PropertiesCredentialStore/secret-key-credential-store secret-key-credential-store Attributes 4.1.4.8. Listing credentials in the KeyStoreCredentialStore/credential-store To view all the credentials stored in the KeyStoreCredentialStore, you can list them using the management CLI. Procedure List the credentials stored in a KeyStoreCredentialStore using the following management CLI command: Syntax Example Additional resources KeyStoreCredentialStore/credential-store credential-store Attributes 4.1.4.9. Listing credentials in the PropertiesCredentialStore/secret-key-credential-store To view all the credentials stored in the PropertiesCredentialStore, you can list them using the management CLI. Procedure List the credentials stored in a PropertiesCredentialStore using the following management CLI command: Syntax Example Additional resources PropertiesCredentialStore/secret-key-credential-store secret-key-credential-store Attributes 4.1.4.10. Exporting a SecretKeyCredential from a KeyStoreCredentialStore/credential-store You can export an existing SecretKeyCredential from a KeyStoreCredentialStore to use the SecretKeyCredential or to create a backup of the SecretKeyCredential. Prerequisites You have generated a SecretKeyCredential the KeyStoreCredentialStore. For information about generating a SecretKeyCredential in a KeyStoreCredentialStore, see Generating a SecretKeyCredential in a KeyStoreCredentialStore/credential-store . Procedure Export a SecretKeyCredential from the KeyStoreCredentialStore using the following management CLI command: Syntax Example Additional resources KeyStoreCredentialStore/credential-store credential-store Attributes 4.1.4.11. Exporting a SecretKeyCredential from a PropertiesCredentialStore/secret-key-credential-store You can export an existing SecretKeyCredential from a PropertiesCredentialStore to use the SecretKeyCredential or to create a backup of the SecretKeyCredential. Prerequisites You have either generated a SecretKeyCredential in the PropertiesCredentialStore or imported one to it. For information on generating a SecretKeyCredential in a PropertiesCredentialStore, see Generating a SecretKeyCredential in a PropertiesCredentialStore/secret-key-credential-store . 
For information on importing a SecretKeyCredential to a PropertiesCredentialStore, see Importing a SecretKeyCredential to PropertiesCredentialStore/secret-key-credential-store Procedure Export a SecretKeyCredential from the PropertiesCredentialStore using the following management CLI command: Syntax Example Additional resources PropertiesCredentialStore/secret-key-credential-store secret-key-credential-store Attributes 4.1.4.12. Removing a credential from KeyStoreCredentialStore/credential-store You can store every credential type in the KeyStoreCredentialStore but, by default, when you remove a credential, Elytron assumes it's a PasswordCredential. If you want to remove a different credential type, specify it in the entry-type attribute. Procedure Remove a credential from the KeyStoreCredentialStore using the following management CLI command: Syntax Example removing a PasswordCredential Example removing a SecretKeyCredential Verification Issue the following command to verify that Elytron removed the credential: Syntax Example The credential you removed is not listed. Additional resources KeyStoreCredentialStore/credential-store credential-store Attributes 4.1.4.13. Removing a credential from the PropertiesCredentialStore/secret-key-credential-store You can store only the SecretKeyCredential type in a PropertiesCredentialStore. This means that, when you remove a credential from a PropertiesCredentialStore, you don't have to specify an entry-type . Procedure Remove a SecretKeyCredential from the PropertiesCredentialStore using the following command: Syntax Example Verification Issue the following command to verify that Elytron removed the credential: Syntax Example The credential you removed is not listed. Additional resources PropertiesCredentialStore/secret-key-credential-store secret-key-credential-store Attributes 4.1.5. Credential store operations using the WildFly Elytron tool 4.1.5.1. Creating a KeyStoreCredentialStore/credential-store using the WildFly Elytron tool In Elytron, you can create a KeyStoreCredentialStore offline where you can save all the credential types. Procedure Create a KeyStoreCredentialStore using the WildFly Elytron tool with the following command: Syntax Example If you don't want to include your store password in the command, omit that argument and then enter the password manually at the prompt. You can also use a masked password generated by the WildFly Elytron tool. For information about generating masked passwords, see Generating masked encrypted strings using the WildFly Elytron tool . Additional resources KeyStoreCredentialStore/credential-store Generating masked encrypted strings using the WildFly Elytron tool 4.1.5.2. Creating a KeyStoreCredentialStore/credential-store using the Bouncy Castle provider Create a KeyStoreCredentialStore using the Bouncy Castle provider. Prerequisites Make sure that your environment is configured to use Bouncy Castle. For more information, see Configure Your Environment to use the Bouncy Castle Provider . Note You cannot have the same name for a credential-store and a secret-key-credential-store because they implement the same Elytron capability: org.wildfly.security.credential-store . Procedure Define a Bouncy Castle FIPS Keystore ( BCFKS ) keystore. FIPS stands for Federal Information Processing Standards. If you already have one, move on to the step. Important Make sure that the keystore keypass and storepass attributes are identical. If they aren't, the BCFKS keystore in the elytron subsystem can't define them. 
Generate a secret key for the KeyStoreCredentialStore. Define the KeyStoreCredentialStore using the WildFly Elytron tool with the following command: Additional resources KeyStoreCredentialStore/credential-store WildFly Elytron tool KeyStoreCredentialStore/credential-store operations 4.1.5.3. Creating a PropertiesCredentialStore/secret-key-credential-store using WildFly Elytron tool In Elytron, you can create a PropertiesCredentialStore offline where you can save SecretKeyCredential instances. Procedure Create a PropertiesCredentialStore using the WildFly Elytron tool with the following command: Syntax Example Additional resources PropertiesCredentialStore/secret-key-credential-store WildFly Elytron tool PropertiesCredentialStore/secret-key-credential-store operations 4.1.5.4. WildFly Elytron tool KeyStoreCredentialStore/credential-store operations You can do various KeyStoreCredentialStore tasks using the WildFly Elytron tool, including the following: Add a PasswordCredential You can add a PasswordCredential to a KeyStoreCredentialStore using the following WildFly Elytron tool command: Syntax Example If you don't want to put your secret in the command, omit that argument, then enter the secret manually when prompted. Generate a SecretKeyCredential You can add a SecretKeyCredential to a KeyStoreCredentialStore using the following WildFly Elytron tool command: Syntax Example If you don't want to put your secret in the command, omit that argument, then enter the secret manually when prompted. By default, when you create a SecretKeyCredential in JBoss EAP, you create a 256-bit secret key. If you want to change the size, you can specify --size=128 or --size=192 to create 128-bit or 192-bit keys respectively. Import a SecretKeyCredential You can import a SecretKeyCredential using the following WildFLy Elytron tool command: Syntax Example Enter the secret key you want to import. List all the credentials You can list the credentials in the KeyStoreCredentialStore using the following WildFly Elytron tool command: Syntax Example: Check if an alias exists Use the following command to check whether an alias exists in a credential store: Syntax Example Export a SecretKeyCredential You can export a SecretKeyCredential from a KeyStoreCredentialStore using the following command: Syntax Example Remove a credential You can remove a credential from a credential store using the following command: Syntax Example Additional resources KeyStoreCredentialStore/credential-store Credential types in Elytron 4.1.5.5. 
4.1.5.5. WildFly Elytron tool PropertiesCredentialStore/secret-key-credential-store operations You can do the following PropertiesCredentialStore operations for SecretKeyCredential using the WildFly Elytron tool: Generate a SecretKeyCredential You can generate a SecretKeyCredential in a PropertiesCredentialStore using the following WildFly Elytron tool command: Syntax Example Import a SecretKeyCredential You can import a SecretKeyCredential using the following WildFly Elytron tool command: Syntax Example List all the credentials You can list the credentials in the PropertiesCredentialStore using the following WildFly Elytron tool command: Syntax Example Export a SecretKeyCredential You can export a SecretKeyCredential from a PropertiesCredentialStore using the following command: Syntax Example Remove a credential You can remove a credential from a credential store using the following command: Syntax Example Additional resources PropertiesCredentialStore/secret-key-credential-store Credential types in Elytron 4.1.5.6. Adding a credential store created with the WildFly Elytron tool to a JBoss EAP Server After you have created a credential store with the WildFly Elytron tool, you can add it to your running JBoss EAP server. Prerequisites You have created a credential store with the WildFly Elytron tool. For more information, see Creating a KeyStoreCredentialStore/credential-store using the WildFly Elytron tool . Procedure Add the credential store to your running JBoss EAP server with the following management CLI command: For example: After adding the credential store to the JBoss EAP configuration, you can then refer to a password or sensitive string stored in the credential store using the credential-reference attribute. For more information, use the EAP_HOME /bin/elytron-tool.sh credential-store --help command for a detailed listing of available options. Additional resources Using a PasswordCredential in your JBoss EAP configuration credential-store attributes 4.1.5.7. WildFly Elytron tool key pair management operations You can use the following arguments with the elytron-tool.sh script to manipulate a credential store, such as generating a new key pair that you can store under an alias in a credential store (a combined sketch of these commands appears below). Generate a key pair Use the generate-key-pair command to create a key pair. You can then store the key pair under an alias in the credential store. The following example shows the creation of an RSA key pair, which has an allocated size of 3072 bits that is stored in the location specified for the credential store. The alias given to the key pair is example . Import a key pair Use the import-key-pair command to import an existing SSH key pair into a credential store with a specified alias. The following example imports a key pair with the alias of example from the /home/user/.ssh/id_rsa file containing the private key in the OpenSSH format: Export a key pair Use the export-key-pair-public-key command to display the public key of a key pair. The public key has a specified alias in the OpenSSH format. The following example displays the public key for the alias example : Note After issuing the export-key-pair-public-key command, you are prompted to enter the credential store passphrase. If no passphrase exists, leave the prompt blank. 4.1.5.8. Example use of stored key pair in the Elytron configuration files A key pair consists of two separate, but matching, cryptographic keys: a public key and a private key.
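For the key pair operations described in section 4.1.5.7, a combined sketch follows. The store location, store password, and output file are assumptions, the alias example matches the text above, and the exact argument names should be checked with elytron-tool.sh credential-store --help:

# Generate a 3072-bit RSA key pair under the alias "example"
./bin/elytron-tool.sh credential-store --location "cred_store.jceks" --password storePassword --generate-key-pair example --algorithm RSA --size 3072
# Import an existing OpenSSH private key under the alias "example"
./bin/elytron-tool.sh credential-store --location "cred_store.jceks" --password storePassword --import-key-pair example --private-key-location /home/user/.ssh/id_rsa
# Display the public key for the alias "example" in the OpenSSH format
./bin/elytron-tool.sh credential-store --location "cred_store.jceks" --password storePassword --export-key-pair-public-key example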
You need to store a key pair in a credential store before you can reference the key pair in an elytron configuration file. You can then provide Git with access to manage your standalone server configuration data. The following example references a credential store and its properties in the <credential-stores> element of an elytron configuration file. The <credential> element references the credential store and the alias, which stores the key pair. <?xml version="1.0" encoding="UTF-8"?> <configuration> <authentication-client xmlns="urn:elytron:client:1.6"> <credential-stores> <credential-store name=" USD{credential_store_name} "> <protection-parameter-credentials> <clear-password password=" USD{credential_store_password} "/> </protection-parameter-credentials> <attributes> <attribute name="path" value=" USD{path_to_credential_store} "/> </attributes> </credential-store> </credential-stores> <authentication-rules> <rule use-configuration=" USD{configuration_file_name} "/> </authentication-rules> <authentication-configurations> <configuration name=" USD{configuration_file_name} "> <credentials> <credential-store-reference store=" USD{credential_store_name} " alias=" USD{alias_of_key_pair} "/> </credentials> </configuration> </authentication-configurations> </authentication-client> </configuration> After you configure the elytron configuration file, the key pair can be used for SSH authentication. Additional resources WildFly Elytron tool key pair management operations 4.1.5.9. Generating masked encrypted strings using the WildFly Elytron tool You can use the WildFly Elytron tool to generate PicketBox-compatible MASK- encrypted strings to use instead of a plain text password for a credential store. Procedure To generate a masked string, use the following command and provide values for the salt and the iteration count: For example: If you do not want to provide the secret in the command, you can omit that argument and you will be prompted to enter the secret manually using standard input. For more information, use the EAP_HOME /bin/elytron-tool.sh mask --help command for a detailed listing of available options. 4.1.6. Encrypted expressions in Elytron To maintain the secrecy of your sensitive strings, you can use encrypted expressions instead of the sensitive strings in the server configuration file. An encrypted expression is one that results from encrypting a string with a SecretKeyCredential, then combining it with its encoding prefix and resolver name. The encoding prefix tells Elytron that the expression is an encrypted expression. The resolver maps the encrypted expression to its corresponding SecretKeyCredential in a credential store. The expression=encryption resource in Elytron uses an encrypted expression to decode the encrypted string inside it at run time. By using an encrypted expression instead of the sensitive string itself in the configuration file, you protect the secrecy of the string. An encrypted expression takes the following format: Syntax when using a specific resolver ENC is the prefix that denotes an encrypted expression. RESOLVER_NAME is the resolver Elytron uses to decrypt the encrypted string. Example If you create an encrypted expression with a default resolver, it looks like this: Syntax when using the default resolver Example In this case, Elytron uses the default resolver you defined in the expression=encryption resource to decrypt an expression. You can use an encrypted expression on any resource attribute that supports it. 
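Concretely, the two forms of an encrypted expression look roughly like the following; the resolver name and the Base64 ciphertext are illustrative placeholders:

# With a named resolver
${ENC::exampleResolver:RUxZAUMQgtpG7oFlHR2j1Gkn3GKIHff+HR8GcMX1QXHvx2uGurI=}
# With the default resolver configured on the expression=encryption resource
${ENC::RUxZAUMQgtpG7oFlHR2j1Gkn3GKIHff+HR8GcMX1QXHvx2uGurI=}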
To find out whether an attribute supports encrypted expression, use the read-resource-description operation, for example: Example read-resource-description on mail/mail-session In this example, the attribute from supports encrypted expressions. This means that you can hide your email address in the from field by encrypting it and then using the encrypted expression instead. Additional resources Creating an encrypted expression in Elytron expression=encryption Attributes 4.1.7. Creating an encrypted expression in Elytron Create an encrypted expression from a sensitive string and a SecretKeyCredential. Use this encrypted expression instead of the sensitive string in the management model - the server configuration file, to maintain the secrecy of the sensitive string. Prerequisites You have generated a secret key in some credential store. For information on creating a secret key in a KeyStoreCredentialStore , see Generating a SecretKeyCredential in a KeyStoreCredentialStore/credential-store For information on creating a secret key in a PropertiesCredentialStore , see Generating a SecretKeyCredential in a PropertiesCredentialStore/secret-key-credential-store Procedure Create a resolver that references the alias of an existing SecretKeyCredential in a credential store using the following management CLI command: Syntax Example If an error message about a duplicate resource displays, use the list-add operation instead of add , as follows: Syntax Example Reload the server using the following management CLI command: Disable caching of commands in the management CLI: Important If you do not disable caching, the secret key is visible to anyone who can access the management CLI history file. Create an encrypted expression using the following management CLI command: Syntax Example USD{ENC::exampleResolver:RUxZAUMQgtpG7oFlHR2j1Gkn3GKIHff+HR8GcMX1QXHvx2uGurI=} is the encrypted expression you use instead of TestPassword in the management model. If you use the same plain text in different locations, repeat this command each time before you use the encrypted expression instead of the plain text in that location. When you repeat the same command for the same plain text, you get a different result for the same key because Elytron uses a unique initialization vector for each call. By using different encrypted expressions you make sure that, if one encrypted expression on a string is somehow compromised, users cannot discover that any other encrypted expressions might also contain the same string. Re-enable the command caching using the following management CLI command: Additional resources Using an encrypted expression to secure a KeyStoreCredentialStore/credential-store expression=encryption Attributes 4.1.8. Using a PasswordCredential in your JBoss EAP configuration To refer to a password or sensitive string stored in a credential store, use the credential-reference attribute in your JBoss EAP configuration. You can use credential-reference as an alternative to providing a password or other sensitive string in most places throughout the JBoss EAP configuration. Prerequisites You have added a PasswordCredential to a KeyStoreCredentialStore. For information on adding PasswordCredential to a KeyStoreCredentialStore, see Adding a PasswordCredential to a KeyStoreCredentialStore . 
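As a reminder of that prerequisite, a minimal sketch of adding a PasswordCredential, using the store and alias names that also appear in the example below, might be:

/subsystem=elytron/credential-store=exampleKeyStoreCredentialStore:add-alias(alias=passwordCredentialAlias, secret-value=StrongPassword)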
Procedure Reference the existing KeyStoreCredentialStore and the alias to the PasswordCredential in the credential-reference attribute: Syntax Example In this example, an existing PasswordCredential with the alias passwordCredentialAlias in a KeyStoreCredentialStore exampleKeyStoreCredentialStore is used instead of the plain text password for the database, protecting the database password. Additional resources Obtain the password for the credential store from an external source . Credential types in Elytron 4.1.9. Using an encrypted expression to secure a KeyStoreCredentialStore/credential-store You can use an encrypted expression to secure a KeyStoreCredentialStore. Prerequisites You have created an encrypted expression. For information about creating an encrypted expression, see Creating an encrypted expression in Elytron . Procedure Create a KeyStoreCredentialStore that uses an encrypted expression as the clear-text : Syntax Example Additional resources expression-encryption Attributes credential-store Attributes 4.1.10. Automatic update of credentials in credential store If you have a credential store, you are not required to add credentials or update existing credentials before you can reference them from a credential reference. Elytron automates this process. When configuring a credential reference, specify both the store and clear-text attributes. Elytron automatically adds or updates a credential in the credential store specified by the store attribute. Optionally, you can specify the alias attribute. Elytron updates the credential store as follows: If you specify an alias: If an entry for the alias exists, the existing credential is replaced with the specified clear text password. If an entry for the alias does not exist, a new entry is added to the credential store with the specified alias and the clear text password. If you do not specify an alias, Elytron generates an alias and adds a new entry to the credential store with the generated alias and the specified clear text password. The clear-text attribute is removed from the management model when the credential store is updated. The following example illustrates how to create a credential reference that specifies the store , clear-text , and alias attributes: You can update the credential for the myNewAlias entry that was added to the previously defined credential store with the following command: Note If an operation that includes a credential-reference parameter fails, no automatic credential store update occurs. The credential store that was specified by the credential-reference attribute does not change. 4.1.11. Defining FIPS 140-2 compliant credential stores You can define a Federal Information Processing Standards (FIPS) 140-2-compliant credential store using a Network Security Services (NSS) database, or with a Bouncy Castle provider. 4.1.11.1. Defining a FIPS 140-2 compliant credential store using an NSS database To get a Federal Information Processing Standards (FIPS)-compliant keystore, use a Sun PKCS#11 (PKCS stands for Public Key Cryptography Standards) provider accessing a Network Security Services (NSS) database. For instructions on defining the database, see Configuring the NSS Database . Procedure Create a secret key to be used in the credential store. Note For the keytool command to work, in the nss_pkcsll_fips.cfg file, you must assign the nssDbMode attribute as readWrite . Create an external credential store. 
An external credential store holds a secret key in a PKCS#11 keystore and accesses this keystore using the alias defined in the step. This keystore is then used to decrypt the credentials in a Java Cryptography Extension Keystore (JCEKS) keystore. In addition to the credential-store attributes, Elytron uses the credential-store KeyStoreCredentialStore implementation properties to configure external credential stores. Once created, the credential store can be used to store aliases as normal. Confirm that the alias has been added successfully by reading from the credential store. Additional resources Configuring the NSS Database credential-store Attributes credential-store KeyStoreCredentialStore implementation properties 4.1.11.2. Defining a FIPS 140-2 compliant credential store using Bouncy Castle providers Define a Federal Information Processing Standards (FIPS) 140-2 compliant credential store using Bouncy Castle providers. Prerequisites Ensure that your environment is configured to use the BouncyCastle provider. For more information, see Configure Your Environment to use the BouncyCastle Provider . Procedure Create a secret key to be used in the credential store. Important The keypass and storepass for the keystore must be identical for FIPS credential stores to be defined in the elytron subsystem. Create an external credential store. An external credential store holds a secret key in a BCFKS keystore, and accesses this keystore using the alias defined in the step. This keystore is then used to decrypt the credentials in a JCEKS keystore. The credential-store KeyStoreCredentialStore implementation properties are used to configure external credential stores. Once created, the credential store can be used to store aliases as normal. Confirm that the alias has been added successfully by reading from the credential store. Additional resources credential-store Attributes credential-store KeyStoreCredentialStore implementation properties 4.1.12. Using a custom implementation of the credential store Use a custom implementation of the credential store. Procedure Create a class that extends the Service Provider Interface (SPI) CredentialStoreSpi abstract class. Create a class that implements the Java Security Provider . The provider must add the custom credential store class as a service. Create a module containing your credential store and provider classes, and add it to JBoss EAP with a dependency on org.wildfly.security.elytron . For example: Create a provider loader for your provider. For example: Create a credential store using the custom implementation. Note Ensure that you specify the correct providers and type values. The value of type is what is used in your provider class where it adds your custom credential store class as a service. For example: Alternatively, if you have created multiple providers, you can specify the additional providers using another provider loader with other-providers . This allows you to have other additional implementations for new types of credentials. These specified other providers are automatically accessible in the custom credential store's initialize method as the Provider[] argument. For example: 4.1.13. Obtain the password for the credential store from an external source Instead of providing your credential store's password in clear text format, you can choose to provide the password by using a pseudo credential store. You have the following options for providing a password: EXT External command using java.lang.Runtime#exec(java.lang.String) . 
You can supply parameters to the command with a space-separated list of strings. An external command refers to any executable file from the operating system, for example, a shell script or an executable binary file. Elytron reads the password from the standard output of the command that you run. Example CMD External command using java.lang.ProcessBuilder . You can supply parameters to the command with a comma-separated list of strings. An external command refers to any executable file from the operating system, for example, a shell script or an executable binary file. Elytron reads the password from the standard output of the command that you run. Example MASK Masked password using PBE, or Password-Based Encryption. It must be in the following format, which includes the SALT and ITERATION values: Example Important EXT , CMD , and MASK provide backward compatibility with the legacy security vault style of supplying an external password. For MASK you must use the above format that includes the SALT and ITERATION values. You can also use a password located in another credential store as the password for a new credential store. Example Credential Store Created with a Password from Another Credential Store Additional resources Providing an initial key to JBoss EAP to unlock secured resources 4.1.14. Providing an initial key to JBoss EAP to unlock secured resources For security, some JBoss EAP components are protected by a PasswordCredential in KeyStoreCredentialStore. This KeyStoreCredentialStore is in turn protected by a secret key stored external to JBoss EAP. This is referred to as a master key. JBoss EAP uses this master key during startup to unlock the KeyStoreCredentialStore to obtain the PasswordCredential stored in the KeyStoreCredentialStore. You can use a PropertiesCredentialStore in Elytron to provide the master key. Alternately, you can obtain the master key or password from an external source. For information about obtaining the password from an external source, see Obtain the password for the credential store from an external source . 4.1.14.1. Creating a PropertiesCredentialStore/secret-key-credential-store for a standalone server Create a PropertiesCredentialStore using the management CLI. When you create a PropertiesCredentialStore, JBoss EAP generates a secret key by default. The name of the generated key is key and its size is 256-bit. Prerequisites You have provided at least read/write access to the directory containing the PropertiesCredentialStore for the user account under which JBoss EAP is running. Procedure Use the following command to create a PropertiesCredentialStore using the management CLI: Syntax Example Additional resources PropertiesCredentialStore/secret-key-credential-store Credential store operations using the JBoss EAP management CLI secret-key-credential-store Attributes 4.1.14.2. Creating an encrypted expression in Elytron Create an encrypted expression from a sensitive string and a SecretKeyCredential. Use this encrypted expression instead of the sensitive string in the management model - the server configuration file, to maintain the secrecy of the sensitive string. Prerequisites You have generated a secret key in some credential store. 
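For example, a minimal sketch of satisfying this prerequisite with a PropertiesCredentialStore might look like the following; the store name, path, and alias are assumptions:

# Create the store; a 256-bit secret key named "key" is generated by default
/subsystem=elytron/secret-key-credential-store=examplePropertiesCredentialStore:add(path=example.cs, relative-to=jboss.server.config.dir)
# Optionally generate an additional secret key under an illustrative alias
/subsystem=elytron/secret-key-credential-store=examplePropertiesCredentialStore:generate-secret-key(alias=exampleSecretKey)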
For information on creating a secret key in a KeyStoreCredentialStore , see Generating a SecretKeyCredential in a KeyStoreCredentialStore/credential-store For information on creating a secret key in a PropertiesCredentialStore , see Generating a SecretKeyCredential in a PropertiesCredentialStore/secret-key-credential-store Procedure Create a resolver that references the alias of an existing SecretKeyCredential in a credential store using the following management CLI command: Syntax Example If an error message about a duplicate resource displays, use the list-add operation instead of add , as follows: Syntax Example Reload the server using the following management CLI command: Disable caching of commands in the management CLI: Important If you do not disable caching, the secret key is visible to anyone who can access the management CLI history file. Create an encrypted expression using the following management CLI command: Syntax Example USD{ENC::exampleResolver:RUxZAUMQgtpG7oFlHR2j1Gkn3GKIHff+HR8GcMX1QXHvx2uGurI=} is the encrypted expression you use instead of TestPassword in the management model. If you use the same plain text in different locations, repeat this command each time before you use the encrypted expression instead of the plain text in that location. When you repeat the same command for the same plain text, you get a different result for the same key because Elytron uses a unique initialization vector for each call. By using different encrypted expressions you make sure that, if one encrypted expression on a string is somehow compromised, users cannot discover that any other encrypted expressions might also contain the same string. Re-enable the command caching using the following management CLI command: Additional resources Using an encrypted expression to secure a KeyStoreCredentialStore/credential-store expression=encryption Attributes 4.1.14.3. Using an encrypted expression to secure a KeyStoreCredentialStore/credential-store You can use an encrypted expression to secure a KeyStoreCredentialStore. Prerequisites You have created an encrypted expression. For information about creating an encrypted expression, see Creating an encrypted expression in Elytron . Procedure Create a KeyStoreCredentialStore that uses an encrypted expression as the clear-text : Syntax Example Additional resources expression-encryption Attributes credential-store Attributes After you have secured a KeyStoreCredentialStore with an encrypted expression, you can generate a SecretKeyCredential in the KeyStoreCredentialStore and use the secret key to create another encrypted expression. You can then use this new encrypted expression instead of a sensitive string in the management model - the server configuration file. You can create an entire chain of credential stores for security. Such a chain makes it harder to guess the sensitive string because the string is protected as follows: The first encrypted expression secures a KeyStoreCredentialStore. Another encrypted expression secures a sensitive string. To decode the sensitive string, you would need to decrypt both the encrypted expressions. As the chain of encrypted expressions becomes longer, it gets harder to decrypt the sensitive string. 4.1.15. Converting password vaults to credential stores You can use the WildFly Elytron tool to convert a password vault to a credential store. To convert a password vault to a credential store, you need the vault's values used when initializing the vault . 
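As a sketch of such a conversion with the WildFly Elytron tool, reusing the vault values shown in the Password Vault examples later in this chapter (keystore, keystore password, salt, iteration count, and alias) and an assumed output location; confirm the exact options with the EAP_HOME /bin/elytron-tool.sh vault --help command:

./bin/elytron-tool.sh vault --keystore EAP_HOME/vault/vault.keystore --keystore-password vault22 \
    --enc-dir EAP_HOME/vault/ --salt 1234abcd --iteration 120 --alias vault \
    --location converted-vault.cstore --summary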
Note When converting a password vault, aliases in the new credential store are named in the following format based on their equivalent password vault block and attribute name: VAULT_BLOCK :: ATTRIBUTE_NAME . 4.1.15.1. Converting a single password vault to a Credential Store using the WildFly Elytron tool Convert a single password vault to a Credential Store using the WildFly Elytron tool. Procedure Convert the password vault to a credential store using the following command: For example, you can also specify the new credential store's file name and location with the --location argument: Note You can also use the --summary argument to print a summary of the management CLI commands used to convert it. Note that even if a plain text password is used, it is masked in the summary output. The default salt and iteration values are used unless they are specified in the command. 4.1.15.2. Bulk converting password vault to a credential store using the WildFly Elytron tool Bulk convert multiple password vaults to credential stores. Procedure Put the details of the vaults you want to convert into a description file in the following format: 1 salt and iteration can be omitted if you are providing a plain text password for the vault. 2 Specifies the location and file name for the converted credential store. 3 Optional: Specifies a list of optional parameters separated by semicolons ( ; ). See EAP_HOME /bin/elytron-tool.sh vault --help for a list of available parameters. For example: Run the bulk convert command with your description file from the step: For more information, use the EAP_HOME /bin/elytron-tool.sh vault --help command for a detailed listing of available options. 4.1.16. Example of using a credential store with Elytron client Clients connecting to JBoss EAP, such as Jakarta Enterprise Beans, can authenticate using Elytron Client. Users without access to a running JBoss EAP server can create and modify credential stores using the WildFly Elytron tool, and then clients can use Elytron Client to access sensitive strings inside a credential store. The following example shows you how to use a credential store in an Elytron Client configuration file. Example custom-config.xml with a Credential Store <configuration> <authentication-client xmlns="urn:elytron:client:1.2"> ... <credential-stores> <credential-store name="my_store"> 1 <protection-parameter-credentials> <credential-store-reference clear-text="pass123"/> 2 </protection-parameter-credentials> <attributes> <attribute name="location" value="/path/to/my_store.jceks"/> 3 </attributes> </credential-store> </credential-stores> ... <authentication-configurations> <configuration name="my_user"> <set-host name="localhost"/> <set-user-name name="my_user"/> <set-mechanism-realm name="ManagementRealm"/> <use-provider-sasl-factory/> <credentials> <credential-store-reference store="my_store" alias="my_user"/> 4 </credentials> </configuration> </authentication-configurations> ... </authentication-client> </configuration> 1 A name for the credential store for use within the Elytron Client configuration file. 2 The password for the credential store. 3 The path to the credential store file. 4 The credential reference for a sensitive string stored in the credential store. Additional resources Configuring client authentication using Elytron Client Creating and modifying credential stores offline with the WildFly Elytron tool . 4.2. 
Password Vault Configuration of JBoss EAP and associated applications requires potentially sensitive information, such as user names and passwords. Instead of storing the password as plain text in configuration files, the password vault feature can be used to mask the password information and store it in an encrypted keystore. Once the password is stored, references can be included in management CLI commands or applications deployed to JBoss EAP. The password vault uses the Java keystore as its storage mechanism. Password vault consists of two parts: storage and key storage. Java keystore is used to store the key, which is used to encrypt or decrypt sensitive strings in Vault storage. Important The keytool utility, provided by the Java Runtime Environment (JRE), is used for these steps. Locate the path for the file, which on Red Hat Enterprise Linux is /usr/bin/keytool . JCEKS keystore implementations differ between Java vendors so the keystore must be generated using the keytool utility from the same vendor as the JDK used. Using a keystore generated by the keytool from one vendor's JDK in a JBoss EAP 7 instance running on a JDK from a different vendor results in the following exception: java.io.IOException: com.sun.crypto.provider.SealedObjectForKeyProtector 4.2.1. Set Up a Password Vault Follow the steps below to set up and use a Password Vault. Create a directory to store the keystore and other encrypted information. The rest of this procedure assumes that the directory is EAP_HOME /vault/ . Since this directory will contain sensitive information, it should be accessible to only limited users. At a minimum, the user account under which JBoss EAP is running requires read-write access. Determine the parameters to use with the keytool utility. Decide on values for the following parameters: alias The alias is a unique identifier for the vault or other data stored in the keystore. Aliases are case-insensitive. storetype The storetype specifies the keystore type. The value jceks is recommended. keyalg The algorithm to use for encryption. Use the documentation for the JRE and operating system to see which other choices are available. keysize The size of an encryption key impacts how difficult it is to decrypt through brute force. For information on appropriate values, see the documentation distributed with the keytool utility. storepass The value of storepass is the password that is used to authenticate to the keystore so that the key can be read. The password must be at least 6 characters long and must be provided when the keystore is accessed. If this parameter is omitted, the keytool utility will prompt for it to be entered after the command has been executed. keypass The value of keypass is the password used to access the specific key and must match the value of the storepass parameter. validity The value of validity is the period (in days) for which the key will be valid. keystore The value of keystore is the file path and file name in which the keystore's values are to be stored. The keystore file is created when data is first added to it. Ensure the correct file path separator is used: / (forward slash) for Red Hat Enterprise Linux and similar operating systems, \ (backslash) for Windows Server. The keytool utility has many other options. See the documentation for the JRE or the operating system for more details. Run the keytool command, ensuring keypass and storepass contain the same value.
USD keytool -genseckey -alias vault -storetype jceks -keyalg AES -keysize 128 -storepass vault22 -keypass vault22 -keystore EAP_HOME /vault/vault.keystore This results in a keystore that has been created in the file EAP_HOME /vault/vault.keystore . It stores a single key, with the alias vault, which will be used to store encrypted strings, such as passwords, for JBoss EAP. 4.2.2. Initialize the Password Vault The password vault can be initialized either interactively, where you are prompted for each parameter's value, or non-interactively, where all parameter values are provided on the command line. Each method gives the same result, so either may be used. The following parameters will be needed: keystore URL (KEYSTORE_URL) The file system path or URI of the keystore file. The examples use EAP_HOME /vault/vault.keystore . keystore password (KEYSTORE_PASSWORD) The password used to access the keystore. Salt (SALT) The salt value is a random string of eight characters used, together with the iteration count, to encrypt the content of the keystore. keystore Alias (KEYSTORE_ALIAS) The alias by which the keystore is known. Iteration Count (ITERATION_COUNT) The number of times the encryption algorithm is run. Directory to store encrypted files (ENC_FILE_DIR) The path in which the encrypted files are to be stored. This is typically the directory containing the password vault. It is convenient but not mandatory to store all of your encrypted information in the same place as the keystore. This directory should be only accessible to limited users. At a minimum the user account under which JBoss EAP 7 is running requires read-write access. The keystore should be located in the directory you created when you set up the password vault . Note that the trailing backslash or forward slash on the directory name is required. Ensure the correct file path separator is used: / (forward slash) for Red Hat Enterprise Linux and similar operating systems, \ (backslash) for Windows Server. Vault Block (VAULT_BLOCK) The name to be given to this block in the password vault. Attribute (ATTRIBUTE) The name to be given to the attribute being stored. Security Attribute (SEC-ATTR) The password which is being stored in the password vault. To run the password vault command non-interactively, the vault script located in EAP_HOME /bin/ can be invoked with parameters for the relevant information: USD vault.sh --keystore KEYSTORE_URL --keystore-password KEYSTORE_PASSWORD --alias KEYSTORE_ALIAS --vault-block VAULT_BLOCK --attribute ATTRIBUTE --sec-attr SEC-ATTR --enc-dir ENC_FILE_DIR --iteration ITERATION_COUNT --salt SALT Example: Initializing Password Vault USD vault.sh --keystore EAP_HOME /vault/vault.keystore --keystore-password vault22 --alias vault --vault-block vb --attribute password --sec-attr 0penS3sam3 --enc-dir EAP_HOME /vault/ --iteration 120 --salt 1234abcd Example: Output ========================================================================= JBoss Vault JBOSS_HOME: EAP_HOME JAVA: java ========================================================================= Nov 09, 2015 9:02:47 PM org.picketbox.plugins.vault.PicketBoxSecurityVault init INFO: PBOX00361: Default Security Vault Implementation Initialized and Ready WFLYSEC0047: Secured attribute value has been stored in Vault. 
Please make note of the following: ******************************************** Vault Block:vb Attribute Name:password Configuration should be done as follows: VAULT::vb::password::1 ******************************************** WFLYSEC0048: Vault Configuration in WildFly configuration file: ******************************************** </extensions> <vault> <vault-option name="KEYSTORE_URL" value="EAP_HOME/vault/vault.keystore"/> <vault-option name="KEYSTORE_PASSWORD" value="MASK-5dOaAVafCSd"/> <vault-option name="KEYSTORE_ALIAS" value="vault"/> <vault-option name="SALT" value="1234abcd"/> <vault-option name="ITERATION_COUNT" value="120"/> <vault-option name="ENC_FILE_DIR" value="EAP_HOME/vault/"/> </vault><management> ... ******************************************** To run the password vault command interactively, the following steps are required: Launch the password vault command interactively. Run EAP_HOME /bin/vault.sh on Red Hat Enterprise Linux and similar operating systems or EAP_HOME \bin\vault.bat on Windows Server. Start a new interactive session by typing 0 (zero). Complete the prompted parameters. Follow the prompts to input the required parameters. Make a note of the masked password information. The masked password, salt, and iteration count are printed to standard output. Make a note of them in a secure location. They are required to add entries to the Password Vault. Access to the keystore file and these values could allow an attacker to obtain access to sensitive information in the Password Vault. Exit the interactive console Type 2 (two) to exit the interactive console. Example: Input and Output Please enter a Digit:: 0: Start Interactive Session 1: Remove Interactive Session 2: Exit 0 Starting an interactive session Enter directory to store encrypted files:EAP_HOME/vault/ Enter Keystore URL:EAP_HOME/vault/vault.keystore Enter Keystore password: vault22 Enter Keystore password again: vault22 Values match Enter 8 character salt:1234abcd Enter iteration count as a number (Eg: 44):120 Enter Keystore Alias:vault Initializing Vault Nov 09, 2015 9:24:36 PM org.picketbox.plugins.vault.PicketBoxSecurityVault init INFO: PBOX000361: Default Security Vault Implementation Initialized and Ready Vault Configuration in AS7 config file: ******************************************** ... </extensions> <vault> <vault-option name="KEYSTORE_URL" value="EAP_HOME/vault/vault.keystore"/> <vault-option name="KEYSTORE_PASSWORD" value="MASK-5dOaAVafCSd"/> <vault-option name="KEYSTORE_ALIAS" value="vault"/> <vault-option name="SALT" value="1234abcd"/> <vault-option name="ITERATION_COUNT" value="120"/> <vault-option name="ENC_FILE_DIR" value="EAP_HOME/vault/"/> </vault><management> ... ******************************************** Vault is initialized and ready for use Handshake with Vault complete + The keystore password has been masked for use in configuration files and deployments. In addition, the vault is initialized and ready to use. 4.2.3. Use a Password Vault Before passwords and other sensitive attributes can be masked and used in configuration files, JBoss EAP 7 must be made aware of the password vault which stores and decrypts them. The following command can be used to configure JBoss EAP 7 to use the password vault: Note If Microsoft Windows Server is being used, use two backslashes (\\) in the file path instead of one. For example, C:\\data\\vault\\vault.keystore . This is because a single backslash character (\) is used for character escaping. 4.2.4.
Store a Sensitive String in the Password Vault Including passwords and other sensitive strings in plaintext configuration files is a security risk. Store these strings instead in the Password Vault for improved security, where they can then be referenced in configuration files, management CLI commands and applications in their masked form. Sensitive strings can be stored in the Password Vault either interactively, where the tool prompts for each parameter's value, or non-interactively, where all the parameters' values are provided on the command line. Each method gives the same result, so either may be used. Both of these methods are invoked using the vault script. To run the password vault command non-interactively, the vault script (located in EAP_HOME /bin/ ) can be invoked with parameters for the relevant information: USD vault.sh --keystore KEYSTORE_URL --keystore-password KEYSTORE_PASSWORD --alias KEYSTORE_ALIAS --vault-block VAULT_BLOCK --attribute ATTRIBUTE --sec-attr SEC-ATTR --enc-dir ENC_FILE_DIR --iteration ITERATION_COUNT --salt SALT Note The keystore password must be given in plaintext form, not masked form. USD vault.sh --keystore EAP_HOME /vault/vault.keystore --keystore-password vault22 --alias vault --vault-block vb --attribute password --sec-attr 0penS3sam3 --enc-dir EAP_HOME /vault/ --iteration 120 --salt 1234abcd Example: Output ========================================================================= JBoss Vault JBOSS_HOME: EAP_HOME JAVA: java ========================================================================= Nov 09, 2015 9:24:36 PM org.picketbox.plugins.vault.PicketBoxSecurityVault init INFO: PBOX00361: Default Security Vault Implementation Initialized and Ready WFLYSEC0047: Secured attribute value has been stored in Vault. Please make note of the following: ******************************************** Vault Block:vb Attribute Name:password Configuration should be done as follows: VAULT::vb::password::1 ******************************************** WFLYSEC0048: Vault Configuration in WildFly configuration file: ******************************************** ... </extensions> <vault> <vault-option name="KEYSTORE_URL" value="../vault/vault.keystore"/> <vault-option name="KEYSTORE_PASSWORD" value="MASK-5dOaAVafCSd"/> <vault-option name="KEYSTORE_ALIAS" value="vault"/> <vault-option name="SALT" value="1234abcd"/> <vault-option name="ITERATION_COUNT" value="120"/> <vault-option name="ENC_FILE_DIR" value="../vault/"/> </vault><management> ... ******************************************** After invoking the vault script, a message prints to standard output, showing the vault block, attribute name, masked string, and advice about using the string in your configuration. Make note of this information in a secure location. An extract of sample output is as follows: Vault Block:vb Attribute Name:password Configuration should be done as follows: VAULT::vb::password::1 To run the password vault command interactively, the following steps are required: Launch the Password Vault command interactively. Launch the operating system's command line interface and run EAP_HOME /bin/vault.sh (on Red Hat Enterprise Linux and similar operating systems) or EAP_HOME \bin\vault.bat (on Microsoft Windows Server). Start a new interactive session by typing 0 (zero). Complete the prompted parameters. Follow the prompts to input the required parameters. These values must match those provided when the Password Vault was created. 
Note The keystore password must be given in plaintext form, not masked form. Complete the prompted parameters about the sensitive string. Enter 0 (zero) to start storing the sensitive string. Follow the prompts to input the required parameters. Make note of the information about the masked string. A message prints to standard output, showing the vault block, attribute name, masked string, and advice about using the string in the configuration. Make note of this information in a secure location. An extract of sample output is as follows: Vault Block:ds_Example1 Attribute Name:password Configuration should be done as follows: VAULT::ds_Example1::password::1 Exit the interactive console. Type 2 (two) to exit the interactive console. Example: Input and Output ========================================================================= JBoss Vault JBOSS_HOME: EAP_HOME JAVA: java ========================================================================= ********************************** **** JBoss Vault *************** ********************************** Please enter a Digit:: 0: Start Interactive Session 1: Remove Interactive Session 2: Exit 0 Starting an interactive session Enter directory to store encrypted files:EAP_HOME/vault/ Enter Keystore URL:EAP_HOME/vault/vault.keystore Enter Keystore password: Enter Keystore password again: Values match Enter 8 character salt:1234abcd Enter iteration count as a number (Eg: 44):120 Enter Keystore Alias:vault Initializing Vault Nov 09, 2015 9:24:36 PM org.picketbox.plugins.vault.PicketBoxSecurityVault init INFO: PBOX000361: Default Security Vault Implementation Initialized and Ready Vault Configuration in AS7 config file: ******************************************** ... </extensions> <vault> <vault-option name="KEYSTORE_URL" value="EAP_HOME/vault/vault.keystore"/> <vault-option name="KEYSTORE_PASSWORD" value="MASK-5dOaAVafCSd"/> <vault-option name="KEYSTORE_ALIAS" value="vault"/> <vault-option name="SALT" value="1234abcd"/> <vault-option name="ITERATION_COUNT" value="120"/> <vault-option name="ENC_FILE_DIR" value="EAP_HOME/vault/"/> </vault><management> ... ******************************************** Vault is initialized and ready for use Handshake with Vault complete Please enter a Digit:: 0: Store a secured attribute 1: Check whether a secured attribute exists 2: Remove secured attribute 3: Exit 0 Task: Store a secured attribute Please enter secured attribute value (such as password): Please enter secured attribute value (such as password) again: Values match Enter Vault Block:ds_Example1 Enter Attribute Name:password Secured attribute value has been stored in vault. Please make note of the following: ******************************************** Vault Block:ds_Example1 Attribute Name:password Configuration should be done as follows: VAULT::ds_Example1::password::1 ******************************************** Please enter a Digit:: 0: Store a secured attribute 1: Check whether a secured attribute exists 2: Remove secured attribute 3: Exit 4.2.5. Use an Encrypted Sensitive String in Configuration Any sensitive string which has been encrypted can be used in a configuration file or management CLI command in its masked form, providing expressions are allowed. To confirm if expressions are allowed within a particular subsystem, run the following management CLI command against that subsystem: From the output of running this command, look for the value of the expressions-allowed parameter. 
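A sketch of such a check, run here against an assumed datasource resource, might be:

/subsystem=datasources/data-source=ExampleDS:read-resource-description(recursive=true)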
If the value of expressions-allowed is true, then expressions can be used within the configuration of this subsystem. Use the following syntax to replace any plaintext string with the masked form. USD{VAULT::VAULT_BLOCK::ATTRIBUTE_NAME::MASKED_STRING} Example: Datasource Definition Using a Password in Masked Form ... <subsystem xmlns="urn:jboss:domain:datasources:5.0"> <datasources> <datasource jndi-name="java:jboss/datasources/ExampleDS" enabled="true" use-java-context="true" pool-name="H2DS"> <connection-url>jdbc:h2:mem:test;DB_CLOSE_DELAY=-1</connection-url> <driver>h2</driver> <pool></pool> <security> <user-name>sa</user-name> <password>USD{VAULT::ds_ExampleDS::password::1}</password> </security> </datasource> <drivers> <driver name="h2" module="com.h2database.h2"> <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class> </driver> </drivers> </datasources> </subsystem> ... 4.2.6. Use an Encrypted Sensitive String in an Application Encrypted strings stored in the password vault can be used in an application's source code. The example below is an extract of a servlet's source code, illustrating the use of a masked password in a datasource definition, instead of the plaintext password. The plaintext version is commented out so that you can see the difference. Example: Servlet Using a Vaulted Password @DataSourceDefinition( name = "java:jboss/datasources/LoginDS", user = "sa", password = "VAULT::DS::thePass::1", className = "org.h2.jdbcx.JdbcDataSource", url = "jdbc:h2:tcp://localhost/mem:test" ) /*old (plaintext) definition @DataSourceDefinition( name = "java:jboss/datasources/LoginDS", user = "sa", password = "sa", className = "org.h2.jdbcx.JdbcDataSource", url = "jdbc:h2:tcp://localhost/mem:test" )*/ 4.2.7. Check if a Sensitive String is in the Password Vault Before attempting to store or use a sensitive string in the Password Vault, it can be useful to first confirm if it is already stored. This check can be done either interactively, where the user is prompted for each parameter's value, or non-interactively, where all parameters' values are provided on the command line. Each method gives the same result, so either may be used. Both of these methods are invoked using the vault script. Use the non-interactive method to provide all parameters' values at once. For a description of all parameters, see Initialize the Password Vault . To run the password vault command non-interactively, the vault script located in EAP_HOME /bin/ can be invoked with parameters for the relevant information: USD vault.sh --keystore KEYSTORE_URL --keystore-password KEYSTORE_PASSWORD --alias KEYSTORE_ALIAS --check-sec-attr --vault-block VAULT_BLOCK --attribute ATTRIBUTE --enc-dir ENC_FILE_DIR --iteration ITERATION_COUNT --salt SALT Substitute the placeholder values with the actual values. The values for parameters KEYSTORE_URL , KEYSTORE_PASSWORD and KEYSTORE_ALIAS must match those provided when the password vault was created. Note The keystore password must be given in plaintext form, not masked form. If the sensitive string is stored in the vault block specified, the following message will be displayed: Password already exists. If the value is not stored in the specified block, the following message will be displayed: Password doesn't exist. To run the password vault command interactively, the following steps are required: Launch the password vault command interactively. Run EAP_HOME /bin/vault.sh (on Red Hat Enterprise Linux and similar operating systems) or EAP_HOME \bin\vault.bat (on Windows Server).
Start a new interactive session by typing 0 (zero). Complete the prompted parameters. Follow the prompts to input the required authentication parameters. These values must match those provided when the password vault was created. Note When prompted for authentication, the keystore password must be given in plaintext form, not masked form. Enter 1 (one) to select Check whether a secured attribute exists . Enter the name of the vault block in which the sensitive string is stored. Enter the name of the sensitive string to be checked. If the sensitive string is stored in the vault block specified, a confirmation message like the following will be output: A value exists for (VAULT_BLOCK, ATTRIBUTE) If the sensitive string is not stored in the specified block, a message like the following will be output: No value has been store for (VAULT_BLOCK, ATTRIBUTE) Example: Check For a Sensitive String Interactively ========================================================================= JBoss Vault JBOSS_HOME: EAP_HOME JAVA: java ========================================================================= ********************************** **** JBoss Vault *************** ********************************** Please enter a Digit:: 0: Start Interactive Session 1: Remove Interactive Session 2: Exit 0 Starting an interactive session Enter directory to store encrypted files:EAP_HOME/vault Enter Keystore URL:EAP_HOME/vault/vault.keystore Enter Keystore password: Enter Keystore password again: Values match Enter 8 character salt:1234abcd Enter iteration count as a number (Eg: 44):120 Enter Keystore Alias:vault Initializing Vault Nov 09, 2015 9:24:36 PM org.picketbox.plugins.vault.PicketBoxSecurityVault init INFO: PBOX000361: Default Security Vault Implementation Initialized and Ready Vault Configuration in AS7 config file: ******************************************** ... </extensions> <vault> <vault-option name="KEYSTORE_URL" value="EAP_HOME/vault/vault.keystore"/> <vault-option name="KEYSTORE_PASSWORD" value="MASK-5dOaAVafCSd"/> <vault-option name="KEYSTORE_ALIAS" value="vault"/> <vault-option name="SALT" value="1234abcd"/> <vault-option name="ITERATION_COUNT" value="120"/> <vault-option name="ENC_FILE_DIR" value="EAP_HOME/vault/"/> </vault><management> ... ******************************************** Vault is initialized and ready for use Handshake with Vault complete Please enter a Digit:: 0: Store a secured attribute 1: Check whether a secured attribute exists 2: Remove secured attribute 3: Exit 1 Task: Verify whether a secured attribute exists Enter Vault Block:vb Enter Attribute Name:password A value exists for (vb, password) Please enter a Digit:: 0: Store a secured attribute 1: Check whether a secured attribute exists 2: Remove secured attribute 3: Exit 4.2.8. Remove a Sensitive String from the Password Vault For security reasons it is best to remove sensitive strings from the Password Vault when they are no longer required. For example, if an application is being decommissioned, any sensitive strings used in datasource definitions should be removed at the same time. Important As a prerequisite, before removing a sensitive string from the Password Vault, confirm if it is used in the configuration of JBoss EAP. This operation can be done either interactively, where the user is prompted for each parameter's value, or non-interactively, where all parameters' values are provided on the command line. Each method gives the same result, so either may be used. 
Both of these methods are invoked using the vault script. Use the non-interactive method to provide all parameters' values at once. For a description of all parameters, see Initialize the Password Vault . To run the password vault command non-interactively, the vault script (located in EAP_HOME /bin/ ) can be invoked with parameters for the relevant information: USD vault.sh --keystore KEYSTORE_URL --keystore-password KEYSTORE_PASSWORD --alias KEYSTORE_ALIAS --remove-sec-attr --vault-block VAULT_BLOCK --attribute ATTRIBUTE --enc-dir ENC_FILE_DIR --iteration ITERATION_COUNT --salt SALT Substitute the placeholder values with the actual values. The values for parameters KEYSTORE_URL , KEYSTORE_PASSWORD and KEYSTORE_ALIAS must match those provided when the password vault was created. Note The keystore password must be given in plaintext form, not masked form. If the sensitive string is successfully removed, a confirmation message like the following will be displayed: Secured attribute [VAULT_BLOCK::ATTRIBUTE] has been successfully removed from vault If the sensitive string is not removed, a message like the following will be displayed: Secured attribute [VAULT_BLOCK::ATTRIBUTE] was not removed from vault, check whether it exist Example: Output USD ./vault.sh --keystore EAP_HOME /vault/vault.keystore --keystore-password vault22 --alias vault --remove-sec-attr --vault-block vb --attribute password --enc-dir EAP_HOME /vault/ --iteration 120 --salt 1234abcd ========================================================================= JBoss Vault JBOSS_HOME: EAP_HOME JAVA: java ========================================================================= Dec 23, 2015 1:54:24 PM org.picketbox.plugins.vault.PicketBoxSecurityVault init INFO: PBOX000361: Default Security Vault Implementation Initialized and Ready Secured attribute [vb::password] has been successfully removed from vault Remove a Sensitive String Interactively To run the password vault command interactively, the following steps are required: Launch the password vault command interactively. Run EAP_HOME /bin/vault.sh (on Red Hat Enterprise Linux and similar operating systems) or EAP_HOME \bin\vault.bat (on Microsoft Windows Server). Start a new interactive session by typing 0 (zero). Complete the prompted parameters. Follow the prompts to input the required authentication parameters. These values must match those provided when the password vault was created. Note When prompted for authentication, the keystore password must be given in plaintext form, not masked form. Enter 2 (two) to choose option Remove secured attribute. Enter the name of the vault block in which the sensitive string is stored. Enter the name of the sensitive string to be removed.
If the sensitive string is successfully removed, a confirmation message like the following will be displayed: Secured attribute [VAULT_BLOCK::ATTRIBUTE] has been successfully removed from vault If the sensitive string is not removed, a message like the following will be displayed: Secured attribute [VAULT_BLOCK::ATTRIBUTE] was not removed from vault, check whether it exist Example: Output ********************************** **** JBoss Vault *************** ********************************** Please enter a Digit:: 0: Start Interactive Session 1: Remove Interactive Session 2: Exit 0 Starting an interactive session Enter directory to store encrypted files:EAP_HOME/vault/ Enter Keystore URL:EAP_HOME/vault/vault.keystore Enter Keystore password: Enter Keystore password again: Values match Enter 8 character salt:1234abcd Enter iteration count as a number (Eg: 44):120 Enter Keystore Alias:vault Initializing Vault Dec 23, 2014 1:40:56 PM org.picketbox.plugins.vault.PicketBoxSecurityVault init INFO: PBOX000361: Default Security Vault Implementation Initialized and Ready Vault Configuration in configuration file: ******************************************** ... </extensions> <vault> <vault-option name="KEYSTORE_URL" value="EAP_HOME/vault/vault.keystore"/> <vault-option name="KEYSTORE_PASSWORD" value="MASK-5dOaAVafCSd"/> <vault-option name="KEYSTORE_ALIAS" value="vault"/> <vault-option name="SALT" value="1234abcd"/> <vault-option name="ITERATION_COUNT" value="120"/> <vault-option name="ENC_FILE_DIR" value="EAP_HOME/vault/"/> </vault><management> ... ******************************************** Vault is initialized and ready for use Handshake with Vault complete Please enter a Digit:: 0: Store a secured attribute 1: Check whether a secured attribute exists 2: Remove secured attribute 3: Exit 2 Task: Remove secured attribute Enter Vault Block:vb Enter Attribute Name:password Secured attribute [vb::password] has been successfully removed from vault 4.2.9. Configure Red Hat JBoss Enterprise Application Platform to Use a Custom Implementation of the Password Vault In addition to using the provided password vault implementation, a custom implementation of SecurityVault can also be used. Important As a prerequisite, ensure that the password vault has been initialized. For more information, see Initialize the Password Vault . To use a custom implementation for the password vault: Create a class that implements the interface SecurityVault . Create a module containing the class from the step, and specify a dependency on org.picketbox where the interface is SecurityVault . Enable the custom password vault in the JBoss EAP configuration by adding the vault element with the following attributes: code - The fully qualified name of class that implements SecurityVault . module - The name of the module that contains the custom class. Optionally, the vault-options parameters can be used to initialize the custom class for a password vault. Example: Use vault-options Parameters to Initialize the Custom Class 4.2.10. Obtain Keystore Password From External Source The EXT , EXTC , CMD , CMDC or CLASS methods can be used in vault configuration for obtaining the Java keystore password. <vault-option name="KEYSTORE_PASSWORD" value=" METHOD_TO_OBTAIN_PASSWORD "/> The description for the methods are listed as: {EXT}... Refers to the exact command, where the ... is the exact command. 
For example: {EXT}/usr/bin/getmypassword --section 1 --query company runs the /usr/bin/getmypassword command, which displays the password on standard output, and uses that output as the password for the Security Vault keystore. In this example, the command is using two options: --section 1 and --query company . A minimal example of such a script is provided after the command reference list at the end of this section. {EXTC[:expiration_in_millis]}... Refers to the exact command, where the ... is the exact command line that is passed to the Runtime.exec(String) method to execute a platform command. The first line of the command output is used as the password. The EXTC variant caches the passwords for expiration_in_millis milliseconds. Default cache expiration is 0 = infinity . For example: {EXTC:120000}/usr/bin/getmypassword --section 1 --query company checks whether the cache contains the /usr/bin/getmypassword output; if it does, that output is used. If it does not, the command is run, its output is stored in the cache, and then used. In this example, the cache expires in 2 minutes, that is 120000 milliseconds. {CMD}... or {CMDC[:expiration_in_millis]}... The general command is a string delimited by , (comma) where the first part is the actual command and further parts represent the parameters. The comma can be backslashed to keep it as a part of the parameter. For example, {CMD}/usr/bin/getmypassword,--section,1,--query,company . {CLASS[@jboss_module_spec]}classname[:ctorargs] Where the optional [:ctorargs] string, delimited from the classname by the : (colon), is passed to the classname constructor ( ctor ). The ctorargs is a comma delimited list of strings. For example, {CLASS@org.test.passwd}org.test.passwd.ExternamPassworProvider . In this example, the org.test.passwd.ExternamPassworProvider class is loaded from the org.test.passwd module and uses the toCharArray() method to get the password. If toCharArray() is not available, the toString() method is used. The org.test.passwd.ExternamPassworProvider class must have the default constructor. | [
"/subsystem=elytron/credential-store",
"/subsystem=elytron/secret-key-credential-store",
"data-source add ... --user-name=db_user --password=StrongPassword",
"data-source add ... --user-name=db_user --credential-reference={store=exampleKeyStoreCredentialStore, alias=passwordCredentialAlias}",
"/subsystem=elytron/credential-store= <name_of_credential_store> :add(path=\" <path_to_store_file> \", relative-to= <base_path_to_store_file> , credential-reference={clear-text= <store_password> }, create=true)",
"/subsystem=elytron/credential-store=exampleKeyStoreCredentialStore:add(path=\"exampleKeyStoreCredentialStore.jceks\", relative-to=jboss.server.data.dir, credential-reference={clear-text=password}, create=true) {\"outcome\" => \"success\"}",
"/profile= <profile_name> /subsystem=elytron/credential-store= <name_of_credential_store> :add(path= <absolute_path_to_store_keystore> ,credential-reference={clear-text=\" <store_password> \"},create=false,modifiable=false)",
"/profile=full-ha/subsystem=elytron/credential-store=exampleCredentialStoreDomain:add(path=/usr/local/etc/example-cred-store.cs,credential-reference={clear-text=\"password\"},create=false,modifiable=false)",
"/profile= <profile_name> /subsystem=elytron/credential-store= <name_of_credential_store> :add(path= <path_to_store_file> ,credential-reference={clear-text=\" <store_password> \"})",
"/profile=full-ha/subsystem=elytron/credential-store=exampleCredentialStoreHA:add(path=/usr/local/etc/example-cred-store-ha.cs, credential-reference={clear-text=\"password\"})",
"/host= <host_controller_name> /subsystem=elytron/credential-store= <name_of_credential_store> :add(path= <path_to_store_file> ,credential-reference={clear-text=\" <store_password> \"})",
"/host=master/subsystem=elytron/credential-store=exampleCredentialStoreHost:add(path=/usr/local/etc/example-cred-store-host.cs, credential-reference={clear-text=\"password\"})",
"/subsystem=elytron/secret-key-credential-store= <name_of_credential_store> :add(path=\" <path_to_the_credential_store> \", relative-to= <path_to_store_file> )",
"/subsystem=elytron/secret-key-credential-store=examplePropertiesCredentialStore:add(path=examplePropertiesCredentialStore.cs, relative-to=jboss.server.config.dir) {\"outcome\" => \"success\"}",
"/subsystem=elytron/credential-store= <name_of_credential_store> :add-alias(alias= <alias> , secret-value= <secret-value> )",
"/subsystem=elytron/credential-store=exampleKeyStoreCredentialStore:add-alias(alias=passwordCredentialAlias, secret-value=StrongPassword) {\"outcome\" => \"success\"}",
"/subsystem=elytron/credential-store= <name_of_credential_store> :read-aliases()",
"/subsystem=elytron/credential-store=exampleKeyStoreCredentialStore:read-aliases() { \"outcome\" => \"success\", \"result\" => [\"passwordcredentialalias\"] }",
"/subsystem=elytron/credential-store= <name_of_credential_store> :generate-secret-key(alias= <alias> , key-size= <128_or_192> )",
"/subsystem=elytron/credential-store=exampleKeyStoreCredentialStore:generate-secret-key(alias=secretKeyCredentialAlias)",
"/subsystem=elytron/credential-store= <credential_store> :read-aliases()",
"/subsystem=elytron/credential-store=exampleKeyStoreCredentialStore:read-aliases() { \"outcome\" => \"success\", \"result\" => [ \"secretkeycredentialalias\" ] }",
"/subsystem=elytron/secret-key-credential-store= <name_of_the_properties_credential_store> :generate-secret-key(alias= <alias> , key-size= <128_or_192> )",
"/subsystem=elytron/secret-key-credential-store=examplePropertiesCredentialStore:generate-secret-key(alias=secretKeyCredentialAlias) {\"outcome\" => \"success\"}",
"/subsystem=elytron/secret-key-credential-store= <name_of_the_properties_credential_store> :read-aliases()",
"/subsystem=elytron/secret-key-credential-store=examplePropertiesCredentialStore:read-aliases() { \"outcome\" => \"success\", \"result\" => [ \"secretkeycredentialalias\", \"key\" ] }",
"history --disable",
"/subsystem=elytron/secret-key-credential-store= <name_of_credential_store> :import-secret-key(alias= <alias> , key=\" <secret_key> \")",
"/subsystem=elytron/secret-key-credential-store=examplePropertiesCredentialStore:import-secret-key(alias=imported, key=\"RUxZAUs+Y1CzEPw0g2AHHOZ+oTKhT9osSabWQtoxR+O+42o11g==\")",
"history --enable",
"/subsystem=elytron/credential-store= <name_of_credential_store> :read-aliases()",
"/subsystem=elytron/credential-store=exampleKeyStoreCredentialStore:read-aliases() { \"outcome\" => \"success\", \"result\" => [ \"passwordcredentialalias\", \"secretkeycredentialalias\" ] }",
"/subsystem=elytron/secret-key-credential-store= <name_of_credential_store> :read-aliases()",
"/subsystem=elytron/secret-key-credential-store=examplePropertiesCredentialStore:read-aliases() { \"outcome\" => \"success\", \"result\" => [ \"secretkeycredentialalias\", \"key\" ] }",
"/subsystem=elytron/credential-store= <name_of_credential_store> :export-secret-key(alias= <alias> )",
"/subsystem=elytron/credential-store=exampleKeyStoreCredentialStore:export-secret-key(alias=secretKeyCredentialAlias) { \"outcome\" => \"success\", \"result\" => {\"key\" => \"RUxZAUui+8JkoDCE6mFyA3cCIbSAZaXq5wgYejj1scYgdDqWiw==\"} }",
"/subsystem=elytron/secret-key-credential-store= <name_of_credential_store> :export-secret-key(alias= <alias> )",
"/subsystem=elytron/secret-key-credential-store=examplePropertiesCredentialStore:export-secret-key(alias=secretkeycredentialalias) { \"outcome\" => \"success\", \"result\" => {\"key\" => \"RUxZAUtxXcYvz0aukZu+odOynIr0ByLhC72iwzlJsi+ZPmONgA==\"} }",
"/subsystem=elytron/credential-store= <name_of_credential_store> :remove-alias(alias= <alias> , entry-type= <credential_type> )",
"/subsystem=elytron/credential-store=exampleKeyStoreCredentialStore:remove-alias(alias=passwordCredentialAlias) { \"outcome\" => \"success\", \"response-headers\" => {\"warnings\" => [{ \"warning\" => \"Update dependent resources as alias 'passwordCredentialAlias' does not exist anymore\", \"level\" => \"WARNING\", \"operation\" => { \"address\" => [ (\"subsystem\" => \"elytron\"), (\"credential-store\" => \"exampleKeyStoreCredentialStore\") ], \"operation\" => \"remove-alias\" } }]} }",
"/subsystem=elytron/credential-store=exampleKeyStoreCredentialStore:remove-alias(alias=secretKeyCredentialAlias, entry-type=SecretKeyCredential) { \"outcome\" => \"success\", \"response-headers\" => {\"warnings\" => [{ \"warning\" => \"Update dependent resources as alias 'secretKeyCredentialAl ias' does not exist anymore\", \"level\" => \"WARNING\", \"operation\" => { \"address\" => [ (\"subsystem\" => \"elytron\"), (\"credential-store\" => \"exampleKeyStoreCredentialStore\") ], \"operation\" => \"remove-alias\" } }]} }",
"/subsystem=elytron/credential-store= <name_of_credential_store> :read-aliases()",
"/subsystem=elytron/credential-store=exampleKeyStoreCredentialStore:read-aliases() { \"outcome\" => \"success\", \"result\" => [] }",
"/subsystem=elytron/secret-key-credential-store= <name_of_credential_store> :remove-alias(alias= <alias> )",
"/subsystem=elytron/secret-key-credential-store=examplePropertiesCredentialStore:remove-alias(alias=secretKeyCredentialAlias) { \"outcome\" => \"success\", \"response-headers\" => {\"warnings\" => [{ \"warning\" => \"Update dependent resources as alias 'secretKeyCredentialAlias' does not exist anymore\", \"level\" => \"WARNING\", \"operation\" => { \"address\" => [ (\"subsystem\" => \"elytron\"), (\"secret-key-credential-store\" => \"examplePropertiesCredentialSt ore\") ], \"operation\" => \"remove-alias\" } }]} }",
"/subsystem=elytron/secret-key-credential-store= <name_of_credential_store> :read-aliases()",
"/subsystem=elytron/secret-key-credential-store=examplePropertiesCredentialStore:read-aliases() { \"outcome\" => \"success\", \"result\" => [] }",
"EAP_HOME /bin/elytron-tool.sh credential-store --create --location \" <path_to_store_file> \" --password <store_password>",
"EAP_HOME /bin/elytron-tool.sh credential-store --create --location \"../cred_stores/example-credential-store.jceks\" --password storePassword Credential Store has been successfully created",
"keytool -genkeypair -alias <key_pair_alias> -keyalg <key_algorithm> -keysize <key_size> -storepass <key_pair_and_keystore_password> -keystore <path_to_keystore> -storetype BCFKS -keypass <key_pair_and_keystore_password>",
"keytool -genseckey -alias <key_alias> -keyalg <key_algorithm> -keysize <key_size> -keystore <path_to_keystore> -storetype BCFKS -storepass <key_and_keystore_password> -keypass <key_and_keystore_password>",
"EAP_HOME/bin/elytron-tool.sh credential-store -c -a <alias> -x <alias_password> -p <key_and_keystore_password> -l <path_to_keystore> -u \"keyStoreType=BCFKS;external=true;keyAlias= <key_alias> ;externalPath= <path_to_credential_store> \"",
"EAP_HOME /bin/elytron-tool.sh credential-store --create --location \" <path_to_store_file> \" --type PropertiesCredentialStore",
"bin/elytron-tool.sh credential-store --create --location=standalone/configuration/properties-credential-store.cs --type PropertiesCredentialStore Credential Store has been successfully created",
"EAP_HOME /bin/elytron-tool.sh credential-store --location \" <path_to_store_file> \" --password <store_password> --add <alias> --secret <sensitive_string>",
"EAP_HOME /bin/elytron-tool.sh credential-store --location \"../cred_stores/example-credential-store.jceks\" --password storePassword --add examplePasswordCredential --secret speci@l_db_paUSDUSD_01 Alias \"examplePasswordCredential\" has been successfully stored",
"EAP_HOME /bin/elytron-tool.sh credential-store --generate-secret-key=example --location= <path_to_the_credential_store> --password <store_password>",
"EAP_HOME /bin/elytron-tool.sh credential-store --generate-secret-key=example --location \"../cred_stores/example-credential-store.jceks\" --password storePassword Alias \"example\" has been successfully stored",
"EAP_HOME /bin/elytron-tool.sh credential-store --import-secret-key=imported --location= <path_to_credential_store> --password= <store_password>",
"EAP_HOME /bin/elytron-tool.sh credential-store --import-secret-key=imported --location=../cred_stores/example-credential-store.jceks --password=storePassword",
"EAP_HOME /bin/elytron-tool.sh credential-store --location \" <path_to_store_file> \" --password <store_password> --aliases",
"EAP_HOME /bin/elytron-tool.sh credential-store --location \"../cred_stores/example-credential-store.jceks\" --password storePassword --aliases Credential store contains following aliases: examplepasswordcredential example",
"EAP_HOME /bin/elytron-tool.sh credential-store --location \" <path_to_store_file> \" --password <store_password> --exists <alias>",
"EAP_HOME /bin/elytron-tool.sh credential-store --location \"../cred_stores/example-credential-store.jceks\" --password storePassword --exists examplepasswordcredential Alias \"examplepasswordcredential\" exists",
"EAP_HOME /bin/elytron-tool.sh credential-store --export-secret-key= <alias> --location= <path_to_credential_store> --password=storePassword",
"EAP_HOME /bin/elytron-tool.sh credential-store --export-secret-key=example --location=../cred_stores/example-credential-store.jceks --password=storePassword Exported SecretKey for alias example=RUxZAUtBiAnoLP1CA+i6DtcbkZHfybBJxPeS9mlVOmEYwjjmEA==",
"EAP_HOME /bin/elytron-tool.sh credential-store --location \" <path_to_store_file> \" --password <store_password> --remove <alias>",
"EAP_HOME /bin/elytron-tool.sh credential-store --location \"../cred_stores/example-credential-store.jceks\" --password storePassword --remove examplepasswordcredential Alias \"examplepasswordcredential\" has been successfully removed",
"EAP_HOME /bin/elytron-tool.sh credential-store --generate-secret-key=example --location \" <path_to_the_credential_store> \" --type PropertiesCredentialStore",
"EAP_HOME /bin/elytron-tool.sh credential-store --generate-secret-key=example --location \"standalone/configuration/properties-credential-store.cs\" --type PropertiesCredentialStore Alias \"example\" has been successfully stored",
"EAP_HOME /bin/elytron-tool.sh credential-store --import-secret-key=imported --location= <path_to_credential_store> --type PropertiesCredentialStore",
"EAP_HOME /bin/elytron-tool.sh credential-store --import-secret-key=imported --location \"standalone/configuration/properties-credential-store.cs\" --type PropertiesCredentialStore",
"EAP_HOME /bin/elytron-tool.sh credential-store --location \" <path_to_store_file> \" --aliases --type PropertiesCredentialStore",
"EAP_HOME /bin/elytron-tool.sh credential-store --location \"standalone/configuration/properties-credential-store.cs\" --aliases --type PropertiesCredentialStore Credential store contains following aliases: example",
"EAP_HOME /bin/elytron-tool.sh credential-store --export-secret-key= <alias> --location \" <path_to_credential_store> \" --type PropertiesCredentialStore",
"EAP_HOME /bin/elytron-tool.sh credential-store --export-secret-key=example --location \"standalone/configuration/properties-credential-store.cs\" --type PropertiesCredentialStore Exported SecretKey for alias example=RUxZAUt1EZM7PsYRgMGypkGirSel+5Eix4aSgwop6jfxGYUQaQ==",
"EAP_HOME /bin/elytron-tool.sh credential-store --location \" <path_to_store_file> \" --remove <alias> --type PropertiesCredentialStore",
"EAP_HOME /bin/elytron-tool.sh credential-store --location \"standalone/configuration/properties-credential-store.cs\" --remove example --type PropertiesCredentialStore Alias \"example\" has been successfully removed",
"/subsystem=elytron/credential-store= <store_name> :add(location=\" <path_to_store_file> \",credential-reference={clear-text= <store_password> })",
"/subsystem=elytron/credential-store=my_store:add(location=\"../cred_stores/example-credential-store.jceks\",credential-reference={clear-text=storePassword})",
"EAP_HOME /bin/elytron-tool.sh credential-store --location= <path_to_store_file> --generate-key-pair example --algorithm RSA --size 3072",
"EAP_HOME /bin/elytron-tool.sh credential-store --import-key-pair example --private-key-location /home/user/.ssh/id_rsa --location= <path_to_store_file>",
"EAP_HOME /bin/elytron-tool.sh credential-store --location= <path_to_store_file> --export-key-pair-public-key example Credential store password: Confirm credential store password: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMfncZuHmR7uglb0M96ieArRFtp42xPn9+ugukbY8dyjOXoi cZrYRyy9+X68fylEWBMzyg+nhjWkxJlJ2M2LAGY=",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <configuration> <authentication-client xmlns=\"urn:elytron:client:1.6\"> <credential-stores> <credential-store name=\" USD{credential_store_name} \"> <protection-parameter-credentials> <clear-password password=\" USD{credential_store_password} \"/> </protection-parameter-credentials> <attributes> <attribute name=\"path\" value=\" USD{path_to_credential_store} \"/> </attributes> </credential-store> </credential-stores> <authentication-rules> <rule use-configuration=\" USD{configuration_file_name} \"/> </authentication-rules> <authentication-configurations> <configuration name=\" USD{configuration_file_name} \"> <credentials> <credential-store-reference store=\" USD{credential_store_name} \" alias=\" USD{alias_of_key_pair} \"/> </credentials> </configuration> </authentication-configurations> </authentication-client> </configuration>",
"EAP_HOME /bin/elytron-tool.sh mask --salt <salt> --iteration <iteration_count> --secret <password>",
"EAP_HOME /bin/elytron-tool.sh mask --salt 12345678 --iteration 123 --secret supersecretstorepassword MASK-8VzWsSNwBaR676g8ujiIDdFKwSjOBHCHgnKf17nun3v;12345678;123",
"USD{ENC:: RESOLVER_NAME : ENCRYPTED_STRING }",
"USD{ENC::initialresolver:RUxZAUMQE+L5zx9LmCRLyh5fjdfl1WM7lhfthKjeoEU+x+RMi6s=}",
"USD{ENC:: ENCRYPTED_STRING }",
"USD{ENC::RUxZAUMQE+L5zx9LmCRLyh5fjdfl1WM7lhfthKjeoEU+x+RMi6s=}",
"/subsystem=mail/mail-session=*/:read-resource-description(recursive=true,access-control=none) { \"outcome\"=>\"success\", \"result\"=>[{ \"from\"=>{ \"expression-allowed\"=>true, }] }",
"/subsystem=elytron/expression=encryption:add(resolvers=[{name= <name_of_the_resolver> , credential-store= <name_of_credential_store> , secret-key= <secret_key_alias> }])",
"/subsystem=elytron/expression=encryption:add(resolvers=[{name=exampleResolver, credential-store=examplePropertiesCredentialStore, secret-key=key}])",
"/subsystem=elytron/expression=encryption:list-add(name=resolvers, value={name= <name_of_the_resolver> , credential-store= <name_of_credential_store> , secret-key= <secret_key_alias> })",
"/subsystem=elytron/expression=encryption:list-add(name=resolvers,value={name=exampleResolver, credential-store=examplePropertiesCredentialStore, secret-key=key}) { \"outcome\" => \"success\", \"response-headers\" => { \"operation-requires-reload\" => true, \"process-state\" => \"reload-required\" } }",
"reload",
"history --disable",
"/subsystem=elytron/expression=encryption:create-expression(resolver= <existing_resolver> , clear-text= <sensitive_string_to_protect> )",
"/subsystem=elytron/expression=encryption:create-expression(resolver=exampleResolver, clear-text=TestPassword) { \"outcome\" => \"success\", \"result\" => {\"expression\" => \"USD{ENC::exampleResolver:RUxZAUMQgtpG7oFlHR2j1Gkn3GKIHff+HR8GcMX1QXHvx2uGurI=}\"} }",
"history --enable",
"credential-reference={store= <store_name> , alias= <alias> }",
"data-source add --name=example_data_source --jndi-name=java:/example_data_source --driver-name=h2 --connection-url=jdbc:h2:mem:test;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE --user-name=db_user --credential-reference={store=exampleKeyStoreCredentialStore, alias=passwordCredentialAlias} 16:17:23,024 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-2) WFLYJCA0001: Bound data source [java:/example_data_source]",
"/subsystem=elytron/credential-store= <name_of_credential_store> :add(path= <path_to_the_credential_store> , create=true, modifiable=true, credential-reference={clear-text= <encrypted_expression> })",
"/subsystem=elytron/credential-store=secureKeyStoreCredentialStore:add(path=\"secureKeyStoreCredentialStore.jceks\", relative-to=jboss.server.data.dir, create=true, modifiable=true, credential-reference={clear-text=USD{ENC::exampleResolver:RUxZAUMQgtpG7oFlHR2j1Gkn3GKIHff+HR8GcMX1QXHvx2uGurI=}}) {\"outcome\" => \"success\"}",
"/subsystem=elytron/key-store=exampleKS:add(relative-to=jboss.server.config.dir, path=example.keystore, type=JCEKS, credential-reference={store=exampleKeyStoreCredentialStore, alias=myNewAlias, clear-text=myNewPassword}) { \"outcome\" => \"success\", \"result\" => {\"credential-store-update\" => { \"status\" => \"new-entry-added\", \"new-alias\" => \"myNewAlias\" }} }",
"/subsystem=elytron/key-store=exampleKS:write-attribute(name=credential-reference.clear-text,value=myUpdatedPassword) { \"outcome\" => \"success\", \"result\" => {\"credential-store-update\" => {\"status\" => \"existing-entry-updated\"}}, \"response-headers\" => { \"operation-requires-reload\" => true, \"process-state\" => \"reload-required\" } }",
"keytool -keystore NONE -storetype PKCS11 -storepass <keystore_password> -genseckey -alias <key_alias> -keyalg <key_algorithm> -keysize <key_size>",
"/subsystem=elytron/credential-store= <store_name> :add(modifiable=true, implementation-properties={\"keyStoreType\"=>\"PKCS11\", \"external\"=>\"true\", \"keyAlias\"=>\" <key_alias> \", externalPath=\" <path_to_JCEKS_file> \"}, credential-reference={clear-text=\" <keystore_password> \"}, create=true)",
"/subsystem=elytron/credential-store= <store_name> :add-alias(alias=\" <alias> \", secret-value=\" <sensitive_string> \")",
"/subsystem=elytron/credential-store= <store_name> :read-aliases()",
"keytool -genseckey -alias <key_alias> -keyalg <key_algorithm> -keysize <key_size> -keystore <path_to_keystore> -storetype BCFKS -storepass <key_and_keystore_password> -keypass <key_and_keystore_password>",
"/subsystem=elytron/credential-store= <BCFKS_credential_store> :add(relative-to=jboss.server.config.dir,credential-reference={clear-text= <key_and_keystore_password> },implementation-properties={keyAlias= <key_alias> ,external=true,externalPath= <path_to_credential_store> ,keyStoreType=BCFKS},create=true,location= <path_to_keystore> ,modifiable=true)",
"/subsystem=elytron/credential-store= <BCFKS_credential_store> :add-alias(alias=\" <alias> \", secret-value=\" <sensitive_string> \")",
"/subsystem=elytron/credential-store= <BCFKS_credential_store> :read-aliases()",
"module add --name=org.jboss.customcredstore --resources=/path/to/customcredstoreprovider.jar --dependencies=org.wildfly.security.elytron --slot=main",
"/subsystem=elytron/provider-loader=myCustomLoader:add(class-names=[org.wildfly.security.mycustomcredstore.CustomElytronProvider],module=org.jboss.customcredstore)",
"/subsystem=elytron/credential-store=my_store:add(providers=myCustomLoader,type=CustomKeyStorePasswordStore,location=\"cred_stores/my_store.jceks\",relative-to=jboss.server.data.dir,credential-reference={clear-text=supersecretstorepassword},create=true)",
"/subsystem=elytron/credential-store=my_store:add(providers=myCustomLoader,other-providers=myCustomLoader2,type=CustomKeyStorePasswordStore,location=\"cred_stores/my_store.jceks\",relative-to=jboss.server.data.dir,credential-reference={clear-text=supersecretstorepassword},create=true)",
"credential-reference={clear-text=\"{EXT}/usr/bin/getThePasswordScript.sh par1 par2\", type=\"COMMAND\"}",
"credential-reference={clear-text=\"{CMD}/usr/bin/getThePasswordScript.sh par1,par2\", type=\"COMMAND\"}",
"credential-reference={clear-text=\"MASK-MASKED_VALUE;SALT;ITERATION\"}",
"credential-reference={clear-text=\"MASK-NqMznhSbL3lwRpDmyuqLBW==;12345678;123\"}",
"/subsystem=elytron/credential-store=exampleCS:add(location=\"cred_stores/exampleCS.jceks\", relative-to=jboss.server.data.dir, create=true, credential-reference={store=cred-store, alias=pwd})",
"/subsystem=elytron/secret-key-credential-store= <name_of_credential_store> :add(path=\" <path_to_the_credential_store> \", relative-to= <path_to_store_file> )",
"/subsystem=elytron/secret-key-credential-store=examplePropertiesCredentialStore:add(path=examplePropertiesCredentialStore.cs, relative-to=jboss.server.config.dir) {\"outcome\" => \"success\"}",
"/subsystem=elytron/expression=encryption:add(resolvers=[{name= <name_of_the_resolver> , credential-store= <name_of_credential_store> , secret-key= <secret_key_alias> }])",
"/subsystem=elytron/expression=encryption:add(resolvers=[{name=exampleResolver, credential-store=examplePropertiesCredentialStore, secret-key=key}])",
"/subsystem=elytron/expression=encryption:list-add(name=resolvers, value={name= <name_of_the_resolver> , credential-store= <name_of_credential_store> , secret-key= <secret_key_alias> })",
"/subsystem=elytron/expression=encryption:list-add(name=resolvers,value={name=exampleResolver, credential-store=examplePropertiesCredentialStore, secret-key=key}) { \"outcome\" => \"success\", \"response-headers\" => { \"operation-requires-reload\" => true, \"process-state\" => \"reload-required\" } }",
"reload",
"history --disable",
"/subsystem=elytron/expression=encryption:create-expression(resolver= <existing_resolver> , clear-text= <sensitive_string_to_protect> )",
"/subsystem=elytron/expression=encryption:create-expression(resolver=exampleResolver, clear-text=TestPassword) { \"outcome\" => \"success\", \"result\" => {\"expression\" => \"USD{ENC::exampleResolver:RUxZAUMQgtpG7oFlHR2j1Gkn3GKIHff+HR8GcMX1QXHvx2uGurI=}\"} }",
"history --enable",
"/subsystem=elytron/credential-store= <name_of_credential_store> :add(path= <path_to_the_credential_store> , create=true, modifiable=true, credential-reference={clear-text= <encrypted_expression> })",
"/subsystem=elytron/credential-store=secureKeyStoreCredentialStore:add(path=\"secureKeyStoreCredentialStore.jceks\", relative-to=jboss.server.data.dir, create=true, modifiable=true, credential-reference={clear-text=USD{ENC::exampleResolver:RUxZAUMQgtpG7oFlHR2j1Gkn3GKIHff+HR8GcMX1QXHvx2uGurI=}}) {\"outcome\" => \"success\"}",
"EAP_HOME /bin/elytron-tool.sh vault --keystore \" <path_to_vault_file> \" --keystore-password <vault_password> --enc-dir \" <path_to_vault_directory> \" --salt <salt> --iteration <iteration_count> --alias <vault_alias>",
"EAP_HOME /bin/elytron-tool.sh vault --keystore ../vaults/vault.keystore --keystore-password vault22 --enc-dir ../vaults/ --salt 1234abcd --iteration 120 --alias my_vault --location ../cred_stores/my_vault_converted.cred_store",
"keystore: <path_to_vault_file> keystore-password: <vault_password> enc-dir: <path_to_vault_directory> salt: <salt> 1 iteration: <iteration_count> location: <path_to_converted_cred_store> 2 alias: <vault_alias> properties: <parameter1> = <value1> ; <parameter2> = <value2> ; 3",
"keystore:/vaults/vault1/vault1.keystore keystore-password:vault11 enc-dir:/vaults/vault1/ salt:1234abcd iteration:120 location:/cred_stores/vault1_converted.cred_store alias:my_vault keystore:/vaults/vault2/vault2.keystore keystore-password:vault22 enc-dir:/vaults/vault2/ salt:abcd1234 iteration:130 location:/cred_stores/vault2_converted.cred_store alias:my_vault2",
"EAP_HOME /bin/elytron-tool.sh vault --bulk-convert vaultdescriptions.txt",
"<configuration> <authentication-client xmlns=\"urn:elytron:client:1.2\"> <credential-stores> <credential-store name=\"my_store\"> 1 <protection-parameter-credentials> <credential-store-reference clear-text=\"pass123\"/> 2 </protection-parameter-credentials> <attributes> <attribute name=\"location\" value=\"/path/to/my_store.jceks\"/> 3 </attributes> </credential-store> </credential-stores> <authentication-configurations> <configuration name=\"my_user\"> <set-host name=\"localhost\"/> <set-user-name name=\"my_user\"/> <set-mechanism-realm name=\"ManagementRealm\"/> <use-provider-sasl-factory/> <credentials> <credential-store-reference store=\"my_store\" alias=\"my_user\"/> 4 </credentials> </configuration> </authentication-configurations> </authentication-client> </configuration>",
"keytool -genseckey -alias vault -storetype jceks -keyalg AES -keysize 128 -storepass vault22 -keypass vault22 -keystore EAP_HOME /vault/vault.keystore",
"vault.sh --keystore KEYSTORE_URL --keystore-password KEYSTORE_PASSWORD --alias KEYSTORE_ALIAS --vault-block VAULT_BLOCK --attribute ATTRIBUTE --sec-attr SEC-ATTR --enc-dir ENC_FILE_DIR --iteration ITERATION_COUNT --salt SALT",
"vault.sh --keystore EAP_HOME /vault/vault.keystore --keystore-password vault22 --alias vault --vault-block vb --attribute password --sec-attr 0penS3sam3 --enc-dir EAP_HOME /vault/ --iteration 120 --salt 1234abcd",
"========================================================================= JBoss Vault JBOSS_HOME: EAP_HOME JAVA: java ========================================================================= Nov 09, 2015 9:02:47 PM org.picketbox.plugins.vault.PicketBoxSecurityVault init INFO: PBOX00361: Default Security Vault Implementation Initialized and Ready WFLYSEC0047: Secured attribute value has been stored in Vault. Please make note of the following: ******************************************** Vault Block:vb Attribute Name:password Configuration should be done as follows: VAULT::vb::password::1 ******************************************** WFLYSEC0048: Vault Configuration in WildFly configuration file: ******************************************** </extensions> <vault> <vault-option name=\"KEYSTORE_URL\" value=\"EAP_HOME/vault/vault.keystore\"/> <vault-option name=\"KEYSTORE_PASSWORD\" value=\"MASK-5dOaAVafCSd\"/> <vault-option name=\"KEYSTORE_ALIAS\" value=\"vault\"/> <vault-option name=\"SALT\" value=\"1234abcd\"/> <vault-option name=\"ITERATION_COUNT\" value=\"120\"/> <vault-option name=\"ENC_FILE_DIR\" value=\"EAP_HOME/vault/\"/> </vault><management> ********************************************",
"Please enter a Digit:: 0: Start Interactive Session 1: Remove Interactive Session 2: Exit 0 Starting an interactive session Enter directory to store encrypted files:EAP_HOME/vault/ Enter Keystore URL:EAP_HOME/vault/vault.keystore Enter Keystore password: vault22 Enter Keystore password again: vault22 Values match Enter 8 character salt:1234abcd Enter iteration count as a number (Eg: 44):120 Enter Keystore Alias:vault Initializing Vault Nov 09, 2015 9:24:36 PM org.picketbox.plugins.vault.PicketBoxSecurityVault init INFO: PBOX000361: Default Security Vault Implementation Initialized and Ready Vault Configuration in AS7 config file: ******************************************** </extensions> <vault> <vault-option name=\"KEYSTORE_URL\" value=\"EAP_HOME/vault/vault.keystore\"/> <vault-option name=\"KEYSTORE_PASSWORD\" value=\"MASK-5dOaAVafCSd\"/> <vault-option name=\"KEYSTORE_ALIAS\" value=\"vault\"/> <vault-option name=\"SALT\" value=\"1234abcd\"/> <vault-option name=\"ITERATION_COUNT\" value=\"120\"/> <vault-option name=\"ENC_FILE_DIR\" value=\"EAP_HOME/vault/\"/> </vault><management> ******************************************** Vault is initialized and ready for use Handshake with Vault complete",
"/core-service=vault:add(vault-options=[(\"KEYSTORE_URL\" => PATH_TO_KEYSTORE ),(\"KEYSTORE_PASSWORD\" => MASKED_PASSWORD ),(\"KEYSTORE_ALIAS\" => ALIAS ),(\"SALT\" => SALT ),(\"ITERATION_COUNT\" => ITERATION_COUNT ),(\"ENC_FILE_DIR\" => ENC_FILE_DIR )]) /core-service=vault:add(vault-options=[(\"KEYSTORE_URL\" => \" EAP_HOME /vault/vault.keystore\"),(\"KEYSTORE_PASSWORD\" => \"MASK-5dOaAVafCSd\"),(\"KEYSTORE_ALIAS\" => \"vault\"),(\"SALT\" => \"1234abcd\"),(\"ITERATION_COUNT\" => \"120\"),(\"ENC_FILE_DIR\" => \" EAP_HOME /vault/\")])",
"vault.sh --keystore KEYSTORE_URL --keystore-password KEYSTORE_PASSWORD --alias KEYSTORE_ALIAS --vault-block VAULT_BLOCK --attribute ATTRIBUTE --sec-attr SEC-ATTR --enc-dir ENC_FILE_DIR --iteration ITERATION_COUNT --salt SALT",
"vault.sh --keystore EAP_HOME /vault/vault.keystore --keystore-password vault22 --alias vault --vault-block vb --attribute password --sec-attr 0penS3sam3 --enc-dir EAP_HOME /vault/ --iteration 120 --salt 1234abcd",
"========================================================================= JBoss Vault JBOSS_HOME: EAP_HOME JAVA: java ========================================================================= Nov 09, 2015 9:24:36 PM org.picketbox.plugins.vault.PicketBoxSecurityVault init INFO: PBOX00361: Default Security Vault Implementation Initialized and Ready WFLYSEC0047: Secured attribute value has been stored in Vault. Please make note of the following: ******************************************** Vault Block:vb Attribute Name:password Configuration should be done as follows: VAULT::vb::password::1 ******************************************** WFLYSEC0048: Vault Configuration in WildFly configuration file: ******************************************** </extensions> <vault> <vault-option name=\"KEYSTORE_URL\" value=\"../vault/vault.keystore\"/> <vault-option name=\"KEYSTORE_PASSWORD\" value=\"MASK-5dOaAVafCSd\"/> <vault-option name=\"KEYSTORE_ALIAS\" value=\"vault\"/> <vault-option name=\"SALT\" value=\"1234abcd\"/> <vault-option name=\"ITERATION_COUNT\" value=\"120\"/> <vault-option name=\"ENC_FILE_DIR\" value=\"../vault/\"/> </vault><management> ********************************************",
"Vault Block:vb Attribute Name:password Configuration should be done as follows: VAULT::vb::password::1",
"Vault Block:ds_Example1 Attribute Name:password Configuration should be done as follows: VAULT::ds_Example1::password::1",
"========================================================================= JBoss Vault JBOSS_HOME: EAP_HOME JAVA: java ========================================================================= ********************************** **** JBoss Vault *************** ********************************** Please enter a Digit:: 0: Start Interactive Session 1: Remove Interactive Session 2: Exit 0 Starting an interactive session Enter directory to store encrypted files:EAP_HOME/vault/ Enter Keystore URL:EAP_HOME/vault/vault.keystore Enter Keystore password: Enter Keystore password again: Values match Enter 8 character salt:1234abcd Enter iteration count as a number (Eg: 44):120 Enter Keystore Alias:vault Initializing Vault Nov 09, 2015 9:24:36 PM org.picketbox.plugins.vault.PicketBoxSecurityVault init INFO: PBOX000361: Default Security Vault Implementation Initialized and Ready Vault Configuration in AS7 config file: ******************************************** </extensions> <vault> <vault-option name=\"KEYSTORE_URL\" value=\"EAP_HOME/vault/vault.keystore\"/> <vault-option name=\"KEYSTORE_PASSWORD\" value=\"MASK-5dOaAVafCSd\"/> <vault-option name=\"KEYSTORE_ALIAS\" value=\"vault\"/> <vault-option name=\"SALT\" value=\"1234abcd\"/> <vault-option name=\"ITERATION_COUNT\" value=\"120\"/> <vault-option name=\"ENC_FILE_DIR\" value=\"EAP_HOME/vault/\"/> </vault><management> ******************************************** Vault is initialized and ready for use Handshake with Vault complete Please enter a Digit:: 0: Store a secured attribute 1: Check whether a secured attribute exists 2: Remove secured attribute 3: Exit 0 Task: Store a secured attribute Please enter secured attribute value (such as password): Please enter secured attribute value (such as password) again: Values match Enter Vault Block:ds_Example1 Enter Attribute Name:password Secured attribute value has been stored in vault. Please make note of the following: ******************************************** Vault Block:ds_Example1 Attribute Name:password Configuration should be done as follows: VAULT::ds_Example1::password::1 ******************************************** Please enter a Digit:: 0: Store a secured attribute 1: Check whether a secured attribute exists 2: Remove secured attribute 3: Exit",
"/subsystem= SUBSYSTEM :read-resource-description(recursive=true)",
"USD{VAULT::VAULT_BLOCK::ATTRIBUTE_NAME::MASKED_STRING}",
"<subsystem xmlns=\"urn:jboss:domain:datasources:5.0\"> <datasources> <datasource jndi-name=\"java:jboss/datasources/ExampleDS\" enabled=\"true\" use-java-context=\"true\" pool-name=\"H2DS\"> <connection-url>jdbc:h2:mem:test;DB_CLOSE_DELAY=-1</connection-url> <driver>h2</driver> <pool></pool> <security> <user-name>sa</user-name> <password>USD{VAULT::ds_ExampleDS::password::1}</password> </security> </datasource> <drivers> <driver name=\"h2\" module=\"com.h2database.h2\"> <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class> </driver> </drivers> </datasources> </subsystem>",
"@DataSourceDefinition( name = \"java:jboss/datasources/LoginDS\", user = \"sa\", password = \"VAULT::DS::thePass::1\", className = \"org.h2.jdbcx.JdbcDataSource\", url = \"jdbc:h2:tcp://localhost/mem:test\" ) /*old (plaintext) definition @DataSourceDefinition( name = \"java:jboss/datasources/LoginDS\", user = \"sa\", password = \"sa\", className = \"org.h2.jdbcx.JdbcDataSource\", url = \"jdbc:h2:tcp://localhost/mem:test\" )*/",
"vault.sh --keystore KEYSTORE_URL --keystore-password KEYSTORE_PASSWORD --alias KEYSTORE_ALIAS --check-sec-attr --vault-block VAULT_BLOCK --attribute ATTRIBUTE --enc-dir ENC_FILE_DIR --iteration ITERATION_COUNT --salt SALT",
"Password already exists.",
"Password doesn't exist.",
"A value exists for (VAULT_BLOCK, ATTRIBUTE)",
"No value has been store for (VAULT_BLOCK, ATTRIBUTE)",
"========================================================================= JBoss Vault JBOSS_HOME: EAP_HOME JAVA: java ========================================================================= ********************************** **** JBoss Vault *************** ********************************** Please enter a Digit:: 0: Start Interactive Session 1: Remove Interactive Session 2: Exit 0 Starting an interactive session Enter directory to store encrypted files:EAP_HOME/vault Enter Keystore URL:EAP_HOME/vault/vault.keystore Enter Keystore password: Enter Keystore password again: Values match Enter 8 character salt:1234abcd Enter iteration count as a number (Eg: 44):120 Enter Keystore Alias:vault Initializing Vault Nov 09, 2015 9:24:36 PM org.picketbox.plugins.vault.PicketBoxSecurityVault init INFO: PBOX000361: Default Security Vault Implementation Initialized and Ready Vault Configuration in AS7 config file: ******************************************** </extensions> <vault> <vault-option name=\"KEYSTORE_URL\" value=\"EAP_HOME/vault/vault.keystore\"/> <vault-option name=\"KEYSTORE_PASSWORD\" value=\"MASK-5dOaAVafCSd\"/> <vault-option name=\"KEYSTORE_ALIAS\" value=\"vault\"/> <vault-option name=\"SALT\" value=\"1234abcd\"/> <vault-option name=\"ITERATION_COUNT\" value=\"120\"/> <vault-option name=\"ENC_FILE_DIR\" value=\"EAP_HOME/vault/\"/> </vault><management> ******************************************** Vault is initialized and ready for use Handshake with Vault complete Please enter a Digit:: 0: Store a secured attribute 1: Check whether a secured attribute exists 2: Remove secured attribute 3: Exit 1 Task: Verify whether a secured attribute exists Enter Vault Block:vb Enter Attribute Name:password A value exists for (vb, password) Please enter a Digit:: 0: Store a secured attribute 1: Check whether a secured attribute exists 2: Remove secured attribute 3: Exit",
"vault.sh --keystore KEYSTORE_URL --keystore-password KEYSTORE_PASSWORD --alias KEYSTORE_ALIAS --remove-sec-attr --vault-block VAULT_BLOCK --attribute ATTRIBUTE --enc-dir ENC_FILE_DIR --iteration ITERATION_COUNT --salt SALT",
"Secured attribute [VAULT_BLOCK::ATTRIBUTE] has been successfully removed from vault",
"Secured attribute [VAULT_BLOCK::ATTRIBUTE] was not removed from vault, check whether it exist",
"./vault.sh --keystore EAP_HOME /vault/vault.keystore --keystore-password vault22 --alias vault --remove-sec-attr --vault-block vb --attribute password --enc-dir EAP_HOME /vault/ --iteration 120 --salt 1234abcd ========================================================================= JBoss Vault JBOSS_HOME: EAP_HOME JAVA: java ========================================================================= Dec 23, 2015 1:54:24 PM org.picketbox.plugins.vault.PicketBoxSecurityVault init INFO: PBOX000361: Default Security Vault Implementation Initialized and Ready Secured attribute [vb::password] has been successfully removed from vault",
"Secured attribute [VAULT_BLOCK::ATTRIBUTE] has been successfully removed from vault",
"Secured attribute [VAULT_BLOCK::ATTRIBUTE] was not removed from vault, check whether it exist",
"********************************** **** JBoss Vault *************** ********************************** Please enter a Digit:: 0: Start Interactive Session 1: Remove Interactive Session 2: Exit 0 Starting an interactive session Enter directory to store encrypted files:EAP_HOME/vault/ Enter Keystore URL:EAP_HOME/vault/vault.keystore Enter Keystore password: Enter Keystore password again: Values match Enter 8 character salt:1234abcd Enter iteration count as a number (Eg: 44):120 Enter Keystore Alias:vault Initializing Vault Dec 23, 2014 1:40:56 PM org.picketbox.plugins.vault.PicketBoxSecurityVault init INFO: PBOX000361: Default Security Vault Implementation Initialized and Ready Vault Configuration in configuration file: ******************************************** </extensions> <vault> <vault-option name=\"KEYSTORE_URL\" value=\"EAP_HOME/vault/vault.keystore\"/> <vault-option name=\"KEYSTORE_PASSWORD\" value=\"MASK-5dOaAVafCSd\"/> <vault-option name=\"KEYSTORE_ALIAS\" value=\"vault\"/> <vault-option name=\"SALT\" value=\"1234abcd\"/> <vault-option name=\"ITERATION_COUNT\" value=\"120\"/> <vault-option name=\"ENC_FILE_DIR\" value=\"EAP_HOME/vault/\"/> </vault><management> ******************************************** Vault is initialized and ready for use Handshake with Vault complete Please enter a Digit:: 0: Store a secured attribute 1: Check whether a secured attribute exists 2: Remove secured attribute 3: Exit 2 Task: Remove secured attribute Enter Vault Block:vb Enter Attribute Name:password Secured attribute [vb::password] has been successfully removed from vault",
"/core-service=vault:add(code=\"custom.vault.implementation.CustomSecurityVault\", module=\"custom.vault.module\", vault-options=[(\"KEYSTORE_URL\" => PATH_TO_KEYSTORE ),(\"KEYSTORE_PASSWORD\" => MASKED_PASSWORD ), (\"KEYSTORE_ALIAS\" => ALIAS ),(\"SALT\" => SALT ),(\"ITERATION_COUNT\" => ITERATION_COUNT ),(\"ENC_FILE_DIR\" => ENC_FILE_DIR )])",
"<vault-option name=\"KEYSTORE_PASSWORD\" value=\" METHOD_TO_OBTAIN_PASSWORD \"/>"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/how_to_configure_server_security/con-secure-storage-for-credentials_default |
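The {EXT} method described in the preceding section expects an executable that prints the keystore password on standard output. The following is a minimal sketch of such a script; the hard-coded password (the vault22 value used in the examples above) and the ignored options are illustrative only, and a real script would look the password up from a secured source.
#!/bin/sh
# Minimal sketch of an {EXT}-style password provider (illustrative only).
# The vault runs this command and reads the keystore password from its standard output.
# Options such as --section 1 --query company are accepted but ignored in this sketch.
echo "vault22"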
Chapter 6. Installing a private cluster on IBM Power Virtual Server | Chapter 6. Installing a private cluster on IBM Power Virtual Server In OpenShift Container Platform version 4.15, you can install a private cluster into an existing VPC and IBM Power(R) Virtual Server Workspace. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring the Cloud Credential Operator utility . 6.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Create a DNS zone using IBM Cloud(R) DNS Services and specify it as the base domain of the cluster. For more information, see "Using IBM Cloud(R) DNS Services to configure DNS resolution". Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 6.3. Private clusters in IBM Power Virtual Server To create a private cluster on IBM Power(R) Virtual Server, you must provide an existing private Virtual Private Cloud (VPC) and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. The cluster still requires access to internet to access the IBM Cloud(R) APIs. The following items are not required or created when you install a private cluster: Public subnets Public network load balancers, which support public Ingress A public DNS zone that matches the baseDomain for the cluster You will also need to create an IBM(R) DNS service containing a DNS zone that matches your baseDomain . Unlike standard deployments on Power VS which use IBM(R) CIS for DNS, you must use IBM(R) DNS for your DNS service. 6.3.1. 
Limitations Private clusters on IBM Power(R) Virtual Server are subject only to the limitations associated with the existing VPC that was used for cluster deployment. 6.4. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create a VPC or VPC subnet in this scenario. The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 6.4.1. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to this resource group. As part of the installation, specify the following in the install-config.yaml file: The name of the resource group The name of the VPC The name of the VPC subnet To ensure that the subnets that you provide are suitable, the installation program confirms that all of the subnets you specify exist. Note Subnet IDs are not supported. 6.4.2. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: ICMP Ingress is allowed to the entire network. TCP port 22 Ingress (SSH) is allowed to the entire network. Control plane TCP 6443 Ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 Ingress (MCS) is allowed to the entire network. 6.5. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. 
The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . 
This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 6.8. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IBMCLOUD_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 6.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Power(R) Virtual Server 6.9.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 2 16 GB 100 GB 300 Control plane RHCOS 2 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. A short worked example of this formula appears after these footnotes. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. 
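The following shell sketch only illustrates the vCPU formula from the first footnote above with assumed values; the SMT level of 8 matches the default smtLevel used in the sample install-config.yaml later in this chapter, while the core and socket counts are examples, not recommendations.
#!/bin/sh
# Illustrative only: (threads per core x cores) x sockets = vCPUs
threads_per_core=8     # corresponds to the SMT level on IBM Power (smtLevel: 8 in the sample file)
cores_per_socket=2     # example value
sockets=1              # example value
vcpus=$((threads_per_core * cores_per_socket * sockets))
echo "vCPUs: ${vcpus}" # prints "vCPUs: 16" with these example values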
Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.9.2. Sample customized install-config.yaml file for IBM Power Virtual Server You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-private-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: "ibmcloud-resource-group" region: powervs-region vpcName: name-of-existing-vpc 11 vpcSubnets: - powervs-region-example-subnet-1 vpcRegion : vpc-region zone: powervs-zone serviceInstanceGUID: "powervs-region-service-instance-guid" publish: Internal 12 pullSecret: '{"auths": ...}' 13 sshKey: ssh-ed25519 AAAA... 14 1 5 If you do not provide these parameters and values, the installation program provides the default value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Both sections currently define a single machine pool. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 4 8 The smtLevel specifies the level of SMT to set to the control plane and compute machines. The supported values are 1, 2, 4, 8, 'off' and 'on' . The default value is 8. The smtLevel 'off' sets SMT to off and smtlevel 'on' sets SMT to the default value 8 on the cluster nodes. Note When simultaneous multithreading (SMT), or hyperthreading is not enabled, one vCPU is equivalent to one physical core. When enabled, total vCPUs is computed as (Thread(s) per core * Core(s) per socket) * Socket(s). The smtLevel controls the threads per core. Lower SMT levels may require additional assigned cores when deploying the cluster nodes. 
You can do this by setting the 'processors' parameter in the install-config.yaml file to an appropriate value to meet the requirements for deploying OpenShift Container Platform successfully. 9 The machine CIDR must contain the subnets for the compute machines and control plane machines. 10 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 11 Specify the name of an existing VPC. 12 Specify how to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster. 13 Required. The installation program prompts you for this value. 14 Provide the sshKey value that you use to access the machines in your cluster. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 6.9.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. 
The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.10. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for your cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object.
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 6.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . 
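If the installer appears stalled, or it times out while the cluster is still converging, you can monitor and resume the deployment rather than re-running create cluster. The following sketch uses only the installation log path and the wait-for command already mentioned above; <installation_directory> is the same directory that you passed to the installer.

# Follow the installer log while `create cluster` runs in another terminal.
tail -f <installation_directory>/.openshift_install.log

# If the installer timed out, resume waiting for completion instead of
# starting a new deployment.
./openshift-install wait-for install-complete --dir <installation_directory> --log-level debug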
Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.12. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 6.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully by using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 6.14. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 6.15. Next steps Customize your cluster Optional: Opt out of remote health reporting | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"export IBMCLOUD_API_KEY=<api_key>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-private-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" region: powervs-region vpcName: name-of-existing-vpc 11 vpcSubnets: - powervs-region-example-subnet-1 vpcRegion : vpc-region zone: powervs-zone serviceInstanceGUID: \"powervs-region-service-instance-guid\" publish: Internal 12 pullSecret: '{\"auths\": ...}' 13 sshKey: ssh-ed25519 AAAA... 14",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_ibm_power_virtual_server/installing-ibm-power-vs-private-cluster |
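After logging in with the exported kubeconfig, you can optionally confirm that the cluster is healthy before moving on to the next steps. This is a generic verification sketch, not a required part of the procedure above.

# Confirm the cluster version and that all cluster Operators report Available.
oc get clusterversion
oc get clusteroperators

# Verify that the control plane and compute nodes are Ready.
oc get nodes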
Chapter 1. Introduction to spine-leaf networking | Chapter 1. Introduction to spine-leaf networking The following chapters provide information about constructing a spine-leaf network topology for your Red Hat OpenStack Platform environment. This includes a full end-to-end scenario and example files to help replicate a more extensive network topology within your own environment. 1.1. Spine-leaf networking Red Hat OpenStack Platform has a composable network architecture that you can use to adapt your networking to the routed spine-leaf data center topology. In a practical application of routed spine-leaf, a leaf is represented as a composable Compute or Storage role usually in a data center rack, as shown in Figure 1.1, "Routed spine-leaf example" . The Leaf 0 rack has an undercloud node, Controller nodes, and Compute nodes. The composable networks are presented to the nodes, which have been assigned to composable roles. The following diagram contains the following configuration: The StorageLeaf networks are presented to the Ceph storage and Compute nodes. The NetworkLeaf represents an example of any network you might want to compose. Figure 1.1. Routed spine-leaf example 1.2. Spine-leaf network topology The spine-leaf scenario takes advantage of OpenStack Networking (neutron) functionality to define multiple subnets within segments of a single network. Each network uses a base network which acts as Leaf 0. Director creates Leaf 1 and Leaf 2 subnets as segments of the main network. This scenario uses the following networks: Table 1.1. Leaf 0 Networks (base networks) Network Roles attached Subnet Provisioning / Ctlplane / Leaf0 Controller, ComputeLeaf0, CephStorageLeaf0 192.168.10.0/24 Storage Controller, ComputeLeaf0, CephStorageLeaf0 172.16.0.0/24 StorageMgmt Controller, CephStorageLeaf0 172.17.0.0/24 InternalApi Controller, ComputeLeaf0 172.18.0.0/24 Tenant [1] Controller, ComputeLeaf0 172.19.0.0/24 External Controller 10.1.1.0/24 [1] Tenant networks are also known as project networks. Table 1.2. Leaf 1 Networks Network Roles attached Subnet Provisioning / Ctlplane / Leaf1 ComputeLeaf1, CephStorageLeaf1 192.168.11.0/24 StorageLeaf1 ComputeLeaf1, CephStorageLeaf1 172.16.1.0/24 StorageMgmtLeaf1 CephStorageLeaf1 172.17.1.0/24 InternalApiLeaf1 ComputeLeaf1 172.18.1.0/24 TenantLeaf1 [1] ComputeLeaf1 172.19.1.0/24 [1] Tenant networks are also known as project networks. Table 1.3. Leaf 2 Networks Network Roles attached Subnet Provisioning / Ctlplane / Leaf2 ComputeLeaf2, CephStorageLeaf2 192.168.12.0/24 StorageLeaf2 ComputeLeaf2, CephStorageLeaf2 172.16.2.0/24 StorageMgmtLeaf2 CephStorageLeaf2 172.17.2.0/24 InternalApiLeaf2 ComputeLeaf2 172.18.2.0/24 TenantLeaf2 [1] ComputeLeaf2 172.19.2.0/24 [1] Tenant networks are also known as project networks. Figure 1.2. Spine-leaf network topology 1.3. Spine-leaf requirements To deploy the overcloud on a network with a L3 routed architecture, complete the following prerequisite steps: Layer-3 routing Configure the routing of the network infrastructure to enable traffic between the different L2 segments. You can configure this routing statically or dynamically. DHCP-Relay Each L2 segment not local to the undercloud must provide dhcp-relay . You must forward DHCP requests to the undercloud on the provisioning network segment where the undercloud is connected. Note The undercloud uses two DHCP servers. One for baremetal node introspection, and another for deploying overcloud nodes. 
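As an illustration of the DHCP-Relay requirement, the following hypothetical sketch runs the ISC dhcrelay agent on a router serving a remote leaf, forwarding DHCP requests toward the undercloud on the Leaf 0 provisioning network from Table 1.1. The interface name eth1 and the undercloud ctlplane address 192.168.10.1 are assumptions for the example only; use the values for your environment.

# Listen for DHCP broadcasts on the remote provisioning segment (eth1) and
# relay them to the undercloud, which runs its introspection and provisioning
# DHCP services on the ctlplane network.
dhcrelay -d -i eth1 192.168.10.1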
Ensure that you read DHCP relay configuration to understand the requirements when you configure dhcp-relay . 1.4. Spine-leaf limitations Some roles, such as the Controller role, use virtual IP addresses and clustering. The mechanism behind this functionality requires L2 network connectivity between these nodes. You must place these nodes within the same leaf. Similar restrictions apply to Networker nodes. The network service implements highly-available default paths in the network with Virtual Router Redundancy Protocol (VRRP). Because VRRP uses a virtual router IP address, you must connect master and backup nodes to the same L2 network segment. When you use tenant or provider networks with VLAN segmentation, you must share the particular VLANs between all Networker and Compute nodes. Note It is possible to configure the network service with multiple sets of Networker nodes. Each set of Networker nodes share routes for their networks, and VRRP provides highly-available default paths within each set of Networker nodes. In this type of configuration, all Networker nodes that share networks must be on the same L2 network segment. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/spine_leaf_networking/assembly_introduction-to-spine-leaf-networking |
Chapter 19. Configuring Layer 3 high availability (HA) | Chapter 19. Configuring Layer 3 high availability (HA) 19.1. OpenStack Networking without high availability (HA) OpenStack Networking deployments without any high availability (HA) features are vulnerable to physical node failures. In a typical deployment, projects create virtual routers, which are scheduled to run on physical L3 agent nodes. This becomes an issue when you lose a L3 agent node and the dependent virtual machines subsequently lose connectivity to external networks. Any floating IP addresses will also be unavailable. In addition, connectivity is lost between any networks that the router hosts. 19.2. Overview of Layer 3 high availability (HA) This active/passive high availability (HA) configuration uses the industry standard VRRP (as defined in RFC 3768) to protect project routers and floating IP addresses. A virtual router is randomly scheduled across multiple OpenStack Networking nodes, with one designated as the active router, and the remainder serving in a standby role. Note To deploy Layer 3 HA, you must maintain similar configuration on the redundant OpenStack Networking nodes, including floating IP ranges and access to external networks. In the following diagram, the active Router1 and Router2 routers are running on separate physical L3 agent nodes. Layer 3 HA has scheduled backup virtual routers on the corresponding nodes, ready to resume service in the case of a physical node failure. When the L3 agent node fails, Layer 3 HA reschedules the affected virtual router and floating IP addresses to a working node: During a failover event, instance TCP sessions through floating IPs remain unaffected, and migrate to the new L3 node without disruption. Only SNAT traffic is affected by failover events. The L3 agent is further protected when in an active/active HA mode. 19.3. Layer 3 high availability (HA) failover conditions Layer 3 high availability (HA) automatically reschedules protected resources in the following events: The L3 agent node shuts down or otherwise loses power because of a hardware failure. The L3 agent node becomes isolated from the physical network and loses connectivity. Note Manually stopping the L3 agent service does not induce a failover event. 19.4. Project considerations for Layer 3 high availability (HA) Layer 3 high availability (HA) configuration occurs in the back end and is invisible to the project. Projects can continue to create and manage their virtual routers as usual, however there are some limitations to be aware of when designing your Layer 3 HA implementation: Layer 3 HA supports up to 255 virtual routers per project. Internal VRRP messages are transported within a separate internal network, created automatically for each project. This process occurs transparently to the user. 19.5. High availability (HA) changes to OpenStack Networking The Neutron API has been updated to allow administrators to set the --ha=True/False flag when creating a router, which overrides the default configuration of l3_ha in /var/lib/config-data/neutron/etc/neutron/neutron.conf. HA changes to neutron-server: Layer 3 HA assigns the active role randomly, regardless of the scheduler used by OpenStack Networking (whether random or leastrouter). The database schema has been modified to handle allocation of virtual IP addresses (VIPs) to virtual routers. A transport network is created to direct Layer 3 HA traffic. 
High availability (HA) changes to L3 agent: A new keepalived manager has been added, providing load-balancing and HA capabilities. IP addresses are converted to VIPs. 19.6. Enabling Layer 3 high availability (HA) on OpenStack Networking nodes Complete the following steps to enable Layer 3 high availability (HA) on OpenStack Networking and L3 agent nodes. Configure Layer 3 HA in the /var/lib/config-data/neutron/etc/neutron/neutron.conf file by enabling L3 HA and defining the number of L3 agent nodes that you want to protect each virtual router: L3 HA parameters: l3_ha - When set to True, all virtual routers created from this point onwards default to HA (and not legacy) routers. Administrators can override the value for each router using the following option in the openstack router create command: or max_l3_agents_per_router - Set this to a value between the minimum and total number of network nodes in your deployment. For example, if you deploy four OpenStack Networking nodes but set this parameter to 2, only two L3 agents protect each HA virtual router: one active, and one standby. In addition, each time a new L3 agent node is deployed, additional standby versions of the virtual routers are scheduled until the max_l3_agents_per_router limit is reached. As a result, you can scale out the number of standby routers by adding new L3 agents. min_l3_agents_per_router - The minimum setting ensures that the HA rules remain enforced. This setting is validated during the virtual router creation process to ensure a sufficient number of L3 Agent nodes are available to provide HA. For example, if you have two network nodes and one becomes unavailable, no new routers can be created during that time, as you need at least min active L3 agents when creating a HA router. Restart the neutron-server service to apply the changes: 19.7. Reviewing high availability (HA) node configurations Run the ip address command within the virtual router namespace to return a HA device in the result, prefixed with ha- . With Layer 3 HA enabled, virtual routers and floating IP addresses are protected against individual node failure. | [
"l3_ha = True max_l3_agents_per_router = 2 min_l3_agents_per_router = 2",
"openstack router create --ha",
"openstack router create --no-ha",
"systemctl restart neutron-server.service",
"ip netns exec qrouter-b30064f9-414e-4c98-ab42-646197c74020 ip address <snip> 2794: ha-45249562-ec: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state DOWN group default link/ether 12:34:56:78:2b:5d brd ff:ff:ff:ff:ff:ff inet 169.254.0.2/24 brd 169.254.0.255 scope global ha-54b92d86-4f"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/networking_guide/sec-l3-ha |
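As an illustration of the commands above, the following hypothetical sequence creates an HA router and then inspects the router namespace on an L3 agent node, as described in section 19.7. The router name router1 and the namespace UUID are examples only.

# Create a virtual router with Layer 3 HA explicitly enabled.
openstack router create --ha router1

# The ha field in the router details should report True.
openstack router show router1

# On an L3 agent node, look for the ha- interface inside the router namespace.
ip netns exec qrouter-<router_uuid> ip address | grep ha-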
Chapter 1. Introduction to content management | Chapter 1. Introduction to content management In the context of Satellite, content is defined as the software installed on systems. This includes, but is not limited to, the base operating system, middleware services, and end-user applications. With Red Hat Satellite, you can manage the various types of content for Red Hat Enterprise Linux systems at every stage of the software lifecycle. Red Hat Satellite manages the following content: Subscription management This provides organizations with a method to manage their Red Hat subscription information. Content management This provides organizations with a method to store Red Hat content and organize it in various ways. 1.1. Content types in Red Hat Satellite With Red Hat Satellite, you can import and manage many content types. For example, Satellite supports the following content types: RPM packages Import RPM packages from repositories related to your Red Hat subscriptions. Satellite Server downloads the RPM packages from the Red Hat Content Delivery Network and stores them locally. You can use these repositories and their RPM packages in content views. Kickstart trees Import the Kickstart trees to provision a host. New systems access these Kickstart trees over a network to use as base content for their installation. Red Hat Satellite contains predefined Kickstart templates. You can also create your own Kickstart templates. ISO and KVM images Download and manage media for installation and provisioning. For example, Satellite downloads, stores, and manages ISO images and guest images for specific Red Hat Enterprise Linux and non-Red Hat operating systems. Custom file type Manage custom content for any type of file you require, such as SSL certificates, ISO images, and OVAL files. | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_content/introduction_to_content_management_content-management |
Chapter 2. OpenShift Container Platform architecture | Chapter 2. OpenShift Container Platform architecture 2.1. Introduction to OpenShift Container Platform OpenShift Container Platform is a platform for developing and running containerized applications. It is designed to allow applications and the data centers that support them to expand from just a few machines and applications to thousands of machines that serve millions of clients. With its foundation in Kubernetes, OpenShift Container Platform incorporates the same technology that serves as the engine for massive telecommunications, streaming video, gaming, banking, and other applications. Its implementation in open Red Hat technologies lets you extend your containerized applications beyond a single cloud to on-premise and multi-cloud environments. 2.1.1. About Kubernetes Although container images and the containers that run from them are the primary building blocks for modern application development, to run them at scale requires a reliable and flexible distribution system. Kubernetes is the defacto standard for orchestrating containers. Kubernetes is an open source container orchestration engine for automating deployment, scaling, and management of containerized applications. The general concept of Kubernetes is fairly simple: Start with one or more worker nodes to run the container workloads. Manage the deployment of those workloads from one or more control plane nodes. Wrap containers in a deployment unit called a pod. Using pods provides extra metadata with the container and offers the ability to group several containers in a single deployment entity. Create special kinds of assets. For example, services are represented by a set of pods and a policy that defines how they are accessed. This policy allows containers to connect to the services that they need even if they do not have the specific IP addresses for the services. Replication controllers are another special asset that indicates how many pod replicas are required to run at a time. You can use this capability to automatically scale your application to adapt to its current demand. In only a few years, Kubernetes has seen massive cloud and on-premise adoption. The open source development model allows many people to extend Kubernetes by implementing different technologies for components such as networking, storage, and authentication. 2.1.2. The benefits of containerized applications Using containerized applications offers many advantages over using traditional deployment methods. Where applications were once expected to be installed on operating systems that included all their dependencies, containers let an application carry their dependencies with them. Creating containerized applications offers many benefits. 2.1.2.1. Operating system benefits Containers use small, dedicated Linux operating systems without a kernel. Their file system, networking, cgroups, process tables, and namespaces are separate from the host Linux system, but the containers can integrate with the hosts seamlessly when necessary. Being based on Linux allows containers to use all the advantages that come with the open source development model of rapid innovation. Because each container uses a dedicated operating system, you can deploy applications that require conflicting software dependencies on the same host. Each container carries its own dependent software and manages its own interfaces, such as networking and file systems, so applications never need to compete for those assets. 2.1.2.2. 
Deployment and scaling benefits If you employ rolling upgrades between major releases of your application, you can continuously improve your applications without downtime and still maintain compatibility with the current release. You can also deploy and test a new version of an application alongside the existing version. If the container passes your tests, simply deploy more new containers and remove the old ones. Since all the software dependencies for an application are resolved within the container itself, you can use a standardized operating system on each host in your data center. You do not need to configure a specific operating system for each application host. When your data center needs more capacity, you can deploy another generic host system. Similarly, scaling containerized applications is simple. OpenShift Container Platform offers a simple, standard way of scaling any containerized service. For example, if you build applications as a set of microservices rather than large, monolithic applications, you can scale the individual microservices individually to meet demand. This capability allows you to scale only the required services instead of the entire application, which can allow you to meet application demands while using minimal resources. 2.1.3. OpenShift Container Platform overview OpenShift Container Platform provides enterprise-ready enhancements to Kubernetes, including the following enhancements: Hybrid cloud deployments. You can deploy OpenShift Container Platform clusters to a variety of public cloud platforms or in your data center. Integrated Red Hat technology. Major components in OpenShift Container Platform come from Red Hat Enterprise Linux (RHEL) and related Red Hat technologies. OpenShift Container Platform benefits from the intense testing and certification initiatives for Red Hat's enterprise quality software. Open source development model. Development is completed in the open, and the source code is available from public software repositories. This open collaboration fosters rapid innovation and development. Although Kubernetes excels at managing your applications, it does not specify or manage platform-level requirements or deployment processes. Powerful and flexible platform management tools and processes are important benefits that OpenShift Container Platform 4.18 offers. The following sections describe some unique features and benefits of OpenShift Container Platform. 2.1.3.1. Custom operating system OpenShift Container Platform uses Red Hat Enterprise Linux CoreOS (RHCOS), a container-oriented operating system that is specifically designed for running containerized applications from OpenShift Container Platform and works with new tools to provide fast installation, Operator-based management, and simplified upgrades. RHCOS includes: Ignition, which OpenShift Container Platform uses as a firstboot system configuration for initially bringing up and configuring machines. CRI-O, a Kubernetes native container runtime implementation that integrates closely with the operating system to deliver an efficient and optimized Kubernetes experience. CRI-O provides facilities for running, stopping, and restarting containers. It fully replaces the Docker Container Engine, which was used in OpenShift Container Platform 3. Kubelet, the primary node agent for Kubernetes that is responsible for launching and monitoring containers. 
In OpenShift Container Platform 4.18, you must use RHCOS for all control plane machines, but you can use Red Hat Enterprise Linux (RHEL) as the operating system for compute machines, which are also known as worker machines. If you choose to use RHEL workers, you must perform more system maintenance than if you use RHCOS for all of the cluster machines. 2.1.3.2. Simplified installation and update process With OpenShift Container Platform 4.18, if you have an account with the right permissions, you can deploy a production cluster in supported clouds by running a single command and providing a few values. You can also customize your cloud installation or install your cluster in your data center if you use a supported platform. For clusters that use RHCOS for all machines, updating, or upgrading, OpenShift Container Platform is a simple, highly-automated process. Because OpenShift Container Platform completely controls the systems and services that run on each machine, including the operating system itself, from a central control plane, upgrades are designed to become automatic events. If your cluster contains RHEL worker machines, the control plane benefits from the streamlined update process, but you must perform more tasks to upgrade the RHEL machines. 2.1.3.3. Other key features Operators are both the fundamental unit of the OpenShift Container Platform 4.18 code base and a convenient way to deploy applications and software components for your applications to use. In OpenShift Container Platform, Operators serve as the platform foundation and remove the need for manual upgrades of operating systems and control plane applications. OpenShift Container Platform Operators such as the Cluster Version Operator and Machine Config Operator allow simplified, cluster-wide management of those critical components. Operator Lifecycle Manager (OLM) and the OperatorHub provide facilities for storing and distributing Operators to people developing and deploying applications. The Red Hat Quay Container Registry is a Quay.io container registry that serves most of the container images and Operators to OpenShift Container Platform clusters. Quay.io is a public registry version of Red Hat Quay that stores millions of images and tags. Other enhancements to Kubernetes in OpenShift Container Platform include improvements in software defined networking (SDN), authentication, log aggregation, monitoring, and routing. OpenShift Container Platform also offers a comprehensive web console and the custom OpenShift CLI ( oc ) interface. 2.1.3.4. OpenShift Container Platform lifecycle The following figure illustrates the basic OpenShift Container Platform lifecycle: Creating an OpenShift Container Platform cluster Managing the cluster Developing and deploying applications Scaling up applications Figure 2.1. High level OpenShift Container Platform overview 2.1.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.18, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 
Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/architecture/architecture |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/using_shenandoah_garbage_collector_with_red_hat_build_of_openjdk/making-open-source-more-inclusive |
Chapter 7. Upgrading from Red Hat Gluster Storage 3.4 to Red Hat Gluster Storage 3.5 in a Red Hat Enterprise Virtualization-Red Hat Gluster Storage Environment | Chapter 7. Upgrading from Red Hat Gluster Storage 3.4 to Red Hat Gluster Storage 3.5 in a Red Hat Enterprise Virtualization-Red Hat Gluster Storage Environment This section describes the upgrade methods for a Red Hat Gluster Storage and Red Hat Enterprise Virtualization integrated environment. You can upgrade Red Hat Gluster Storage 3.4 to Red Hat Gluster Storage 3.5 using an ISO or yum . Warning Before you upgrade, be aware of changed requirements that exist after Red Hat Gluster Storage 3.1.3. If you want to access a volume being provided by a Red Hat Gluster Storage 3.1.3 or higher server, your client must also be using Red Hat Gluster Storage 3.1.3 or higher. Accessing volumes from other client versions can result in data becoming unavailable and problems with directory operations. This requirement exists because Red Hat Gluster Storage 3.1.3 contained a number of changes that affect how the Distributed Hash Table works in order to improve directory consistency and remove the effects seen in BZ#1115367 and BZ#1118762 . Important RHEL 8 is supported only for new installations of Red Hat Gluster Storage 3.5.2. Upgrades to RHEL 8 based Red Hat Gluster Storage 3.5.2 are not supported . Important In Red Hat Enterprise Linux 7 based Red Hat Gluster Storage, updating to 3.1 or higher reloads firewall rules. All runtime-only changes made before the reload are lost. 7.1. Prerequisites Verify that no self-heal operations are in progress. Ensure that the gluster volume corresponding to Glusterfs Storage Domain does not have any pending self heal by executing the following command: | [
"gluster volume heal volname info",
"gluster volume heal volname info summary"
] | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/installation_guide/chap-rhev_rhs |
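Before you begin the upgrade, you can run the heal check shown above across every volume rather than one volume at a time. The following is a small sketch that uses the standard gluster volume list command; run it on one of the Red Hat Gluster Storage servers.

# Report pending self-heal entries for every volume before upgrading.
for vol in $(gluster volume list); do
  echo "== ${vol} =="
  gluster volume heal "${vol}" info summary
done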
Chapter 25. cron | Chapter 25. cron This chapter describes the commands under the cron command. 25.1. cron trigger create Create new trigger. Usage: Table 25.1. Positional arguments Value Summary name Cron trigger name workflow_identifier Workflow name or id workflow_input Workflow input Table 25.2. Command arguments Value Summary -h, --help Show this help message and exit --params PARAMS Workflow params --pattern <* * * * *> Cron trigger pattern --first-time <YYYY-MM-DD HH:MM> Date and time of the first execution. time is treated as local time unless --utc is also specified --count <integer> Number of wanted executions --utc All times specified should be treated as utc Table 25.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 25.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 25.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 25.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 25.2. cron trigger delete Delete trigger. Usage: Table 25.7. Positional arguments Value Summary cron_trigger Name of cron trigger(s). Table 25.8. Command arguments Value Summary -h, --help Show this help message and exit 25.3. cron trigger list List all cron triggers. Usage: Table 25.9. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. Table 25.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 25.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 25.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 25.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 25.4. cron trigger show Show specific cron trigger. Usage: Table 25.14. Positional arguments Value Summary cron_trigger Cron trigger name Table 25.15. Command arguments Value Summary -h, --help Show this help message and exit Table 25.16. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 25.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 25.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 25.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack cron trigger create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--params PARAMS] [--pattern <* * * * *>] [--first-time <YYYY-MM-DD HH:MM>] [--count <integer>] [--utc] name workflow_identifier [workflow_input]",
"openstack cron trigger delete [-h] cron_trigger [cron_trigger ...]",
"openstack cron trigger list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS]",
"openstack cron trigger show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] cron_trigger"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/cron |
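Putting the synopsis above together, a typical invocation might look like the following; the trigger name, workflow name, and workflow input are placeholders for illustration.

# Run the backup_workflow workflow every day at 02:00, interpreted as UTC.
openstack cron trigger create --pattern "0 2 * * *" --utc nightly_backup backup_workflow '{"target": "db1"}'

# Review the trigger, then remove it when it is no longer needed.
openstack cron trigger show nightly_backup
openstack cron trigger delete nightly_backup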
A.6. perf | A.6. perf The perf tool provides a number of useful commands, some of which are listed in this section. For detailed information about perf , see the Red Hat Enterprise Linux 7 Developer Guide , or refer to the man pages. perf stat This command provides overall statistics for common performance events, including instructions executed and clock cycles consumed. You can use the option flags to gather statistics on events other than the default measurement events. As of Red Hat Enterprise Linux 6.4, it is possible to use perf stat to filter monitoring based on one or more specified control groups (cgroups). For further information, read the man page: perf record This command records performance data into a file which can be later analyzed using perf report . For further details, read the man page: perf report This command reads the performance data from a file and analyzes the recorded data. For further details, read the man page: perf list This command lists the events available on a particular machine. These events vary based on the performance monitoring hardware and the software configuration of the system. For further information, read the man page: perf top This command performs a similar function to the top tool. It generates and displays a performance counter profile in realtime. For further information, read the man page: perf trace This command performs a similar function to the strace tool. It monitors the system calls used by a specified thread or process and all signals received by that application. Additional trace targets are available; refer to the man page for a full list: | [
"man perf-stat",
"man perf-record",
"man perf-report",
"man perf-list",
"man perf-top",
"man perf-trace"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Tool_Reference-perf |
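As a brief illustration of how these commands fit together, the following sketch counts a few common hardware events for a command and then records and analyzes a profile; ./my_app is a placeholder, and you should confirm the event names with perf list on your system.

# Count selected hardware events for a single run of the command.
perf stat -e cycles,instructions,cache-misses ./my_app

# Record a profile with call graphs, then analyze the recorded data.
perf record -g -- ./my_app
perf report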
Chapter 10. Migrating to a cluster with multi-architecture compute machines You can migrate your current cluster with single-architecture compute machines to a cluster with multi-architecture compute machines by updating to a multi-architecture, manifest-listed payload. This allows you to add mixed architecture compute nodes to your cluster. For information about configuring your multi-architecture compute machines, see Configuring multi-architecture compute machines on an OpenShift Container Platform cluster . Important Migration from a multi-architecture payload to a single-architecture payload is not supported. Once a cluster has transitioned to using a multi-architecture payload, it can no longer accept a single-architecture upgrade payload. 10.1. Migrating to a cluster with multi-architecture compute machines using the CLI Prerequisites You have access to the cluster as a user with the cluster-admin role. Your OpenShift Container Platform version is up to date to at least version 4.13.0. For more information on how to update your cluster version, see Updating a cluster using the web console or Updating a cluster using the CLI . You have installed the OpenShift CLI ( oc ) that matches the version for your current cluster. Your oc client is updated to at least version 4.13.0. Your OpenShift Container Platform cluster is installed on either the AWS or Azure platform. For more information on selecting a supported platform for your cluster installation, see Selecting a cluster installation type . Procedure Verify that the RetrievedUpdates condition is True in the Cluster Version Operator (CVO) by running the following command: USD oc get clusterversion/version -o=jsonpath="{.status.conditions[?(.type=='RetrievedUpdates')].status}" If the RetrievedUpdates condition is False , you can find supplemental information regarding the failure by using the following command: USD oc adm upgrade For more information about cluster version condition types, see Understanding cluster version condition types . If the condition RetrievedUpdates is False , change the channel to stable-<4.y> or fast-<4.y> with the following command: USD oc adm upgrade channel <channel> After setting the channel, verify if RetrievedUpdates is True . For more information about channels, see Understanding update channels and releases . Migrate to the multi-architecture payload with the following command: USD oc adm upgrade --to-multi-arch Verification You can monitor the migration by running the following command: USD oc adm upgrade Important Machine launches may fail as the cluster settles into the new state. To notice and recover when machines fail to launch, we recommend deploying machine health checks. For more information about machine health checks and how to deploy them, see About machine health checks . The migrations must be complete and all the cluster operators must be stable before you can add compute machine sets with different architectures to your cluster. Additional resources Configuring multi-architecture compute machines on an OpenShift Container Platform cluster Updating a cluster using the web console Updating a cluster using the CLI Understanding cluster version condition types Understanding update channels and releases Selecting a cluster installation type About machine health checks | [
"oc get clusterversion/version -o=jsonpath=\"{.status.conditions[?(.type=='RetrievedUpdates')].status}\"",
"oc adm upgrade",
"oc adm upgrade channel <channel>",
"oc adm upgrade --to-multi-arch",
"oc adm upgrade"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/updating_clusters/migrating-clusters-to-multi-payload |
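A quick post-migration check is sketched below. The output-format flags and the nodeInfo.architecture field are standard oc/Kubernetes conventions rather than commands taken from this procedure, and they assume you still have cluster-admin access:

# Sketch: confirm the CPU architecture reported by each node once new compute machine sets join.
oc get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture
# Keep monitoring until the migration and all cluster operators report a stable state.
oc adm upgrade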
Chapter 2. Breaking changes | Chapter 2. Breaking changes This section lists breaking changes in Red Hat Developer Hub 1.3. 2.1. The 'dynamic-plugins' config map is named dynamically Before this update, the dynamic-plugins ConfigMap name was hardcoded. Therefore, it was not possible to install two Red Hat Developer Hub Helm charts in the same namespace. With this update, the dynamic-plugins ConfigMap is named dynamically based on the deployment name. This naming method is similar to the way that all other component names are generated. When upgrading from a chart you might need to manually update that section of your values.yaml file to pull in the correct ConfigMap. Additional resources RHIDP-3048 2.2. Signing in without user in the software catalog is now disabled by default By default, it is now required for the user entity to exist in the software catalog to allow sign in. This is required for production ready deployments since identities need to exist and originate from a trusted source (i.e. the Identity Provider) in order for security controls such as RBAC and Audit logging to be effective. To bypass this, enable the dangerouslySignInWithoutUserInCatalog configuration that allows sign in without the user being in the catalog. Enabling this option is dangerous as it might allow unauthorized users to gain access. Additional resources RHIDP-3074 2.3. Red Hat and Community Technology Preview (TP) plugins and actions are disabled by default Before this update, some Red Hat and Community Technology Preview (TP) plugins and actions were enabled by default: Technology Preview plugins @backstage-community/plugin-catalog-backend-module-scaffolder-relation-processor (changing in RHIDP-3643) Community Support plugins @backstage/plugin-scaffolder-backend-module-azure @backstage/plugin-scaffolder-backend-module-bitbucket-cloud @backstage/plugin-scaffolder-backend-module-bitbucket-server @backstage/plugin-scaffolder-backend-module-gerrit @backstage/plugin-scaffolder-backend-module-github @backstage/plugin-scaffolder-backend-module-gitlab @roadiehq/scaffolder-backend-module-http-request @roadiehq/scaffolder-backend-module-utils With this update, all plugins included under the Technology Preview scope of support , whether from Red Hat or the community, are disabled by default. Procedure If your workload requires these plugins, enable them in your custom resource or ConfigMap using disabled: false . Additional resources RHIDP-3187 2.4. Plugins with updated scope With this update, three plugins previously under the @janus-idp scope have moved to @backstage-community : RHDH 1.2 Plugin Name RHDH 1.3 Plugin Name @janus-idp/backstage-plugin-argocd @backstage-community/plugin-redhat-argocd @janus-idp/backstage-plugin-3scale-backend @backstage-community/plugin-3scale-backend @janus-idp/backstage-plugin-catalog-backend-module-scaffolder-relation-processor @backstage-community/plugin-catalog-backend-module-scaffolder-relation-processor As the scope of the plugins has been updated, the dynamic plugin configuration has also changed. RHDH 1.2 Configuration RHDH 1.3 Configuration dynamic-plugins.default.yaml dynamic-plugins.default.yaml Procedure If your workload requires plugins with an updated scope, revise your configuration to use the latest plugins from the new scope. Additional resources RHIDP-4293 | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/release_notes/breaking-changes |
22.16.12. Configuring Symmetric Authentication Using a Key | 22.16.12. Configuring Symmetric Authentication Using a Key To configure symmetric authentication using a key, add the following option to the end of a server or peer command: key number where number is in the range 1 to 65534 inclusive. This option enables the use of a message authentication code ( MAC ) in packets. This option is for use with the peer , server , broadcast , and manycastclient commands. The option can be used in the /etc/ntp.conf file as follows: See also Section 22.6, "Authentication Options for NTP" . | [
"server 192.168.1.1 key 10 broadcast 192.168.1.255 key 20 manycastclient 239.255.254.254 key 30"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2_Configuring_Symmetric_Authentication_Using_a_Key |
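The key ID referenced by the key option has to exist and be trusted on both sides; the sketch below fills in that companion configuration with an illustrative key ID and secret (neither appears in this section) and assumes the default RHEL 6 keys file location:

# Define key 10 in the keys file; the format is <id> <type> <secret>, with M selecting an MD5 key.
echo "10 M EXAMPLEsecret" >> /etc/ntp/keys
# Point ntpd at the keys file, mark key 10 as trusted, then restart the service.
echo "keys /etc/ntp/keys" >> /etc/ntp.conf
echo "trustedkey 10" >> /etc/ntp.conf
service ntpd restart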
7. Clustering | 7. Clustering Clusters are multiple computers (nodes) working in concert to increase reliability, scalability, and availability to critical production services. High Availability using Red Hat Enterprise Linux 6 can be deployed in a variety of configurations to suit varying needs for performance, high-availability, load balancing, and file sharing. Note The Cluster Suite Overview document provides an overview of Red Hat Cluster Suite for Red Hat Enterprise Linux 6. Additionally, the High Availability Administration document describes the configuration and management of Red Hat cluster systems for Red Hat Enterprise Linux 6. 7.1. Corosync Cluster Engine Red Hat Enterprise Linux 6 utilizes the Corosync Cluster Engine for core cluster functionality. 7.2. Unified Logging Configuration The various daemons that High Availability employs now utilize a shared unified logging configuration. This allows system administrators to enable, capture and read cluster system logs via a single command in the cluster configuration. 7.3. High Availability Administration Conga is an integrated set of software components that provides centralized configuration and management for Red Hat Enterprise Linux High Availability. One of the primary components of Conga is luci, a server that runs on one computer and communicates with multiple clusters and computers. In Red Hat Enterprise Linux 6 the web interface that is used to interact with luci has been redesigned. 7.4. General High Availability Improvements In addition to the features and improvements detailed above, the following features and enhancements to clustering have been implemented for Red Hat Enterprise Linux 6. Enhanced support for Internet Protocol version 6 (IPv6) SCSI persistent reservation fencing support is improved. Virtualized KVM guests can now be run as managed services. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_release_notes/clustering |
Chapter 6. Config [operator.openshift.io/v1] | Chapter 6. Config [operator.openshift.io/v1] Description Config specifies the behavior of the config operator which is responsible for creating the initial configuration of other components on the cluster. The operator also handles installation, migration or synchronization of cloud configurations for AWS and Azure cloud based clusters Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the Config Operator. status object status defines the observed status of the Config Operator. 6.1.1. .spec Description spec is the specification of the desired behavior of the Config Operator. Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 6.1.2. .status Description status defines the observed status of the Config Operator. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. 
observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 6.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 6.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Required type Property Type Description lastTransitionTime string message string reason string status string type string 6.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 6.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 6.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/configs DELETE : delete collection of Config GET : list objects of kind Config POST : create a Config /apis/operator.openshift.io/v1/configs/{name} DELETE : delete a Config GET : read the specified Config PATCH : partially update the specified Config PUT : replace the specified Config /apis/operator.openshift.io/v1/configs/{name}/status GET : read status of the specified Config PATCH : partially update status of the specified Config PUT : replace status of the specified Config 6.2.1. /apis/operator.openshift.io/v1/configs HTTP method DELETE Description delete collection of Config Table 6.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Config Table 6.2. HTTP responses HTTP code Reponse body 200 - OK ConfigList schema 401 - Unauthorized Empty HTTP method POST Description create a Config Table 6.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.4. Body parameters Parameter Type Description body Config schema Table 6.5. HTTP responses HTTP code Reponse body 200 - OK Config schema 201 - Created Config schema 202 - Accepted Config schema 401 - Unauthorized Empty 6.2.2. /apis/operator.openshift.io/v1/configs/{name} Table 6.6. Global path parameters Parameter Type Description name string name of the Config HTTP method DELETE Description delete a Config Table 6.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Config Table 6.9. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Config Table 6.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.11. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Config Table 6.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.13. Body parameters Parameter Type Description body Config schema Table 6.14. HTTP responses HTTP code Reponse body 200 - OK Config schema 201 - Created Config schema 401 - Unauthorized Empty 6.2.3. /apis/operator.openshift.io/v1/configs/{name}/status Table 6.15. Global path parameters Parameter Type Description name string name of the Config HTTP method GET Description read status of the specified Config Table 6.16. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Config Table 6.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.18. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Config Table 6.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.20. Body parameters Parameter Type Description body Config schema Table 6.21. HTTP responses HTTP code Reponse body 200 - OK Config schema 201 - Created Config schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/operator_apis/config-operator-openshift-io-v1 |
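In practice the endpoints listed above are usually exercised through oc rather than raw HTTP calls. A minimal sketch, assuming cluster-admin access and the single cluster-scoped instance named cluster:

# Read the one supported Config instance (equivalent to a GET on /apis/operator.openshift.io/v1/configs/cluster).
oc get configs.operator.openshift.io cluster -o yaml
# Equivalent to a PATCH on the same endpoint: set spec.logLevel to one of the documented
# values (Normal, Debug, Trace, TraceAll).
oc patch configs.operator.openshift.io cluster --type=merge -p '{"spec":{"logLevel":"Debug"}}'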
Chapter 11. Optimizing networking | Chapter 11. Optimizing networking The OpenShift SDN uses OpenvSwitch, virtual extensible LAN (VXLAN) tunnels, OpenFlow rules, and iptables. This network can be tuned by using jumbo frames, network interface controllers (NIC) offloads, multi-queue, and ethtool settings. OVN-Kubernetes uses Geneve (Generic Network Virtualization Encapsulation) instead of VXLAN as the tunnel protocol. VXLAN provides benefits over VLANs, such as an increase in networks from 4096 to over 16 million, and layer 2 connectivity across physical networks. This allows for all pods behind a service to communicate with each other, even if they are running on different systems. VXLAN encapsulates all tunneled traffic in user datagram protocol (UDP) packets. However, this leads to increased CPU utilization. Both these outer- and inner-packets are subject to normal checksumming rules to guarantee data is not corrupted during transit. Depending on CPU performance, this additional processing overhead can cause a reduction in throughput and increased latency when compared to traditional, non-overlay networks. Cloud, VM, and bare metal CPU performance can be capable of handling much more than one Gbps network throughput. When using higher bandwidth links such as 10 or 40 Gbps, reduced performance can occur. This is a known issue in VXLAN-based environments and is not specific to containers or OpenShift Container Platform. Any network that relies on VXLAN tunnels will perform similarly because of the VXLAN implementation. If you are looking to push beyond one Gbps, you can: Evaluate network plugins that implement different routing techniques, such as border gateway protocol (BGP). Use VXLAN-offload capable network adapters. VXLAN-offload moves the packet checksum calculation and associated CPU overhead off of the system CPU and onto dedicated hardware on the network adapter. This frees up CPU cycles for use by pods and applications, and allows users to utilize the full bandwidth of their network infrastructure. VXLAN-offload does not reduce latency. However, CPU utilization is reduced even in latency tests. 11.1. Optimizing the MTU for your network There are two important maximum transmission units (MTUs): the network interface controller (NIC) MTU and the cluster network MTU. The NIC MTU is only configured at the time of OpenShift Container Platform installation. The MTU must be less than or equal to the maximum supported value of the NIC of your network. If you are optimizing for throughput, choose the largest possible value. If you are optimizing for lowest latency, choose a lower value. The OpenShift SDN network plugin overlay MTU must be less than the NIC MTU by 50 bytes at a minimum. This accounts for the SDN overlay header. So, on a normal ethernet network, this should be set to 1450 . On a jumbo frame ethernet network, this should be set to 8950 . These values should be set automatically by the Cluster Network Operator based on the NIC's configured MTU. Therefore, cluster administrators do not typically update these values. Amazon Web Services (AWS) and bare-metal environments support jumbo frame ethernet networks. This setting will help throughput, especially with transmission control protocol (TCP). For OVN and Geneve, the MTU must be less than the NIC MTU by 100 bytes at a minimum. Note This 50 byte overlay header is relevant to the OpenShift SDN network plugin. Other SDN solutions might require the value to be more or less. 11.2. 
Recommended practices for installing large scale clusters When installing large clusters or scaling the cluster to larger node counts, set the cluster network cidr accordingly in your install-config.yaml file before you install the cluster: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 The default cluster network cidr 10.128.0.0/14 cannot be used if the cluster size is more than 500 nodes. It must be set to 10.128.0.0/12 or 10.128.0.0/10 to get to larger node counts beyond 500 nodes. 11.3. Impact of IPsec Because encrypting and decrypting node hosts uses CPU power, performance is affected both in throughput and CPU usage on the nodes when encryption is enabled, regardless of the IP security system being used. IPSec encrypts traffic at the IP payload level, before it hits the NIC, protecting fields that would otherwise be used for NIC offloading. This means that some NIC acceleration features might not be usable when IPSec is enabled and will lead to decreased throughput and increased CPU usage. Additional resources Modifying advanced network configuration parameters Configuration parameters for the OVN-Kubernetes default CNI network provider Configuration parameters for the OpenShift SDN default CNI network provider | [
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/scalability_and_performance/optimizing-networking |
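One way to eyeball the MTU relationship from section 11.1 on a running cluster is sketched below; the node name is a placeholder, the interface names depend on the network plugin in use, and neither command comes from this chapter:

# Show every interface MTU on a node; compare the physical NIC against the overlay
# requirement (NIC MTU minus at least 50 bytes for OpenShift SDN, or 100 bytes for OVN/Geneve).
oc debug node/<node_name> -- chroot /host ip -o link show
# The configured cluster network MTU can also be read from the network operator configuration.
oc get network.operator.openshift.io cluster -o yaml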
Chapter 1. Back Up the Undercloud | Chapter 1. Back Up the Undercloud This guide describes how to back up the undercloud used in the Red Hat OpenStack Platform director. The undercloud is usually a single physical node (although high availability options exist using a two-node pacemaker cluster that runs director in a VM) that is used to deploy and manage your OpenStack environment. 1.1. Backup Considerations Formulate a robust backup and recovery policy in order to minimize data loss and system downtime. When determining your backup strategy, you will need to answer the following questions: How quickly will you need to recover from data loss? If you cannot have data loss at all, you should include high availability in your deployment strategy, in addition to using backups. You'll need to consider how long it will take to obtain the physical backup media (including from an offsite location, if used), and how many tape drives are available for restore operations. How many backups should you keep? You will need to consider legal and regulatory requirements that affect how long you are expected to store data. Should your backups be kept off-site? Storing your backup media offsite will help mitigate the risk of catastrophe befalling your physical location. How often should backups be tested? A robust backup strategy will include regular restoration tests of backed up data. This can help validate that the correct data is still being backed up, and that no corruption is being introduced during the backup or restoration processes. These drills should assume that they are being performed under actual disaster recovery conditions. What will be backed up? The following sections describe database and file-system backups for components, as well as information on recovering backups. 1.2. High Availability of the Undercloud node You are free to consider your preferred high availability (HA) options for the Undercloud node; Red Hat does not prescribe any particular requirements for this. For example, you might consider running your Undercloud node as a highly available virtual machine within Red Hat Enterprise Virtualization (RHEV). You might also consider using physical nodes with Pacemaker providing HA for the required services. When approaching high availability for your Undercloud node, you should consult the documentation and good practices of the solution you decide works best for your environment. 1.3. Backing up a containerized undercloud A full undercloud backup includes the following databases and files: All MariaDB databases on the undercloud node MariaDB configuration file on the undercloud so that you can accurately restore databases The configuration data: /etc Log data: /var/log Image data: /var/lib/glance Certificate generation data if using SSL: /var/lib/certmonger Any container image data: /var/lib/containers and /var/lib/image-serve All swift data: /srv/node All data in the stack user home directory: /home/stack Note Confirm that you have sufficient disk space available on the undercloud before you perform the backup process. Expect the archive file to be at least 3.5 GB. Procedure Log into the undercloud as the root user. Retrieve the password: Perform the backup: Copy the root configuration file for the database: Archive the database backup and the configuration files: The --ignore-failed-read option skips any directory that does not apply to your undercloud. The --xattrs option includes extended attributes, which are required to store metadata for Object Storage (swift).
This creates a file named undercloud-backup-<date>.tar , where <date> is the system date. Copy this tar file to a secure location. 1.4. Validate the Completed Backup You can validate the success of the completed backup process by running and validating the restore process. See the section for further details on restoring from backup. | [
"/bin/hiera -c /etc/puppet/hiera.yaml mysql::server::root_password",
"podman exec mysql bash -c \"mysqldump -uroot -pPASSWORD --opt --all-databases\" > /root/undercloud-all-databases.sql",
"cp /var/lib/config-data/puppet-generated/mysql/root/.my.cnf ~/.",
"cd /backup tar --xattrs --xattrs-include='*.*' --ignore-failed-read -cf undercloud-backup-`date +%F`.tar /root/undercloud-all-databases.sql /etc /var/log /var/lib/glance /var/lib/certmonger /var/lib/containers /var/lib/image-serve /var/lib/config-data /srv/node /root /home/stack"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/back_up_and_restore_the_director_undercloud/back_up_the_undercloud |
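A quick sanity check before copying the archive off the undercloud is sketched below; it only reuses the backup path from the procedure above, and the file name needs adjusting if the date has rolled over since the archive was created:

# List the archive members; tar exits non-zero here if the archive is truncated or corrupt.
tar -tf /backup/undercloud-backup-$(date +%F).tar > /tmp/backup-contents.txt
# Spot-check that the database dump and key directories made it into the archive.
grep -E 'undercloud-all-databases.sql|home/stack' /tmp/backup-contents.txt | head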
probe::ioscheduler_trace.unplug_io | probe::ioscheduler_trace.unplug_io Name probe::ioscheduler_trace.unplug_io - Fires when a request queue is unplugged; Synopsis Values name Name of the probe point rq_queue request queue Description Fires either when the number of pending requests in the queue exceeds the threshold, or upon expiration of the timer that was activated when the queue was plugged. | [
"ioscheduler_trace.unplug_io"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-ioscheduler-trace-unplug-io |
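A minimal sketch of attaching to this probe point from the command line; the message format is arbitrary and the run assumes a host with SystemTap and the matching kernel debug information installed:

# Print the probe name and the request queue address each time a queue is unplugged.
stap -e 'probe ioscheduler_trace.unplug_io { printf("%s: rq_queue=%p\n", name, rq_queue) }'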
Argo CD instance | Argo CD instance Red Hat OpenShift GitOps 1.12 Installing and deploying Argo CD instances, enabling notifications with an Argo CD instance, and configuring the NotificationsConfiguration CR Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.12/html/argo_cd_instance/index |
1.3.2. Server Weight and Scheduling | 1.3.2. Server Weight and Scheduling The administrator of LVS can assign a weight to each node in the real server pool. This weight is an integer value which is factored into any weight-aware scheduling algorithms (such as weighted least-connections) and helps the LVS router more evenly load hardware with different capabilities. Weights work as a ratio relative to one another. For instance, if one real server has a weight of 1 and the other server has a weight of 5, then the server with a weight of 5 gets 5 connections for every 1 connection the other server gets. The default value for a real server weight is 1. Although adding weight to varying hardware configurations in a real server pool can help load-balance the cluster more efficiently, it can cause temporary imbalances when a real server is introduced to the real server pool and the virtual server is scheduled using weighted least-connections. For example, suppose there are three servers in the real server pool. Servers A and B are weighted at 1 and the third, server C, is weighted at 2. If server C goes down for any reason, servers A and B evenly distribute the abandoned load. However, once server C comes back online, the LVS router sees it has zero connections and floods the server with all incoming requests until it is on par with servers A and B. To prevent this phenomenon, administrators can make the virtual server a quiesce server - anytime a new real server node comes online, the least-connections table is reset to zero and the LVS router routes requests as if all the real servers were newly added to the cluster. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s2-lvs-sched-weight-vsa
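Expressed directly with ipvsadm, the three-server example above looks roughly like the sketch below; the virtual and real server addresses are made up for illustration, and wlc selects the weighted least-connections scheduler discussed here:

# Virtual service scheduled with weighted least-connections; addresses are illustrative only.
ipvsadm -A -t 192.168.0.100:80 -s wlc
ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.1:80 -m -w 1   # server A
ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.2:80 -m -w 1   # server B
ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.3:80 -m -w 2   # server C gets twice the connections of A or B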
5.233. perl-DBD-Pg | 5.233. perl-DBD-Pg 5.233.1. RHSA-2012:1116 - Moderate: perl-DBD-Pg security update An updated perl-DBD-Pg package that fixes two security issues is now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Perl DBI is a database access Application Programming Interface (API) for the Perl language. perl-DBD-Pg allows Perl applications to access PostgreSQL database servers. Security Fix CVE-2012-1151 Two format string flaws were found in perl-DBD-Pg. A specially-crafted database warning or error message from a server could cause an application using perl-DBD-Pg to crash or, potentially, execute arbitrary code with the privileges of the user running the application. All users of perl-DBD-Pg are advised to upgrade to this updated package, which contains a backported patch to fix these issues. Applications using perl-DBD-Pg must be restarted for the update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/perl-dbd-pg |
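On an affected system the update itself is a routine package operation; a sketch, assuming the host is subscribed to the appropriate update channels, with a placeholder service name standing in for whatever Perl application loads the driver:

# Install the updated package, then restart any application that uses DBD::Pg.
yum update perl-DBD-Pg
service my-perl-app restart   # placeholder name; restart each affected application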
7.7. Downgrading SSSD | 7.7. Downgrading SSSD When downgrading - either downgrading the version of SSSD or downgrading the operating system itself - the existing SSSD cache needs to be removed. If the cache is not removed, the SSSD process dies but a PID file remains. The SSSD logs show that it cannot connect to any of its associated domains because the cache version is unrecognized. Users are then no longer recognized and are unable to authenticate to domain services and hosts. After downgrading the SSSD version: Delete the existing cache database files. Restart the SSSD process. | [
"(Wed Nov 28 21:25:50 2012) [sssd] [sysdb_domain_init_internal] (0x0010): Unknown DB version [0.14], expected [0.10] for domain AD!",
"rm -rf /var/lib/sss/db/*",
"systemctl restart sssd.service"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/sssd-downgrade |
Appendix A. Reference Material | Appendix A. Reference Material A.1. Server Runtime Arguments The application server startup script accepts arguments and switches at runtime. This allows the server to start under alternative configurations to those defined in the standalone.xml , domain.xml , and host.xml configuration files. Alternative configurations might include starting the server with an alternative socket bindings set or a secondary configuration. The available parameters list can be accessed by passing the help switch -h or --help at startup. Table A.1. Runtime Switches and Arguments Argument or Switch Operating Mode Description --admin-only Standalone Set the server's running type to ADMIN_ONLY . This will cause it to open administrative interfaces and accept management requests, but not start other runtime services or accept end user requests. Note that it is recommended to use --start-mode=admin-only instead. --admin-only Domain Set the host controller's running type to ADMIN_ONLY causing it to open administrative interfaces and accept management requests but not start servers or, if this host controller is the master for the domain, accept incoming connections from slave host controllers. -b=<value>, -b <value> Standalone, Domain Set system property jboss.bind.address , which is used in configuring the bind address for the public interface. This defaults to 127.0.0.1 if no value is specified. See the -b<interface>=<value> entry for setting the bind address for other interfaces. -b<interface>=<value> Standalone, Domain Set system property jboss.bind.address.<interface> to the given value. For example, -bmanagement= IP_ADDRESS --backup Domain Keep a copy of the persistent domain configuration even if this host is not the domain controller. -c=<config>, -c <config> Standalone Name of the server configuration file to use. The default is standalone.xml . -c=<config>, -c <config> Domain Name of the server configuration file to use. The default is domain.xml . --cached-dc Domain If the host is not the domain controller and cannot contact the domain controller at boot, boot using a locally cached copy of the domain configuration. --debug [<port>] Standalone Activate debug mode with an optional argument to specify the port. Only works if the launch script supports it. -D<name>[=<value>] Standalone, Domain Set a system property. --domain-config=<config> Domain Name of the server configuration file to use. The default is domain.xml . --git-repo Standalone The location of the Git repository that is used to manage and persist server configuration data. This can be local if you want to store it locally, or the URL to a remote repository. --git-branch Standalone The branch or tag name in the Git repository to use. This argument should name an existing branch or tag name as it will not be created if it does not exist. If you use a tag name, you put the repository in a detached HEAD state, meaning future commits are not attached to any branches. Tag names are read-only and are normally used when you need to replicate a configuration across several nodes. --git-auth Standalone The URL to an Elytron configuration file that contains the credentials to be used when connecting to a remote Git repository. This argument is required if your remote Git repository requires authentication. Elytron does not support SSH. Therefore, only default SSH authentication is supported using private keys without a password. This argument is not used with a local repository. 
-h, --help Standalone, Domain Display the help message and exit. --host-config=<config> Domain Name of the host configuration file to use. The default is host.xml . --interprocess-hc-address=<address> Domain Address on which the host controller should listen for communication from the process controller. --interprocess-hc-port=<port> Domain Port on which the host controller should listen for communication from the process controller. --master-address=<address> Domain Set system property jboss.domain.master.address to the given value. In a default slave host controller configuration, this is used to configure the address of the master host controller. --master-port=<port> Domain Set system property jboss.domain.master.port to the given value. In a default slave host controller configuration, this is used to configure the port used for native management communication by the master host controller. --read-only-server-config=<config> Standalone Name of the server configuration file to use. This differs from --server-config and -c in that the original file is never overwritten. --read-only-domain-config=<config> Domain Name of the domain configuration file to use. This differs from --domain-config and -c in that the initial file is never overwritten. --read-only-host-config=<config> Domain Name of the host configuration file to use. This differs from --host-config in that the initial file is never overwritten. -P=<url>, -P <url>, --properties=<url> Standalone, Domain Load system properties from the given URL. --pc-address=<address> Domain Address on which the process controller listens for communication from processes it controls. --pc-port=<port> Domain Port on which the process controller listens for communication from processes it controls. -S<name>[=<value>] Standalone Set a security property. -secmgr Standalone, Domain Runs the server with a security manager installed. --server-config=<config> Standalone Name of the server configuration file to use. The default is standalone.xml . --start-mode=<mode> Standalone Set the start mode of the server. This option cannot be used in conjunction with --admin-only . Valid values are: normal : The server will start normally. admin-only : The server will only open administrative interfaces and accept management requests but not start other runtime services or accept end user requests. suspend : The server will start in suspended mode and will not service requests until it has been resumed. -u=<value>, -u <value> Standalone, Domain Set system property jboss.default.multicast.address , which is used in configuring the multicast address in the socket-binding elements in the configuration files. This defaults to 230.0.0.4 if no value is specified. -v, -V, --version Standalone, Domain Display the application server version and exit. Warning The configuration files that ship with JBoss EAP are set up to handle the behavior of the switches, for example, -b and -u . If you change your configuration files to no longer use the system property controlled by the switch, then adding it to the launch command will have no effect. A.2. RPM Service Configuration Files The RPM installation of JBoss EAP includes two additional configuration files compared to a ZIP or installer installation. These files are used by the service init script to specify the JBoss EAP launch environment. The location of these service configuration files differ for Red Hat Enterprise Linux 6, and Red Hat Enterprise Linux 7 and later versions. 
Important For Red Hat Enterprise Linux 7 and later, RPM service configuration files are loaded using systemd , so variable expressions are not expanded. Table A.2. RPM Configuration Files for Red Hat Enterprise Linux 6 File Description /etc/sysconfig/eap7-standalone Settings specific to standalone JBoss EAP servers on Red Hat Enterprise Linux 6. /etc/sysconfig/eap7-domain Settings specific to JBoss EAP running as a managed domain on Red Hat Enterprise Linux 6. Table A.3. RPM Configuration Files for Red Hat Enterprise Linux 7 and later File Description /etc/opt/rh/eap7/wildfly/eap7-standalone.conf Settings specific to standalone JBoss EAP servers on Red Hat Enterprise Linux 7 and later. /etc/opt/rh/eap7/wildfly/eap7-domain.conf Settings specific to JBoss EAP running as a managed domain on Red Hat Enterprise Linux 7 and later. A.3. RPM Service Configuration Properties The following table shows a list of available configuration properties for the JBoss EAP RPM service along with their default values. Note If a property has the same name in both the RPM service configuration file, such as /etc/sysconfig/eap7-standalone , and in the JBoss EAP startup configuration file, such as EAP_HOME /bin/standalone.conf , the value that takes precedence is the one in the JBoss EAP startup configuration file. One such property is JAVA_HOME . Table A.4. RPM Service Configuration Properties Property Description JAVA_HOME The directory where your Java Runtime Environment is installed. Default value: /usr/lib/jvm/jre JAVAPTH The path where the Java executable files are installed. Default value: $JAVA_HOME/bin WILDFLY_STARTUP_WAIT The number of seconds that the init script will wait until confirming that the server has launched successfully after receiving a start or restart command. This property only applies to Red Hat Enterprise Linux 6. Default value: 60 WILDFLY_SHUTDOWN_WAIT The number of seconds that the init script will wait for the server to shut down before continuing when it receives a stop or restart command. This property only applies to Red Hat Enterprise Linux 6. Default value: 20 WILDFLY_CONSOLE_LOG The file that the CONSOLE log handler will be redirected to. Default value: /var/opt/rh/eap7/log/wildfly/standalone/console.log for a standalone server, or /var/opt/rh/eap7/log/wildfly/domain/console.log for a managed domain. WILDFLY_SH The script which is used to launch the JBoss EAP server. Default value: /opt/rh/eap7/root/usr/share/wildfly/bin/standalone.sh for a standalone server, or /opt/rh/eap7/root/usr/share/wildfly/bin/domain.sh for a managed domain. WILDFLY_SERVER_CONFIG The server configuration file to use. There is no default for this property. Either standalone.xml or domain.xml can be defined at start. WILDFLY_HOST_CONFIG For a managed domain, this property allows a user to specify the host configuration file, such as host.xml . It has no value set as the default. WILDFLY_MODULEPATH The path of the JBoss EAP module directory. Default value: /opt/rh/eap7/root/usr/share/wildfly/modules WILDFLY_BIND Sets the jboss.bind.address system property, which is used to configure the bind address for the public interface. This defaults to 0.0.0.0 if no value is specified. WILDFLY_OPTS Additional arguments to include on startup. For example: A.4. Overview of JBoss EAP Subsystems The table below gives a brief description of the JBoss EAP subsystems. Table A.5. JBoss EAP Subsystems JBoss EAP Subsystem Description batch-jberet Configure an environment for running batch applications and manage batch jobs .
bean-validation Configure bean validation for validating Java object data. core-management Register listeners for server lifecycle events and track configuration changes . datasources Create and configure datasources and manage JDBC database drivers . deployment-scanner Configure deployment scanners to monitor particular locations for applications to deploy. ee Configure common functionality in the Jakarta EE platform, such as defining global modules , enabling descriptor-based property replacement , and configuring default bindings. ejb3 Configure Jakarta Enterprise Beans, including session and message-driven beans. More information for the ejb3 subsystem can be found in Developing Jakarta Enterprise Beans Applications for JBoss EAP. elytron Configure server and application security. More information on the elytron subsystem can be found in Security Architecture for JBoss EAP. iiop-openjdk Configure Common Object Request Broker Architecture (CORBA) services for JTS transactions and other ORB services , including security. In JBoss EAP 6, this functionality was contained in the jacorb subsystem. infinispan Configure caching functionality for JBoss EAP high availability services. io Define workers and buffer pools to be used by other subsystems. jaxrs Enable the deployment and functionality of Jakarta RESTful Web Services applications. jca Configure the general settings for the Jakarta Connectors container and resource adapter deployments. jdr Enable the gathering of diagnostic data to aid in troubleshooting. JBoss EAP subscribers can provide this information to Red Hat when requesting support. jgroups Configure the protocol stacks and communication mechanisms for how servers in a cluster talk to each other. jmx Configure remote Jakarta Management access. jpa Manages the Jakarta Persistence 2.2 container-managed requirements and allows you to deploy persistent unit definitions, annotations, and descriptors. More information for the jpa subsystem can be found in the JBoss EAP Development Guide . jsf Manage Jakarta Server Faces implementations. jsr77 Provide Jakarta EE management capabilities defined by the Jakarta Management specification . logging Configure system and application-level logging through a system of log categories and log handlers . mail Configure mail server attributes and custom mail transports to create a mail service that allows applications deployed to JBoss EAP to send mail using that service. messaging-activemq Configure Jakarta Messaging destinations, connection factories, and other settings for Artemis, the integrated messaging provider. In JBoss EAP 6, messaging functionality was contained in the messaging subsystem. More information for the messaging-activemq subsystem can be found in Configuring Messaging for JBoss EAP. metrics Displays base metrics from the management model and Java Virtual Machine (JVM) MBeans. JBoss EAP no longer includes the microprofile-smallrye-metrics subsystem, so application metrics are no longer available. health Exposes the health checks for the JBoss EAP runtime. JBoss EAP no longer includes the microprofile-smallrye-health subsystem, so application healthiness checks are no longer available. modcluster Configure the server-side mod_cluster worker node . naming Bind entries into global JNDI namespaces and configure the remote JNDI interface. picketlink-federation Configure PicketLink SAML-based single sign-on (SSO). More information on the picketlink-federation subsystem can be found in How To Set Up SSO with SAML v2 for JBoss EAP. 
picketlink-identity-management Configure PicketLink identity management services. This subsystem is unsupported. pojo Enable deployment of applications containing JBoss Microcontainer services, as supported by versions of JBoss EAP. remoting Configure settings for inbound and outbound connections for local and remote services . discovery The discovery subsystem is currently for internal subsystem use only; it is a private API and is not available for public use. request-controller Configure settings to suspend and shut down servers gracefully . resource-adapters Configure and maintain resource adapters for communication between Jakarta EE applications and an Enterprise Information System (EIS) using the Jakarta Connectors specification. rts Unsupported implementation of REST-AT. sar Enable deployment of SAR archives containing MBean services, as supported by versions of JBoss EAP. security Legacy method to configure application security settings. More information on the security subsystem can be found in Security Architecture for JBoss EAP. security-manager Configure Java security policies to be used by the Java Security Manager. More information on the security-manager subsystem can be found in How to Configure Server Security for JBoss EAP. singleton Define singleton policies to configure the behavior of singleton deployments or to create singleton MSC services. More information on the singleton subsystem can be found in the JBoss EAP Development Guide . transactions Configure the Transaction Manager (TM) options, such as timeout values, transaction logging, and whether to use Java Transaction Service (JTS). More information on the transactions subsystem can be found in Managing Transactions on JBoss EAP for JBoss EAP. undertow Configure JBoss EAP's web server and servlet container settings. In JBoss EAP 6, this functionality was contained in the web subsystem. webservices Configure published endpoint addresses and endpoint handler chains, as well as the host name, ports, and WSDL address for the web services provider. More information for the webservices subsystem can be found in Developing Web Services Applications for JBoss EAP. weld Configure Jakarta Contexts and Dependency Injection functionality for JBoss EAP. xts Configure settings for coordinating web services in a transaction. A.5. Add-User Utility Arguments The following table describes the arguments available for the add-user.sh or add-user.bat script, which is a utility for adding new users to the properties file for out-of-the-box authentication. Table A.6. Add-User Command Arguments Command Line Argument Description -a Create a user in the application realm. If omitted, the default is to create a user in the management realm. -dc <value> The domain configuration directory that will contain the properties files. If it is omitted, the default directory is EAP_HOME /domain/configuration/ . -sc <value> An alternative standalone server configuration directory that will contain the properties files. If omitted, the default directory is EAP_HOME /standalone/configuration/ . -up, --user-properties <value> The name of the alternative user properties file. It can be an absolute path or it can be a file name used in conjunction with the -sc or -dc argument that specifies the alternative configuration directory. -g, --group <value> A comma-separated list of groups to assign to this user. -gp, --group-properties <value> The name of the alternative group properties file. 
It can be an absolute path or it can be a file name used in conjunction with the -sc or -dc argument that specifies the alternative configuration directory. -p, --password <value> The password of the user. -u, --user <value> The name of the user. User names can only contain the following characters, in any number and in any order: Alphanumeric characters (a-z, A-Z, 0-9) Dashes (-), periods (.), commas (,), at sign (@) Backslash (\) Equals (=) -r, --realm <value> The name of the realm used to secure the management interfaces. If omitted, the default is ManagementRealm . -s, --silent Run the add-user script with no output to the console. -e, --enable Enable the user. -d, --disable Disable the user. -cw, --confirm-warning Automatically confirm warning in interactive mode. -h, --help Display usage information for the add-user script. -ds, --display-secret Print the secret value in non-interactive mode. A.6. Management Audit Logging Attributes Note Attribute names in these tables are listed as they appear in the management model, for example, when using the management CLI. See the schema definition file located at EAP_HOME /docs/schema/wildfly-config_5_0.xsd to view the elements as they appear in the XML, as there may be differences from the management model. Table A.7. Management Audit Logging: Logger Attributes Attribute Description enabled Whether audit logging is enabled. log-boot Whether operations should be logged on server boot. log-read-only Whether operations that do not modify the configuration or any runtime services should be logged. Table A.8. Management Audit Logging: Log Formatter Attributes Attribute Description compact If true , it will format the JSON on one line. There may still be values containing new lines, so if having the whole record on one line is important, set escape-new-line or escape-control-characters to true . date-format The date format to use as understood by java.text.SimpleDateFormat . This is ignored if include-date is set to false . date-separator The separator between the date and the rest of the formatted log message. This is ignored if include-date is set to false . escape-control-characters If true , it will escape all control characters, ASCII entries with a decimal value greater than 32 , with the ASCII code in octal. For example, a new line becomes #012 . If true , this will override escape-new-line=false . escape-new-line If true , it will escape all new lines with the ASCII code in octal: #012 . include-date Whether or not to include the date in the formatted log record. Table A.9. Management Audit Logging: File Handler Attributes Attribute Description disabled-due-to-failure Whether this handler has been disabled due to logging failures (read-only). failure-count The number of logging failures since the handler was initialized (read-only). formatter The JSON formatter used to format the log messages. max-failure-count The maximum number of logging failures before disabling this handler. path The path of the audit log file. relative-to The name of another previously named path, or of one of the standard paths provided by the system. If relative-to is provided, the value of the path attribute is treated as relative to the path specified by this attribute. rotate-at-startup Whether the old log file should be rotated at server startup. Table A.10. Management Audit Logging: Syslog Handler Attributes Attribute Description app-name The application name to add to the syslog records as defined in section 6.2.5 of RFC-5424 . 
If not specified it will default to the name of the product. disabled-due-to-failure Whether this handler has been disabled due to logging failures (read-only). facility The facility to use for syslog logging as defined in section 6.2.1 of RFC-5424 and section 4.1.1 of RFC-3164 . failure-count The number of logging failures since the handler was initialized (read-only). formatter The JSON formatter used to format the log messages. max-failure-count The maximum number of logging failures before disabling this handler. max-length The maximum length in bytes a log message, including the header, is allowed to be. If undefined, it will default to 1024 bytes if the syslog-format is RFC3164 , or 2048 bytes if the syslog-format is RFC5424 . protocol The protocol to use for the syslog handler. Must be one and only one of udp , tcp or tls . syslog-format The syslog format: RFC5424 or RFC3164 . truncate Whether or not a message, including the header, should truncate the message if the length in bytes is greater than the value of the max-length attribute. If set to false , messages will be split and sent with the same header values. Note Syslog servers vary in their implementation, so not all settings are applicable to all syslog servers. Testing has been conducted using the rsyslog syslog implementation. This table lists only the high-level attributes. Each attribute has configuration parameters, and some have child configuration parameters. A.7. Interface Attributes Note Attribute names in this table are listed as they appear in the management model, for example, when using the management CLI. See the schema definition file located at EAP_HOME /docs/schema/wildfly-config_5_0.xsd to view the elements as they appear in the XML, as there may be differences from the management model. Table A.11. Interface Attributes and Values Interface Element Description any Element indicating that part of the selection criteria for an interface should be that it meets at least one, but not necessarily all, of the nested set of criteria. any-address Empty element indicating that sockets using this interface should be bound to a wildcard address. The IPv6 wildcard address ( :: ) will be used unless the java.net.preferIPv4Stack system property is set to true, in which case the IPv4 wildcard address ( 0.0.0.0 ) will be used. If a socket is bound to an IPv6 anylocal address on a dual-stack machine, it can accept both IPv6 and IPv4 traffic; if it is bound to an IPv4 (IPv4-mapped) anylocal address, it can only accept IPv4 traffic. inet-address Either an IP address in IPv6 or IPv4 dotted decimal notation, or a host name that can be resolved to an IP address. link-local-address Empty element indicating that part of the selection criteria for an interface should be whether or not an address associated with it is link-local. loopback Empty element indicating that part of the selection criteria for an interface should be whether or not it is a loopback interface. loopback-address A loopback address that may not actually be configured on the machine's loopback interface. Differs from inet-address type in that the given value will be used even if no NIC can be found that has the IP address associated with it. multicast Empty element indicating that part of the selection criteria for an interface should be whether or not it supports multicast. name The name of the interface. nic The name of a network interface (e.g. eth0, eth1, lo). 
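For example, the following management CLI command is a minimal sketch of declaring an interface that is selected by NIC name; the interface name internal and the NIC name eth1 are illustrative values, not defaults from this guide:
/interface=internal:add(nic=eth1)
An equivalent definition could instead use inet-address, nic-match, or subnet-match as the selection criterion, depending on how addresses are assigned on the host.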
nic-match A regular expression against which the names of the network interfaces available on the machine can be matched to find an acceptable interface. not Element indicating that part of the selection criteria for an interface should be that it does not meet any of the nested set of criteria. point-to-point Empty element indicating that part of the selection criteria for an interface should be whether or not it is a point-to-point interface. public-address Empty element indicating that part of the selection criteria for an interface should be whether or not it has a publicly routable address. site-local-address Empty element indicating that part of the selection criteria for an interface should be whether or not an address associated with it is site-local. subnet-match A network IP address and the number of bits in the address' network prefix, written in slash notation , for example, 192.168.0.0/16 . up Empty element indicating that part of the selection criteria for an interface should be whether or not it is currently up. virtual Empty element indicating that part of the selection criteria for an interface should be whether or not it is a virtual interface. A.8. Socket Binding Attributes Note Attribute names in these tables are listed as they appear in the management model, for example, when using the management CLI. See the schema definition file located at EAP_HOME /docs/schema/wildfly-config_5_0.xsd to view the elements as they appear in the XML, as there may be differences from the management model. The following tables show the attributes that can be configured for each of the three types of socket bindings. socket-binding remote-destination-outbound-socket-binding local-destination-outbound-socket-binding Table A.12. Inbound Socket Binding (socket-binding) Attributes Attribute Description client-mappings Specifies the client mappings for this socket binding. A client connecting to this socket should use the destination address specified in the mapping that matches its desired outbound interface. This allows for advanced network topologies that use either network address translation, or have bindings on multiple network interfaces to function. Each mapping should be evaluated in declared order, with the first successful match used to determine the destination. fixed-port Whether the port value should remain fixed even if numeric offsets are applied to the other sockets in the socket group. interface Name of the interface to which the socket should be bound, or, for multicast sockets, the interface on which it should listen. This should be one of the declared interfaces. If not defined, the value of the default-interface attribute from the enclosing socket binding group will be used. multicast-address Multicast address on which the socket should receive multicast traffic. If unspecified, the socket will not be configured to receive multicast. multicast-port Port on which the socket should receive multicast traffic. Must be configured if multicast-address is configured. name The name of the socket. Services needing to access the socket configuration information will find it using this name. This attribute is required. port Number of the port to which the socket should be bound. Note that this value can be overridden if servers apply a port-offset to increment or decrement all port values. Table A.13. 
Remote Outbound Socket Binding (remote-destination-outbound-socket-binding) Attributes Attribute Description fixed-source-port Whether the port value should remain fixed even if numeric offsets are applied to the other outbound sockets in the socket group. host The host name or IP address of the remote destination to which this outbound socket will connect. port The port number of the remote destination to which the outbound socket should connect. source-interface The name of the interface that will be used for the source address of the outbound socket. source-port The port number that will be used as the source port of the outbound socket. Table A.14. Local Outbound Socket Binding (local-destination-outbound-socket-binding) Attributes Attribute Description fixed-source-port Whether the port value should remain fixed even if numeric offsets are applied to the other outbound sockets in the socket group. socket-binding-ref The name of the local socket binding that will be used to determine the port to which this outbound socket connects. source-interface The name of the interface that will be used for the source address of the outbound socket. source-port The port number that will be used as the source port of the outbound socket. A.9. Default Socket Bindings The following tables show the default socket bindings for each socket binding group. standard-sockets ha-sockets full-sockets full-ha-sockets load-balancer-sockets Table A.15. standard-sockets Socket Binding Port Description ajp 8009 Apache JServ Protocol. Used for HTTP clustering and load balancing. http 8080 The default port for deployed web applications. https 8443 SSL-encrypted connection between deployed web applications and clients. management-http 9990 Used for HTTP communication with the management layer. management-https 9993 Used for HTTPS communication with the management layer. txn-recovery-environment 4712 The Jakarta Transactions recovery manager. txn-status-manager 4713 The Jakarta Transactions / JTS transaction manager. Table A.16. ha-sockets Socket Binding Port Multicast Port Description ajp 8009 Apache JServ Protocol. Used for HTTP clustering and load balancing. http 8080 The default port for deployed web applications. https 8443 SSL-encrypted connection between deployed web applications and clients. jgroups-mping 45700 Multicast. Used to discover initial membership in a HA cluster. jgroups-tcp 7600 Unicast peer discovery in HA clusters using TCP. jgroups-udp 55200 45688 Multicast peer discovery in HA clusters using UDP. management-http 9990 Used for HTTP communication with the management layer. management-https 9993 Used for HTTPS communication with the management layer. modcluster 23364 Multicast port for communication between JBoss EAP and the HTTP load balancer. txn-recovery-environment 4712 The Jakarta Transactions recovery manager. txn-status-manager 4713 The Jakarta Transactions / JTS transaction manager. Table A.17. full-sockets Socket Binding Port Description ajp 8009 Apache JServ Protocol. Used for HTTP clustering and load balancing. http 8080 The default port for deployed web applications. https 8443 SSL-encrypted connection between deployed web applications and clients. iiop 3528 CORBA services for JTS transactions and other ORB-dependent services. iiop-ssl 3529 SSL-encrypted CORBA services. management-http 9990 Used for HTTP communication with the management layer. management-https 9993 Used for HTTPS communication with the management layer. 
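All of the default ports listed in these tables can be shifted by a fixed offset when several server instances run on the same host. For example, a minimal sketch, where the offset value 100 is illustrative:
EAP_HOME/bin/standalone.sh -Djboss.socket.binding.port-offset=100
With this offset applied, http would listen on 8180 and management-http on 10090.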
txn-recovery-environment 4712 The Jakarta Transactions recovery manager. txn-status-manager 4713 The Jakarta Transactions / JTS transaction manager. Table A.18. full-ha-sockets Name Port Multicast Port Description ajp 8009 Apache JServ Protocol. Used for HTTP clustering and load balancing. http 8080 The default port for deployed web applications. https 8443 SSL-encrypted connection between deployed web applications and clients. iiop 3528 CORBA services for JTS transactions and other ORB-dependent services. iiop-ssl 3529 SSL-encrypted CORBA services. jgroups-mping 45700 Multicast. Used to discover initial membership in a HA cluster. jgroups-tcp 7600 Unicast peer discovery in HA clusters using TCP. jgroups-udp 55200 45688 Multicast peer discovery in HA clusters using UDP. management-http 9990 Used for HTTP communication with the management layer. management-https 9993 Used for HTTPS communication with the management layer. modcluster 23364 Multicast port for communication between JBoss EAP and the HTTP load balancer. txn-recovery-environment 4712 The Jakarta Transactions recovery manager. txn-status-manager 4713 The Jakarta Transactions / JTS transaction manager. Table A.19. load-balancer-sockets Name Port Multicast Port Description http 8080 The default port for deployed web applications. https 8443 SSL-encrypted connection between deployed web applications and clients. management-http 9990 Used for HTTP communication with the management layer. management-https 9993 Used for HTTPS communication with the management layer. mcmp-management 8090 The port for the Mod-Cluster Management Protocol (MCMP) connection to transmit lifecycle events. modcluster 23364 Multicast port for communication between JBoss EAP and the HTTP load balancer. A.10. Module Command Arguments The following arguments can be passed to the module add management CLI command: Table A.20. Module Command Arguments Argument Description --absolute-resources Use this argument to specify a list of absolute file system paths to reference from its module.xml file. The files specified are not copied to the module directory. See --resource-delimiter for delimiter details. --allow-nonexistent-resources Use this argument to create empty directories for resources specified by --resources that do not exist. The module add command will fail if there are resources that do not exist and this argument is not used. --dependencies Use this argument to provide a comma-separated list of module names that this module depends on. --export-dependencies Use this argument to specify exported dependencies. --main-class Use this argument to specify the fully qualified class name that declares the module's main method. --module-root-dir Use this argument if you have defined an external JBoss EAP module directory to use instead of the default EAP_HOME /modules/ directory. --module-xml Use this argument to provide a file system path to a module.xml to use for this new module. This file is copied to the module directory. If this argument is not specified, a module.xml file is generated in the module directory. --name Use this argument to provide the name of the module to add. This argument is required. --properties Use this argument to provide a comma-separated list of PROPERTY_NAME = PROPERTY_VALUE pairs that define module properties. --resource-delimiter Use this argument to set a user-defined file path separator for the list of resources provided to the --resources or absolute-resources argument. 
If not set, the file path separator is a colon (:) for Linux and a semicolon (;) for Windows. --resources Use this argument to specify the resources for this module by providing a list of file system paths. The files are copied to this module directory and referenced from its module.xml file. If you provide a path to a directory, the directory and its contents are copied to the module directory. Symbolic links are not preserved; linked resources are copied to the module directory. This argument is required unless --absolute-resources or --module-xml is provided. See --resource-delimiter for delimiter details. --slot Use this argument to add the module to a slot other than the default main slot. A.11. Deployment Scanner Marker Files Marker files are used by the deployment scanner to mark the status of an application within the deployment directory of the JBoss EAP server instance. A marker file has the same name as the deployment, with the file suffix indicating the state of the application's deployment. For example, a successful deployment of test-application.war would have a marker file named test-application.war.deployed. The following table lists the available marker file types and their meanings. Table A.21. Marker File Types Filename Suffix Origin Description .deployed System-generated Indicates that the content has been deployed. The content will be undeployed if this file is deleted. .dodeploy User-generated Indicates that the content should be deployed or redeployed. .failed System-generated Indicates deployment failure. The marker file contains information about the cause of failure. If the marker file is deleted, the content will be eligible for auto-deployment again. .isdeploying System-generated Indicates that the deployment is in progress. This marker file will be deleted upon completion. .isundeploying System-generated Triggered by deleting a .deployed file, this indicates that the content is being undeployed. This marker file will be deleted upon completion. .pending System-generated Indicates that the deployment scanner recognizes the need to deploy content, but an issue is currently preventing auto-deployment (for example, if content is in the process of being copied). This marker serves as a global deployment road-block, meaning that the scanner will not instruct the server to deploy or undeploy any content while this marker file exists. .skipdeploy User-generated Disables auto-deploy of an application while present. Useful as a method of temporarily blocking the auto-deployment of exploded content, preventing the risk of incomplete content edits being pushed. Can be used with zipped content, although the scanner detects in-progress changes to zipped content and waits until completion. .undeployed System-generated Indicates that the content has been undeployed. Deletion of this marker file has no impact on content redeployment. A.12. Deployment Scanner Attributes The deployment scanner contains the following configurable attributes. Note Attribute names in this table are listed as they appear in the management model, for example, when using the management CLI. See the schema definition file located at EAP_HOME/docs/schema/jboss-as-deployment-scanner_2_0.xsd to view the elements as they appear in the XML, as there may be differences from the management model. Table A.22. Deployment Scanner Attributes Name Default Description auto-deploy-exploded false Allows the automatic deployment of exploded content without requiring a .dodeploy marker file.
Recommended for only basic development scenarios to prevent exploded application deployment from occurring during changes by the developer or operating system. auto-deploy-xml true Allows the automatic deployment of XML content without requiring a .dodeploy marker file. auto-deploy-zipped true Allows the automatic deployment of zipped content without requiring a .dodeploy marker file. deployment-timeout 600 The time value in seconds for the deployment scanner to allow a deployment attempt before being canceled. path deployments The actual file system path to be scanned. Treated as an absolute path, unless the relative-to attribute is specified, in which case the value is treated as relative to that path. relative-to jboss.server.base.dir Reference to a file system path defined as a path in the server configuration. runtime-failure-causes-rollback false Whether a runtime failure of a deployment causes a rollback of the deployment as well as all other (possibly unrelated) deployments as part of the scan operation. scan-enabled true Allows the automatic scanning for applications by scan-interval and at startup. scan-interval 5000 The time interval in milliseconds that the repository should be scanned for changes. A value of less than 1 causes the scan to occur only at initial startup. A.13. Managed Domain JVM Configuration Attributes The following JVM configuration options can be set for a managed domain at the host, server group, or server level. Note that valid values for some of these attributes are dependent upon your JVM. See your JDK vendor's documentation for additional information. Note Attribute names in this table are listed as they appear in the management model, for example, when using the management CLI. See the schema definition file located at EAP_HOME /docs/schema/wildfly-config_5_0.xsd to view the elements as they appear in the XML, as there may be differences from the management model. Table A.23. JVM Configuration Attributes Attribute Description agent-lib Sets the value of the -agentlib java option, which specifies the Java agent library. agent-path Sets the value of the -agentpath java option, which specifies the Java agent path. debug-enabled Whether to enable debug. This attribute only applies to JVM configurations at the server level. debug-options Specifies the JVM options to use when debug is enabled. This attribute only applies to JVM configurations at the server level. env-classpath-ignored Whether to ignore the CLASSPATH environment variable. environment-variables Specifies key/value pair environment variables. heap-size Sets the value of the -Xms option, which specifies the initial heap size allocated by the JVM. java-agent Sets the value of the -javaagent java option, which specifies the Java agent. java-home Sets the value of the JAVA_HOME variable. jvm-options Specifies any additional JVM options needed. launch-command Specifies an operating system level command to prefix before the java command used to launch the server process. For example, you could use the sudo command to run the Java process as another user. max-heap-size Sets the value of the -Xmx option, which specifies the maximum heap size allocated by the JVM. max-permgen-size Sets the maximum size of the permanent generation. Deprecated: The JVM no longer provides a separate permanent generation space. permgen-size Sets the initial permanent generation size. Deprecated: The JVM no longer provides a separate permanent generation space. 
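For example, a minimal management CLI sketch that adjusts the maximum heap for the JVM configuration of a server group in a managed domain; the group name main-server-group and the value 1536m are illustrative:
/server-group=main-server-group/jvm=default:write-attribute(name=max-heap-size, value=1536m)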
stack-size Sets the value of the -Xss option, which specifies the JVM stack size. type Specifies which vendor provided the JVM in use. Available options are ORACLE , IBM , SUN , or OTHER . A.14. Mail Subsystem Attributes The following tables describe the attributes in the mail subsystem for mail sessions and the following mail server types: imap pop3 smtp custom Note Attribute names in these tables are listed as they appear in the management model, for example, when using the management CLI. See the schema definition file located at EAP_HOME /docs/schema/wildfly-mail_3_0.xsd to view the elements as they appear in the XML, as there may be differences from the management model. Table A.24. Mail Session Attributes Attribute Description debug Whether to enable Jakarta Mail debugging. from The default "from" address to use if not set when sending. jndi-name The JNDI name to which the mail session should be bound. Table A.25. IMAP Mail Server Attributes Attribute Description credential-reference Credential, from a credential store, to authenticate on the server. outbound-socket-binding-ref Reference to the outbound socket binding for the mail server. password The password to authenticate on the server. ssl Whether the server requires SSL. tls Whether the server requires TLS. username The username to authenticate on the server. Table A.26. POP3 Mail Server Attributes Attribute Description credential-reference Credential, from a credential store, to authenticate on the server. outbound-socket-binding-ref Reference to the outbound socket binding for the mail server. password The password to authenticate on the server. ssl Whether the server requires SSL. tls Whether the server requires TLS. username The username to authenticate on the server. Table A.27. SMTP Mail Server Attributes Attribute Description credential-reference Credential, from a credential store to authenticate on the server. outbound-socket-binding-ref Reference to the outbound socket binding for the mail server. password The password to authenticate on the server. ssl Whether the server requires SSL. tls Whether the server requires TLS. username The username to authenticate on the server. Table A.28. Custom Mail Server Attributes Attribute Description credential-reference Credential, from a credential store, to authenticate on the server. outbound-socket-binding-ref Reference to the outbound socket binding for the mail server. password The password to authenticate on the server. properties The Jakarta Mail properties for this server. ssl Whether the server requires SSL. tls Whether the server requires TLS. username The username to authenticate on the server. A.15. Root Logger Attributes Note Attribute names in this table are listed as they appear in the management model, for example, when using the management CLI. See the schema definition file located at EAP_HOME /docs/schema/jboss-as-logging_3_0.xsd to view the elements as they appear in the XML, as there may be differences from the management model. Table A.29. Root Logger Attributes Attribute Description filter Defines a simple filter type. Deprecated in favor of filter-spec . filter-spec An expression value that defines a filter. The following expression defines a filter that excludes log entries that do not match a pattern: not(match("WFLY.*")) handlers A list of log handlers that are used by the root logger. level The lowest level of log message that the root logger records. Note A filter-spec specified for the root logger is not inherited by other handlers. 
Instead a filter-spec must be specified per handler. A.16. Log Category Attributes Note Attribute names in this table are listed as they appear in the management model, for example, when using the management CLI. See the schema definition file located at EAP_HOME /docs/schema/jboss-as-logging_6_0.xsd to view the elements as they appear in the XML, as there may be differences from the management model. Table A.30. Log Category Attributes Attribute Description category The log category from which log messages will be captured. filter Defines a simple filter type. Deprecated in favor of filter-spec . filter-spec An expression value that defines a filter. The following expression defines a filter that does not match a pattern: not(match("WFLY.*")) handlers A list of log handlers associated with the logger. level The lowest level of log message that the log category records. use-parent-handlers If set to true , this category will use the log handlers of the root logger in addition to any other assigned handlers. A.17. Log Handler Attributes Note Attribute names in these tables are listed as they appear in the management model, for example, when using the management CLI. See the schema definition file located at EAP_HOME /docs/schema/jboss-as-logging_6_0.xsd to view the elements as they appear in the XML, as there may be differences from the management model. Table A.31. Console Log Handler Attributes Attribute Description autoflush If set to true , the log messages will be sent to the handlers assigned file immediately upon receipt. enabled If set to true , the handler is enabled and functioning as normal. If set to false , the handler is ignored when processing log messages. encoding The character encoding scheme to be used for the output. filter Defines a simple filter type. Deprecated in favor of filter-spec . filter-spec An expression value that defines a filter. The following expression defines a filter that does not match a pattern: not(match("WFLY.*")) formatter The log formatter used by this log handler. level The lowest level of log message the log handler records. name The name of the log handler. Deprecated since the handler's address contains the name. named-formatter The name of the defined formatter to be used on the handler. target The system output stream where the output of the log handler is sent. This can be one of the following: System.err : Log handler output goes to the system error stream. System.out : Log handler output goes to the standard output stream. console : Log hander output goes to the java.io.PrintWriter class. Table A.32. File Log Handler Attributes Attribute Description append If set to true , all messages written by this handler will be appended to the file if it already exists. If set to false , a new file will be created each time the application server launches. autoflush If set to true , the log messages will be sent to the handlers assigned file immediately upon receipt. enabled If set to true , the handler is enabled and functioning as normal. If set to false , the handler is ignored when processing log messages. encoding The character encoding scheme to be used for the output. file The object that represents the file where the output of this log handler is written to. It has two configuration properties, relative-to and path . filter Defines a simple filter type. Deprecated in favor of filter-spec . filter-spec An expression value that defines a filter. 
The following expression defines a filter that does not match a pattern: not(match("WFLY.*")) formatter The log formatter used by this log handler. level The lowest level of log message the log handler records. name The name of the log handler. Deprecated since the handler's address contains the name. named-formatter The name of the defined formatter to be used on the handler. Table A.33. Periodic Log Handler Attributes Attribute Description append If set to true , all messages written by this handler will be appended to the file if it already exists. If set to false , a new file will be created each time the application server launches. autoflush If set to true , the log messages will be sent to the handlers assigned file immediately upon receipt. enabled If set to true , the handler is enabled and functioning as normal. If set to false , the handler is ignored when processing log messages. encoding The character encoding scheme to be used for the output. file Object that represents the file to which the output of this log handler is written. It has two configuration properties, relative-to and path . filter Defines a simple filter type. Deprecated in favor of filter-spec . filter-spec An expression value that defines a filter. The following expression defines a filter that does not match a pattern: not(match("WFLY.*")) . formatter The log formatter used by this log handler. level The lowest level of log message the log handler records. name The name of the log handler. Deprecated since the handler's address contains the name. named-formatter The name of the defined formatter to be used on the handler. suffix This string is included in the suffix appended to rotated logs. The format of the suffix is a dot ( . ) followed by a date string which is able to be parsed by the SimpleDateFormat class. Table A.34. Size Log Handler Attributes Attribute Description append If set to true , all messages written by this handler will be appended to the file if it already exists. If set to false , a new file will be created each time the application server launches. autoflush If set to true the log messages will be sent to the handlers assigned file immediately upon receipt. enabled If set to true , the handler is enabled and functioning as normal. If set to false , the handler is ignored when processing log messages. encoding The character encoding scheme to be used for the output. file Object that represents the file where the output of this log handler is written to. It has two configuration properties, relative-to and path . filter Defines a simple filter type. Deprecated in favor of filter-spec . filter-spec An expression value that defines a filter. The following expression defines a filter that does not match a pattern: not(match("WFLY.*")) formatter The log formatter used by this log handler. level The lowest level of log message the log handler records. max-backup-index The maximum number of rotated logs that are kept. When this number is reached, the oldest log is reused. The default is 1 . If the suffix attribute is used, the suffix of rotated log files is included in the rotation algorithm. When the log file is rotated, the oldest file whose name starts with name + suffix is deleted, the remaining rotated log files have their numeric suffix incremented and the newly rotated log file is given the numeric suffix 1 . name The name of the log handler. Deprecated since the handler's address contains the name. named-formatter The name of the defined formatter to be used on the handler. 
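As an illustration of how several of these attributes combine (rotate-size and rotate-on-boot are described below), the following management CLI command is a minimal sketch of adding a size-rotating file handler; the handler name SIZED and the file name sized.log are illustrative:
/subsystem=logging/size-rotating-file-handler=SIZED:add(file={relative-to=jboss.server.log.dir, path=sized.log}, rotate-size=50m, max-backup-index=5)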
rotate-on-boot If set to true , a new log file will be created on server restart. The default is false . rotate-size The maximum size that the log file can reach before it is rotated. A single character appended to the number indicates the size units: b for bytes, k for kilobytes, m for megabytes, g for gigabytes. For example, 50m for 50 megabytes. suffix This string is included in the suffix appended to rotated logs. The format of the suffix is a dot ( . ) followed by a date string which is able to be parsed by the SimpleDateFormat class. Table A.35. Periodic Size Log Handler Attributes Attribute Description append If set to true , all messages written by this handler will be appended to the file if it already exists. If set to false , a new file will be created each time the application server launches. autoflush If set to true , the log messages will be sent to the handlers assigned file immediately upon receipt. enabled If set to true , the handler is enabled and functioning as normal. If set to false , the handler is ignored when processing log messages. encoding The character encoding scheme to be used for the output. file Object that represents the file where the output of this log handler is written to. It has two configuration properties, relative-to and path . filter-spec An expression value that defines a filter. The following expression defines a filter that does not match a pattern: not(match("WFLY.*")) formatter The log formatter used by this log handler. level The lowest level of log message the log handler records. max-backup-index The maximum number of rotated logs that are kept. When this number is reached, the oldest log is reused. The default is 1 . If the suffix attribute is used, the suffix of rotated log files is included in the rotation algorithm. When the log file is rotated, the oldest file whose name starts with name + suffix is deleted, the remaining rotated log files have their numeric suffix incremented and the newly rotated log file is given the numeric suffix 1 . name The name of the log handler. Deprecated since the handler's address contains the name. named-formatter The name of the defined formatter to be used on the handler. rotate-on-boot If set to true , a new log file will be created on server restart. The default is false . rotate-size The maximum size that the log file can reach before it is rotated. A single character appended to the number indicates the size units: b for bytes, k for kilobytes, m for megabytes, g for gigabytes. For example, 50m for 50 megabytes. suffix This string is included in the suffix appended to rotated logs. The format of the suffix is a dot ( . ) followed by a date string which is able to be parsed by the SimpleDateFormat class. Table A.36. Syslog Handler Attributes Attribute Description app-name The app name used when formatting the message in RFC5424 format. By default the app name is java . enabled If set to true , the handler is enabled and functioning as normal. If set to false , the handler is ignored when processing log messages. facility The facility as defined by RFC-5424 and RFC-3164. hostname The name of the host from which the messages are being sent. For example, the name of the host the application server is running on. level The lowest level of log message the log handler records. port The port on which the syslog server is listening. server-address The address of the syslog server. syslog-format Formats the log message according to the RFC specification. 
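For example, a minimal management CLI sketch of adding a syslog handler that sends RFC5424 records to a remote syslog server; the handler name SYSLOG, the address 192.168.1.20, and the app name my-eap-server are illustrative:
/subsystem=logging/syslog-handler=SYSLOG:add(server-address=192.168.1.20, port=514, syslog-format=RFC5424, app-name=my-eap-server)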
named-formatter Formats the message of the syslog payload. With this attribute, you can customize the message as required. Table A.37. Socket Log Handler Attributes Attribute Description autoflush Whether to automatically flush after each write. block-on-reconnect If set to true, the write methods will block when attempting to reconnect. This is only advisable to be set to true if using an asynchronous handler. enabled If set to true, the handler is enabled and functioning as normal. If set to false, the handler is ignored when processing log messages. encoding The character encoding used by this handler. filter-spec An expression value that defines a filter. The following expression defines a filter that does not match a pattern: not(match("WFLY.*")) level The lowest level of log message the log handler records. named-formatter The name of the defined formatter to be used on the handler. outbound-socket-binding-ref The reference to the outbound socket binding for the socket connection. protocol The protocol the socket should communicate over. Allowed values are TCP, UDP, or SSL_TCP. ssl-context The reference to the defined SSL context. This is only used if protocol is set to SSL_TCP. Table A.38. Custom Log Handler Attributes Attribute Description class The logging handler class to be used. enabled If set to true, the handler is enabled and functioning as normal. If set to false, the handler is ignored when processing log messages. encoding The character encoding scheme to be used for the output. filter Defines a simple filter type. Deprecated in favor of filter-spec. filter-spec An expression value that defines a filter. The following expression defines a filter that does not match a pattern: not(match("WFLY.*")) formatter The log formatter used by this log handler. level The lowest level of log message the log handler records. module The module on which the logging handler depends. name The name of the log handler. Deprecated since the handler's address contains the name. named-formatter The name of the defined formatter to be used on the handler. properties The properties used for the logging handler. Table A.39. Async Log Handler Attributes Attribute Description enabled If set to true, the handler is enabled and functioning as normal. If set to false, the handler is ignored when processing log messages. filter Defines a simple filter type. Deprecated in favor of filter-spec. filter-spec An expression value that defines a filter. The following expression defines a filter that does not match a pattern: not(match("WFLY.*")) level The lowest level of log message the log handler records. name The name of the log handler. Deprecated since the handler's address contains the name. overflow-action How this handler responds when its queue length is exceeded. This can be set to BLOCK or DISCARD. BLOCK makes the logging application wait until there is available space in the queue. This is the same behavior as a non-async log handler. DISCARD allows the logging application to continue but the log message is deleted. queue-length Maximum number of log messages that will be held by this handler while waiting for sub-handlers to respond. subhandlers The list of log handlers to which this async handler passes its log messages. A.18. Log Formatter Attributes Table A.40. Format Characters for Pattern Formatter Symbol Description %c The category of the logging event. %p The level of the log entry (INFO, DEBUG, etc.). %P The localized level of the log entry.
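As an illustration of how these format characters combine (the remaining symbols are described below), the following management CLI command is a minimal sketch of defining a pattern formatter; the formatter name MY-PATTERN is illustrative, and the pattern shown is a typical console or file layout:
/subsystem=logging/pattern-formatter=MY-PATTERN:add(pattern="%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n")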
%d The current date/time ( yyyy-MM-dd HH:mm:ss,SSS format). %r The relative time (milliseconds since the log was initialized). %z The time zone, which must be specified before the date ( %d ). For example, %z{GMT}%d{HH:mm:ss,SSS} . %k A log resource key (used for localization of log messages). %m The log message (including exception trace). %s The simple log message (no exception trace). %e The exception stack trace (no extended module information). %E The exception stack trace (with extended module information). %t The name of the current thread. %n A newline character. %C The class of the code calling the log method (slow). %F The filename of the class calling the log method (slow). %l The source location of the code calling the log method (slow). %L The line number of the code calling the log method (slow). %M The method of the code calling the log method (slow). %x The Nested Diagnostic Context. %X The Message Diagnostic Context. %% A literal percent ( % ) character (escaping). Table A.41. JSON Log Formatter Attributes Attribute Description date-format The date-time format pattern. The pattern must be a valid java.time.format.DateTimeFormatter.ofPattern() pattern. The default pattern is an ISO-8601 extended offset date-time format. exception-output-type Indicates how the cause of the logged message, if one is available, is added to the JSON output. The allowed values are: detailed formatted detailed-and-formatted key-overrides Allows the names of the keys for the JSON properties to be overridden. meta-data Sets the metadata to be used in the JSON formatter. pretty-print Whether or not pretty printing should be used when formatting. print-details Whether or not details should be printed. The details include the source class name, source file name, source method name, source module name, source module version and source line number. Note Printing the details can be expensive as the values are retrieved from the caller. record-delimiter The value to be used to indicate the end of a record. If set to null no delimiter will be used at the end of the record. The default value is a line feed. zone-id The zone ID for formatting the date and time. The system default is used if left undefined. Table A.42. XML Log Formatter Attributes Attribute Description date-format The date-time format pattern. The pattern must be a valid java.time.format.DateTimeFormatter.ofPattern() pattern. The default pattern is an ISO-8601 extended offset date-time format. exception-output-type Indicates how the cause of the logged message, if one is available, is added to the XML output. The allowed values are: detailed formatted detailed-and-formatted key-overrides Allows the names of the keys for the XML properties to be overridden. meta-data Sets the meta data to use in the XML format. Properties are added to each log message. namespace-uri Sets the namespace URI used for each record if print-namespace attribute is true. Note that if no namespace-uri is defined and there are overridden keys no namespace will be written regardless if the print-namespace attribute is set to true. pretty-print Whether or not pretty printing should be used when formatting. print-details Whether or not details should be printed. The details include the source class name, source file name, source method name, source module name, source module version and source line number. Note Printing the details can be expensive as the values are retrieved from the caller. record-delimiter The value to be used to indicate the end of a record. 
If this is null, no delimiter is used at the end of the record. The default value is a line feed. zone-id The zone ID for formatting the date and time. The system default is used if left undefined. A.19. Datasource Connection URLs Table A.43. Datasource Connection URLs Datasource Connection URL IBM DB2 jdbc:db2://SERVER_NAME:PORT/DATABASE_NAME MariaDB jdbc:mariadb://SERVER_NAME:PORT/DATABASE_NAME MariaDB Galera Cluster jdbc:mariadb://SERVER_NAME:PORT,SERVER_NAME:PORT/DATABASE_NAME Microsoft SQL Server jdbc:sqlserver://SERVER_NAME:PORT;DatabaseName=DATABASE_NAME MySQL jdbc:mysql://SERVER_NAME:PORT/DATABASE_NAME Oracle jdbc:oracle:thin:@SERVER_NAME:PORT:ORACLE_SID PostgreSQL jdbc:postgresql://SERVER_NAME:PORT/DATABASE_NAME Sybase jdbc:sybase:Tds:SERVER_NAME:PORT/DATABASE_NAME A.20. Datasource Attributes Note Attribute names in this table are listed as they appear in the management model, for example, when using the management CLI. See the schema definition file located at EAP_HOME/docs/schema/wildfly-datasources_5_0.xsd to view the elements as they appear in the XML, as there may be differences from the management model. Table A.44. Datasource Attributes Attribute Datasource Type Description allocation-retry Non-XA, XA The number of times that allocating a connection should be tried before throwing an exception. The default is 0, so an exception is thrown upon the first failure. allocation-retry-wait-millis Non-XA, XA The amount of time, in milliseconds, to wait between retrying to allocate a connection. The default is 0 ms. allow-multiple-users Non-XA, XA Whether multiple users will access the datasource through the getConnection(user, password) method and if the internal pool type accounts for this behavior. authentication-context Non-XA, XA The Elytron authentication context which defines the javax.security.auth.Subject that is used to distinguish connections in the pool. background-validation Non-XA, XA Whether connections should be validated on a background thread versus being validated prior to use. Background validation is typically not to be used with validate-on-match or there will be redundant checks. With background validation, there is an opportunity for a connection to go bad between the time of the validation scan and the time it is handed to the client, so the application must account for this possibility. background-validation-millis Non-XA, XA The frequency, in milliseconds, that background validation will run. blocking-timeout-wait-millis Non-XA, XA The maximum time, in milliseconds, to block while waiting for a connection before throwing an exception. Note that this blocks only while waiting to lock a connection, and will never throw an exception if creating a new connection takes an inordinately long time. capacity-decrementer-class Non-XA, XA Class defining the policy for decrementing connections in the pool. capacity-decrementer-properties Non-XA, XA Properties to be injected in the class defining the policy for decrementing connections in the pool. capacity-incrementer-class Non-XA, XA Class defining the policy for incrementing connections in the pool. capacity-incrementer-properties Non-XA, XA Properties to be injected in the class defining the policy for incrementing connections in the pool. check-valid-connection-sql Non-XA, XA An SQL statement to check validity of a pool connection. This may be called when a managed connection is obtained from the pool.
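For example, a minimal management CLI sketch that enables SQL-based validation on the default ExampleDS datasource; the statement SELECT 1 is illustrative and is not valid on every database (Oracle, for instance, expects SELECT 1 FROM DUAL):
/subsystem=datasources/data-source=ExampleDS:write-attribute(name=check-valid-connection-sql, value="SELECT 1")
/subsystem=datasources/data-source=ExampleDS:write-attribute(name=background-validation, value=true)
A server reload may be required for the change to take effect.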
connectable Non-XA, XA Enable the use of CMR, which means that a local resource can reliably participate in an XA transaction. connection-listener-class Non-XA, XA Specifies class name extending org.jboss.jca.adapters.jdbc.spi.listener.ConnectionListener . This class listens for connection activation and passivation in order to perform actions before the connection is returned to the application or to the pool. The specified class must be bundled together with the JDBC driver in one module using two resource JARs, as seen in Installing a JDBC Driver as a Core Module , or in a separate global module, as seen in Define Global Modules . connection-listener-property Non-XA, XA Properties to be injected into the class specified in the connection-listener-class . The properties injected are compliant with the JavaBeans conventions. For example, if you specify a property named foo , then the connection listener class needs to have a method setFoo that accepts String as argument. connection-properties Non-XA Only Arbitrary string name/value pair connection properties to pass to the Driver.connect(url, props) method. connection-url Non-XA Only The JDBC driver connection URL. credential-reference Non-XA, XA Credential, from a credential store, to authenticate on datasource. datasource-class Non-XA Only The fully-qualified name of the JDBC datasource class. driver-class Non-XA Only The fully-qualified name of the JDBC driver class. driver-name Non-XA, XA Defines the JDBC driver the datasource should use. It is a symbolic name matching the name of installed driver. If the driver is deployed as JAR, the name is the name of the deployment. elytron-enabled Non-XA, XA Enables Elytron security for handling authentication of connections. The Elytron authentication-context to be used will be current context if no context is specified. See authentication-context for additional information. enabled Non-XA, XA Whether the datasource should be enabled. enlistment-trace Non-XA, XA Whether enlistment traces should be recorded. This is false by default. exception-sorter-class-name Non-XA, XA An instance of org.jboss.jca.adapters.jdbc.ExceptionSorter that provides a method to validate if an exception should broadcast an error. exception-sorter-properties Non-XA, XA The exception sorter properties. flush-strategy Non-XA, XA Specifies how the pool should be flushed in case of an error. Valid values are: FailingConnectionOnly Only the failing connection is removed. This is the default setting. InvalidIdleConnections The failing connection and idle connections that share the same credentials and are returned as invalid by the ValidatingManagedConnectionFactory.getInvalidConnections(... ) method are removed. IdleConnections The failing connection and idle connections that share the same credentials are removed. Gracefully The failing connection and idle connections that share the same credentials are removed. Active connections that share the same credentials are destroyed upon return to the pool. EntirePool The failing connection and idle and active connections that share the same credentials are removed. This setting is not recommended for production systems. AllInvalidIdleConnections The failing connection and idle connections that are returned as invalid by the ValidatingManagedConnectionFactory.getInvalidConnections(... ) method are removed. AllIdleConnections The failing connection and all idle connections are removed. AllGracefully The failing connection and all idle connections are removed. 
Active connections are destroyed upon return to the pool. AllConnections The failing connection and all idle and active connections are removed. This setting is not recommended for production systems. idle-timeout-minutes Non-XA, XA The maximum time, in minutes, a connection may be idle before being closed. If not specified, the default is 30 minutes. The actual maximum time also depends on the IdleRemover scan time, which is half of the smallest idle-timeout-minutes value of any pool. initial-pool-size Non-XA, XA The initial number of connections a pool should hold. interleaving XA Only Whether to enable interleaving for XA connections. jndi-name Non-XA, XA The unique JNDI name for the datasource. jta Non-XA Only Enable Jakarta Transactions integration. max-pool-size Non-XA, XA The maximum number of connections that a pool can hold. mcp Non-XA, XA The ManagedConnectionPool implementation. For example, org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreArrayListManagedConnectionPool min-pool-size Non-XA, XA The minimum number of connections that a pool can hold. new-connection-sql Non-XA, XA An SQL statement to execute whenever a connection is added to the connection pool. no-recovery XA Only Whether the connection pool should be excluded from recovery. no-tx-separate-pool XA Only Whether to create a separate sub-pool for each context. This may be required for some Oracle datasources, which may not allow XA connections to be used both inside and outside of a Jakarta Transactions transaction. Using this option will cause your total pool size to be twice the max-pool-size , because two actual pools will be created. pad-xid XA Only Whether to pad the Xid. password Non-XA, XA The password to use when creating a new connection. pool-fair Non-XA, XA Defines if pool should be fair. This setting is part of a Semaphore class used to manage the connection pools in Jakarta Connectors, which provides a performance benefit in some use cases where the order of leasing connections is not required. pool-prefill Non-XA, XA Whether the pool should be prefilled. pool-use-strict-min Non-XA, XA Whether min-pool-size should be considered strictly. prepared-statements-cache-size Non-XA, XA The number of prepared statements per connection in a Least Recently Used (LRU) cache. query-timeout Non-XA, XA The timeout for queries, in seconds. The default is no timeout. reauth-plugin-class-name Non-XA, XA The fully-qualified class name of the reauthentication plugin implementation to reauthenticate physical connections. reauth-plugin-properties Non-XA, XA The properties for the reauthentication plugin. recovery-authentication-context XA Only The Elytron authentication context which defines the javax.security.auth.Subject that is used to distinguish connections in the pool. recovery-credential-reference XA Only Credential, from a credential store, to authenticate on datasource. recovery-elytron-enabled XA Only Enables Elytron security for handling authentication of connections for recovery. The Elytron authentication-context used will be the current context if no authentication-context is specified. See authentication-context for additional information. recovery-password XA Only The password to use to connect to the resource for recovery. recovery-plugin-class-name XA Only The fully-qualified class name of the recovery plugin implementation. recovery-plugin-properties XA Only The properties for the recovery plugin. recovery-security-domain XA Only The security domain to use to connect to the resource for recovery. 
recovery-username XA Only The user name to use to connect to the resource for recovery. same-rm-override XA Only Whether the javax.transaction.xa.XAResource.isSameRM(XAResource) method returns true or false. security-domain Non-XA, XA The name of a JAAS security-manager which handles authentication. This name correlates to the application-policy/name attribute of the JAAS login configuration. set-tx-query-timeout Non-XA, XA Whether to set the query timeout based on the time remaining until transaction timeout. Any configured query timeout will be used if no transaction exists. share-prepared-statements Non-XA, XA Whether JBoss EAP should cache, instead of close or terminate, the underlying physical statement when the wrapper supplied to the application is closed by application code. The default is false. spy Non-XA, XA Enable spy functionality on the JDBC layer. This logs all JDBC traffic to the datasource. Note that the logging category jboss.jdbc.spy must also be set to the log level DEBUG in the logging subsystem. stale-connection-checker-class-name Non-XA, XA An instance of org.jboss.jca.adapters.jdbc.StaleConnectionChecker that provides an isStaleConnection(SQLException) method. If this method returns true, then the exception is wrapped in an org.jboss.jca.adapters.jdbc.StaleConnectionException. stale-connection-checker-properties Non-XA, XA The stale connection checker properties. statistics-enabled Non-XA, XA Whether runtime statistics are enabled. The default is false. track-statements Non-XA, XA Whether to check for unclosed statements when a connection is returned to a pool and a statement is returned to the prepared statement cache. If false, statements are not tracked. Valid values: true: Statements and result sets are tracked, and a warning is issued if they are not closed. false: Neither statements nor result sets are tracked. nowarn: Statements are tracked but no warning is issued (default). tracking Non-XA, XA Whether to track connection handles across transaction boundaries. transaction-isolation Non-XA, XA The java.sql.Connection transaction isolation level. Valid values: TRANSACTION_READ_UNCOMMITTED TRANSACTION_READ_COMMITTED TRANSACTION_REPEATABLE_READ TRANSACTION_SERIALIZABLE TRANSACTION_NONE url-delimiter Non-XA, XA The delimiter for URLs in connection-url for High Availability (HA) datasources. url-property XA Only The property for the URL property in the xa-datasource-property values. url-selector-strategy-class-name Non-XA, XA A class that implements org.jboss.jca.adapters.jdbc.URLSelectorStrategy. use-ccm Non-XA, XA Enable the cached connection manager. use-fast-fail Non-XA, XA If true, fail a connection allocation on the first attempt if the connection is invalid. If false, keep trying until the pool is exhausted. use-java-context Non-XA, XA Whether to bind the datasource into global JNDI. use-try-lock Non-XA, XA A timeout value for internal locks. This attempts to obtain the lock for the configured number of seconds, before timing out, rather than failing immediately if the lock is unavailable. Uses tryLock() instead of lock(). user-name Non-XA, XA The user name to use when creating a new connection. valid-connection-checker-class-name Non-XA, XA An implementation of org.jboss.jca.adapters.jdbc.ValidConnectionChecker which provides a SQLException isValidConnection(Connection e) method to validate a connection. An exception means the connection is destroyed. This overrides the attribute check-valid-connection-sql if it is present.
valid-connection-checker-properties Non-XA, XA The valid connection checker properties. validate-on-match Non-XA, XA Whether connection validation is performed when a connection factory attempts to match a managed connection. This should be used when a client must have a connection validated prior to use. Validate-on-match is typically not to be used with background-validation or there will be redundant checks. wrap-xa-resource XA Only Whether to wrap the XAResource in an org.jboss.tm.XAResourceWrapper instance. xa-datasource-class XA Only The fully-qualified name of the javax.sql.XADataSource implementation class. xa-datasource-properties XA Only String name/value pair of XA datasource properties. xa-resource-timeout XA Only If non-zero, this value is passed to the XAResource.setTransactionTimeout method. Table A.45. JDBC Driver Attributes Attribute Datasource Type Description datasource-class-info Non-XA, XA The available properties for the datasource-class and xa-datasource-class for the jdbc-driver . The datasource-class and xa-datasource-class attributes define the fully qualified class name that implements javax.sql.DataSource or javax.sql.XADataSource classes. The class defined can have setters for various properties. The datasource-class-info attribute lists these properties that can be set for the class. A.21. Datasource Statistics Table A.46. Core Pool Statistics Name Description ActiveCount The number of active connections. Each of the connections is either in use by an application or available in the pool. AvailableCount The number of available connections in the pool. AverageBlockingTime The average time spent blocking on obtaining an exclusive lock on the pool. This value is in milliseconds. AverageCreationTime The average time spent creating a connection. This value is in milliseconds. AverageGetTime The average time spent obtaining a connection. This value is in milliseconds. AveragePoolTime The average time that a connection spent in the pool.This value is in milliseconds. AverageUsageTime The average time spent using a connection. This value is in milliseconds. BlockingFailureCount The number of failures trying to obtain a connection. CreatedCount The number of connections created. DestroyedCount The number of connections destroyed. IdleCount The number of connections that are currently idle. InUseCount The number of connections currently in use. MaxCreationTime The maximum time it took to create a connection. This value is in milliseconds. MaxGetTime The maximum time for obtaining a connection. This value is in milliseconds. MaxPoolTime The maximum time for a connection in the pool. This value is in milliseconds. MaxUsageTime The maximum time using a connection. This value is in milliseconds. MaxUsedCount The maximum number of connections used. MaxWaitCount The maximum number of requests waiting for a connection at the same time. MaxWaitTime The maximum time spent waiting for an exclusive lock on the pool. This value is in milliseconds. TimedOut The number of timed out connections. TotalBlockingTime The total time spent waiting for an exclusive lock on the pool. This value is in milliseconds. TotalCreationTime The total time spent creating connections. This value is in milliseconds. TotalGetTime The total time spent obtaining connections. This value is in milliseconds. TotalPoolTime The total time spent by connections in the pool. This value is in milliseconds. TotalUsageTime The total time spent using connections. This value is in milliseconds. 
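These statistics can be read at runtime once statistics-enabled is set to true on the datasource. For example, a minimal management CLI sketch against the default ExampleDS datasource:
/subsystem=datasources/data-source=ExampleDS/statistics=pool:read-resource(include-runtime=true)
The JDBC statistics listed further below are available in the same way under the statistics=jdbc child resource.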
WaitCount The number of requests that had to wait to obtain a connection. XACommitAverageTime The average time for an XAResource commit invocation. This value is in milliseconds. XACommitCount The number of XAResource commit invocations. XACommitMaxTime The maximum time for an XAResource commit invocation. This value is in milliseconds. XACommitTotalTime The total time for all XAResource commit invocations. This value is in milliseconds. XAEndAverageTime The average time for an XAResource end invocation. This value is in milliseconds. XAEndCount The number of XAResource end invocations. XAEndMaxTime The maximum time for an XAResource end invocation. This value is in milliseconds. XAEndTotalTime The total time for all XAResource end invocations. This value is in milliseconds. XAForgetAverageTime The average time for an XAResource forget invocation. This value is in milliseconds. XAForgetCount The number of XAResource forget invocations. XAForgetMaxTime The maximum time for an XAResource forget invocation. This value is in milliseconds. XAForgetTotalTime The total time for all XAResource forget invocations. This value is in milliseconds. XAPrepareAverageTime The average time for an XAResource prepare invocation. This value is in milliseconds. XAPrepareCount The number of XAResource prepare invocations. XAPrepareMaxTime The maximum time for an XAResource prepare invocation. This value is in milliseconds. XAPrepareTotalTime The total time for all XAResource prepare invocations. This value is in milliseconds. XARecoverAverageTime The average time for an XAResource recover invocation. This value is in milliseconds. XARecoverCount The number of XAResource recover invocations. XARecoverMaxTime The maximum time for an XAResource recover invocation. This value is in milliseconds. XARecoverTotalTime The total time for all XAResource recover invocations. This value is in milliseconds. XARollbackAverageTime The average time for an XAResource rollback invocation. This value is in milliseconds. XARollbackCount The number of XAResource rollback invocations. XARollbackMaxTime The maximum time for an XAResource rollback invocation. This value is in milliseconds. XARollbackTotalTime The total time for all XAResource rollback invocations. This value is in milliseconds. XAStartAverageTime The average time for an XAResource start invocation. This value is in milliseconds. XAStartCount The number of XAResource start invocations. XAStartMaxTime The maximum time for an XAResource start invocation. This value is in milliseconds. XAStartTotalTime The total time for all XAResource start invocations. This value is in milliseconds. Table A.47. JDBC Statistics Name Description PreparedStatementCacheAccessCount The number of times that the statement cache was accessed. PreparedStatementCacheAddCount The number of statements added to the statement cache. PreparedStatementCacheCurrentSize The number of prepared and callable statements currently cached in the statement cache. PreparedStatementCacheDeleteCount The number of statements discarded from the cache. PreparedStatementCacheHitCount The number of times that statements from the cache were used. PreparedStatementCacheMissCount The number of times that a statement request could not be satisfied with a statement from the cache. A.22. Agroal Datasource Attributes Note Attribute names in this table are listed as they appear in the management model, for example, when using the management CLI. 
See the schema definition file located at EAP_HOME /docs/schema/wildfly-agroal_1_0.xsd to view the elements as they appear in the XML, as there may be differences from the management model. Table A.48. Agroal Datasource Attributes Attribute Description connectable Whether to enable CMR (Commit Markable Resource) functionality on this datasource. This applies to non-XA datasources only. jndi-name Specifies the JNDI name for the datasource. jta Whether to enable Jakarta Transactions integration. This applies to non-XA datasources only. statistics-enabled Whether to enable statistics for this datasource. Defaults to false . Table A.49. Agroal Datasource Connection Factory Attributes Attribute Description authentication-context Reference to an authentication context in the elytron subsystem. connection-properties Properties to be passed to the JDBC driver when creating a connection. credential-reference Credential, from a credential store, to authenticate with. driver A unique reference to the JDBC driver. new-connection-sql A SQL statement to be executed on a connection after creation. password The password to use for basic authentication with the database. transaction-isolation Set the java.sql.Connection transaction isolation level to used. url The JDBC driver connection URL. username The username to use for basic authentication with the database. Table A.50. Agroal Datasource Connection Pool Attributes Attribute Description background-validation The time, in milliseconds, between background validation runs. blocking-timeout The maximum time, in milliseconds, to block while waiting for a connection before throwing an exception. idle-removal The time, in minutes, that a connection must be idle before it can be removed. initial-size The initial number of connections the pool should hold. leak-detection The time, in milliseconds, that a connection must be held before a leak warning. max-size The maximum number of connections in the pool. min-size The minimum number of connections the pool should hold. A.23. Transaction Manager Configuration Options Note Attribute names in this table are listed as they appear in the management model, for example, when using the management CLI. See the schema definition file located at EAP_HOME /docs/schema/wildfly-txn_5_0.xsd to view the elements as they appear in the XML, as there may be differences from the management model. Table A.51. Transactions Subsystem Attributes Attribute Description default-timeout The default transaction timeout. This defaults to 300 seconds. You can override this programmatically, on a per-transaction basis. enable-statistics Deprecated in favor of statistics-enabled . enable-tsm-status Whether to enable the transaction status manager (TSM) service, which is used for out-of-process recovery. This option is not supported, as running an out-of-process recovery manager to contact the ActionStatusService from a different process, instead of in memory, is not supported. hornetq-store-enable-async-io Deprecated in favor of journal-store-enable-async-io . jdbc-action-store-drop-table Whether JDBC action store should drop tables. The default is false . jdbc-action-store-table-prefix Optional prefix for table used to write transaction logs in configured JDBC action store. jdbc-communication-store-drop-table Whether JDBC communication store should drop tables. The default is false . jdbc-communication-store-table-prefix Optional prefix for table used to write transaction logs in configured JDBC communication store. 
jdbc-state-store-drop-table Whether JDBC state store should drop tables. The default is false . jdbc-state-store-table-prefix Optional prefix for table used to write transaction logs in configured JDBC state store. jdbc-store-datasource JNDI name of non-XA datasource used. Datasource should be defined in the datasources subsystem. journal-store-enable-async-io Whether AsyncIO should be enabled for the journal store or not. Defaults to false . The server should be restarted for this setting to take effect. jts Whether to use Java Transaction Service (JTS) transactions. Defaults to false , which uses Jakarta Transactions transactions only. maximum-timeout If a transaction is set to have a transaction timeout of 0 , which implies an unlimited timeout, the transaction manager uses the value set by this attribute instead. The default is 31536000 seconds (365 days). node-identifier The node identifier for the transaction manager. If this option is not set, you will see a warning upon server startup. This option is required in the following situations: For JTS to JTS communications When two transaction managers access shared resource managers When two transaction managers access shared object stores The node-identifier must be unique for each transaction manager as it is required to enforce data integrity during recovery. The node-identifier must also be unique for Jakarta Transactions because multiple nodes may interact with the same resource manager or share a transaction object store. object-store-path A relative or absolute file system path where the transaction manager object store stores data. By default relative to the object-store-relative-to parameter's value. If object-store-relative-to is set to an empty string, this value is treated as an absolute path. object-store-relative-to References a global path configuration in the domain model. The default value is the data directory for JBoss EAP, which is the value of the property jboss.server.data.dir , and defaults to EAP_HOME /domain/data/ for a managed domain, or EAP_HOME /standalone/data/ for a standalone server instance. The value of the object store object-store-path transaction manager attribute is relative to this path. Set this attribute to an empty string to have object-store-path be treated as an absolute path. process-id-socket-binding The name of the socket binding configuration to use if the transaction manager should use a socket-based process ID. Will be undefined if process-id-uuid is true ; otherwise must be set. process-id-socket-max-ports The transaction manager creates a unique identifier for each transaction log. Two different mechanisms are provided for generating unique identifiers: a socket-based mechanism and a mechanism based on the process identifier of the process. In the case of the socket-based identifier, a socket is opened and its port number is used for the identifier. If the port is already in use, the port is probed, until a free one is found. The process-id-socket-max-ports represents the maximum number of sockets the transaction manager will try before failing. The default value is 10 . process-id-uuid Set to true to use the process identifier to create a unique identifier for each transaction. Otherwise, the socket-based mechanism is used. Defaults to true . See process-id-socket-max-ports for more information. To enable process-id-socket-binding , set process-id-uuid to false . recovery-listener Whether or not the transaction recovery process should listen on a network socket. Defaults to false . 
socket-binding Specifies the name of the socket binding used by the transaction periodic recovery listener when recovery-listener is set to true . statistics-enabled Whether statistics should be enabled. The default is false . status-socket-binding Specifies the socket binding to use for the transaction status manager. This configuration option is not supported. use-hornetq-store Deprecated in favor of use-journal-store . use-jdbc-store Use the JDBC store for writing transaction logs. Set to true to enable and to false to use the default log store type. use-journal-store Use Apache ActiveMQ Artemis journaled storage mechanisms instead of file-based storage for the transaction logs. This is disabled by default, but can improve I/O performance. It is not recommended for JTS transactions on separate transaction managers. When changing this option, the server has to be restarted using the shutdown command for the change to take effect. Table A.52. Log Store Attributes Attribute Description expose-all-logs Whether to expose all logs. The default is false , meaning that only a subset of transaction logs is exposed. type Specifies the implementation type of the logging store. The default is default . Table A.53. Commit Markable Resource Attributes Attribute Description batch-size The batch size for this CMR resource. The default is 100 . immediate-cleanup Whether to perform immediate cleanup for this CMR resource. The default is true . jndi-name The JNDI name of this CMR resource. name The table name for storing XIDs. The default is xids . A.24. IIOP Subsystem Attributes Note Attribute names in this table are listed as they appear in the management model, for example, when using the management CLI. See the schema definition file located at EAP_HOME /docs/schema/wildfly-iiop-openjdk_3_0.xsd to view the elements as they appear in the XML, as there may be differences from the management model. Table A.54. IIOP Subsystem Attributes Attribute Description add-component-via-interceptor Indicates whether SSL components should be added by an IOR interceptor. Deprecated . auth-method The authentication method. Valid values are none and username_password . authentication-context The name of the authentication context used when the security initializer is set to elytron . caller-propagation Indicates whether the caller identity should be propagated in the SAS context. Valid values are none and supported . client-requires Value that indicates the client SSL required parameters. Valid values are None , ServerAuth , ClientAuth , and MutualAuth . Deprecated: Use client-requires-ssl instead . client-requires-ssl Indicates whether IIOP connections from the server require SSL. client-ssl-context The name of the SSL context used to create client-side SSL sockets. client-supports Value that indicates the client SSL supported parameters. Valid values are None , ServerAuth , ClientAuth , and MutualAuth . Deprecated: Use client-requires-ssl instead . confidentiality Indicates whether the transport must require confidentiality protection or not. Valid values are none , supported , and required . Deprecated: Use server-requires-ssl instead . detect-misordering Indicates whether the transport must require misordering detection or not. Valid values are none , supported , and required . Deprecated: Use server-requires-ssl instead . detect-replay Indicates whether the transport must require replay detection or not. Valid values are none , supported , and required . Deprecated: Use server-requires-ssl instead . 
export-corbaloc Indicates whether the root context should be exported as corbaloc::address:port/NameService . giop-version The GIOP version to be used. high-water-mark TCP connection cache parameter. Each time the number of connections exceeds this value, the ORB tries to reclaim connections. The number of reclaimed connections is specified by the number-to-reclaim property. If this property is not set, then the OpenJDK ORB default is used. integrity Indicates whether the transport must require integrity protection or not. Valid values are none , supported , and required . Deprecated: Use server-requires-ssl instead . number-to-reclaim TCP connection cache parameter. Each time the number of connections exceeds the high-water-mark property, then the ORB tries to reclaim connections. The number of reclaimed connections is specified by this property. If it is not set, then the OpenJDK ORB default is used. persistent-server-id Persistent ID of the server. Persistent object references are valid across many activations of the server and they identify it using this property. As a result of that, many activations of the same server should have this property set to the same value, and different server instances running on the same host should have different server IDs. properties A list of generic key/value properties. realm The authentication service realm name. required Indicates whether authentication is required. root-context The naming service root context. security Indicates whether the security interceptors are to be installed. Valid values are client , identity , elytron , and none . security-domain The name of the security domain that holds the keystores and truststores that will be used to establish SSL connections. server-requires Value that indicates the server SSL required parameters. Valid values are None , ServerAuth , ClientAuth , and MutualAuth . Deprecated: Use server-requires-ssl instead . server-requires-ssl Indicates whether IIOP connections to the server require SSL. server-ssl-context The name of the SSL context used to create server-side SSL sockets. server-supports Value that indicates the server SSL supported parameters. Valid values are None , ServerAuth , ClientAuth , and MutualAuth . Deprecated: Use server-requires-ssl instead . socket-binding The name of the socket binding configuration that specifies the ORB port. ssl-socket-binding The name of the socket binding configuration that specifies the ORB SSL port. support-ssl Indicates whether SSL is supported. transactions Indicates whether the transactions interceptors are to be installed or not. Valid values are full , spec , and none . A value of full enables JTS while a value of spec enables a non-JTS spec-compliant mode that rejects incoming transaction contexts. trust-in-client Indicates if the transport must require trust in client to be established. Valid values are none , supported , and required . Deprecated: Use server-requires-ssl instead . trust-in-target Indicates if the transport must require trust in target to be established. Valid values are none and supported . Deprecated: Use server-requires-ssl instead . A.25. Resource Adapter Attributes The following tables describe the resource adapter attributes. Note Attribute names in these tables are listed as they appear in the management model, for example, when using the management CLI. 
See the schema definition file located at EAP_HOME /docs/schema/wildfly-resource-adapters_5_0.xsd to view the elements as they appear in the XML, as there may be differences from the management model. Table A.55. Main Attributes Attribute Description archive The resource adapter archive. beanvalidationgroups The bean validation groups that should be used. bootstrap-context The unique name of the bootstrap context that should be used. config-properties Custom defined config properties. module The module from which the resource adapter will be loaded. statistics-enabled Whether runtime statistics are enabled or not. transaction-support The transaction support level of the resource adapter. Valid values are NoTransaction , LocalTransaction , or XATransaction . wm-elytron-security-domain Defines the name of the Elytron security domain that should be used. wm-security Toggle on/off wm.security for this resource adapter. In case of false, all wm-security-* parameters are ignored, even the defaults. wm-security-default-groups A default groups list that should be added to the used Subject instance. wm-security-default-principal A default principal name that should be added to the used Subject instance. wm-security-domain The name of the security domain that should be used. wm-security-mapping-groups List of groups mappings. wm-security-mapping-required Defines if a mapping is required for security credentials. wm-security-mapping-users List of user mappings. Note If your resource adapter is using bootstrap-context along with a work manager that has elytron-enabled set to true , you must use the wm-elytron-security-domain attribute instead of the wm-security-domain attribute for security domain specification. Table A.56. admin-objects Attributes Attribute Description class-name The fully qualified class name of an administration object. enabled Specifies if the administration object should be enabled. jndi-name The JNDI name for the administration object. use-java-context Setting this to false will bind the object into global JNDI. Table A.57. connection-definitions Attributes Attribute Description allocation-retry Indicates the number of times that allocating a connection should be tried before throwing an exception. allocation-retry-wait-millis The amount of time, in milliseconds, to wait between retrying to allocate a connection. authentication-context The Elytron authentication context which defines the javax.security.auth.Subject that is used to distinguish connections in the pool. authentication-context-and-application Indicates that either application-supplied parameters, such as from getConnection(user, pw) , or Subject , are used to distinguish connections in the pool. These parameters are provided by Elytron after authentication when using a configured authentication-context . background-validation Specifies that connections should be validated on a background thread versus being validated prior to use. Changing this value requires a server restart. background-validation-millis The amount of time, in milliseconds, that background validation will run. Changing this value requires a server restart. blocking-timeout-wait-millis The maximum time, in milliseconds, to block while waiting for a connection before throwing an exception. Note that this blocks only while waiting for locking a connection, and will never throw an exception if creating a new connection takes an inordinately long time. capacity-decrementer-class Class defining the policy for decrementing connections in the pool. 
capacity-decrementer-properties Properties to inject in class defining the policy for decrementing connections in the pool. capacity-incrementer-class Class defining the policy for incrementing connections in the pool. capacity-incrementer-properties Properties to inject in class defining the policy for incrementing connections in the pool. class-name The fully qualified class name of a managed connection factory or admin object. connectable Enable the use of CMR. This feature means that a local resource can reliably participate in an XA transaction. elytron-enabled Enables Elytron security for handling authentication of connections. The Elytron authentication-context to be used will be the current context if no context is specified. See authentication-context for additional information. enabled Specifies if the resource adapter should be enabled. enlistment Specifies if lazy enlistment should be used if supported by the resource adapter. enlistment-trace Specifies if JBoss EAP/IronJacamar should record enlistment traces. This is false by default. flush-strategy Specifies how the pool should be flushed in case of an error. Valid values are: FailingConnectionOnly Only the failing connection is removed. This is the default setting. InvalidIdleConnections The failing connection and idle connections that share the same credentials and are returned as invalid by the ValidatingManagedConnectionFactory.getInvalidConnections(... ) method are removed. IdleConnections The failing connection and idle connections that share the same credentials are removed. Gracefully The failing connection and idle connections that share the same credentials are removed. Active connections that share the same credentials are destroyed upon return to the pool. EntirePool The failing connection and idle and active connections that share the same credentials are removed. This setting is not recommended for production systems. AllInvalidIdleConnections The failing connection and idle connections that are returned as invalid by the ValidatingManagedConnectionFactory.getInvalidConnections(... ) method are removed. AllIdleConnections The failing connection and all idle connections are removed. AllGracefully The failing connection and all idle connections are removed. Active connections are destroyed upon return to the pool. AllConnections The failing connection and all idle and active connections are removed. This setting is not recommended for production systems. idle-timeout-minutes The maximum time, in minutes, a connection may be idle before being closed. The actual maximum time depends also on the IdleRemover scan time, which is half of the smallest idle-timeout-minutes value of any pool. Changing this value requires a server restart. initial-pool-size The initial number of connections a pool should hold. interleaving Specifies whether to enable interleaving for XA connections. jndi-name The JNDI name for the connection factory. max-pool-size The maximum number of connections for a pool. No more connections will be created in each sub-pool. mcp The ManagedConnectionPool implementation. For example: org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreArrayListManagedConnectionPool . min-pool-size The minimum number of connections for a pool. no-recovery Specifies if the connection pool should be excluded from recovery. no-tx-separate-pool Oracle does not like XA connections getting used both inside and outside a Jakarta Transactions transaction. 
To workaround the problem you can create separate sub-pools for the different contexts. pad-xid Specifies whether the Xid should be padded. pool-fair Specifies if pool use should be fair. pool-prefill Specifies if the pool should be prefilled. Changing this value requires a server restart. pool-use-strict-min Specifies if the min-pool-size should be considered strict. recovery-authentication-context The Elytron authentication context used for recovery. If no authentication-context is specified, then the current one will be used. recovery-credential-reference Credential, from a credential store, to authenticate on recovery of the connection. recovery-elytron-enabled Indicates that an Elytron authentication context will be used for recovery. The default is false . recovery-password The password used for recovery. recovery-plugin-class-name The fully qualified class name of the recovery plugin implementation. recovery-plugin-properties The properties for the recovery plugin. recovery-security-domain The security domain used for recovery. recovery-username The user name used for recovery. same-rm-override Unconditionally set whether javax.transaction.xa.XAResource.isSameRM(XAResource) returns true or false. security-application Indicates that application-supplied parameters, such as from getConnection(user, pw) , are used to distinguish connections in the pool. security-domain The security domain which defines the javax.security.auth.Subject that is used to distinguish connections in the pool. security-domain-and-application Indicates that either application-supplied parameters, such as from getConnection(user, pw) , or Subject , from the security domain, are used to distinguish connections in the pool. sharable Enable the use of sharable connections, which allows lazy association to be enabled if supported. tracking Specifies if IronJacamar should track connection handles across transaction boundaries. use-ccm Enable the use of a cached connection manager. use-fast-fail When set to true , fail a connection allocation on the first try if it is invalid. When set to false , keep trying until the pool is exhausted of all potential connections. use-java-context Setting this to false will bind the object into global JNDI. validate-on-match Specifies if connection validation should be done when a connection factory attempts to match a managed connection. This is typically exclusive to the use of background validation. wrap-xa-resource Specifies whether XAResource instances should be wrapped in an org.jboss.tm.XAResourceWrapper instance. xa-resource-timeout The value is passed to XAResource.setTransactionTimeout() , in seconds. The default is 0 . A.26. Resource Adapter Statistics Table A.58. Resource Adapter Statistics Name Description ActiveCount The number of active connections. Each of the connections is either in use by an application or available in the pool AvailableCount The number of available connections in the pool. AverageBlockingTime The average time spent blocking on obtaining an exclusive lock on the pool. The value is in milliseconds. AverageCreationTime The average time spent creating a connection. The value is in milliseconds. CreatedCount The number of connections created. DestroyedCount The number of connections destroyed. InUseCount The number of connections currently in use. MaxCreationTime The maximum time it took to create a connection. The value is in milliseconds. MaxUsedCount The maximum number of connections used. 
MaxWaitCount The maximum number of requests waiting for a connection at the same time. MaxWaitTime The maximum time spent waiting for an exclusive lock on the pool. TimedOut The number of timed out connections. TotalBlockingTime The total time spent waiting for an exclusive lock on the pool. The value is in milliseconds. TotalCreationTime The total time spent creating connections. The value is in milliseconds. WaitCount The number of requests that had to wait for a connection. A.27. Undertow Subsystem Attributes See the tables below for the attributes of the various elements of the undertow subsystem. Note Attribute names in these tables are listed as they appear in the management model, for example, when using the management CLI. See the schema definition file located at EAP_HOME /docs/schema/wildfly-undertow_4_0.xsd to view the elements as they appear in the XML, as there may be differences from the management model. Main Attributes Application Security Domain Attributes Buffer Cache Attributes Byte Buffer Pool Attributes Servlet Container Attributes Filter Attributes Handler Attributes Server Attributes Table A.59. Main undertow Attributes Attribute Default Description default-security-domain other The default security domain used by web deployments. default-server default-server The default server to use for deployments. default-servlet-container default The default servlet container to use for deployments. default-virtual-host default-host The default virtual host to use for deployments. instance-id USD{jboss.node.name} The cluster instance ID. obfuscate-session-route true Whether the instance-id value is obfuscated during server routing. The obfuscated server route does not change across server restarts, unless there is a change in the instance-id value. statistics-enabled false Whether statistics are enabled. Application Security Domain Attributes The application security domain attributes has the following structure: application-security-domain setting single-sign-on application-security-domain Attributes Table A.60. application-security-domain Attributes Attribute Default Description enable-jacc false Enable authorization using Jakarta Authorization. enable-jaspi true Enable Jakarta Authentication for the associated deployments. http-authentication-factory The HTTP authentication factory to be used by deployments that reference the mapped security domain. integrated-jaspi true Whether integrated-jaspi should be used. When set to true during Jakarta Authentication authentication, the identity is loaded from the SecurityDomain referenced by the deployment. When set to false , an ad hoc identity is created instead. override-deployment-config false Whether the authentication configuration in the deployment should be overridden by the factory. referencing-deployments The deployments currently referencing this mapping. security-domain The SecurityDomain to be used by the deployments. single-sign-on Attributes Table A.61. single-sign-on Attributes Attribute Default Description client-ssl-context Reference to the SSL context used to secure back-channel logout connection. cookie-name JSESSIONIDSSO Name of the cookie. credential-reference The credential reference to decrypt the private key entry. domain The cookie domain that will be used. http-only false Set cookie httpOnly attribute. key-alias Alias of the private key entry used for signing and verifying back-channel logout connection. key-store Reference to keystore containing a private key entry. path / Cookie path. 
secure false Set cookie secure attribute. Buffer Cache Attributes Table A.62. buffer-cache Attributes Attribute Default Description buffer-size 1024 The size of the buffers. Smaller buffers allow space to be utilized more effectively. buffers-per-region 1024 The numbers of buffers per region. max-regions 10 The maximum number of regions. This controls the maximum amount of memory that can be used for caching. Byte Buffer Pool Attributes Table A.63. byte-buffer-pool Attributes Attribute Default Description buffer-size The size, in bytes, of each buffer slice. If not specified, the size is set based on the available RAM of your system: 512 bytes for less than 64 MB RAM 1024 bytes (1 KB) for 64 MB - 128 MB RAM 16384 bytes (16 KB) for more than 128 MB RAM For performance tuning advice on this attribute, see Configuring Buffer Pools in the JBoss EAP Performance Tuning Guide . direct Boolean value that denotes if this buffer is a direct or heap pool. If not specified, the value is set based on the available RAM of your system: If available RAM is < 64MB, the value is set to false If available RAM is >= 64MB, the value is set to true Note that direct pools also have a corresponding heap pool. leak-detection-percent 0 The percentage of buffers that should be allocated with a leak detector. max-pool-size The maximum number of buffers to keep in the pool. Buffers will still be allocated above this limit, but will not be retained if the pool is full. thread-local-cache-size 12 The size of the per-thread cache. This is a maximum size, the cache will use smart sizing to only keep buffers on the thread if the thread is actually allocating buffers. Servlet Container Attributes The servlet container component has the following structure: servlet-container mime-mapping setting crawler-session-management jsp persistent-sessions session-cookie websockets welcome-file servlet-container Attributes Table A.64. servlet-container Attributes Attribute Default Description allow-non-standard-wrappers false Whether request and response wrappers that do not extend the standard wrapper classes can be used. default-buffer-cache default The buffer cache to use for caching static resources. default-cookie-version 0 The default cookie version to use for cookies created by the application. default-encoding Default encoding to use for all deployed applications. default-session-timeout 30 The default session timeout in minutes for all applications deployed in the container. directory-listing If directory listing should be enabled for default servlets. disable-caching-for-secured-pages true Whether to set headers to disable caching for secured paged. Disabling this can cause security problems, as sensitive pages may be cached by an intermediary. disable-file-watch-service false If set to true , then the file watch service will not be used to monitor exploded deployments for changes. This attribute overrides the io.undertow.disable-file-system-watcher system property. disable-session-id-reuse false If set to true , then an unknown session ID will never be reused and a new session ID will be generated. If set to false , then the session ID will be reused only if it is present in the session manager of another deployment to allow the same session ID to be shared between applications on the same server. eager-filter-initialization false Whether to call filter init() on deployment start rather than when first requested. ignore-flush false Ignore flushes on the servlet output stream. 
In most cases these just hurt performance for no good reason. max-sessions The maximum number of sessions that can be active at one time. proactive-authentication true Whether proactive authentication should be used. If this is true , a user will always be authenticated if credentials are present. session-id-length 30 Longer session ID's are more secure. This value specifies the length of the generated session ID in bytes. The system encodes the generated session ID as a Base64 string and provides the result to the client as a session ID cookie. As a result of this processing, the server sends to the client a cookie value that is approximately 33% larger than the session ID that it originally generated. For example, a session ID length of 30 results in a cookie value length of 40. stack-trace-on-error local-only If an error page with the stack trace should be generated on error. Values are all , none and local-only . use-listener-encoding false Use encoding defined on listener. mime-mapping Attributes Table A.65. mime-mapping Attributes Attribute Default Description value The mime type for this mapping. crawler-session-management Attributes Configures special session handling for crawler bots. Note When using the management CLI to manage the crawler-session-management element, it is available under settings in the servlet-container element. For example: Table A.66. crawler-session-management Attributes Attribute Default Description session-timeout The session timeout in seconds for sessions that are owned by crawlers. user-agents Regular expression that is used to match the user agent of a crawler. jsp Attributes Note When using the management CLI to manage the jsp element, it is available under settings in the servlet-container element. For example: Table A.67. jsp Attributes Attribute Default Description check-interval 0 Check interval for Jakarta Server Pages updates using a background thread. This has no effect for most deployments where Jakarta Server Pages change notifications are handled using the file system notification API. This only takes effect if the file watch service is disabled. development false Enable development mode which enables reloading Jakarta Server Pages on-the-fly. disabled false Enable the Jakarta Server Pages container. display-source-fragment true When a runtime error occurs, attempts to display corresponding Jakarta Server Pages source fragment. dump-smap false Write SMAP data to a file. error-on-use-bean-invalid-class-attribute false Enable errors when using a bad class in useBean. generate-strings-as-char-arrays false Generate String constants as char arrays. java-encoding UTF8 Specify the encoding used for Java sources. keep-generated true Keep the generated servlets. mapped-file true Map to the Jakarta Server Pages source. modification-test-interval 4 Minimum amount of time between two tests for updates, in seconds. optimize-scriptlets false If Jakarta Server Pages scriptlets should be optimized to remove string concatenation. recompile-on-fail false Retry failed Jakarta Server Pages compilations on each request. scratch-dir Specify a different work directory. smap true Enable SMAP. source-vm 1.8 Source VM level for compilation. tag-pooling true Enable tag pooling. target-vm 1.8 Target VM level for compilation. trim-spaces false Trim some spaces from the generated servlet. x-powered-by true Enable advertising the Jakarta Server Pages engine in x-powered-by. 
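The settings elements described in the notes above, such as crawler-session-management and jsp, are addressed as child resources of the servlet-container resource when using the management CLI. The following commands are a minimal sketch of reading and updating one of these settings; they assume the default servlet container, named default in the standard JBoss EAP configuration, and use the development attribute purely as an illustration:

# Read the current Jakarta Server Pages settings of the "default" servlet container
/subsystem=undertow/servlet-container=default/setting=jsp:read-resource

# Enable development mode so Jakarta Server Pages are recompiled on the fly
/subsystem=undertow/servlet-container=default/setting=jsp:write-attribute(name=development,value=true)

Depending on the attribute that is changed, the operation response may indicate that a reload is required before the new value takes effect.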
persistent-sessions Attributes Note When using the management CLI to manage the persistent-sessions element, it is available under settings in the servlet-container element. For example: Table A.68. persistent-sessions Attributes Attribute Default Description path The path to the persistent session data directory. If this is null, sessions will be stored in memory. relative-to The directory the path is relative to. session-cookie Attributes Note When using the management CLI to manage the session-cookie element, it is available under settings in the servlet-container element. For example: Table A.69. session-cookie Attributes Attribute Default Description comment Cookie comment. domain Cookie domain. http-only Whether the cookie is http-only. max-age Maximum age of the cookie. name Name of the cookie. secure Whether the cookie is secure. websockets Attributes Note When using the management CLI to manage the websockets element, it is available under settings in the servlet-container element. For example: Table A.70. websockets Attributes Attribute Default Description buffer-pool default The buffer pool to use for websocket deployments. deflater-level 0 Configures the level of compression of the DEFLATE algorithm. dispatch-to-worker true Whether callbacks should be dispatched to a worker thread. If this is false , then they will be run in the IO thread, which is faster however care must be taken not to perform blocking operations. per-message-deflate false Enables websocket's per-message compression extension. worker default The worker to use for websocket deployments. welcome-file Attributes Defines a welcome file and has no options. Filter Attributes These components can be found at /subsystem=undertow/configuration=filter . custom-filter Filters Table A.71. custom-filter Attributes Attribute Default Description class-name Class name of HttpHandler. module Module name where class can be loaded from. parameters Filter parameters. error-page Filters The error pages Table A.72. error-page Attributes Attribute Default Description code Error page code. path Error page path. expression-filter Filters A filter parsed from the Undertow expression language. Table A.73. expression-filter Attributes Attribute Default Description expression The expression that defines the filter. module Module to use to load the filter definitions. gzip Filters Defines the gzip filter and has no attributes. mod-cluster Filters The mod-cluster filter component has the following structure: mod-cluster balancer load-balancing-group node context Table A.74. mod-cluster Attributes Attribute Default Description advertise-frequency 10000 The frequency in milliseconds that mod_cluster advertises itself on the network. advertise-path / The path that mod_cluster is registered under. advertise-protocol http The protocol that is in use. advertise-socket-binding The multicast group that is used to advertise. broken-node-timeout 60000 The amount of time that must elapse before a broken node is removed from the table. cached-connections-per-thread 5 The number of connections that will be kept alive indefinitely. connection-idle-timeout 60 The amount of time a connection can be idle before it will be closed. Connections will not time out once the pool size is down to the configured minimum, which is configured by cached-connections-per-thread . connections-per-thread 10 The number of connections that will be maintained to back-end servers, per IO thread. 
enable-http2 false Whether the load balancer should attempt to upgrade back-end connections to HTTP/2. If HTTP/2 is not supported, HTTP or HTTPS will be used as normal. failover-strategy LOAD_BALANCED The attribute that determines how a failover node is chosen, in the event that the node to which a session has affinity is not available. health-check-interval 10000 The frequency of health check pings to back-end nodes. http2-enable-push true Whether push should be enabled for HTTP/2 connections. http2-header-table-size 4096 The size of the header table used for HPACK compression, in bytes. This amount of memory will be allocated per connection for compression. Larger values use more memory but may give better compression. http2-initial-window-size 65535 The flow control window size, in bytes, that controls how quickly the client can send data to the server. http2-max-concurrent-streams The maximum number of HTTP/2 streams that can be active at any time on a single connection. http2-max-frame-size 16384 The maximum HTTP/2 frame size, in bytes. http2-max-header-list-size The maximum size, in bytes, of request headers the server is prepared to accept. management-access-predicate A predicate that is applied to incoming requests to determine if they can perform mod cluster management commands. Provides additional security on top of what is provided by limiting management to requests that originate from the management-socket-binding . management-socket-binding The socket binding of the mod_cluster management port. When using mod_cluster two HTTP listeners should be defined, a public one to handle requests, and one bound to the internal network to handle mod cluster commands. This socket binding should correspond to the internal listener, and should not be publicly accessible. max-ajp-packet-size 8192 The maximum size, in bytes, for AJP packets. Increasing this will allow AJP to work for requests and responses that have a large amount of headers. This must be the same between load balancers and backend servers. max-request-time -1 The maximum amount of time that a request to a back-end node can take before it is killed. max-retries 1 The number of times that an attempt to retry a request will be made, if the request fails. Note If a request is not considered idempotent, it will only be retried if the proxy can be sure that it was not sent to the backend server. request-queue-size 10 The number of requests that can be queued if the connection pool is full before requests are rejected with a 503. security-key The security key that is used for the mod_cluster group. All members must use the same security key. security-realm The security realm that provides the SSL configuration. Deprecated: Use the ssl-context attribute to reference a configured SSLContext directly. ssl-context The reference to the SSLContext that is used by the filter. use-alias false Whether an alias check is performed. worker default The XNIO worker that is used to send the advertise notifications. Table A.75. balancer Attributes Attribute Default Description max-attempts The number of attempts to send the request to a back-end server. sticky-session If sticky sessions are enabled. sticky-session-cookie The session cookie name. sticky-session-force If this is true , then an error will be returned if the request cannot be routed to the sticky node, otherwise it will be routed to another node. sticky-session-path The path of the sticky session cookie. 
sticky-session-remove Remove the session cookie if the request cannot be routed to the correct host. wait-worker The number of seconds to wait for an available worker. load-balancing-group Attributes Defines a load balancing group and has no options. Table A.76. node Attributes Attribute Default Description aliases The nodes aliases. cache-connections The number of connections to keep alive indefinitely. elected The elected count. flush-packets If received data should be immediately flushed. load The current load of this node. load-balancing-group The load balancing group this node belongs to. max-connections The maximum number of connections per IO thread. open-connections The current number of open connections. ping The nodes ping. queue-new-requests If a request is received and there is no worker immediately available should it be queued. read The number of bytes read from the node. request-queue-size The size of the request queue. status The current status of this node. timeout The request timeout. ttl The time connections will stay alive with no requests before being closed, if the number of connections is larger than cache-connections . uri The URI that the load balancer uses to connect to the node. written The number of bytes transferred to the node. Table A.77. context Attributes Attribute Default Description requests The number of requests against this context. status The status of this context. request-limit Filters Table A.78. request-limit Attributes Attribute Default Description max-concurrent-requests Maximum number of concurrent requests. queue-size Number of requests to queue before they start being rejected. response-header Filters Response header filter allows you to add custom headers. Table A.79. response-header Attributes Attribute Default Description header-name The header name. header-value The header value. rewrite Filters Table A.80. rewrite Attributes Attribute Default Description redirect false Whether a redirect will be done instead of a rewrite. target The expression that defines the target. If you are redirecting to a constant target put single quotes around the value. Handler Attributes These components can be found at /subsystem=undertow/configuration=handler . file Attributes Table A.81. file Attributes Attribute Default Description cache-buffer-size 1024 Size of the buffers. cache-buffers 1024 Number of buffers. case-sensitive true Whether to use case-sensitive file handling. Note that setting this to false for case insensitivity will only work if the underlying file system is case insensitive. directory-listing false Whether to enable directory listing. follow-symlink false Whether to enable following symbolic links. path Path on the file system from where file handler will serve resources. safe-symlink-paths Paths that are safe to be targets of symbolic links. Using WebDAV for Static Resources versions of JBoss EAP allowed for using WebDAV with the web subsystem, by way of the WebdavServlet , to host static resources and enable additional HTTP methods for accessing and manipulating those files. In JBoss EAP 7, the undertow subsystem does provide a mechanism for serving static files using a file handler, but the undertow subsystem does not support WebDAV. If you want to use WebDAV with JBoss EAP 7, you can write a custom WebDAV servlet. reverse-proxy attributes The reverse-proxy handler component has the following structure: reverse-proxy host Table A.82. 
reverse-proxy Attributes Attribute Default Description cached-connections-per-thread 5 The number of connections that will be kept alive indefinitely. connection-idle-timeout 60 The amount of time a connection can be idle before it will be closed. Connections will not time out once the pool size is down to the configured minimum (as configured by cached-connections-per-thread). connections-per-thread 40 The number of connections that will be maintained to back-end servers, per IO thread. max-request-time -1 The maximum time that a proxy request can be active for, before being killed. Defaults to unlimited. max-retries 1 The number of times that an attempt to retry a request will be made, if the request fails. Note If a request is not considered idempotent, it will only be retried if the proxy can be sure that it was not sent to the backend server. problem-server-retry 30 Time in seconds to wait before attempting to reconnect to a server that is down. request-queue-size 10 The number of requests that can be queued if the connection pool is full before requests are rejected with a 503. session-cookie-names JSESSIONID Comma-separated list of session cookie names. Generally this will just be JSESSIONID. Table A.83. host Attributes Attribute Default Description enable-http2 false If true , then the proxy will attempt to use HTTP/2 to connect to the back end. If it is not supported, it will fall back to HTTP/1.1. instance-id The instance ID, or JVM route, that will be used to enable sticky sessions. outbound-socket-binding Outbound socket binding for this host. path / Optional path if host is using non root resource. scheme http The kind of scheme that is used. security-realm The security realm that provides the SSL configuration for the connection to the host. ssl-context Reference to the SSLContext to be used by this handler. Server Attributes The server component has the following structure: server ajp-listener host filter-ref location filter-ref setting access-log console-access-log http-invoker single-sign-on http-listener https-listener server Attributes Table A.84. server Attributes Attribute Default Description default-host default-host The server's default virtual host. servlet-container default The server's default servlet container. ajp-listener Attributes Table A.85. ajp-listener Attributes Attribute Default Description allow-encoded-slash false If a request comes in with encoded characters, for example %2F , whether these will be decoded. allow-equals-in-cookie-value false Whether to allow non-escaped equals characters in unquoted cookie values. Unquoted cookie values may not contain equals characters. If present the value ends before the equals sign. The remainder of the cookie value will be dropped. allow-unescaped-characters-in-url false Whether to allow non-escaped characters in a URL. If set to true , the listener processes any URL containing non-escaped, non-ASCII characters. If set to false , the listener rejects any URL containing non-escaped, non-ASCII characters with an HTTP Bad Request 400 response code. always-set-keep-alive true Whether a Connection: keep-alive header will be added to responses, even when it is not strictly required by the specification. buffer-pipelined-data false Whether to buffer pipelined requests. buffer-pool default The AJP listener's buffer pool. decode-url true If this is true then the parser will decode the URL and query parameters using the selected character encoding, defaulting to UTF-8. If this is false they will not be decoded. 
This will allow a later handler to decode them into whatever charset is desired. disallowed-methods ["TRACE"] A comma-separated list of HTTP methods that are not allowed. enabled true If the listener is enabled. Deprecated: Enabled attributes can cause problems in enforcement of configuration consistency. max-ajp-packet-size 8192 The maximum supported size of AJP packets. If this is modified it has to be increased on the load balancer and the back-end server. max-buffered-request-size 16384 Maximum size of a buffered request, in bytes. Requests are not usually buffered; the most common case is when performing SSL renegotiation for a POST request, where the post data must be fully buffered in order to perform the renegotiation. max-connections The maximum number of concurrent connections. If no value is set in the server configuration, the limit for the number of concurrent connections is Integer.MAX_VALUE . max-cookies 200 The maximum number of cookies that will be parsed. This is used to protect against hash vulnerabilities. max-header-size 1048576 The maximum size in bytes of an HTTP request header. max-headers 200 The maximum number of headers that will be parsed. This is used to protect against hash vulnerabilities. max-parameters 1000 The maximum number of parameters that will be parsed. This is used to protect against hash vulnerabilities. This applies to both query parameters, and to POST data, but is not cumulative. For example, you can potentially have max parameters * 2 total parameters. max-post-size 10485760 The maximum size of a post that will be accepted. no-request-timeout 60000 The length of time in milliseconds that the connection can be idle before it is closed by the container. read-timeout Configure a read timeout for a socket, in milliseconds. If the given amount of time elapses without a successful read taking place, the socket's read will throw a ReadTimeoutException . receive-buffer The receive buffer size. record-request-start-time false Whether to record the request start time, to allow for request time to be logged. This has a small but measurable performance impact. redirect-socket If this listener is supporting non-SSL requests, and a request is received for which a matching security constraint requires SSL transport, whether to automatically redirect the request to the socket binding port specified here. request-parse-timeout The maximum amount of time in milliseconds that can be spent parsing the request. resolve-peer-address false Enables host DNS lookup. scheme The listener scheme, can be HTTP or HTTPS. By default the scheme will be taken from the incoming AJP request. secure false If this is true , then requests that originate from this listener are marked as secure, even if the request is not using HTTPS. send-buffer The send buffer size. socket-binding The AJP listener's socket binding. tcp-backlog Configure a server with the specified backlog. tcp-keep-alive Configure a channel to send TCP keep-alive messages in an implementation-dependent manner. url-charset UTF-8 URL charset. worker default The listener's XNIO worker. write-timeout Configure a write timeout for a socket, in milliseconds. If the given amount of time elapses without a successful write taking place, the socket's write will throw a WriteTimeoutException . host Attributes Table A.86. host Attributes Attribute Default Description alias Comma-separated list of aliases for the host. default-response-code 404 If set, this will be the response code sent back in case the requested context does not exist on the server.
default-web-module ROOT.war Default web module. disable-console-redirect false If set to true , /console redirect will not be enabled for this host. queue-requests-on-start true If set to true , requests should be queued on start for this host. If set to false , the default response code is returned instead. filter-ref Attributes Table A.87. filter-ref Attributes Attribute Default Description predicate Predicates provide a simple way of making a true/false decision based on an exchange. Many handlers have a requirement that they be applied conditionally, and predicates provide a general way to specify a condition. priority 1 Defines filter order. A lower number instructs the server to be included earlier in the handler chain than others above the same context. Values range from 1 , indicating the filter will be handled first, to 2147483647 , resulting in the filter being handled last. location Attributes Table A.88. location Attributes Attribute Default Description handler Default handler for this location. filter-ref Attributes Table A.89. filter-ref Attributes Attribute Default Description predicate Predicates provide a simple way of making a true/false decision based on an exchange. Many handlers have a requirement that they be applied conditionally, and predicates provide a general way to specify a condition. priority 1 Defines filter order. It should be set to 1 or more. A higher number instructs the server to be included earlier in the handler chain than others under the same context. access-log Attributes Note When using the management CLI to manage the access-log element, it is available under settings in the host element. For example: Table A.90. access-log Attributes Attribute Default Description directory USD{jboss.server.log.dir} The directory in which to save logs. extended false Whether the log uses the extended log file format. pattern common The access log pattern. For details about the options available for this attribute, see Provided Undertow Handlers in the JBoss EAP Development Guide . Note If you set the pattern to print the time taken to process the request, you must also enable the record-request-start-time attribute on the appropriate listeners; otherwise the time will not be recorded properly in the access log. For example: predicate Predicate that determines whether the request should be logged. prefix access_log. Prefix for the log file name. relative-to The directory the path is relative to. rotate true Whether to rotate the access log every day. suffix log Suffix for the log file name. use-server-log false Whether the log should be written to the server log, rather than a separate file. worker default Name of the worker to use for logging. console-access-log Attributes Table A.91. console-access-log attributes Attribute Default Description attributes {remote-host={},remote-user={},date-time={},request-line={},response-code={},bytes-sent={}} Specifies log data to include in the console access log output, or customizations to default data. include-host-name false Specifies whether to include the host name in the JSON structured output. If set to true the key in the structured data is "hostName" and the value is the name of the host for which the console-access-log is configured. metadata Specifies custom metadata to include in console access log output. predicate Predicate that determines whether the request should be logged. worker default Name of the worker to use for logging. http-invoker Attributes Table A.92. 
http-invoker Attributes Attribute Default Description http-authentication-factory The HTTP authentication factory to use for authentication. path wildfly-services The path that the services are installed under. security-realm The legacy security realm to use for authentication. single-sign-on Attributes Note When using the management CLI to manage the single-sign-on element, it is available under settings in the host element. Important While distributed single sign-on is no different from an application perspective than in previous versions of JBoss EAP, in JBoss EAP 7 the caching and distribution of authentication information is handled differently. For JBoss EAP 7, when running the ha profile, by default each host will have its own Infinispan cache which will store the relevant session and SSO cookie information. This cache is based on the default cache of the web cache container. JBoss EAP will also handle propagating information between all hosts' individual caches. Table A.93. single-sign-on Attributes Attribute Default Description cookie-name JSESSIONIDSSO Name of the cookie. domain The cookie domain that will be used. http-only false Set cookie httpOnly attribute. path / Cookie path. secure false Set cookie secure attribute. http-listener Attributes Table A.94. http-listener Attributes Attribute Default Description allow-encoded-slash false If a request comes in with encoded characters, for example %2F , whether these will be decoded. allow-equals-in-cookie-value false Whether to allow non-escaped equals characters in unquoted cookie values. Unquoted cookie values may not contain equals characters. If present, the value ends before the equals sign. The remainder of the cookie value will be dropped. allow-unescaped-characters-in-url false Whether to allow non-escaped characters in a URL. If set to true , the listener processes any URL containing non-escaped, non-ASCII characters. If set to false , the listener rejects any URL containing non-escaped, non-ASCII characters with an HTTP Bad Request 400 response code. always-set-keep-alive true Whether a Connection: keep-alive header will be added to responses, even when it is not strictly required by the specification. buffer-pipelined-data false Whether to buffer pipelined requests. buffer-pool default The listener's buffer pool. certificate-forwarding false Whether certificate forwarding should be enabled. If this is enabled then the listener will take the certificate from the SSL_CLIENT_CERT attribute. This should only be enabled if behind a proxy, and the proxy is configured to always set these headers. decode-url true Whether the parser will decode the URL and query parameters using the selected character encoding, defaulting to UTF-8. If this is false they will not be decoded. This will allow a later handler to decode them into whatever charset is desired. disallowed-methods ["TRACE"] A comma-separated list of HTTP methods that are not allowed. enable-http2 false Whether to enable HTTP/2 support for this listener. enabled true Whether the listener is enabled. Deprecated: Enabled attributes can cause problems in enforcement of configuration consistency. http2-enable-push true Whether server push is enabled for this connection. http2-header-table-size 4096 The size, in bytes, of the header table used for HPACK compression. This amount of memory will be allocated per connection for compression. Larger values use more memory but may give better compression.
http2-initial-window-size 65535 The flow control window size, in bytes, that controls how quickly the client can send data to the server. http2-max-concurrent-streams The maximum number of HTTP/2 streams that can be active at any time on a single connection. http2-max-frame-size 16384 The maximum HTTP/2 frame size, in bytes. http2-max-header-list-size The maximum size of request headers the server is prepared to accept. max-buffered-request-size 16384 Maximum size of a buffered request, in bytes. Requests are not usually buffered; the most common case is when performing SSL renegotiation for a POST request, and the post data must be fully buffered in order to perform the renegotiation. max-connections The maximum number of concurrent connections. If no value is set in the server configuration, the limit for the number of concurrent connections is Integer.MAX_VALUE . max-cookies 200 The maximum number of cookies that will be parsed. This is used to protect against hash vulnerabilities. max-header-size 1048576 The maximum size in bytes of an HTTP request header. max-headers 200 The maximum number of headers that will be parsed. This is used to protect against hash vulnerabilities. max-parameters 1000 The maximum number of parameters that will be parsed. This is used to protect against hash vulnerabilities. This applies to both query parameters, and to POST data, but is not cumulative. For example, you can potentially have max parameters * 2 total parameters. max-post-size 10485760 The maximum size of a post that will be accepted. no-request-timeout 60000 The length of time in milliseconds that the connection can be idle before it is closed by the container. proxy-address-forwarding false Whether to enable x-forwarded-host and similar headers and set a remote IP address and host name. proxy-protocol false Whether to use the PROXY protocol to transport connection information. If set to true , the listener uses the PROXY protocol Version 1, as defined by The PROXY protocol Versions 1 & 2 specification. This option must only be enabled for listeners that are behind a load balancer that supports the same protocol. read-timeout Configure a read timeout for a socket, in milliseconds. If the given amount of time elapses without a successful read taking place, the socket's read will throw a ReadTimeoutException . receive-buffer The receive buffer size. record-request-start-time false Whether to record the request start time, to allow for request time to be logged. This has a small but measurable performance impact. redirect-socket If this listener is supporting non-SSL requests, and a request is received for which a matching security constraint requires SSL transport, whether to automatically redirect the request to the socket binding port specified here. request-parse-timeout The maximum amount of time in milliseconds that can be spent parsing the request. require-host-http11 false Requires all HTTP/1.1 requests to have a Host header. If the request does not include this header it will be rejected with a 403 error. resolve-peer-address false Enables host DNS lookup. secure false If this is true , requests that originate from this listener are marked as secure, even if the request is not using HTTPS. send-buffer The send buffer size. socket-binding The listener's socket binding. tcp-backlog Configure a server with the specified backlog. tcp-keep-alive Configure a channel to send TCP keep-alive messages in an implementation-dependent manner. url-charset UTF-8 URL charset. worker default The listener's XNIO worker.
write-timeout Configure a write timeout for a socket, in milliseconds. If the given amount of time elapses without a successful write taking place, the socket's write will throw a WriteTimeoutException . https-listener Attributes Table A.95. https-listener Attributes Attribute Default Description allow-encoded-slash false If a request comes in with encoded characters, for example %2F , whether these will be decoded. allow-equals-in-cookie-value false Whether to allow non-escaped equals characters in unquoted cookie values. Unquoted cookie values may not contain equals characters. If present, the value ends before the equals sign. The remainder of the cookie value will be dropped. allow-unescaped-characters-in-url false Whether to allow non-escaped characters in a URL. If set to true , the listener processes any URL containing non-escaped, non-ASCII characters. If set to false , the listener rejects any URL containing non-escaped, non-ASCII characters with an HTTP Bad Request 400 response code. always-set-keep-alive true Whether a Connection: keep-alive header will be added to responses, even when it is not strictly required by the specification. buffer-pipelined-data false Whether to buffer pipelined requests. buffer-pool default The listener's buffer pool. certificate-forwarding false Whether certificate forwarding should be enabled or not. If this is enabled then the listener will take the certificate from the SSL_CLIENT_CERT attribute. This should only be enabled if behind a proxy, and the proxy is configured to always set these headers. decode-url true Whether the parser will decode the URL and query parameters using the selected character encoding, defaulting to UTF-8. If this is false they will not be decoded. This will allow a later handler to decode them into whatever charset is desired. disallowed-methods ["TRACE"] A comma-separated list of HTTP methods that are not allowed. enable-http2 false Enables HTTP/2 support for this listener. enable-spdy false Enables SPDY support for this listener. Deprecated: SPDY has been replaced by HTTP/2. enabled true If the listener is enabled. Deprecated: Enabled attributes can cause problems in enforcement of configuration consistency. enabled-cipher-suites Configures enabled SSL ciphers. Deprecated: Where an SSLContext is referenced it should be configured with the cipher suites to be supported. enabled-protocols Configures SSL protocols. Deprecated: Where an SSLContext is referenced it should be configured with the protocols to be supported. http2-enable-push true If server push is enabled for this connection. http2-header-table-size 4096 The size, in bytes, of the header table used for HPACK compression. This amount of memory will be allocated per connection for compression. Larger values use more memory but may give better compression. http2-initial-window-size 65535 The flow control window size, in bytes, that controls how quickly the client can send data to the server. http2-max-concurrent-streams The maximum number of HTTP/2 streams that can be active at any time on a single connection. http2-max-frame-size 16384 The maximum HTTP/2 frame size, in bytes. http2-max-header-list-size The maximum size of request headers the server is prepared to accept. max-buffered-request-size 16384 Maximum size of a buffered request, in bytes. Requests are not usually buffered; the most common case is when performing SSL renegotiation for a POST request, and the post data must be fully buffered in order to perform the renegotiation.
max-connections The maximum number of concurrent connections. If no value is set in the server configuration, the limit for the number of concurrent connections is Integer.MAX_VALUE . max-cookies 100 The maximum number of cookies that will be parsed. This is used to protect against hash vulnerabilities. max-header-size 1048576 The maximum size in bytes of an HTTP request header. max-headers 200 The maximum number of headers that will be parsed. This is used to protect against hash vulnerabilities. max-parameters 1000 The maximum number of parameters that will be parsed. This is used to protect against hash vulnerabilities. This applies to both query parameters, and to POST data, but is not cumulative. For example, you can potentially have max parameters * 2 total parameters. max-post-size 10485760 The maximum size of a post that will be accepted. no-request-timeout 60000 The length of time in milliseconds that the connection can be idle before it is closed by the container. proxy-address-forwarding false Enables handling of x-forwarded-host header, and other x-forwarded-* headers, and uses this header information to set the remote address. This should only be used behind a trusted proxy that sets these headers otherwise a remote user can spoof their IP address. proxy-protocol false Whether to use the PROXY protocol to transport connection information. If set to true , the listener uses the PROXY protocol Version 1, as defined by The PROXY protocol Versions 1 & 2 specification. This option must only be enabled for listeners that are behind a load balancer that supports the same protocol. read-timeout Configure a read timeout for a socket, in milliseconds. If the given amount of time elapses without a successful read taking place, the socket's read will throw a ReadTimeoutException . receive-buffer The receive buffer size. record-request-start-time false Whether to record the request start time, to allow for request time to be logged. This has a small but measurable performance impact. request-parse-timeout The maximum amount of time in milliseconds that can be spent parsing the request. require-host-http11 false Require that all HTTP/1.1 requests have a 'Host' header. If the request does not include this header it will be rejected with a 403. resolve-peer-address false Enables host DNS lookup. secure false If this is true then requests that originate from this listener are marked as secure, even if the request is not using HTTPS. security-realm The listener's security realm. Deprecated: Use the ssl-context attribute to reference a configured SSLContext directly. send-buffer The send buffer size. socket-binding The listener's socket binding. ssl-context Reference to the SSLContext to be used by this listener. ssl-session-cache-size The maximum number of active SSL sessions. Deprecated: This can now be configured on the Elytron security context. ssl-session-timeout The timeout for SSL sessions, in seconds. Deprecated: This can now be configured on the Elytron security context. tcp-backlog Configure a server with the specified backlog. tcp-keep-alive Configure a channel to send TCP keep-alive messages in an implementation-dependent manner. url-charset UTF-8 URL charset. verify-client NOT_REQUESTED The desired SSL client authentication mode for SSL channels. Deprecated: Where an SSLContext is referenced it should be configured directly for the required mode of client verification. worker default The listener's XNIO worker. write-timeout Configure a write timeout for a socket, in milliseconds. 
If the given amount of time elapses without a successful write taking place, the socket's write will throw a WriteTimeoutException . A.28. Undertow Subsystem Statistics Table A.96. ajp-listener Statistics Name Description bytes-received The number of bytes that have been received by this listener. bytes-sent The number of bytes that have been sent out on this listener. error-count The number of 500 responses that have been sent by this listener. max-processing-time The maximum processing time taken by a request on this listener. processing-time The total processing time of all requests handled by this listener. request-count The number of requests this listener has served. Table A.97. http-listener Statistics Name Description bytes-received The number of bytes that have been received by this listener. bytes-sent The number of bytes that have been sent out on this listener. error-count The number of 500 responses that have been sent by this listener. max-processing-time The maximum processing time taken by a request on this listener. processing-time The total processing time of all requests handled by this listener. request-count The number of requests this listener has served. Table A.98. https-listener Statistics Name Description bytes-received The number of bytes that have been received by this listener. bytes-sent The number of bytes that have been sent out on this listener. error-count The number of 500 responses that have been sent by this listener. max-processing-time The maximum processing time taken by a request on this listener. processing-time The total processing time of all requests handled by this listener. request-count The number of requests this listener has served. A.29. Default Behavior of HTTP Methods Compared to the web subsystem in previous JBoss EAP releases, the undertow subsystem in JBoss EAP 7.4 has different default behaviors of HTTP methods. The following table outlines the default behaviors in JBoss EAP 7.4. Table A.99. HTTP Method Default Behavior HTTP Method Jakarta Server Pages Static HTML Static HTML by File Handler GET OK OK OK POST OK NOT_ALLOWED OK HEAD OK OK OK PUT NOT_ALLOWED NOT_ALLOWED NOT_ALLOWED TRACE NOT_ALLOWED NOT_ALLOWED NOT_ALLOWED DELETE NOT_ALLOWED NOT_ALLOWED NOT_ALLOWED OPTIONS NOT_ALLOWED OK NOT_ALLOWED Note For servlets, the default behavior depends on the servlet implementation, except for the TRACE method, which has a default behavior of NOT_ALLOWED . A.30. Remoting Subsystem Attributes Note Attribute names in these tables are listed as they appear in the management model, for example, when using the management CLI. See the schema definition file located at EAP_HOME /docs/schema/wildfly-remoting_4_0.xsd to view the elements as they appear in the XML, as there may be differences from the management model. Table A.100. remoting Attributes Attribute Default Description worker-read-threads 1 The number of read threads to create for the remoting worker. worker-task-core-threads 4 The number of core threads for the remoting worker task thread pool. worker-task-keepalive 60 The number of milliseconds to keep non-core remoting worker task threads alive. worker-task-limit 16384 The maximum number of remoting worker tasks to allow before rejecting. worker-task-max-threads 16 The maximum number of threads for the remoting worker task thread pool. worker-write-threads 1 The number of write threads to create for the remoting worker. Important The above attributes of the remoting element are deprecated. These attributes should now be configured using the io subsystem.
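For example, the equivalent worker tuning now lives in the io subsystem. A minimal management CLI sketch, assuming the default worker is being tuned; the thread count shown is purely illustrative, not a recommendation:

# Adjust the task thread pool on the io worker instead of the deprecated remoting attributes
/subsystem=io/worker=default:write-attribute(name=task-max-threads, value=16)

A server reload may be required before the new value takes effect.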
Table A.101. endpoint Attributes Attribute Default Description auth-realm The authentication realm to use if no authentication CallbackHandler is specified. authentication-retries 3 Specify the number of times a client is allowed to retry authentication before closing the connection. authorize-id The SASL authorization ID. Used as the authentication user name if no authentication CallbackHandler is specified and the selected SASL mechanism demands a user name. buffer-region-size The size of allocated buffer regions. heartbeat-interval 2147483647 The interval to use for connection heartbeat, in milliseconds. If the connection is idle in the outbound direction for this amount of time, a ping message will be sent, which will trigger a corresponding reply message. max-inbound-channels 40 The maximum number of inbound channels to support for a connection. max-inbound-message-size 9223372036854775807 The maximum inbound message size to be allowed. Messages exceeding this size will cause an exception to be thrown on the reading side as well as the writing side. max-inbound-messages 80 The maximum number of concurrent inbound messages on a channel. max-outbound-channels 40 The maximum number of outbound channels to support for a connection. max-outbound-message-size 9223372036854775807 The maximum outbound message size to send. No messages larger than this will be transmitted; attempting to do so will cause an exception on the writing side. max-outbound-messages 65535 The maximum number of concurrent outbound messages on a channel. receive-buffer-size 8192 The size of the largest buffer that this endpoint will accept over a connection. receive-window-size 131072 The maximum window size of the receive direction for connection channels, in bytes. sasl-protocol remote When a SaslServer or SaslClient is created, the protocol specified by default is remote . This attribute can be used to override this protocol. send-buffer-size 8192 The size of the largest buffer that this endpoint will transmit over a connection. server-name The server side of the connection passes its name to the client in the initial greeting. By default, the name is automatically discovered from the local address of the connection, or it can be overridden using this attribute. transmit-window-size 131072 The maximum window size of the transmit direction for connection channels, in bytes. worker default Worker to use. Note When using the management CLI to update the endpoint element, it is available under configuration in the remoting element. For example: /subsystem=remoting/configuration=endpoint/ . Connector Attributes The connector component has the following structure: connector property security sasl property sasl-policy policy Table A.102. connector Attributes Attribute Default Description authentication-provider The authentication-provider element contains the name of the authentication provider to use for incoming connections. sasl-authentication-factory Reference to the SASL authentication factory to secure this connector. sasl-protocol remote The protocol to pass into the SASL mechanisms used for authentication. security-realm The associated security realm to use for authentication for this connector. server-name The server name to send in the initial message exchange and for SASL based authentication. socket-binding The name (or names) of the socket binding(s) to attach to. ssl-context Reference to the SSL context to use for this connector. Table A.103. property Attributes Attribute Default Description value The property value.
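For example, these connector attributes are supplied when the connector resource is added. A minimal management CLI sketch; the connector, socket binding, and SASL authentication factory names below are placeholders and must already exist in your configuration:

# Register a remoting connector secured by an Elytron SASL authentication factory
/subsystem=remoting/connector=my-connector:add(socket-binding=my-remoting-binding, sasl-authentication-factory=my-sasl-factory)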
Security Attributes The security component allows you to configure the security for the connector, but contains no direct configuration attributes. It can be configured using its nested components, such as sasl . Table A.104. sasl Attributes Attribute Default Description include-mechanisms The optional nested include-mechanisms element contains a whitelist of allowed SASL mechanism names. No mechanisms will be allowed which are not present in this list. qop The optional nested qop element contains a comma-separated list of quality-of-protection values, in decreasing order of preference. Quality-of-protection values for this list are: auth : authentication only auth-int : authentication, plus integrity protection auth-conf : authentication, plus integrity protection and confidentiality protection reuse-session false The optional nested reuse-session boolean element specifies whether or not the server should attempt to reuse previously authenticated session information. The mechanism may or may not support such reuse, and other factors may also prevent it. server-auth false The optional nested server-auth boolean element specifies whether the server should authenticate to the client. Not all mechanisms may support this setting. strength The optional nested strength element contains a comma-separated list of cipher strength values, in decreasing order of preference. Cipher strength values for this list are: high medium low sasl-policy Attributes The sasl-policy component allows you to specify an optional policy to use to narrow down the available set of mechanisms, but contains no direct configuration attributes. It can be configured using its nested components, such as policy . Table A.105. policy Attributes Attribute Default Description forward-secrecy true The optional nested forward-secrecy element contains a boolean value which specifies whether mechanisms that implement forward secrecy between sessions are required. Forward secrecy means that breaking into one session will not automatically provide information for breaking into future sessions. no-active true The optional nested no-active element contains a boolean value which specifies whether mechanisms susceptible to active (non-dictionary) attacks are not permitted. false to permit, true to deny. no-anonymous true The optional nested no-anonymous element contains a boolean value which specifies whether mechanisms that accept anonymous login are permitted. false to permit, true to deny. no-dictionary true The optional nested no-dictionary element contains a boolean value which specifies whether mechanisms susceptible to passive dictionary attacks are permitted. false to permit, true to deny. no-plain-text true The optional nested no-plain-text element contains a boolean value which specifies whether mechanisms susceptible to simple plain passive attacks (for example, PLAIN ) are not permitted. false to permit, true to deny. pass-credentials true The optional nested pass-credentials element contains a boolean value which specifies whether mechanisms that pass client credentials are required. HTTP Connector Attributes The http-connector component has the following structure: http-connector property (same as connector) security (same as connector) sasl (same as connector) property (same as connector) sasl-policy (same as connector) policy (same as connector) Table A.106. 
http-connector Attributes Attribute Default Description authentication-provider The authentication-provider element contains the name of the authentication provider to use for incoming connections. connector-ref The name (or names) of a connector in the undertow subsystem to connect to. sasl-authentication-factory Reference to the SASL authentication factory to secure this connector. sasl-protocol remote The protocol to pass into the SASL mechanisms used for authentication. security-realm The associated security realm to use for authentication for this connector. server-name The server name to send in the initial message exchange and for SASL based authentication. Outbound Connection Attributes The outbound-connection component has the following structure: outbound-connection property Table A.107. outbound-connection Attributes Attribute Default Description uri The connection URI for the outbound connection. Table A.108. property Attributes Attribute Default Description value The property value. Note The above property attributes are related to the XNIO Options that will be used during the connection creation. Remote Outbound Connection The remote-outbound-connection component has the following structure: remote-outbound-connection property (same as outbound-connection) Table A.109. remote-outbound-connection Attributes Attribute Default Description authentication-context Reference to the authentication context instance containing the configuration for outbound connections. outbound-socket-binding-ref Name of the outbound-socket-binding which will be used to determine the destination address and port for the connection. protocol http-remoting The protocol to use for the remote connection. Defaults to http-remoting . Deprecated: Outbound security settings should be migrated to an authentication-context definition. security-realm Reference to the security realm to use to obtain the password and SSL configuration. Deprecated: Outbound security settings should be migrated to an authentication-context definition. username The user name to use when authenticating against the remote server. Deprecated: Outbound security settings should be migrated to an authentication-context definition. Local Outbound Connection Attributes The local-outbound-connection component has the following structure: local-outbound-connection property (same as outbound-connection) Table A.110. local-outbound-connection Attributes Attribute Default Description outbound-socket-binding-ref Name of the outbound-socket-binding which will be used to determine the destination address and port for the connection. A.31. IO Subsystem Attributes Note Attribute names in these tables are listed as they appear in the management model, for example, when using the management CLI. See the schema definition file located at EAP_HOME /docs/schema/wildfly-io_3_0.xsd to view the elements as they appear in the XML, as there may be differences from the management model. Table A.111. worker Attributes Attribute Default Description io-threads The number of I/O threads to create for the worker. If not specified, the number of threads is set to the number of CPUs x 2. stack-size 0 The stack size, in bytes, to attempt to use for worker threads. task-keepalive 60000 The number of milliseconds to keep non-core task threads alive. task-core-threads 2 The number of threads for the core task thread pool. task-max-threads The maximum number of threads for the worker task thread pool. 
If not specified, the maximum number of threads is set to the number of CPUs x 16, taking the MaxFileDescriptorCount Jakarta Management property, if set, into account. Table A.112. buffer-pool Attributes Attribute Default Description buffer-size The size, in bytes, of each buffer slice. If not specified, the size is set based on the available RAM of your system: 512 bytes for less than 64 MB RAM 1024 bytes (1 KB) for 64 MB - 128 MB RAM 16384 bytes (16 KB) for more than 128 MB RAM For performance tuning advice on this attribute, see Configuring Buffer Pools in the JBoss EAP Performance Tuning Guide . buffers-per-slice How many slices, or sections, to divide the larger buffer into. This can be more memory efficient than allocating many separate buffers. If not specified, the number of slices is set based on the available RAM of your system: 10 for less than 128 MB RAM 20 for more than 128 MB RAM direct-buffers Whether the buffer pool uses direct buffers, which are faster in many cases with NIO. Note that some platforms do not support direct buffers. A.32. Jakarta Server Faces Module Templates The following are example templates used for the various Jakarta Server Faces modules required when installing a different Jakarta Server Faces version for JBoss EAP. See Installing a Jakarta Server Faces Implementation for full instructions. Example: Mojarra Jakarta Server Faces Implementation JAR module.xml Note Be sure to use the appropriate values for the following replaceable variables in the template: IMPL_NAME VERSION <module xmlns="urn:jboss:module:1.8" name="com.sun.jsf-impl: IMPL_NAME - VERSION "> <properties> <property name="jboss.api" value="private"/> </properties> <dependencies> <module name="javax.faces.api: IMPL_NAME - VERSION "/> <module name="javaee.api"/> <module name="javax.servlet.jstl.api"/> <module name="org.apache.xerces" services="import"/> <module name="org.apache.xalan" services="import"/> <module name="org.jboss.weld.core"/> <module name="org.jboss.weld.spi"/> <module name="javax.xml.rpc.api"/> <module name="javax.rmi.api"/> <module name="org.omg.api"/> </dependencies> <resources> <resource-root path="impl- VERSION .jar"/> </resources> </module> Example: MyFaces Jakarta Server Faces Implementation JAR module.xml Note Be sure to use the appropriate values for the following replaceable variables in the template: IMPL_NAME VERSION <module xmlns="urn:jboss:module:1.8" name="com.sun.jsf-impl: IMPL_NAME - VERSION "> <properties> <property name="jboss.api" value="private"/> </properties> <dependencies> <module name="javax.faces.api: IMPL_NAME - VERSION "> <imports> <include path="META-INF/**"/> </imports> </module> <module name="javaee.api"/> <module name="javax.servlet.jstl.api"/> <module name="org.apache.xerces" services="import"/> <module name="org.apache.xalan" services="import"/> <!-- extra dependencies for MyFaces --> <module name="org.apache.commons.collections"/> <module name="org.apache.commons.codec"/> <module name="org.apache.commons.beanutils"/> <module name="org.apache.commons.digester"/> <!-- extra dependencies for MyFaces 1.1 <module name="org.apache.commons.logging"/> <module name="org.apache.commons.el"/> <module name="org.apache.commons.lang"/> --> <module name="javax.xml.rpc.api"/> <module name="javax.rmi.api"/> <module name="org.omg.api"/> </dependencies> <resources> <resource-root path=" IMPL_NAME -impl- VERSION .jar"/> </resources> </module> Example: Mojarra Jakarta Server Faces API JAR module.xml Note Be sure to use the appropriate values for the following 
replaceable variables in the template: IMPL_NAME VERSION <module xmlns="urn:jboss:module:1.8" name="javax.faces.api: IMPL_NAME - VERSION "> <dependencies> <module name="com.sun.jsf-impl: IMPL_NAME - VERSION "/> <module name="javax.enterprise.api" export="true"/> <module name="javax.servlet.api" export="true"/> <module name="javax.servlet.jsp.api" export="true"/> <module name="javax.servlet.jstl.api" export="true"/> <module name="javax.validation.api" export="true"/> <module name="org.glassfish.javax.el" export="true"/> <module name="javax.api"/> <module name="javax.websocket.api"/> </dependencies> <resources> <resource-root path="jsf-api- VERSION .jar"/> </resources> </module> Example: MyFaces Jakarta Server Faces API JAR module.xml Note Be sure to use the appropriate values for the following replaceable variables in the template: IMPL_NAME VERSION <module xmlns="urn:jboss:module:1.8" name="javax.faces.api: IMPL_NAME - VERSION "> <dependencies> <module name="javax.enterprise.api" export="true"/> <module name="javax.servlet.api" export="true"/> <module name="javax.servlet.jsp.api" export="true"/> <module name="javax.servlet.jstl.api" export="true"/> <module name="javax.validation.api" export="true"/> <module name="org.glassfish.javax.el" export="true"/> <module name="javax.api"/> <!-- extra dependencies for MyFaces 1.1 <module name="org.apache.commons.logging"/> <module name="org.apache.commons.el"/> <module name="org.apache.commons.lang"/> --> </dependencies> <resources> <resource-root path="myfaces-api- VERSION .jar"/> </resources> </module> Example: Mojarra Jakarta Server Faces Injection JAR module.xml Note Be sure to use the appropriate values for the following replaceable variables in the template: IMPL_NAME VERSION INJECTION_VERSION WELD_VERSION <module xmlns="urn:jboss:module:1.8" name="org.jboss.as.jsf-injection: IMPL_NAME - VERSION "> <properties> <property name="jboss.api" value="private"/> </properties> <resources> <resource-root path="wildfly-jsf-injection- INJECTION_VERSION .jar"/> <resource-root path="weld-core-jsf- WELD_VERSION .jar"/> </resources> <dependencies> <module name="com.sun.jsf-impl: IMPL_NAME - VERSION "/> <module name="java.naming"/> <module name="java.desktop"/> <module name="org.jboss.as.jsf"/> <module name="org.jboss.as.web-common"/> <module name="javax.servlet.api"/> <module name="org.jboss.as.ee"/> <module name="org.jboss.as.jsf"/> <module name="javax.enterprise.api"/> <module name="org.jboss.logging"/> <module name="org.jboss.weld.core"/> <module name="org.jboss.weld.api"/> <module name="javax.faces.api: IMPL_NAME - VERSION "/> </dependencies> </module> Example: MyFaces Jakarta Server Faces Injection JAR module.xml Note Be sure to use the appropriate values for the following replaceable variables in the template: IMPL_NAME VERSION INJECTION_VERSION WELD_VERSION <module xmlns="urn:jboss:module:1.8" name="org.jboss.as.jsf-injection: IMPL_NAME - VERSION "> <properties> <property name="jboss.api" value="private"/> </properties> <resources> <resource-root path="wildfly-jsf-injection- INJECTION_VERSION .jar"/> <resource-root path="weld-jsf- WELD_VERSION .jar"/> </resources> <dependencies> <module name="com.sun.jsf-impl: IMPL_NAME - VERSION "/> <module name="javax.api"/> <module name="org.jboss.as.web-common"/> <module name="javax.servlet.api"/> <module name="org.jboss.as.jsf"/> <module name="org.jboss.as.ee"/> <module name="org.jboss.as.jsf"/> <module name="javax.enterprise.api"/> <module name="org.jboss.logging"/> <module name="org.jboss.weld.core"/> <module 
name="org.jboss.weld.api"/> <module name="org.wildfly.security.elytron"/> <module name="javax.faces.api: IMPL_NAME - VERSION "/> </dependencies> </module> Example: MyFaces commons-digester JAR module.xml Note Be sure to use the appropriate value for the VERSION replaceable variable in the template. <module xmlns="urn:jboss:module:1.5" name="org.apache.commons.digester"> <properties> <property name="jboss.api" value="private"/> </properties> <resources> <resource-root path="commons-digester- VERSION .jar"/> </resources> <dependencies> <module name="javax.api"/> <module name="org.apache.commons.collections"/> <module name="org.apache.commons.logging"/> <module name="org.apache.commons.beanutils"/> </dependencies> </module> A.33. JGroups Subsystem Attributes See the tables below for the attributes of the various elements of the jgroups subsystem. Main Attributes Channel Attributes Stack Attributes Note Attribute names in these tables are listed as they appear in the management model, for example, when using the management CLI. See the schema definition file located at EAP_HOME /docs/schema/jboss-as-jgroups_5_0.xsd to view the elements as they appear in the XML, as there may be differences from the management model. Table A.113. Main jgroups Attributes Attribute Default Description default-channel ee The default JGroups channel. default-stack The default JGroups protocol stack. Channel Attributes The channel element has the following structure: channel fork protocol protocol channel Attributes Table A.114. channel Attributes Attribute Default Description cluster The cluster name of the JGroups channel. If undefined, the name of the channel will be used. module org.wildfly.clustering.server The module from which to load channel services. stack The protocol stack of the JGroups channel. statistics-enabled false Whether statistics are enabled. stats-enabled false Whether statistics are enabled. Deprecated: Use the statistics-enabled attribute instead . Stack Attributes The stack element has the following structure: stack protocol relay remote-site transport thread-pool stack Attributes Table A.115. stack Attributes Attribute Default Description statistics-enabled false Indicates whether or not all protocols in the stack will collect statistics. protocol Attributes For a list of commonly used protocols, see the JGroups Protocols section. Table A.116. protocol Attributes Attribute Default Description module org.jgroups The module with which to resolve the protocol type. properties Properties of this protocol. statistics-enabled false Indicates whether or not this protocol will collect statistics, overriding the stack configuration. relay Attributes Table A.117. relay Attributes Attribute Default Description module org.jgroups The module with which to resolve the protocol type. properties Properties of this protocol. site The name of the local site. statistics-enabled false Indicates whether or not this protocol will collect statistics, overriding the stack configuration. remote-site Attributes Table A.118. remote-site Attributes Attribute Default Description channel The name of the bridge channel used to communicate with this remote site. cluster The cluster name of the bridge channel to this remote site. Deprecated: Use an explicitly defined channel instead . stack The stack from which to create a bridge to this remote site. Deprecated: Use an explicitly defined channel instead . transport Attributes Table A.119. 
transport Attributes Attribute Default Description default-executor The thread pool executor to handle incoming messages. Deprecated: Configure the predefined default thread pool instead . diagnostics-socket-binding The diagnostics socket binding specification for this protocol layer, used to specify IP interfaces and ports for communication. machine Machine, or host, identifier for this node. Used by Infinispan's topology-aware consistent hash. module org.jgroups Module with which to resolve the protocol type. oob-executor The thread pool executor to handle incoming out-of-band messages. Deprecated: Configure the predefined oob thread pool instead . properties Properties of this transport. rack Rack, such as the server rack, identifier for this node. Used by Infinispan's topology-aware consistent hash. shared false If true , the underlying transport is shared by all channels using this stack. Deprecated: Configure a fork of the channel instead . site Site, such as the data center, identifier for this node. Used by Infinispan's topology-aware consistent hash. socket-binding The socket binding specification for this protocol layer, used to specify IP interfaces and ports for communication. statistics-enabled false Indicates whether or not this protocol will collect statistics, overriding the stack configuration. thread-factory The thread factory to use for handling asynchronous transport-specific tasks. Deprecated: Configure the predefined internal thread pool instead . timer-executor The thread pool executor to handle protocol-related timing tasks. Deprecated: Configure the predefined timer thread pool instead . thread-pool Attributes Table A.120. thread-pool Attributes Attribute Default Description keepalive-time 5000L The amount of milliseconds that pool threads should be kept running when idle. If not specified, then threads will run until the executor is shut down. max-threads 4 The maximum thread pool size. min-threads 2 The core thread pool size, which is smaller than max-threads . If undefined, the core thread pool size is the same as max-threads . queue-length 500 The queue length. A.34. JGroups Protocols Protocol Protocol Type Description ASYM_ENCRYPT Encryption Uses a secret key, stored in a coordinator on the cluster, for encrypting messages between cluster members. AUTH Authentication Provides a layer of authentication to cluster members. azure.AZURE_PING Discovery Supports node discovery using Microsoft Azure's blob storage. FD_ALL Failure Detection Provides failure detection based on a simple heartbeat protocol. FD_SOCK Failure Detection Provides failure detection based on a ring of TCP sockets created between cluster members. JDBC_PING Discovery Discovers cluster members by using a shared database where members write their address. MERGE3 Merge Merges the subclusters together in the event of a cluster split. MFC Flow Control Provides multicast flow control between a sender and all cluster members. MPING Discovery Discovers cluster members with IP multicast. pbcast.GMS Group Membership Handles group membership, including new members joining the cluster, leave requests by existing members, and SUSPECT messages for crashed members. pbcast.NAKACK2 Message Transmission Ensures message reliability and order, guaranteeing that all messages sent by one sender will be received in the order they were sent. pbcast.STABLE Message Stability Deletes messages that have been seen by all members. 
PING Discovery Initial discovery of members, with support for dynamic discovery of cluster members. SASL Authentication Provides a layer of authentication to cluster members using SASL mechanisms. SYM_ENCRYPT Encryption Uses a shared keystore for encrypting messages between cluster members. S3_PING Discovery Uses Amazon S3 to discover initial members. TCPGOSSIP Discovery Discovers cluster members by using an external gossip router. TCPPING Discovery Contains a static list of cluster member's addresses to form the cluster. UFC Flow Control Provides unicast flow control between a sender and all cluster members UNICAST3 Message Transmission Ensures message reliability and order for unicast messages, guaranteeing that all messages sent by one sender will be received in the order they were sent. VERIFY_SUSPECT Failure Detection Verifies that a suspected member has died by pinging the member one final time before evicting it. Generic Protocol Attributes All of the protocols have access to the following attributes. Table A.121. protocol Attributes Attribute Default Description module org.jgroups The module with which to resolve the protocol type. properties Properties of this protocol. statistics-enabled false Whether statistics are enabled. Authentication Protocols The authentication protocols are used to perform authentication, and are primarily responsible for ensuring that only authenticated members can join the cluster. These protocols sit below the GMS protocol, so that they may listen for requests to join the cluster. AUTH SASL AUTH Attributes While the AUTH protocol contains no additional attributes, it must have a token defined as a child element. Note When defining this protocol, the auth-protocol element is used instead of the protocol element. Token Types When using Elytron for security, it is recommended to use one of the following authentication tokens. These authentication tokens were intentionally designed for use with Elytron, and may not be used with legacy security configurations. Table A.122. Elytron Token Types Token Description cipher-token An authentication token where the shared secret is transformed. RSA is the default algorithm used for the transformation. digest-token An authentication token where the shared secret is transformed. SHA-256 is the default algorithm used for the transformation. plain-token An authentication token with no additional transformations to the shared secret. The following authentication tokens are inherited from JGroups, and are eligible for use in any configuration where authentication is desired. Table A.123. JGroups Token Types Token Description MD5Token An authentication token where the shared secret is encrypted using either an MD5 or SHA hash. MD5 is the default algorithm used for the encryption. SimpleToken An authentication token with no additional transformations to the shared secret. This token is case-insensitive, and case is not considered when determining if strings match. X509Token An authentication token where the shared secret is encrypted using an X509 certificate. SASL Attributes Table A.124. SASL Attributes Attribute Default Description client_callback_handler The class name of the CallbackHandler to use when a node acts as a client. client_name The name to use when a node acts as a client. This name will also be used to obtain the subject if using a JAAS login module. client_password The password to use when a node acts as a client. This password will also be used to obtain the subject if using a JAAS login module. 
login_module_name The name of the JAAS login module to use as a subject for creating the SASL client and server. This attribute is only required by certain mech values, such as GSSAPI. mech The name of the SASL authentication mechanism. This name can be any mechanism supported by the local SASL provider, and the JDK supplies CRAM-MD5 , DIGEST-MD5 , GSSAPI , and NTLM by default. sasl_props Properties of the defined mech . server_callback_handler The class name of the CallbackHandler to use when a node acts as a server. server_name The fully qualified server name. timeout 5000 The number of milliseconds to wait for a response to a challenge. Discovery Protocols The following protocols are used to find an initial membership for the cluster, which can then be used to determine the current coordinator. A list of the discovery protocols are below. AZURE_PING JDBC_PING MPING PING S3_PING TCPGOSSIP TCPPING AZURE_PING Attributes Table A.125. AZURE_PING Attributes Attribute Default Description container The name of the blob container to use for PING data. This must be a valid DNS name. storage_access_key The secret access key for the storage account. storage_account_name The name of the Microsoft Azure storage account that contains your blob container. JDBC_PING Attributes Table A.126. JDBC_PING Attributes Attribute Default Description data-source Datasource reference, to be used instead of the connection and JNDI lookup properties. Note When defining a JDBC_PING protocol, the jdbc-protocol element is used instead of the protocol element. S3_PING Attributes Table A.127. S3_PING Attributes Attribute Default Description access_key The Amazon S3 access key used to access an S3 bucket. host s3.amazonaws.com Destination of the S3 web service. location Name of the Amazon S3 bucket to use. The bucket must exist and use a unique name. pre_signed_delete_url The pre-signed URL to be used for the DELETE operation. port 443 if use_ssl is true . 80 if use_ssl is false . The port on which the web service is listening. pre_signed_put_url The pre-signed URL to be used for the PUT operation. prefix If set, and location is set, define the bucket name as PREFIX - LOCATION . If set, and a bucket does not exist at the specified PREFIX - LOCATION , then the bucket name becomes PREFIX followed by a random UUID. secret_access_key The Amazon S3 secret access key used to access an S3 bucket. use_ssl true Determines if SSL is used when contacting the host and port combination. TCPGOSSIP Attributes Table A.128. TCPGOSSIP Attributes Attribute Default Description socket-binding The socket binding specification for this protocol layer. Deprecated: Use socket-bindings instead. socket-bindings The outbound socket bindings for this protocol. Note When defining a TCPGOSSIP protocol, the socket-discovery-protocol element is used instead of the protocol element. TCPPING Attributes Table A.129. TCPPING Attributes Attribute Default Description socket-binding The socket binding specification for this protocol layer. Deprecated: Use socket-bindings instead. socket-bindings The outbound socket bindings for this protocol. Note When defining a TCPPING protocol, the socket-discovery-protocol element is used instead of the protocol element. Encrypt Protocols The following protocols are used to secure the communication stack. Encryption is based on a shared secret key that all members of the cluster have. This key is either acquired from a shared keystore, when using SYM_ENCRYPT or from a public/private key exchange, when using ASYM_ENCRYPT . 
When defining any of the following protocols an encrypt-protocol element is created in the resulting XML. Note If using ASYM_ENCRYPT , then the same stack must have an AUTH protocol defined. The AUTH protocol is optional when using SYM_ENCRYPT . ASYM_ENCRYPT SYM_ENCRYPT ASYM_ENCRYPT Attributes Table A.130. ASYM_ENCRYPT Attributes Attribute Default Description key-alias The alias of the encryption key from the specified keystore. key-credential-reference The credentials required to obtain the encryption key from the keystore. key-store A reference to a keystore containing the encryption key. SYM_ENCRYPT Attributes Table A.131. SYM_ENCRYPT Attributes Attribute Default Description key-alias The alias of the encryption key from the specified keystore. key-credential-reference The credentials required to obtain the encryption key from the keystore. key-store A reference to a keystore containing the encryption key. Failure Detection Protocols The following protocols are used to probe members of the cluster to determine if they are still alive. These protocols do not have any additional attributes beyond the generic attributes. FD_ALL FD_SOCK VERIFY_SUSPECT Flow Control Protocols The following protocols are responsible for flow control, or the process of adjusting the rate of a message sender to the slowest receiver. If a sender continuously sends messages at a rate faster than the receiver, then the receivers will either queue up or discard messages, resulting in retransmissions. These protocols do not have any additional attributes beyond the generic attributes. MFC - Multicast Flow Control UFC - Unicast Flow Control Group Membership Protocols The pbcast.GMS protocol is responsible for new members joining the cluster, existing members leaving the cluster, and members that are suspected of having crashed. This protocol does not have any additional attributes beyond the generic attributes. Merge Protocols If the cluster becomes split, then the MERGE3 protocol is responsible for merging the subclusters back together. While this protocol is responsible for merging the cluster members back together, this will not merge the state of the cluster. The application is responsible for handling the callback to merge states. This protocol does not have any additional attributes beyond the generic attributes. Message Stability The pbcast.STABLE protocol is responsible for garbage collecting messages that have been seen by all members of the cluster. This protocol initiates a stable message containing message numbers for a given member, called a digest. Once all members of the cluster have received the others' digests, then the message may be removed from the retransmission table. This protocol does not have any additional attributes beyond the generic attributes. Reliable Message Transmission The following protocols provide reliable message delivery and FIFO properties for messages sent to all nodes in a cluster. Reliable delivery means that no messages sent by a sender will ever be lost, as all messages are numbered, and retransmission requests are sent if a sequence number is not received. These protocols do not have any additional attributes beyond the generic attributes. pbcast.NAKACK2 pbcast.UNICAST3 Deprecated Protocols The following protocols have been deprecated, and have been replaced by a protocol that contains only the class name. For instance, instead of specifying org.jgroups.protocols.ASYM_ENCRYPT , the protocol name would be ASYM_ENCRYPT . 
org.jgroups.protocols.ASYM_ENCRYPT org.jgroups.protocols.AUTH org.jgroups.protocols.JDBC_PING org.jgroups.protocols.SYM_ENCRYPT org.jgroups.protocols.TCPGOSSIP org.jgroups.protocols.TCPPING A.35. Apache HTTP Server mod_cluster Directives The mod_cluster connector is an Apache HTTP Server-based load balancer. It uses a communication channel to forward requests from the Apache HTTP Server to one of a set of application server nodes. The following directives can be set to configure mod_cluster. Note There is no need to use ProxyPass directives because mod_cluster automatically configures the URLs that must be forwarded to Apache HTTP Server. Table A.132. mod_cluster Directives Directive Description Values CreateBalancers Defines how the balancers are created in the Apache HTTP Server VirtualHosts. This allows directives like: ProxyPass /balancer://mycluster1/ . 0 : Create all VirtualHosts defined in Apache HTTP Server 1 : Do not create balancers (at least one ProxyPass or ProxyMatch is required to define the balancer names) 2 : Create only the main server (default) UseAlias Check that the alias corresponds to the server name. 0 : Ignore aliases (default) 1 : Check aliases LBstatusRecalTime Time interval in seconds for load-balancing logic to recalculate the status of a node. Default: 5 seconds WaitBeforeRemove Time in seconds before a removed node is forgotten by httpd. Default: 10 seconds ProxyPassMatch/ProxyPass ProxyPassMatch and ProxyPass are mod_proxy directives which, when using ! instead of the back-end URL, prevent reverse-proxy in the path. This is used to allow Apache HTTP Server to serve static content. For example: ProxyPassMatch ^(/.*\.gif)$ ! This example allows the Apache HTTP Server to serve the .gif files directly. Note Due to performance optimizations for sessions in JBoss EAP 7, configuring hot-standby nodes is not supported. mod_manager The context of a mod_manager directive is VirtualHost in all cases, except when mentioned otherwise. server config context implies that the directive must be outside a VirtualHost configuration. If not, an error message is displayed and the Apache HTTP Server does not start. Table A.133. mod_manager Directives Directive Description Values EnableMCPMReceive Allow the VirtualHost to receive the MCPM from the nodes. Include EnableMCPMReceive in the Apache HTTP Server configuration to allow mod_cluster to work. Save it in the VirtualHost where you configure advertising. MemManagerFile The base name for the names that mod_manager uses to store configuration, generate keys for shared memory or locked files. This must be an absolute path name; the directories are created if needed. It is recommended that these files are placed on a local drive and not an NFS share. Context: server config $server_root/logs/ Maxcontext The maximum number of contexts supported by mod_cluster. Context: server config Default: 100 Maxnode The maximum number of nodes supported by mod_cluster. Context: server config Default: 20 Maxhost The maximum number of hosts, or aliases, supported by mod_cluster. It also includes the maximum number of balancers. Context: server config Default: 20 Maxsessionid The number of active sessionid stored to provide the number of active sessions in the mod_cluster-manager handler. A session is inactive when mod_cluster does not receive any information from the session within 5 minutes. Context: server config. This field is for demonstration and debugging purposes only. 0 : the logic is not activated.
MaxMCMPMaxMessSize The maximum size of MCMP messages from other Max directives Calculated from other Max directives. Min: 1024 ManagerBalancerName The name of the balancer to use when the JBoss EAP instance does not provide a balancer name. mycluster PersistSlots Tells mod_slotmem to persist nodes, aliases and contexts in files. Context: server config Off CheckNonce Switch check of nonce when using mod_cluster-manager handler. on/off Default: on - Nonce checked AllowDisplay Switch additional display on mod_cluster-manager main page. on/off Default: off - only version is displayed AllowCmd Allow commands using mod_cluster-manager URL. on/off Default: on - Commands allowed ReduceDisplay Reduce the information displayed on the main mod_cluster-manager page, so that more nodes can be displayed on the page. on/off Default: off - full information is displayed SetHandler mod_cluster-manager Displays information about the node that mod_cluster sees from the cluster. The information includes generic information and additionally counts the number of active sessions. <Location /mod_cluster-manager> SetHandler mod_cluster-manager Require ip 127.0.0.1 </Location> on/off Default: off Note When accessing the location defined in httpd.conf : Transferred: Corresponds to the POST data sent to the back-end server. Connected: Corresponds to the number of requests that have been processed when the mod_cluster status page was requested. Num_sessions: Corresponds to the number of sessions mod_cluster reports as active (on which there was a request within the past 5 minutes). This field is not present when Maxsessionid is zero and is for demonstration and debugging purposes only. A.36. ModCluster Subsystem Attributes The modcluster subsystem has the following structure: proxy load-provider=dynamic custom-load-metric load-metric load-provider=simple ssl The load-provider=dynamic resource allows you to configure factors, such as CPU, sessions, heap, memory, and weight to determine the load balancing behavior. The load-provider=simple resource allows setting only a static constant as the factor attribute. This helps when the user does not need dynamic or complex rules to load balance the incoming HTTP requests. Note Attribute names in these tables are listed as they appear in the management model, for example, when using the management CLI. See the schema definition file located at EAP_HOME /docs/schema/jboss-as-mod-cluster_3_0.xsd to view the elements as they appear in the XML, as there may be differences from the management model. Table A.134. proxy Configuration Options Attribute Default Description advertise true Whether to enable multicast-based advertise mechanism. advertise-security-key A shared secret between an httpd instance and the JBoss EAP servers listening for advertisements from the httpd instance. advertise-socket The name of the socket binding to use for the advertise mechanism. auto-enable-contexts true If set to false , contexts are registered with the reverse proxy as disabled. You can enable the context using the enable-context operation or by using the mod_cluster_manager console. balancer The name of the balancer on the reverse proxy to register with. If not set, the value is configured on the Apache HTTP Server side with the ManagerBalancerName directive, which defaults to mycluster . connector The name of the Undertow listener that the mod_cluster reverse proxy will connect to. excluded-contexts A list of contexts to exclude from registration with the reverse proxies.
If no host is indicated, the host is assumed to be localhost . ROOT indicates the root context of the web application. flush-packets false Whether or not to enable packet flushing to the web server. flush-wait -1 Time to wait before flushing packets in httpd. Max value is 2,147,483,647 . listener The name of the Undertow listener that will be registered with the reverse proxy. load-balancing-group If set, requests are sent to the specified load balancing group on the load balancer. max-attempts 1 The number of times the reverse proxy will attempt to send a given request to a worker before giving up. node-timeout -1 Timeout, in seconds, for proxy connections to a worker. This is the time that mod_cluster will wait for the back-end response before returning an error. If the node-timeout attribute is undefined, the httpd ProxyTimeout directive is used. If ProxyTimeout is undefined, the httpd Timeout directive is used, which defaults to 300 seconds. ping 10 Time, in seconds, in which to wait for a pong answer to a ping. proxies List of proxies for mod_cluster to register with defined by outbound-socket-binding in socket-binding-group . proxy-list List of proxies. The format is HOST_NAME : PORT , separated with commas. Deprecated in favor of proxies . proxy-url / Base URL for MCMP requests. session-draining-strategy DEFAULT Session draining strategy used during undeployment of a web application. Valid values are DEFAULT , ALWAYS , or NEVER . DEFAULT Drain sessions before web application undeploy only if the web application is non-distributable. ALWAYS Always drain sessions before web application undeploy, even for distributable web applications. NEVER Do not drain sessions before web application undeploy. load-provider=simple A load provider to use if no dynamic load provider is present. It assigns each cluster member a load factor of 1 , and distributes work evenly without applying a load balancing algorithm. smax -1 Soft maximum idle connection count in httpd. socket-timeout 20 Number of seconds to wait for a response from an httpd proxy to MCMP commands before timing out, and flagging the proxy as in error. ssl-context Reference to the SSLContext to be used by mod_cluster. status-interval 10 Number of seconds a STATUS message is sent from the application server to the reverse proxy. Allowed values are between 1 and 2,147,483,647 . sticky-session true Whether subsequent requests for a given session should be routed to the same node, if possible. sticky-session-force false Whether the reverse proxy should return an error in the event that the balancer is unable to route a request to the node to which it is stuck. This setting is ignored if sticky sessions are disabled. sticky-session-remove false Remove session information on failover. stop-context-timeout 10 The maximum time, in seconds, to wait for a context to process pending requests, for a distributable context, or to destroy active sessions, for a non-distributable context. ttl -1 Time to live, in seconds, for idle connections above smax. Allowed values are between -1 and 2,147,483,647 . worker-timeout -1 Timeout to wait in httpd for an available worker to process the requests. Allowed values are between -1 and 2,147,483,647 . Table A.135. load-provider=dynamic Configuration Options Attribute Default Description decay 2 The decay. history 9 The history. initial-load 0 The initial load reported by a node. The valid range is 0-100 , with 0 indicating maximum load. 
This attribute helps to gradually increase the load value of a newly joined node to avoid overloading it while joining a cluster. You can disable this behavior by setting the value as -1 . When disabled, the node will report a load value of 100 , indicating that it has no load when joining a cluster. Table A.136. custom-load-metric Attribute Options Attribute Default Description capacity 1.0 The capacity of the metric. class The class name of the custom metric. property The properties for the metric. weight 1 The weight of the metric. Table A.137. load-metric Attribute Options Attribute Default Description capacity 1.0 The capacity of the metric. property The properties for the metric. type The type of the metric. Valid values are cpu , mem , heap , sessions , receive-traffic , send-traffic , requests , or busyness . weight 1 The weight of the metric. Table A.138. ssl Attribute Options Attribute Default Description ca-certificate-file Certificate authority. ca-revocation-url Certificate authority revocation list. certificate-key-file USD{user.home}/.keystore Key file for the certificate. cipher-suite The allowed cipher suite. key-alias The key alias. password changeit Password. protocol TLS The SSL protocols that are enabled. A.37. mod_jk Worker Properties The workers.properties file defines the behavior of the workers to which mod_jk passes client requests. The workers.properties file defines where the different application servers are located and the way the workload should be balanced across them. The general structure of a property is worker. WORKER_NAME . DIRECTIVE . The WORKER_NAME is a unique name that must match the instance-id configured in the JBoss EAP undertow subsystem . The DIRECTIVE is the setting to be applied to the worker. Configuration Reference for Apache mod_jk Load Balancers Templates specify default per-load-balancer settings. You can override the template within the load-balancer settings itself. Table A.139. Global properties Property Description worker.list A comma separated list of worker names that will be used by mod_jk . Table A.140. Mandatory Directives Property Description type The type of worker. The default type is ajp13 . Other possible values are ajp14 , lb , status . For more information on these directives, see the Apache Tomcat Connectors Reference at https://tomcat.apache.org/connectors-doc/reference/workers.html . Table A.141. Load Balancing Directives Property Description balance_workers Specifies the worker nodes that the load balancer must manage. You can use the directive multiple times for the same load balancer. It consists of a comma-separated list of worker node names. sticky_session Specifies whether requests from the same session are always routed to the same worker. The default is 1 , meaning that sticky sessions are enabled. To disable sticky sessions, set it to 0 . Sticky sessions should usually be enabled, unless all of your requests are truly stateless. Table A.142. Connection Directives Property Description host The host name or IP address of the back-end server. The back-end server must support the ajp protocol stack. The default value is localhost . port The port number of the back-end server instance listening for defined protocol requests. The default value is 8009 , which is the default listening port for AJP13 workers. The default value for AJP14 workers is 8011 . ping_mode The conditions under which connections are probed for network status. The probe uses an empty AJP13 packet for CPing, and expects a CPong in response. 
Specify the conditions by using a combination of directive flags. The flags are not separated by a comma or any white-space. The ping_mode can be any combination of C, P, I, and A. C - Connect. Probe the connection one time after connecting to the server. Specify the timeout using the value of connect_timeout. Otherwise, the value of ping_timeout is used. P - Prepost. Probe the connection before sending each request to the server. Specify the timeout using the prepost_timeout directive. Otherwise, the value of ping_timeout is used. I - Interval. Probe the connection at an interval specified by connection_ping_interval, if present. Otherwise, the value of ping_timeout is used. A - All. A shortcut for CPI, which specifies that all connection probes are used. ping_timeout, connect_timeout, prepost_timeout, connection_ping_interval The timeout values for the connection probe settings above. The value is specified in milliseconds, and the default value for ping_timeout is 10000 . lbfactor Specifies the load-balancing factor for an individual back-end server instance. This is useful to give a more powerful server more of the workload. To give a worker 3 times the default load, set this to 3 : worker.my_worker.lbfactor=3 The example below demonstrates load balancing with sticky sessions between two worker nodes, node1 and node2 , listening on port 8009 . Example: workers.properties File Further configuration details for Apache mod_jk are out of the scope of this document and can be found in the Apache documentation . A.38. Security Manager Subsystem Attributes The security-manager subsystem itself does not have configurable attributes, but it has one child resource with configurable attributes: deployment-permissions=default . Note Attribute names in this table are listed as they appear in the management model, for example, when using the management CLI. See the schema definition file located at EAP_HOME /docs/schema/wildfly-security-manager_1_0.xsd to view the elements as they appear in the XML, as there may be differences from the management model. Table A.143. deployment-permissions Configuration Options Attribute Description maximum-permissions The maximum set of permissions that can be granted to a deployment or JARs. minimum-permissions The minimum set of permissions to be granted to a deployment or JARs. A.39. Install OpenSSL from JBoss Core Services The JBoss Core Services OpenSSL files can be installed either from the ZIP or from the RPM distributions. Follow the steps below, depending on your installation method of choice. Note On Red Hat Enterprise Linux 8, the standard system OpenSSL is supported, so installing OpenSSL from JBoss Core Services is no longer necessary. Using JBoss Core Services OpenSSL ZIP File Distribution Note The path to the libs/ directory in the ZIP archive is jbcs-openssl- VERSION /openssl/lib(64) for Linux and jbcs-openssl- VERSION /openssl/bin for Windows. Download the OpenSSL package from the Software Downloads page that pertains to your operating system and architecture. Extract the downloaded ZIP file to your installation directory. Notify JBoss EAP where to find the OpenSSL libraries. You can do this using either of the following methods. In each of the following commands, be sure to replace JBCS_OPENSSL_PATH with the path to the JBoss Core Services OpenSSL libraries, for example, /opt/rh/jbcs-httpd24/root/usr/lib64 . You can add the OpenSSL path to the JAVA_OPTS variable in the standalone.conf or domain.conf configuration file using the following argument. 
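For example, assuming the sample library path shown above (substitute the path for your own installation), the line appended to standalone.conf or domain.conf would look like the following sketch:

# Append to the existing JAVA_OPTS so options already set are preserved
JAVA_OPTS="$JAVA_OPTS -Dorg.wildfly.openssl.path=/opt/rh/jbcs-httpd24/root/usr/lib64"
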
You can define a system property that specifies the OpenSSL path using the following management CLI command. Important Regardless of the method you use, you must perform a server restart for either the JAVA_OPTS value or the system property to take effect. A server reload is not sufficient. Using JBoss Core Services OpenSSL RPM Distribution Ensure that the system is registered to the JBoss Core Services channel: Determine the JBoss Core Services CDN repository name for your operating system version and architecture: RHEL 6 : jb-coreservices-1-for-rhel-6-server-rpms RHEL 7 : jb-coreservices-1-for-rhel-7-server-rpms Enable the repository on the system: Ensure that the following message is seen: Install OpenSSL from this channel: Once the installation completes, the JBCS OpenSSL libraries are available in /opt/rh/jbcs-httpd24/root/usr/lib64 , or just /opt/rh/jbcs-httpd24/root/usr/lib on x86 architecture. Notify JBoss EAP where to find the OpenSSL libraries. You can do this using either of the following methods. In each of the following commands, be sure to replace JBCS_OPENSSL_PATH with the path to the JBoss Core Services OpenSSL libraries, for example, /opt/rh/jbcs-httpd24/root/usr/lib64 . You can update the WILDFLY_OPTS variable for the eap7-standalone or eap7-domain settings in the service configuration file. You can define a system property that specifies the OpenSSL path using the following management CLI command. Important Regardless of the method you use, you must perform a server restart for either the WILDFLY_OPTS value or the system property to take effect. A server reload is not sufficient. A.40. Configure JBoss EAP to Use OpenSSL There are multiple ways in which you can configure JBoss EAP to use OpenSSL: You can reconfigure the elytron subsystem to give OpenSSL priority so that it is used in all cases by default. Note Although OpenSSL is installed in the elytron subsystem, it is not the default TLS provider. In the elytron subsystem, the OpenSSL provider can also be specified on the ssl-context resource. That way, the OpenSSL protocol can be selected on a case-by-case basis instead of using the default priority. To create the ssl-context resource and use the OpenSSL libraries in your Elytron-based SSL/TLS configuration, use the following command. To use the OpenSSL libraries in your legacy security subsystem SSL/TLS configuration: The different OpenSSL protocols that can be used are: openssl.TLS openssl.TLSv1 openssl.TLSv1.1 openssl.TLSv1.2 JBoss EAP will automatically try to search for the OpenSSL libraries on the system and use them. You can also specify a custom OpenSSL libraries location by using the org.wildfly.openssl.path property during JBoss EAP startup. Only the OpenSSL library version 1.0.2 or greater provided by JBoss Core Services is supported. If OpenSSL is loaded properly, you will see a message in the server.log during JBoss EAP startup, similar to: A.41. 
Platform Modules Provided for Java 8 java.base : This dependency is always included in the provided module loaders java.compiler java.datatransfer java.desktop java.instrument java.jnlp java.logging java.management java.management.rmi java.naming java.prefs java.rmi java.scripting java.se : This module alias aggregates the following set of base modules: java.compiler java.datatransfer java.desktop java.instrument java.logging java.management java.management.rmi java.naming java.prefs java.rmi java.scripting java.security.jgss java.security.sasl java.sql java.sql.rowset java.xml java.xml.crypto java.security.jgss java.security.sasl java.smartcardio java.sql java.sql.rowset java.xml java.xml.crypto javafx.base javafx.controls javafx.fxml javafx.graphics javafx.media javafx.swing javafx.web jdk.accessibility jdk.attach jdk.compiler jdk.httpserver jdk.jartool jdk.javadoc jdk.jconsole jdk.jdi jdk.jfr jdk.jsobject jdk.management jdk.management.cmm jdk.management.jfr jdk.management.resource jdk.net jdk.plugin.dom jdk.scripting.nashorn jdk.sctp jdk.security.auth jdk.security.jgss jdk.unsupported jdk.xml.dom A.42. Comparison of validation timing methods You can compare different aspects of the validate-on-match and background-validation methods to determine which method is suitable for configuring database connection validation. The following table includes a comparison matrix for validation timing methods: Table A.144. Comparison matrix for validation timing methods Comparison aspect Validate-on-match method Background-validation method Reliability The validate-on-match method validates immediately before the use of each database connection. This means validation is performed to test the connections that are checked out of the pool for use by the application. The background-validation method is less reliable because connections might fail between the periodic background validation and the time involved in the use of validated connections. When the background validation method runs frequently, the validation is performed only for those connections in the pool, which are not reserved by the application for use. This also means no validation is performed to test connections that are checked out of the pool for use. Performance, which depends on the use of the system, network performance, and the timing and scope of any connectivity issues Users of systems that remain idle for long periods are more likely to see brief or longer delays when requesting connections using validate-on-match . Users of systems with a more efficient validation mechanism, such as the JDBC 4 validation mechanism may notice fewer delays when using validate-on-match . This is true if the system is rarely idle and connections are less likely to time out. Following a wide-spread outage that impacts most or all of the connections in the pool, users of datasources configured with validate-on-match are more likely to encounter delays in getting connections. This is because the broken connections are iteratively validated and evicted when the user waits for a connection. Users of systems that remain idle for long periods are less likely to see brief or longer delays when requesting connections using background-validation . Users of systems with a more efficient validation mechanism, such as the JDBC 4 validation mechanism may notice fewer delays when using background-validation . This is true if the system is rarely idle and connections are less likely to time out. 
Following a wide-spread outage that impacts most or all of the connections in the pool, users of datasources configured with background-validation are more likely to encounter broken connections that need to be returned and retried multiple times. Coding for fault tolerance In case of any fault, the application logic remains the same when using validate-on-match because a connection can be externally terminated at any point even after the connection is obtained from the pool by the application. The broken connections are less likely to be present when using validate-on-match . This is because validate-on-match performs immediate validation of a connection before its use. In case of any fault, the application logic remains the same when using background-validation because a connection can be externally terminated at any point even after the connection is obtained from the pool by the application. The broken connections are more likely to be present when using background-validation . Revised on 2024-06-10 19:31:33 UTC | [
"-Dorg.wildfly.openssl.path= PATH_TO_OPENSSL_LIBS",
"module add --name=com.mysql --resources= /path/to /mysql-connector-java-8.0.12.jar --export-dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api",
"module add --module-root-dir= /path/to /my-external-modules/ --name=com.mysql --resources= /path/to /mysql-connector-java-8.0.12.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api",
"module add --name=com.mysql --slot=8.0 --resources= /path/to /mysql-connector-java-8.0.12.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api",
"/subsystem=undertow/servlet-container=default/setting=crawler-session-management:add /subsystem=undertow/servlet-container=default/setting=crawler-session-management:read-resource",
"/subsystem=undertow/servlet-container=default/setting=jsp:read-resource",
"/subsystem=undertow/servlet-container=default/setting=persistent-sessions:add /subsystem=undertow/servlet-container=default/setting=persistent-sessions:read-resource",
"/subsystem=undertow/servlet-container=default/setting=session-cookie:add /subsystem=undertow/servlet-container=default/setting=session-cookie:read-resource",
"/subsystem=undertow/servlet-container=default/setting=websockets:read-resource",
"/subsystem=undertow/server=default-server/host=default-host/setting=access-log:add /subsystem=undertow/server=default-server/host=default-host/setting=access-log:read-resource",
"/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=record-request-start-time,value=true)",
"/subsystem=undertow/server=default-server/host=default-host/setting=single-sign-on:add /subsystem=undertow/server=default-server/host=default-host/setting=single-sign-on:read-resource",
"<module xmlns=\"urn:jboss:module:1.8\" name=\"com.sun.jsf-impl: IMPL_NAME - VERSION \"> <properties> <property name=\"jboss.api\" value=\"private\"/> </properties> <dependencies> <module name=\"javax.faces.api: IMPL_NAME - VERSION \"/> <module name=\"javaee.api\"/> <module name=\"javax.servlet.jstl.api\"/> <module name=\"org.apache.xerces\" services=\"import\"/> <module name=\"org.apache.xalan\" services=\"import\"/> <module name=\"org.jboss.weld.core\"/> <module name=\"org.jboss.weld.spi\"/> <module name=\"javax.xml.rpc.api\"/> <module name=\"javax.rmi.api\"/> <module name=\"org.omg.api\"/> </dependencies> <resources> <resource-root path=\"impl- VERSION .jar\"/> </resources> </module>",
"<module xmlns=\"urn:jboss:module:1.8\" name=\"com.sun.jsf-impl: IMPL_NAME - VERSION \"> <properties> <property name=\"jboss.api\" value=\"private\"/> </properties> <dependencies> <module name=\"javax.faces.api: IMPL_NAME - VERSION \"> <imports> <include path=\"META-INF/**\"/> </imports> </module> <module name=\"javaee.api\"/> <module name=\"javax.servlet.jstl.api\"/> <module name=\"org.apache.xerces\" services=\"import\"/> <module name=\"org.apache.xalan\" services=\"import\"/> <!-- extra dependencies for MyFaces --> <module name=\"org.apache.commons.collections\"/> <module name=\"org.apache.commons.codec\"/> <module name=\"org.apache.commons.beanutils\"/> <module name=\"org.apache.commons.digester\"/> <!-- extra dependencies for MyFaces 1.1 <module name=\"org.apache.commons.logging\"/> <module name=\"org.apache.commons.el\"/> <module name=\"org.apache.commons.lang\"/> --> <module name=\"javax.xml.rpc.api\"/> <module name=\"javax.rmi.api\"/> <module name=\"org.omg.api\"/> </dependencies> <resources> <resource-root path=\" IMPL_NAME -impl- VERSION .jar\"/> </resources> </module>",
"<module xmlns=\"urn:jboss:module:1.8\" name=\"javax.faces.api: IMPL_NAME - VERSION \"> <dependencies> <module name=\"com.sun.jsf-impl: IMPL_NAME - VERSION \"/> <module name=\"javax.enterprise.api\" export=\"true\"/> <module name=\"javax.servlet.api\" export=\"true\"/> <module name=\"javax.servlet.jsp.api\" export=\"true\"/> <module name=\"javax.servlet.jstl.api\" export=\"true\"/> <module name=\"javax.validation.api\" export=\"true\"/> <module name=\"org.glassfish.javax.el\" export=\"true\"/> <module name=\"javax.api\"/> <module name=\"javax.websocket.api\"/> </dependencies> <resources> <resource-root path=\"jsf-api- VERSION .jar\"/> </resources> </module>",
"<module xmlns=\"urn:jboss:module:1.8\" name=\"javax.faces.api: IMPL_NAME - VERSION \"> <dependencies> <module name=\"javax.enterprise.api\" export=\"true\"/> <module name=\"javax.servlet.api\" export=\"true\"/> <module name=\"javax.servlet.jsp.api\" export=\"true\"/> <module name=\"javax.servlet.jstl.api\" export=\"true\"/> <module name=\"javax.validation.api\" export=\"true\"/> <module name=\"org.glassfish.javax.el\" export=\"true\"/> <module name=\"javax.api\"/> <!-- extra dependencies for MyFaces 1.1 <module name=\"org.apache.commons.logging\"/> <module name=\"org.apache.commons.el\"/> <module name=\"org.apache.commons.lang\"/> --> </dependencies> <resources> <resource-root path=\"myfaces-api- VERSION .jar\"/> </resources> </module>",
"<module xmlns=\"urn:jboss:module:1.8\" name=\"org.jboss.as.jsf-injection: IMPL_NAME - VERSION \"> <properties> <property name=\"jboss.api\" value=\"private\"/> </properties> <resources> <resource-root path=\"wildfly-jsf-injection- INJECTION_VERSION .jar\"/> <resource-root path=\"weld-core-jsf- WELD_VERSION .jar\"/> </resources> <dependencies> <module name=\"com.sun.jsf-impl: IMPL_NAME - VERSION \"/> <module name=\"java.naming\"/> <module name=\"java.desktop\"/> <module name=\"org.jboss.as.jsf\"/> <module name=\"org.jboss.as.web-common\"/> <module name=\"javax.servlet.api\"/> <module name=\"org.jboss.as.ee\"/> <module name=\"org.jboss.as.jsf\"/> <module name=\"javax.enterprise.api\"/> <module name=\"org.jboss.logging\"/> <module name=\"org.jboss.weld.core\"/> <module name=\"org.jboss.weld.api\"/> <module name=\"javax.faces.api: IMPL_NAME - VERSION \"/> </dependencies> </module>",
"<module xmlns=\"urn:jboss:module:1.8\" name=\"org.jboss.as.jsf-injection: IMPL_NAME - VERSION \"> <properties> <property name=\"jboss.api\" value=\"private\"/> </properties> <resources> <resource-root path=\"wildfly-jsf-injection- INJECTION_VERSION .jar\"/> <resource-root path=\"weld-jsf- WELD_VERSION .jar\"/> </resources> <dependencies> <module name=\"com.sun.jsf-impl: IMPL_NAME - VERSION \"/> <module name=\"javax.api\"/> <module name=\"org.jboss.as.web-common\"/> <module name=\"javax.servlet.api\"/> <module name=\"org.jboss.as.jsf\"/> <module name=\"org.jboss.as.ee\"/> <module name=\"org.jboss.as.jsf\"/> <module name=\"javax.enterprise.api\"/> <module name=\"org.jboss.logging\"/> <module name=\"org.jboss.weld.core\"/> <module name=\"org.jboss.weld.api\"/> <module name=\"org.wildfly.security.elytron\"/> <module name=\"javax.faces.api: IMPL_NAME - VERSION \"/> </dependencies> </module>",
"<module xmlns=\"urn:jboss:module:1.5\" name=\"org.apache.commons.digester\"> <properties> <property name=\"jboss.api\" value=\"private\"/> </properties> <resources> <resource-root path=\"commons-digester- VERSION .jar\"/> </resources> <dependencies> <module name=\"javax.api\"/> <module name=\"org.apache.commons.collections\"/> <module name=\"org.apache.commons.logging\"/> <module name=\"org.apache.commons.beanutils\"/> </dependencies> </module>",
"<Location /mod_cluster-manager> SetHandler mod_cluster-manager Require ip 127.0.0.1 </Location>",
"Define list of workers that will be used for mapping requests worker.list=loadbalancer,status Define Node1 modify the host as your host IP or DNS name. worker.node1.port=8009 worker.node1.host=node1.mydomain.com worker.node1.type=ajp13 worker.node1.ping_mode=A worker.node1.lbfactor=1 Define Node2 modify the host as your host IP or DNS name. worker.node2.port=8009 worker.node2.host= node2.mydomain.com worker.node2.type=ajp13 worker.node2.ping_mode=A worker.node2.lbfactor=1 Load-balancing behavior worker.loadbalancer.type=lb worker.loadbalancer.balance_workers=node1,node2 worker.loadbalancer.sticky_session=1 Status worker for managing load balancer worker.status.type=status",
"JAVA_OPTS=\"USDJAVA_OPTS -Dorg.wildfly.openssl.path= JBCS_OPENSSL_PATH",
"/system-property=org.wildfly.openssl.path:add(value= JBCS_OPENSSL_PATH )",
"subscription-manager repos --enable REPO_NAME",
"Repository REPO_NAME is enabled for this system.",
"yum install jbcs-httpd24-openssl",
"WILDFLY_OPTS=\"USDWILDFLY_OPTS -Dorg.wildfly.openssl.path= JBCS_OPENSSL_PATH \"",
"/system-property=org.wildfly.openssl.path:add(value= JBCS_OPENSSL_PATH )",
"/subsystem=elytron:write-attribute(name=initial-providers, value=combined-providers) /subsystem=elytron:undefine-attribute(name=final-providers) reload",
"/subsystem=elytron/server-ssl-context=httpsSSC:add(key-manager=localhost-manager, trust-manager=ca-manager, provider-name=openssl) reload",
"/core-service=management/security-realm=ApplicationRealm/server-identity=ssl:write-attribute(name=protocol,value=openssl.TLSv1.2) reload",
"15:37:59,814 INFO [org.wildfly.openssl.SSL] (MSC service thread 1-7) WFOPENSSL0002 OpenSSL Version OpenSSL 1.0.2k-fips 23 Mar 2017"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuration_guide/reference_material |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_.net_client/making-open-source-more-inclusive |
Chapter 1. Compiling your Red Hat build of Quarkus applications to native executables | Chapter 1. Compiling your Red Hat build of Quarkus applications to native executables As an application developer, you can use Red Hat build of Quarkus 3.15 to create microservices written in Java that run on OpenShift Container Platform and serverless environments. Quarkus applications can run as regular Java applications (on top of a Java Virtual Machine (JVM)), or be compiled into native executables. Applications compiled to native executables have a smaller memory footprint and faster startup times than their Java counterpart. This guide shows you how to compile the Red Hat build of Quarkus 3.15 Getting Started project into a native executable and how to configure and test the native executable. You will need the application that you created earlier in Getting started with Red Hat build of Quarkus . Building a native executable with Red Hat build of Quarkus covers: Building a native executable with a single command by using a container runtime such as Podman or Docker Creating a custom container image by using the produced native executable Creating a container image by using the OpenShift Container Platform Docker build strategy Deploying the Quarkus native application to OpenShift Container Platform Configuring the native executable Testing the native executable Prerequisites Have the JAVA_HOME environment variable set to specify the location of the Java SDK. Log in to the Red Hat Customer Portal to download Red Hat build of OpenJDK from the Software Downloads page. An Open Container Initiative (OCI) compatible container runtime, such as Podman or Docker. A completed Quarkus Getting Started project. To learn how to build the Quarkus Getting Started project, see Getting started with Red Hat build of Quarkus . Alternatively, you can download the Quarkus Quickstarts archive or clone the Quarkus Quickstarts Git repository. The sample project is in the getting-started directory. 1.1. Producing a native executable A native binary is an executable that is created to run on a specific operating system and CPU architecture. The following list outlines some examples of a native executable: An ELF binary for Linux AMD 64 bits An EXE binary for Windows AMD 64 bits An ELF binary for ARM 64 bits Note Only the ELF binary for Linux x86-64 or AArch64 bits is supported in Red Hat build of Quarkus. One advantage of building a native executable is that your application and dependencies, including the Java Virtual Machine (JVM), are packaged into a single file. The native executable for your application contains the following items: The compiled application code The required Java libraries A reduced version of the virtual machine (VM) for improved application startup times and minimal disk and memory footprint, which is also tailored for the application code and its dependencies To produce a native executable from your Quarkus application, you can select either an in-container build or a local-host build. The following table explains the different building options that you can use: Table 1.1. 
Building options for producing a native executable Building option Requires Uses Results in Benefits In-container build - Supported A container runtime, for example, Podman or Docker The default registry.access.redhat.com/quarkus/mandrel-for-jdk-21-rhel8:23.1 builder image A Linux 64-bit executable using the CPU architecture of the host GraalVM does not need to be set up locally, which makes your CI pipelines run more efficiently Local-host build - Only supported upstream A local installation of GraalVM or Mandrel Its local installation as a default for the quarkus.native.builder-image property An executable that has the same operating system and CPU architecture as the machine on which the build is executed An alternative for developers that are not allowed or do not want to use tools such as Docker or Podman. Overall, it is faster than the in-container build approach. Important Red Hat build of Quarkus 3.15 only supports the building of native Linux executables by using the Java 21-based Red Hat build of Quarkus Native Builder image ( quarkus/mandrel-for-jdk-21-rhel8 ) , which is a productized distribution of GraalVM Mandrel . While other images are available in the Quarkus community, they are not supported in the product, so do not use them for production builds for which you want Red Hat to provide support. Applications whose source is written based on 17, with no Java 18 - 21 features used, can still compile a native executable of that application by using the Java 21-based Mandrel 23.1 base image. Red Hat build of Quarkus does not support building native executables by using Oracle GraalVM Community Edition (CE), Mandrel community edition, or any other GraalVM distributions. For more information, see Compiling your Red Hat build of Quarkus applications to native executables . 1.1.1. Producing a native executable by using an in-container build To create a native executable and run the native image tests, use the native profile that is provided by Red Hat build of Quarkus for an in-container build. Prerequisites Podman or Docker is installed. The container has access to at least 8GB of memory. Optional: You have installed the Quarkus CLI, which is one of the methods you can use to build a native executable. For more information, see Installing the Quarkus CLI . Note The Quarkus CLI is intended for development purposes, including tasks such as creating, updating, and building Quarkus projects. However, Red Hat does not support using the Quarkus CLI in production environments. Procedure Open the Getting Started project pom.xml file, and verify that the project includes the native profile: <profiles> <profile> <id>native</id> <activation> <property> <name>native</name> </property> </activation> <properties> <skipITs>false</skipITs> <quarkus.package.type>native</quarkus.package.type> </properties> </profile> </profiles> Build a native executable by using one of the following methods: Using Maven: For Docker: ./mvnw package -Dnative -Dquarkus.native.container-build=true For Podman: ./mvnw package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman Using the Quarkus CLI: For Docker: quarkus build --native -Dquarkus.native.container-build=true For Podman: quarkus build --native -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman Step results These commands create a *-runner binary in the target directory, where the following applies: The *-runner file is the built native binary that Quarkus produces. 
The target directory is a directory that Maven creates when you build a Maven application. Important Compiling a Quarkus application to a native executable consumes a large amount of memory during analysis and optimization. You can limit the amount of memory used during native compilation by setting the quarkus.native.native-image-xmx configuration property. Setting low memory limits might increase the build time. To run the native executable, enter the following command: ./target/*-runner Additional resources Native executable configuration properties 1.1.2. Producing a native executable by using a local-host build If you are not using Docker or Podman, use the Quarkus local-host build option to create and run a native executable. Using the local-host build approach is faster than using containers and is suitable for machines that use a Linux operating system. Important Red Hat build of Quarkus does not support using the following procedure in production. Use this method only when testing or as a backup approach when Docker or Podman is not available. Prerequisites A local installation of Mandrel or GraalVm, correctly configured according to the Quarkus Building a native executable guide. Additionally, for a GraalVM installation, native-image must also be installed. Optional: You have installed the Quarkus CLI, which is one of the methods you can use to build a native executable. For more information, see Installing the Quarkus CLI . Note The Quarkus CLI is intended for development purposes, including tasks such as creating, updating, and building Quarkus projects. However, Red Hat does not support using the Quarkus CLI in production environments. Procedure For GraalVM or Mandrel, build a native executable by using one of the following methods: Using Maven: ./mvnw package -Dnative Using the Quarkus CLI: quarkus build --native Step results These commands create a *-runner binary in the target directory, where the following applies: The *-runner file is the built native binary that Quarkus produces. The target directory is a directory that Maven creates when you build a Maven application. Note When you build the native executable, the prod profile is enabled unless modified in the quarkus.profile property. Run the native executable: ./target/*-runner Additional resources For more information, see the Producing a native executable section of the Quarkus "Building a native executable" guide. 1.2. Creating a custom container image You can create a container image from your Quarkus application by using one of the following methods: Creating a container manually Creating a container by using the OpenShift Container Platform Docker build Important Compiling a Red Hat build of Quarkus application to a native executable consumes a large amount of memory during analysis and optimization. You can limit the amount of memory used during native compilation by setting the quarkus.native.native-image-xmx configuration property. Setting low memory limits might increase the build time. 1.2.1. Creating a container manually You can manually create a container image with your application for Linux AMD64. When you produce a native image by using the Quarkus Native container, the native image creates an executable that targets Linux AMD64. If your host operating system is different from Linux AMD64, you cannot run the binary directly and you need to create a container manually. 
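As an illustrative check that is not part of the original procedure, you can inspect the produced binary with the standard file utility to confirm that it targets Linux; the runner name below assumes the Getting Started project artifact shown later in the build output:

# Inspect the native binary produced in the target directory
file target/getting-started-1.0.0-SNAPSHOT-runner
# Expected output resembles: ELF 64-bit LSB executable, x86-64, dynamically linked ...
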
Your Quarkus Getting Started project includes a Dockerfile.native in the src/main/docker directory with the following content: FROM registry.access.redhat.com/ubi8/ubi-minimal:8.10 WORKDIR /work/ RUN chown 1001 /work \ && chmod "g+rwX" /work \ && chown 1001:root /work COPY --chown=1001:root target/*-runner /work/application EXPOSE 8080 USER 1001 ENTRYPOINT ["./application", "-Dquarkus.http.host=0.0.0.0"] Note Universal Base Image (UBI) The following list displays the suitable images for use with Dockerfiles. Red Hat Universal Base Image 8 (UBI8). This base image is designed and engineered to be the base layer for all of your containerized applications, middleware, and utilities. Red Hat Universal Base Image 8 Minimal (UBI8-minimal). A stripped-down UBI8 image that uses microdnf as a package manager. All Red Hat Base images are available on the Container images catalog site. Procedure Build a native Linux executable by using one of the following methods: Docker: ./mvnw package -Dnative -Dquarkus.native.container-build=true Podman: ./mvnw package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman Build the container image by using one of the following methods: Docker: docker build -f src/main/docker/Dockerfile.native -t quarkus-quickstart/getting-started . Podman podman build -f src/main/docker/Dockerfile.native -t quarkus-quickstart/getting-started . Run the container by using one of the following methods: Docker: docker run -i --rm -p 8080:8080 quarkus-quickstart/getting-started . Podman: podman run -i --rm -p 8080:8080 quarkus-quickstart/getting-started . 1.2.2. Creating a container by using the OpenShift Docker build You can create a container image for your Quarkus application by using the OpenShift Container Platform Docker build strategy. This strategy creates a container image by using a build configuration in the cluster. Prerequisites You have access to an OpenShift Container Platform cluster and the latest version of the oc tool installed. For information about installing oc , see Installing the OpenShift CLI in the CLI tools guide. A URL for the OpenShift Container Platform API endpoint. Procedure Log in to the OpenShift CLI: oc login -u <username_url> Create a new project in OpenShift: oc new-project <project_name> Create a build config based on the src/main/docker/Dockerfile.native file: cat src/main/docker/Dockerfile.native | oc new-build --name <build_name> --strategy=docker --dockerfile - Build the project: oc start-build <build_name> --from-dir . Deploy the project to OpenShift Container Platform: oc new-app <build_name> Expose the services: oc expose svc/ <build_name> 1.3. Native executable configuration properties Configuration properties define how the native executable is generated. You can configure your Quarkus application by using the application.properties file. Configuration properties The following table lists the configuration properties that you can set to define how the native executable is generated: Property Description Type Default quarkus.native.debug.enabled Enables debugging and generates debug symbols in a separate .debug file. When used with quarkus.native.container-build , Red Hat build of Quarkus only supports Red Hat Enterprise Linux or other Linux distributions as they contain the binutils package that installs the objcopy utility that splits the debug info from the native image. 
boolean false quarkus.native.resources.excludes A comma-separated list of globs to match resource paths that should not be added to the native image. list of strings quarkus.native.additional-build-args Additional arguments to pass to the build process. list of strings quarkus.native.enable-http-url-handler Enables HTTP URL handler, with which you can do URL.openConnection() for HTTP URLs. boolean true quarkus.native.enable-https-url-handler Enables HTTPS URL handler, with which you can do URL.openConnection() for HTTPS URLs. boolean false quarkus.native.enable-all-security-services Adds all security services to the native image. boolean false quarkus.native.add-all-charsets Adds all character sets to the native image. This increases the image size. boolean false quarkus.native.graalvm-home Contains the path of the GraalVM distribution. string USD{GRAALVM_HOME:} quarkus.native.java-home Contains the path of the JDK. file USD{java.home} quarkus.native.native-image-xmx The maximum Java heap used to generate the native image. string quarkus.native.debug-build-process Waits for a debugger to attach to the build process before running the native image build. This is an advanced option for those familiar with GraalVM internals. boolean false quarkus.native.publish-debug-build-process-port Publishes the debug port when building with docker if debug-build-process is true . boolean true quarkus.native.cleanup-server Restarts the native image server. boolean false quarkus.native.enable-isolates Enables isolates to improve memory management. boolean true quarkus.native.enable-fallback-images Creates a JVM-based fallback image if the native image fails. boolean false quarkus.native.enable-server Uses the native image server. This can speed up compilation but can result in lost changes due to cache invalidation issues. boolean false quarkus.native.auto-service-loader-registration Automatically registers all META-INF/services entries. boolean false quarkus.native.dump-proxies Dumps the bytecode of all proxies for inspection. boolean false quarkus.native.container-build Builds that use a container runtime. Docker is used by default. boolean false quarkus.native.builder-image The docker image to build the image. string registry.access.redhat.com/quarkus/mandrel-for-jdk-21-rhel8:23.1 quarkus.native.container-runtime The container runtime used to build the image. For example, Docker. string quarkus.native.container-runtime-options Options to pass to the container runtime. list of strings quarkus.native.enable-vm-inspection Enables VM introspection in the image. boolean false quarkus.native.full-stack-traces Enables full stack traces in the image. boolean true quarkus.native.enable-reports Generates reports on call paths and included packages, classes, or methods. boolean false quarkus.native.report-exception-stack-traces Reports exceptions with a full stack trace. boolean true quarkus.native.report-errors-at-runtime Reports errors at runtime. This might cause your application to fail at runtime if you use unsupported features. boolean false quarkus.native.resources.includes A comma-separated list of globs to match resource paths that should be added to the native image. Use a slash ( / ) character as a path separator on all platforms. Globs must not start with a slash. 
For example, if you have src/main/resources/ignored.png and src/main/resources/foo/selected.png in your source tree and one of your dependency JARs contains a bar/some.txt file, with quarkus.native.resources.includes set to foo/ ,bar/ /*.txt , the files src/main/resources/foo/selected.png and bar/some.txt will be included in the native image, while src/main/resources/ignored.png will not be included. For more information, see the following table, which lists the supported glob features. list of strings During build configuration, you can use glob patterns if you want to include a set of files or resources that share a common pattern or location in your project. For example, if you have a directory that contains multiple configuration files, you can use a glob pattern to include all files within that directory. For example: quarkus.native.resources.includes = my/config/files/* The following example shows a comma-separated list of globs to match resource paths to add to the native image. These patterns result in adding any .png images found on the classpath to the native image as well as all files that end with .txt under the folder bar even if nested under subdirectories: quarkus.native.resources.includes = **/*.png,bar/**/*.txt Supported glob features The following table lists the supported glob features and descriptions: Character Feature description * Matches a possibly-empty sequence of characters that does not contain slash ( / ). ** Matches a possibly-empty sequence of characters that might contain slash ( / ). ? Matches one character, but not slash. [abc] Matches one character specified in the bracket, but not slash. [a-z] Matches one character from the range specified in the bracket, but not slash. [!abc] Matches one character not specified in the bracket; does not match slash. [!a-z] Matches one character outside the range specified in the bracket; does not match slash. {one,two,three} Matches any of the alternating tokens separated by commas; the tokens can contain wildcards, nested alternations, and ranges. \ The escape character. There are three levels of escaping: application.properties parser, MicroProfile Config list converter, and Glob parser. All three levels use the backslash as the escape character. Additional resources Configuring your Red Hat build of Quarkus applications 1.3.1. Configuring memory consumption for Red Hat build of Quarkus native compilation Compiling a Red Hat build of Quarkus application to a native executable consumes a large amount of memory during analysis and optimization. You can limit the amount of memory used during native compilation by setting the quarkus.native.native-image-xmx configuration property. Setting low memory limits might increase the build time. Procedure Use one of the following methods to set a value for the quarkus.native.native-image-xmx property to limit the memory consumption during the native image build time: Using the application.properties file: quarkus.native.native-image-xmx= <maximum_memory> Setting system properties: mvn package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.native-image-xmx=<maximum_memory> This command builds the native executable with Docker. To use Podman, add the -Dquarkus.native.container-runtime=podman argument. Note For example, to set the memory limit to 8 GB, enter quarkus.native.native-image-xmx=8g . The value must be a multiple of 1024 and greater than 2MB. Append the letter m or M to indicate megabytes, or g or G to indicate gigabytes. 1.4. 
Testing the native executable Test the application in native mode to test the functionality of the native executable. Use the @QuarkusIntegrationTest annotation to build the native executable and run tests against the HTTP endpoints. Important The following example shows how to test a native executable with a local installation of GraalVM or Mandrel. Before you begin, consider the following points: Red Hat build of Quarkus does not support this scenario, as outlined in Producing a native executable . The native executable you are testing with here must match the operating system and architecture of the host. Therefore, this procedure does not work if the native binary is built in a container on a macOS. Procedure Open the pom.xml file and verify that the build section has the following elements: <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-failsafe-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <executions> <execution> <goals> <goal>integration-test</goal> <goal>verify</goal> </goals> <configuration> <systemPropertyVariables> <native.image.path>USD{project.build.directory}/USD{project.build.finalName}-runner</native.image.path> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </execution> </executions> </plugin> The Maven Failsafe plugin ( maven-failsafe-plugin ) runs the integration test and indicates the location of the native executable that is generated. Open the src/test/java/org/acme/GreetingResourceIT.java file and verify that it includes the following content: package org.acme; import io.quarkus.test.junit.QuarkusIntegrationTest; @QuarkusIntegrationTest 1 public class GreetingResourceIT extends GreetingResourceTest { 2 // Execute the same tests but in native mode. } 1 Use another test runner that starts the application from the native file before the tests. The executable is retrieved by using the native.image.path system property configured in the Maven Failsafe plugin. 2 This example extends the GreetingResourceTest , but you can also create a new test. Run the test: ./mvnw verify -Dnative The following example shows the output of this command: ./mvnw verify -Dnative .... GraalVM Native Image: Generating 'getting-started-1.0.0-SNAPSHOT-runner' (executable)... ======================================================================================================================== [1/8] Initializing... (6.6s @ 0.22GB) Java version: 21.0.4+7-LTS, vendor version: Mandrel-23.1.4.0-1b1 Graal compiler: optimization level: 2, target machine: x86-64-v3 C compiler: gcc (redhat, x86_64, 13.2.1) Garbage collector: Serial GC (max heap size: 80% of RAM) 2 user-specific feature(s) - io.quarkus.runner.Feature: Auto-generated class by Red Hat build of Quarkus from the existing extensions - io.quarkus.runtime.graal.DisableLoggingFeature: Disables INFO logging during the analysis phase [2/8] Performing analysis... [******] (40.0s @ 2.05GB) 10,318 (86.40%) of 11,942 types reachable 15,064 (57.36%) of 26,260 fields reachable 52,128 (55.75%) of 93,501 methods reachable 3,298 types, 109 fields, and 2,698 methods registered for reflection 63 types, 68 fields, and 55 methods registered for JNI access 4 native libraries: dl, pthread, rt, z [3/8] Building universe... (5.9s @ 1.31GB) [4/8] Parsing methods... [**] (3.7s @ 2.08GB) [5/8] Inlining methods... [***] (2.0s @ 1.92GB) [6/8] Compiling methods... 
[******] (34.4s @ 3.25GB) [7/8] Layouting methods... [**] (4.1s @ 1.78GB) [8/8] Creating image... [**] (4.5s @ 2.31GB) 20.93MB (48.43%) for code area: 33,233 compilation units 21.95MB (50.80%) for image heap: 285,664 objects and 8 resources 337.06kB ( 0.76%) for other data 43.20MB in total .... [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M7:integration-test (default) @ getting-started --- [INFO] Using auto detected provider org.apache.maven.surefire.junitplatform.JUnitPlatformProvider [INFO] [INFO] ------------------------------------------------------- [INFO] T E S T S [INFO] ------------------------------------------------------- [INFO] Running org.acme.GreetingResourceIT __ ____ __ _____ ___ __ ____ ______ --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \ --\___\_\____/_/ |_/_/|_/_/|_|\____/___/ 2024-09-27 14:04:52,681 INFO [io.quarkus] (main) getting-started 1.0.0-SNAPSHOT native (powered by Quarkus 3.15.3.SP1-redhat-00002) started in 0.038s. Listening on: http://0.0.0.0:8081 2024-09-27 14:04:52,682 INFO [io.quarkus] (main) Profile prod activated. 2024-09-27 14:04:52,682 INFO [io.quarkus] (main) Installed features: [cdi, rest, smallrye-context-propagation, vertx] [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.696 s - in org.acme.GreetingResourceIT [INFO] [INFO] Results: [INFO] [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0 [INFO] [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M7:verify (default) @ getting-started --- Note Quarkus waits 60 seconds for the native image to start before automatically failing the native tests. You can change this duration by configuring the quarkus.test.wait-time system property. You can extend the wait time by using the following command, where <duration> is the wait time in seconds: Note By default, native tests run by using the prod profile unless modified in the quarkus.test.native-image-profile property. 1.4.1. Excluding tests when running as a native executable When you run tests against your native executable, you can only run black-box testing, for example, interacting with the HTTP endpoints of your application. Note Black box refers to the hidden internal workings of a product or program, such as in black-box testing. Because tests do not run natively, you cannot link against your application's code like you do when running tests on the Java Virtual Machine (JVM). Therefore, in your native tests, you cannot inject beans. You can share your test class between your JVM and native executions and exclude certain tests by using the @DisabledOnIntegrationTest annotation to run tests only on the JVM. 1.4.2. Testing an existing native executable By using the Failsafe Maven plugin, you can test against the existing executable build. You can run multiple sets of tests in stages on the binary after it is built. Note To test the native executable that you produced with Quarkus, use the available Maven commands. There are no equivalent Quarkus CLI commands to complete this task by using the command line. Procedure Run a test against a native executable that is already built: ./mvnw test-compile failsafe:integration-test -Dnative This command runs the test against the existing native image by using the Failsafe Maven plugin. Alternatively, you can specify the path to the native executable with the following command where <path> is the native image path: ./mvnw test-compile failsafe:integration-test -Dnative.image.path=<path> 1.5. 
Additional resources Deploying your Red Hat build of Quarkus applications to OpenShift Container Platform Developing and compiling your Red Hat build of Quarkus applications with Apache Maven Quarkus community: Building a native executable Apache Maven Project Red Hat Universal Base Image 8 Minimal The List of UBI-minimal Tags Revised on 2025-02-28 13:35:37 UTC | [
"<profiles> <profile> <id>native</id> <activation> <property> <name>native</name> </property> </activation> <properties> <skipITs>false</skipITs> <quarkus.package.type>native</quarkus.package.type> </properties> </profile> </profiles>",
"./mvnw package -Dnative -Dquarkus.native.container-build=true",
"./mvnw package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman",
"quarkus build --native -Dquarkus.native.container-build=true",
"quarkus build --native -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman",
"./target/*-runner",
"./mvnw package -Dnative",
"quarkus build --native",
"./target/*-runner",
"FROM registry.access.redhat.com/ubi8/ubi-minimal:8.10 WORKDIR /work/ RUN chown 1001 /work && chmod \"g+rwX\" /work && chown 1001:root /work COPY --chown=1001:root target/*-runner /work/application EXPOSE 8080 USER 1001 ENTRYPOINT [\"./application\", \"-Dquarkus.http.host=0.0.0.0\"]",
"registry.access.redhat.com/ubi8/ubi:8.10",
"registry.access.redhat.com/ubi8/ubi-minimal:8.10",
"./mvnw package -Dnative -Dquarkus.native.container-build=true",
"./mvnw package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman",
"docker build -f src/main/docker/Dockerfile.native -t quarkus-quickstart/getting-started .",
"build -f src/main/docker/Dockerfile.native -t quarkus-quickstart/getting-started .",
"docker run -i --rm -p 8080:8080 quarkus-quickstart/getting-started .",
"run -i --rm -p 8080:8080 quarkus-quickstart/getting-started .",
"login -u <username_url>",
"new-project <project_name>",
"cat src/main/docker/Dockerfile.native | oc new-build --name <build_name> --strategy=docker --dockerfile -",
"start-build <build_name> --from-dir .",
"new-app <build_name>",
"expose svc/ <build_name>",
"quarkus.native.resources.includes = my/config/files/*",
"quarkus.native.resources.includes = **/*.png,bar/**/*.txt",
"quarkus.native.native-image-xmx= <maximum_memory>",
"mvn package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.native-image-xmx=<maximum_memory>",
"<plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-failsafe-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <executions> <execution> <goals> <goal>integration-test</goal> <goal>verify</goal> </goals> <configuration> <systemPropertyVariables> <native.image.path>USD{project.build.directory}/USD{project.build.finalName}-runner</native.image.path> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </execution> </executions> </plugin>",
"package org.acme; import io.quarkus.test.junit.QuarkusIntegrationTest; @QuarkusIntegrationTest 1 public class GreetingResourceIT extends GreetingResourceTest { 2 // Execute the same tests but in native mode. }",
"./mvnw verify -Dnative",
"./mvnw verify -Dnative . GraalVM Native Image: Generating 'getting-started-1.0.0-SNAPSHOT-runner' (executable) ======================================================================================================================== [1/8] Initializing... (6.6s @ 0.22GB) Java version: 21.0.4+7-LTS, vendor version: Mandrel-23.1.4.0-1b1 Graal compiler: optimization level: 2, target machine: x86-64-v3 C compiler: gcc (redhat, x86_64, 13.2.1) Garbage collector: Serial GC (max heap size: 80% of RAM) 2 user-specific feature(s) - io.quarkus.runner.Feature: Auto-generated class by Red Hat build of Quarkus from the existing extensions - io.quarkus.runtime.graal.DisableLoggingFeature: Disables INFO logging during the analysis phase [2/8] Performing analysis... [******] (40.0s @ 2.05GB) 10,318 (86.40%) of 11,942 types reachable 15,064 (57.36%) of 26,260 fields reachable 52,128 (55.75%) of 93,501 methods reachable 3,298 types, 109 fields, and 2,698 methods registered for reflection 63 types, 68 fields, and 55 methods registered for JNI access 4 native libraries: dl, pthread, rt, z [3/8] Building universe... (5.9s @ 1.31GB) [4/8] Parsing methods... [**] (3.7s @ 2.08GB) [5/8] Inlining methods... [***] (2.0s @ 1.92GB) [6/8] Compiling methods... [******] (34.4s @ 3.25GB) [7/8] Layouting methods... [**] (4.1s @ 1.78GB) [8/8] Creating image... [**] (4.5s @ 2.31GB) 20.93MB (48.43%) for code area: 33,233 compilation units 21.95MB (50.80%) for image heap: 285,664 objects and 8 resources 337.06kB ( 0.76%) for other data 43.20MB in total . [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M7:integration-test (default) @ getting-started --- [INFO] Using auto detected provider org.apache.maven.surefire.junitplatform.JUnitPlatformProvider [INFO] [INFO] ------------------------------------------------------- [INFO] T E S T S [INFO] ------------------------------------------------------- [INFO] Running org.acme.GreetingResourceIT __ ____ __ _____ ___ __ ____ ______ --/ __ \\/ / / / _ | / _ \\/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\\ --\\___\\_\\____/_/ |_/_/|_/_/|_|\\____/___/ 2024-09-27 14:04:52,681 INFO [io.quarkus] (main) getting-started 1.0.0-SNAPSHOT native (powered by Quarkus 3.15.3.SP1-redhat-00002) started in 0.038s. Listening on: http://0.0.0.0:8081 2024-09-27 14:04:52,682 INFO [io.quarkus] (main) Profile prod activated. 2024-09-27 14:04:52,682 INFO [io.quarkus] (main) Installed features: [cdi, rest, smallrye-context-propagation, vertx] [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.696 s - in org.acme.GreetingResourceIT [INFO] [INFO] Results: [INFO] [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0 [INFO] [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M7:verify (default) @ getting-started ---",
"./mvnw verify -Dnative -Dquarkus.test.wait-time= <duration>",
"./mvnw test-compile failsafe:integration-test -Dnative",
"./mvnw test-compile failsafe:integration-test -Dnative.image.path=<path>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/compiling_your_red_hat_build_of_quarkus_applications_to_native_executables/assembly_quarkus-building-native-executable_quarkus-building-native-executable |
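As a complement to Section 1.4.1, the following sketch shows a shared test class in which one test is restricted to JVM mode with @DisabledOnIntegrationTest, while the black-box HTTP test also runs when GreetingResourceIT reuses the class against the native executable. The GreetingService bean, the /hello endpoint, and the greeting text are assumptions based on the getting-started quickstart and may differ in your application.

package org.acme;

import static io.restassured.RestAssured.given;

import jakarta.inject.Inject;

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

import io.quarkus.test.junit.DisabledOnIntegrationTest;
import io.quarkus.test.junit.QuarkusTest;

@QuarkusTest
public class GreetingResourceTest {

    @Inject
    GreetingService service; // bean injection is only possible when tests run on the JVM

    @Test
    @DisabledOnIntegrationTest // skipped when this class is reused by GreetingResourceIT in native mode
    public void testGreetingService() {
        // White-box check against the injected bean (assumed to exist in the application).
        Assertions.assertTrue(service.greeting("Quarkus").contains("Quarkus"));
    }

    @Test
    public void testHelloEndpoint() {
        // Black-box HTTP check; runs in both JVM and native mode.
        given().when().get("/hello").then().statusCode(200);
    }
}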
Chapter 7. Available BPF Features | Chapter 7. Available BPF Features This chapter provides the complete list of Berkeley Packet Filter ( BPF ) features available in the kernel of this minor version of Red Hat Enterprise Linux 9. The tables include the lists of: System configuration and other options Available program types and supported helpers Available map types This chapter contains automatically generated output of the bpftool feature command. Table 7.1. System configuration and other options Option Value unprivileged_bpf_disabled 2 (bpf() syscall restricted to privileged users, admin can change) JIT compiler 1 (enabled) JIT compiler hardening 1 (enabled for unprivileged users) JIT compiler kallsyms exports 1 (enabled for root) Memory limit for JIT for unprivileged users 264241152 CONFIG_BPF y CONFIG_BPF_SYSCALL y CONFIG_HAVE_EBPF_JIT y CONFIG_BPF_JIT y CONFIG_BPF_JIT_ALWAYS_ON y CONFIG_DEBUG_INFO_BTF y CONFIG_DEBUG_INFO_BTF_MODULES y CONFIG_CGROUPS y CONFIG_CGROUP_BPF y CONFIG_CGROUP_NET_CLASSID y CONFIG_SOCK_CGROUP_DATA y CONFIG_BPF_EVENTS y CONFIG_KPROBE_EVENTS y CONFIG_UPROBE_EVENTS y CONFIG_TRACING y CONFIG_FTRACE_SYSCALLS y CONFIG_FUNCTION_ERROR_INJECTION y CONFIG_BPF_KPROBE_OVERRIDE n CONFIG_NET y CONFIG_XDP_SOCKETS y CONFIG_LWTUNNEL_BPF y CONFIG_NET_ACT_BPF m CONFIG_NET_CLS_BPF m CONFIG_NET_CLS_ACT y CONFIG_NET_SCH_INGRESS m CONFIG_XFRM y CONFIG_IP_ROUTE_CLASSID y CONFIG_IPV6_SEG6_BPF n CONFIG_BPF_LIRC_MODE2 n CONFIG_BPF_STREAM_PARSER y CONFIG_NETFILTER_XT_MATCH_BPF m CONFIG_BPFILTER n CONFIG_BPFILTER_UMH n CONFIG_TEST_BPF m CONFIG_HZ 1000 bpf() syscall available Large program size limit available Bounded loop support available ISA extension v2 available ISA extension v3 available Table 7.2. Available program types and supported helpers Program type Available helpers socket_filter bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_skb_load_bytes, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_skb_load_bytes_relative, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data kprobe bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_get_stack, bpf_get_current_cgroup_id, 
bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_task_storage_get, bpf_task_storage_delete, bpf_get_current_task_btf, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_get_func_ip, bpf_get_attach_cookie, bpf_task_pt_regs, bpf_get_branch_snapshot, bpf_find_vma, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data sched_cls bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_l3_csum_replace, bpf_l4_csum_replace, bpf_tail_call, bpf_clone_redirect, bpf_get_cgroup_classid, bpf_skb_vlan_push, bpf_skb_vlan_pop, bpf_skb_get_tunnel_key, bpf_skb_set_tunnel_key, bpf_redirect, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_get_tunnel_opt, bpf_skb_set_tunnel_opt, bpf_skb_change_proto, bpf_skb_change_type, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_change_tail, bpf_skb_pull_data, bpf_csum_update, bpf_set_hash_invalid, bpf_get_numa_node_id, bpf_skb_change_head, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_set_hash, bpf_skb_adjust_room, bpf_skb_get_xfrm_state, bpf_skb_load_bytes_relative, bpf_fib_lookup, bpf_skb_cgroup_id, bpf_skb_ancestor_cgroup_id, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_fullsock, bpf_tcp_sock, bpf_skb_ecn_set_ce, bpf_get_listener_sock, bpf_skc_lookup_tcp, bpf_tcp_check_syncookie, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_tcp_gen_syncookie, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_sk_assign, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_csum_level, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_skb_cgroup_classid, bpf_redirect_neigh, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_redirect_peer, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_check_mtu, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_skb_set_tstamp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_tcp_raw_gen_syncookie_ipv4, bpf_tcp_raw_gen_syncookie_ipv6, bpf_tcp_raw_check_syncookie_ipv4, bpf_tcp_raw_check_syncookie_ipv6 sched_act bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_l3_csum_replace, 
bpf_l4_csum_replace, bpf_tail_call, bpf_clone_redirect, bpf_get_cgroup_classid, bpf_skb_vlan_push, bpf_skb_vlan_pop, bpf_skb_get_tunnel_key, bpf_skb_set_tunnel_key, bpf_redirect, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_get_tunnel_opt, bpf_skb_set_tunnel_opt, bpf_skb_change_proto, bpf_skb_change_type, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_change_tail, bpf_skb_pull_data, bpf_csum_update, bpf_set_hash_invalid, bpf_get_numa_node_id, bpf_skb_change_head, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_set_hash, bpf_skb_adjust_room, bpf_skb_get_xfrm_state, bpf_skb_load_bytes_relative, bpf_fib_lookup, bpf_skb_cgroup_id, bpf_skb_ancestor_cgroup_id, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_fullsock, bpf_tcp_sock, bpf_skb_ecn_set_ce, bpf_get_listener_sock, bpf_skc_lookup_tcp, bpf_tcp_check_syncookie, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_tcp_gen_syncookie, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_sk_assign, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_csum_level, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_skb_cgroup_classid, bpf_redirect_neigh, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_redirect_peer, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_check_mtu, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_skb_set_tstamp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_tcp_raw_gen_syncookie_ipv4, bpf_tcp_raw_gen_syncookie_ipv6, bpf_tcp_raw_check_syncookie_ipv4, bpf_tcp_raw_check_syncookie_ipv6 tracepoint bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_task_storage_get, bpf_task_storage_delete, bpf_get_current_task_btf, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_get_func_ip, bpf_get_attach_cookie, bpf_task_pt_regs, bpf_get_branch_snapshot, bpf_find_vma, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, 
bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data xdp bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_redirect, bpf_perf_event_output, bpf_csum_diff, bpf_get_current_task, bpf_get_numa_node_id, bpf_xdp_adjust_head, bpf_redirect_map, bpf_xdp_adjust_meta, bpf_xdp_adjust_tail, bpf_fib_lookup, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_skc_lookup_tcp, bpf_tcp_check_syncookie, bpf_tcp_gen_syncookie, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_check_mtu, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_xdp_get_buff_len, bpf_xdp_load_bytes, bpf_xdp_store_bytes, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_tcp_raw_gen_syncookie_ipv4, bpf_tcp_raw_gen_syncookie_ipv6, bpf_tcp_raw_check_syncookie_ipv4, bpf_tcp_raw_check_syncookie_ipv6 perf_event bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_perf_prog_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_read_branch_records, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_task_storage_get, bpf_task_storage_delete, bpf_get_current_task_btf, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_get_func_ip, bpf_get_attach_cookie, bpf_task_pt_regs, bpf_get_branch_snapshot, bpf_find_vma, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data cgroup_skb bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_skb_load_bytes, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_get_socket_uid, 
bpf_skb_load_bytes_relative, bpf_skb_cgroup_id, bpf_get_local_storage, bpf_skb_ancestor_cgroup_id, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_fullsock, bpf_tcp_sock, bpf_skb_ecn_set_ce, bpf_get_listener_sock, bpf_skc_lookup_tcp, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_sk_cgroup_id, bpf_sk_ancestor_cgroup_id, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data cgroup_sock bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_get_cgroup_classid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_storage_get, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_netns_cookie, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data lwt_in bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_cgroup_classid, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_lwt_push_encap, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, 
bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data lwt_out bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_cgroup_classid, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data lwt_xmit bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_l3_csum_replace, bpf_l4_csum_replace, bpf_tail_call, bpf_clone_redirect, bpf_get_cgroup_classid, bpf_skb_get_tunnel_key, bpf_skb_set_tunnel_key, bpf_redirect, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_get_tunnel_opt, bpf_skb_set_tunnel_opt, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_change_tail, bpf_skb_pull_data, bpf_csum_update, bpf_set_hash_invalid, bpf_get_numa_node_id, bpf_skb_change_head, bpf_lwt_push_encap, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_csum_level, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data sock_ops bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_setsockopt, bpf_sock_map_update, bpf_getsockopt, bpf_sock_ops_cb_flags_set, 
bpf_sock_hash_update, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_tcp_sock, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_netns_cookie, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_load_hdr_opt, bpf_store_hdr_opt, bpf_reserve_hdr_opt, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data sk_skb bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_tail_call, bpf_perf_event_output, bpf_skb_load_bytes, bpf_get_current_task, bpf_skb_change_tail, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_skb_change_head, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_skb_adjust_room, bpf_sk_redirect_map, bpf_sk_redirect_hash, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_skc_lookup_tcp, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data cgroup_device bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_uid_gid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_loop, bpf_strncmp, bpf_get_retval, bpf_set_retval, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, 
bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data sk_msg bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_cgroup_classid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_msg_redirect_map, bpf_msg_apply_bytes, bpf_msg_cork_bytes, bpf_msg_pull_data, bpf_msg_redirect_hash, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_msg_push_data, bpf_msg_pop_data, bpf_spin_lock, bpf_spin_unlock, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_netns_cookie, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data raw_tracepoint bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_task_storage_get, bpf_task_storage_delete, bpf_get_current_task_btf, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_get_func_ip, bpf_task_pt_regs, bpf_get_branch_snapshot, bpf_find_vma, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data cgroup_sock_addr bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_get_cgroup_classid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_setsockopt, bpf_getsockopt, bpf_bind, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, 
bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_skc_lookup_tcp, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_netns_cookie, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data lwt_seg6local bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_cgroup_classid, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data lirc_mode2 not supported sk_reuseport bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_skb_load_bytes, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_skb_load_bytes_relative, bpf_sk_select_reuseport, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data flow_dissector bpf_map_lookup_elem, bpf_map_update_elem, 
bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_skb_load_bytes, bpf_get_current_task, bpf_get_numa_node_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data cgroup_sysctl bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_uid_gid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sysctl_get_name, bpf_sysctl_get_current_value, bpf_sysctl_get_new_value, bpf_sysctl_set_new_value, bpf_strtol, bpf_strtoul, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_loop, bpf_strncmp, bpf_get_retval, bpf_set_retval, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data raw_tracepoint_writable bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_task_storage_get, bpf_task_storage_delete, bpf_get_current_task_btf, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_get_func_ip, bpf_task_pt_regs, 
bpf_get_branch_snapshot, bpf_find_vma, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data cgroup_sockopt bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_uid_gid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_tcp_sock, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_netns_cookie, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_loop, bpf_strncmp, bpf_get_retval, bpf_set_retval, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data tracing not supported struct_ops not supported ext not supported lsm not supported sk_lookup bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_sk_assign, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data syscall bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_get_socket_cookie, bpf_perf_event_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_send_signal, bpf_skb_output, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_xdp_output, 
bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_get_task_stack, bpf_d_path, bpf_copy_from_user, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_task_storage_get, bpf_task_storage_delete, bpf_get_current_task_btf, bpf_sock_from_file, bpf_for_each_map_elem, bpf_snprintf, bpf_sys_bpf, bpf_btf_find_by_name_kind, bpf_sys_close, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_get_func_ip, bpf_task_pt_regs, bpf_get_branch_snapshot, bpf_skc_to_unix_sock, bpf_kallsyms_lookup_name, bpf_find_vma, bpf_loop, bpf_strncmp, bpf_xdp_get_buff_len, bpf_copy_from_user_task, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data Table 7.3. Available map types Map type Available hash yes array yes prog_array yes perf_event_array yes percpu_hash yes percpu_array yes stack_trace yes cgroup_array yes lru_hash yes lru_percpu_hash yes lpm_trie yes array_of_maps yes hash_of_maps yes devmap yes sockmap yes cpumap yes xskmap yes sockhash yes cgroup_storage yes reuseport_sockarray yes percpu_cgroup_storage yes queue yes stack yes sk_storage yes devmap_hash yes struct_ops yes ringbuf yes inode_storage yes task_storage yes bloom_filter yes | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.2_release_notes/available_bpf_features |
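The tables in this chapter are generated from bpftool. To produce a comparable report on a running system, you can probe the kernel directly; the invocations below are a sketch and should be checked against the bpftool(8) manual page shipped with your kernel.

# List supported program types, map types, helpers, and kernel configuration
bpftool feature probe kernel

# Probe a specific network device for BPF offload support (the interface name is an example)
bpftool feature probe dev eth0

# Emit the probe results as C preprocessor macros, for conditional compilation
bpftool feature probe kernel macros prefix MYPROJ_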
Chapter 1. Kickstart Installations | Chapter 1. Kickstart Installations 1.1. What are Kickstart Installations? Many system administrators would prefer to use an automated installation method to install Red Hat Enterprise Linux on their machines. To answer this need, Red Hat, Inc. created the kickstart installation method. Using kickstart, a system administrator can create a single file containing the answers to all the questions that would normally be asked during a typical installation. Kickstart files can be kept on a single server system and read by individual computers during the installation. This installation method can support the use of a single kickstart file to install Red Hat Enterprise Linux on multiple machines, making it ideal for network and system administrators. Kickstart provides a way for users to automate a Red Hat Enterprise Linux installation. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Kickstart_Installations
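To illustrate the "single file containing the answers" idea, a minimal kickstart file might look like the sketch below. Every value shown - installation source, partitioning, root password, and package group - is a placeholder that must be adapted to the systems being installed.

# minimal-ks.cfg (illustrative sketch only)
install
cdrom
lang en_US.UTF-8
keyboard us
network --bootproto=dhcp
rootpw changeme
timezone America/New_York
bootloader --location=mbr
clearpart --all --initlabel
part /boot --fstype=ext3 --size=100
part swap --size=512
part / --fstype=ext3 --size=1 --grow
reboot

%packages
@ base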
A.6. perf | A.6. perf The perf tool provides a number of useful commands, some of which are listed in this section. For detailed information about perf , see the Red Hat Enterprise Linux 7 Developer Guide , or refer to the man pages. perf stat This command provides overall statistics for common performance events, including instructions executed and clock cycles consumed. You can use the option flags to gather statistics on events other than the default measurement events. As of Red Hat Enterprise Linux 6.4, it is possible to use perf stat to filter monitoring based on one or more specified control groups (cgroups). For further information, read the man page: perf record This command records performance data into a file which can be later analyzed using perf report . For further details, read the man page: perf report This command reads the performance data from a file and analyzes the recorded data. For further details, read the man page: perf list This command lists the events available on a particular machine. These events vary based on the performance monitoring hardware and the software configuration of the system. For further information, read the man page: perf top This command performs a similar function to the top tool. It generates and displays a performance counter profile in realtime. For further information, read the man page: perf trace This command performs a similar function to the strace tool. It monitors the system calls used by a specified thread or process and all signals received by that application. Additional trace targets are available; refer to the man page for a full list: | [
"man perf-stat",
"man perf-record",
"man perf-report",
"man perf-list",
"man perf-top",
"man perf-trace"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Tool_Reference-perf |
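A few representative invocations of the commands described above are sketched here; the event names, cgroup name, and durations are examples and depend on the hardware and configuration of the system.

# Overall statistics for a single command
perf stat ./my_app

# Count specific events instead of the default set
perf stat -e cycles,instructions,cache-misses ./my_app

# Restrict counting to an existing cgroup (system-wide mode is required)
perf stat -e cycles -a -G mycgroup sleep 5

# Record system-wide samples for ten seconds, then analyze the recording
perf record -a sleep 10
perf report

# Show the events available on this machine
perf list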
Chapter 1. Introduction to autoscaling components | Chapter 1. Introduction to autoscaling components Use telemetry components to collect data about your Red Hat OpenStack Platform (RHOSP) environment, such as CPU, storage, and memory usage. You can launch and scale instances in response to workload demand and resource availability. You can define the upper and lower bounds of telemetry data that control the scaling of instances in your Orchestration service (heat) templates. Control automatic instance scaling with the following telemetry components: Data collection : Telemetry uses the data collection service (Ceilometer) to gather metric and event data. Storage : Telemetry stores metrics data in the time-series database service (gnocchi). Alarm : Telemetry uses the Alarming service (aodh) to trigger actions based on rules against metrics or event data collected by Ceilometer. 1.1. Data collection service (Ceilometer) for autoscaling You can use Ceilometer to collect data about metering and event information for Red Hat OpenStack Platform (RHOSP) components. The Ceilometer service uses three agents to collect data from RHOSP components: A compute agent (ceilometer-agent-compute) : Runs on each Compute node and polls for resource use statistics. A central agent (ceilometer-agent-central) : Runs on the Controller nodes to poll for resource use statistics for resources that are not provided by Compute nodes. A notification agent (ceilometer-agent-notification) : Runs on the Controller nodes and consumes messages from the message queues to build event and metering data. The Ceilometer agents use publishers to send data to the corresponding end points, for example the time-series database service (gnocchi). Additional resources Ceilometer in the Operational Measurements guide. 1.1.1. Publishers In Red Hat OpenStack Platform (RHOSP), you can use several transport methods to transfer the collected data into storage or external systems, such as Service Telemetry Framework (STF). When you enable the gnocchi publisher, the measurement and resource information is stored as time-series data. 1.2. Time-series database service (gnocchi) for autoscaling Gnocchi is a time-series database that you can use for storing metrics in SQL. The Alarming service (aodh) and Orchestration service (heat) use the data stored in gnocchi for autoscaling. Additional resources Storage with gnocchi . 1.3. Alarming service (aodh) You can configure the Alarming service (aodh) to trigger actions based on rules against metrics data collected by Ceilometer and stored in gnocchi. Alarms can be in one of the following states: Ok : The metric or event is in an acceptable state. Firing : The metric or event is outside of the defined Ok state. insufficient data : The alarm state is unknown, for example, if there is no data for the requested granularity, or the check has not been executed yet, and so on. 1.4. Orchestration service (heat) for autoscaling Director uses Orchestration service (heat) templates as the template format for the overcloud deployment. Heat templates are usually expressed in YAML format. The purpose of a template is to define and create a stack, which is a collection of resources that heat creates, and the configuration of the resources. Resources are objects in Red Hat OpenStack Platform (RHOSP) and can include compute resources, network configuration, security groups, scaling rules, and custom resources. Additional resources Understanding heat templates . 
| null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/autoscaling_for_instances/assembly_introduction-to-autoscaling-components_assembly_introduction-to-autoscaling-components |
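To make the relationship between these components concrete, the following is a heavily trimmed sketch of the Orchestration service resources that an autoscaling stack typically defines. The flavor, image, network, and sizing values are assumptions for illustration only; an OS::Aodh::GnocchiAggregationByResourcesAlarm resource, or an alarm created with the openstack alarm create command, would then signal the scaling policy when the metric stored in gnocchi crosses the defined bound.

heat_template_version: wallaby

resources:
  scaleup_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      resource:
        type: OS::Nova::Server
        properties:
          flavor: m1.small        # assumed flavor
          image: cirros           # assumed image
          networks:
            - network: private    # assumed network

  scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      auto_scaling_group_id: { get_resource: scaleup_group }
      adjustment_type: change_in_capacity
      scaling_adjustment: 1
      cooldown: 300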
15.6. Migrating with virt-manager | 15.6. Migrating with virt-manager This section covers migrating a KVM guest virtual machine with virt-manager from one host physical machine to another. Connect to the target host physical machine In the virt-manager interface , connect to the target host physical machine by selecting the File menu, then click Add Connection . Add connection The Add Connection window appears. Figure 15.1. Adding a connection to the target host physical machine Enter the following details: Hypervisor : Select QEMU/KVM . Method : Select the connection method. Username : Enter the user name for the remote host physical machine. Hostname : Enter the host name for the remote host physical machine. Note For more information on the connection options, see Section 19.5, "Adding a Remote Connection" . Click Connect . An SSH connection is used in this example, so the specified user's password must be entered in the next step. Figure 15.2. Enter password Configure shared storage Ensure that both the source and the target host are sharing storage, for example using NFS . Migrate guest virtual machines Right-click the guest that is to be migrated, and click Migrate . In the New Host field, use the drop-down list to select the host physical machine you wish to migrate the guest virtual machine to and click Migrate . Figure 15.3. Choosing the destination host physical machine and starting the migration process A progress window appears. Figure 15.4. Progress window If the migration finishes without any problems, virt-manager displays the newly migrated guest virtual machine running in the destination host. Figure 15.5. Migrated guest virtual machine running in the destination host physical machine | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-KVM_live_migration-Migrating_with_virt_manager
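The "Configure shared storage" step assumes a storage location that both hosts can reach under the same path. A minimal NFS-based sketch is shown below; the export path, host names, and the use of the default /var/lib/libvirt/images directory are illustrative assumptions.

# On the NFS server: export the image directory to the virtualization hosts
echo '/var/lib/libvirt/images *.example.com(rw,no_root_squash,sync)' >> /etc/exports
systemctl restart nfs-server

# On both the source and the destination host: mount the export at the same path
mount storage.example.com:/var/lib/libvirt/images /var/lib/libvirt/images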
Chapter 33. InlineLogging schema reference | Chapter 33. InlineLogging schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes use of the InlineLogging type from ExternalLogging . It must have the value inline for the type InlineLogging . Property Property type Description type string Must be inline . loggers map A Map from logger name to logger level. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-InlineLogging-reference |
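As a sketch of how the two properties are used together, an inline logging definition inside a Kafka custom resource might look like the following; the logger names shown are examples, and the valid names differ for each component that accepts this schema.

spec:
  kafka:
    # ...
    logging:
      type: inline
      loggers:
        kafka.root.logger.level: INFO
        log4j.logger.kafka.coordinator.transaction: TRACE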
6.11. RHEA-2013:0339 - new packages: tbb | 6.11. RHEA-2013:0339 - new packages: tbb New tbb packages are now available for Red Hat Enterprise Linux 6. The tbb packages contain a C++ runtime library that abstracts the low-level threading details necessary for optimal multi-core performance. This enhancement update adds the tbb packages to Red Hat Enterprise Linux 6. (BZ# 844976 ) All users who require tbb are advised to install these new packages. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/rhea-2013-0339 |
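To illustrate what the library provides, a minimal use of its parallel loop abstraction is sketched below. This is a generic TBB example rather than code shipped in the packages; it is built with a C++ compiler and linked with -ltbb.

#include <vector>
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>

// Function object applied to the sub-ranges that TBB schedules across the available cores.
struct Scale {
    std::vector<double>& data;
    explicit Scale(std::vector<double>& d) : data(d) {}
    void operator()(const tbb::blocked_range<size_t>& r) const {
        for (size_t i = r.begin(); i != r.end(); ++i)
            data[i] *= 2.0;
    }
};

int main() {
    std::vector<double> data(1000000, 1.0);
    tbb::parallel_for(tbb::blocked_range<size_t>(0, data.size()), Scale(data));
    return 0;
}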
Appendix B. Red Hat OpenStack Platform for POWER | Appendix B. Red Hat OpenStack Platform for POWER In a new Red Hat OpenStack Platform installation, you can deploy overcloud Compute nodes on POWER (ppc64le) hardware. For the Compute node cluster, you can use the same architecture, or use a combination of x86_64 and ppc64le systems. The undercloud, Controller nodes, Ceph Storage nodes, and all other systems are supported only on x86_64 hardware. B.1. Ceph Storage When you configure access to external Ceph in a multi-architecture cloud, set the CephAnsiblePlaybook parameter to /usr/share/ceph-ansible/site.yml.sample and include your client key and other Ceph-specific parameters. For example: B.2. Composable services The following services typically form part of the Controller node and are available for use in custom roles as Technology Preview: Block Storage service (cinder) Image service (glance) Identity service (keystone) Networking service (neutron) Object Storage service (swift) Note Red Hat does not support features in Technology Preview. For more information about composable services, see composable services and custom roles in the Advanced Overcloud Customization guide. Use the following example to understand how to move the listed services from the Controller node to a dedicated ppc64le node: | [
"parameter_defaults: CephAnsiblePlaybook: /usr/share/ceph-ansible/site.yml.sample CephClientKey: AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ== CephClusterFSID: 4b5c8c0a-ff60-454b-a1b4-9747aa737d19 CephExternalMonHost: 172.16.1.7, 172.16.1.8",
"(undercloud) [stack@director ~]USD rsync -a /usr/share/openstack-tripleo-heat-templates/. ~/templates (undercloud) [stack@director ~]USD cd ~/templates/roles (undercloud) [stack@director roles]USD cat <<EO_TEMPLATE >ControllerPPC64LE.yaml ############################################################################### Role: ControllerPPC64LE # ############################################################################### - name: ControllerPPC64LE description: | Controller role that has all the controller services loaded and handles Database, Messaging and Network functions. CountDefault: 1 tags: - primary - controller networks: - External - InternalApi - Storage - StorageMgmt - Tenant # For systems with both IPv4 and IPv6, you may specify a gateway network for # each, such as ['ControlPlane', 'External'] default_route_networks: ['External'] HostnameFormatDefault: '%stackname%-controllerppc64le-%index%' ImageDefault: ppc64le-overcloud-full ServicesDefault: - OS::TripleO::Services::Aide - OS::TripleO::Services::AuditD - OS::TripleO::Services::CACerts - OS::TripleO::Services::CephClient - OS::TripleO::Services::CephExternal - OS::TripleO::Services::CertmongerUser - OS::TripleO::Services::CinderApi - OS::TripleO::Services::CinderBackendDellPs - OS::TripleO::Services::CinderBackendDellSc - OS::TripleO::Services::CinderBackendDellEMCUnity - OS::TripleO::Services::CinderBackendDellEMCVMAXISCSI - OS::TripleO::Services::CinderBackendDellEMCVNX - OS::TripleO::Services::CinderBackendDellEMCXTREMIOISCSI - OS::TripleO::Services::CinderBackendNetApp - OS::TripleO::Services::CinderBackendScaleIO - OS::TripleO::Services::CinderBackendVRTSHyperScale - OS::TripleO::Services::CinderBackup - OS::TripleO::Services::CinderHPELeftHandISCSI - OS::TripleO::Services::CinderScheduler - OS::TripleO::Services::CinderVolume - OS::TripleO::Services::Collectd - OS::TripleO::Services::Docker - OS::TripleO::Services::Fluentd - OS::TripleO::Services::GlanceApi - OS::TripleO::Services::GlanceRegistry - OS::TripleO::Services::Ipsec - OS::TripleO::Services::Iscsid - OS::TripleO::Services::Kernel - OS::TripleO::Services::Keystone - OS::TripleO::Services::LoginDefs - OS::TripleO::Services::MySQLClient - OS::TripleO::Services::NeutronApi - OS::TripleO::Services::NeutronBgpVpnApi - OS::TripleO::Services::NeutronSfcApi - OS::TripleO::Services::NeutronCorePlugin - OS::TripleO::Services::NeutronDhcpAgent - OS::TripleO::Services::NeutronL2gwAgent - OS::TripleO::Services::NeutronL2gwApi - OS::TripleO::Services::NeutronL3Agent - OS::TripleO::Services::NeutronLbaasv2Agent - OS::TripleO::Services::NeutronLbaasv2Api - OS::TripleO::Services::NeutronLinuxbridgeAgent - OS::TripleO::Services::NeutronMetadataAgent - OS::TripleO::Services::NeutronML2FujitsuCfab - OS::TripleO::Services::NeutronML2FujitsuFossw - OS::TripleO::Services::NeutronOvsAgent - OS::TripleO::Services::NeutronVppAgent - OS::TripleO::Services::Ntp - OS::TripleO::Services::ContainersLogrotateCrond - OS::TripleO::Services::OpenDaylightOvs - OS::TripleO::Services::Rhsm - OS::TripleO::Services::RsyslogSidecar - OS::TripleO::Services::Securetty - OS::TripleO::Services::SensuClient - OS::TripleO::Services::SkydiveAgent - OS::TripleO::Services::Snmp - OS::TripleO::Services::Sshd - OS::TripleO::Services::SwiftProxy - OS::TripleO::Services::SwiftDispersion - OS::TripleO::Services::SwiftRingBuilder - OS::TripleO::Services::SwiftStorage - OS::TripleO::Services::Timezone - OS::TripleO::Services::TripleoFirewall - OS::TripleO::Services::TripleoPackages - OS::TripleO::Services::Tuned - 
OS::TripleO::Services::Vpp - OS::TripleO::Services::OVNController - OS::TripleO::Services::OVNMetadataAgent - OS::TripleO::Services::Ptp EO_TEMPLATE (undercloud) [stack@director roles]USD sed -i~ -e '/OS::TripleO::Services::\\(Cinder\\|Glance\\|Swift\\|Keystone\\|Neutron\\)/d' Controller.yaml (undercloud) [stack@director roles]USD cd ../ (undercloud) [stack@director templates]USD openstack overcloud roles generate --roles-path roles -o roles_data.yaml Controller Compute ComputePPC64LE ControllerPPC64LE BlockStorage ObjectStorage CephStorage"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/director_installation_and_usage/appe-osp_on_power |
C.21. Transactions | C.21. Transactions org.infinispan.interceptors.TxInterceptor The Transactions component manages the cache's participation in JTA transactions. Table C.33. Attributes Name Description Type Writable commits Number of transaction commits performed since last reset. long No prepares Number of transaction prepares performed since last reset. long No rollbacks Number of transaction rollbacks performed since last reset. long No statisticsEnabled Enables or disables the gathering of statistics by this component. boolean Yes Table C.34. Operations Name Description Signature resetStatistics Resets statistics gathered by this component. void resetStatistics() | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/transactions2
4.4. Managing Cluster Nodes | 4.4. Managing Cluster Nodes The following sections describe the commands you use to manage cluster nodes, including commands to start and stop cluster services and to add and remove cluster nodes. 4.4.1. Stopping Cluster Services The following command stops cluster services on the specified node or nodes. As with the pcs cluster start , the --all option stops cluster services on all nodes and if you do not specify any nodes, cluster services are stopped on the local node only. You can force a stop of cluster services on the local node with the following command, which performs a kill -9 command. 4.4.2. Enabling and Disabling Cluster Services Use the following command to configure the cluster services to run on startup on the specified node or nodes. If you specify the --all option, the command enables cluster services on all nodes. If you do not specify any nodes, cluster services are enabled on the local node only. Use the following command to configure the cluster services not to run on startup on the specified node or nodes. If you specify the --all option, the command disables cluster services on all nodes. If you do not specify any nodes, cluster services are disabled on the local node only. 4.4.3. Adding Cluster Nodes Note It is highly recommended that you add nodes to existing clusters only during a production maintenance window. This allows you to perform appropriate resource and deployment testing for the new node and its fencing configuration. Use the following procedure to add a new node to an existing cluster. In this example, the existing cluster nodes are clusternode-01.example.com , clusternode-02.example.com , and clusternode-03.example.com . The new node is newnode.example.com . On the new node to add to the cluster, perform the following tasks. Install the cluster packages. If the cluster uses SBD, the Booth ticket manager, or a quorum device, you must manually install the respective packages ( sbd , booth-site , corosync-qdevice ) on the new node as well. If you are running the firewalld daemon, execute the following commands to enable the ports that are required by the Red Hat High Availability Add-On. Set a password for the user ID hacluster . It is recommended that you use the same password for each node in the cluster. Execute the following commands to start the pcsd service and to enable pcsd at system start. On a node in the existing cluster, perform the following tasks. Authenticate user hacluster on the new cluster node. Add the new node to the existing cluster. This command also syncs the cluster configuration file corosync.conf to all nodes in the cluster, including the new node you are adding. On the new node to add to the cluster, perform the following tasks. Start and enable cluster services on the new node. Ensure that you configure and test a fencing device for the new cluster node. For information on configuring fencing devices, see Chapter 5, Fencing: Configuring STONITH . 4.4.4. Removing Cluster Nodes The following command shuts down the specified node and removes it from the cluster configuration file, corosync.conf , on all of the other nodes in the cluster. For information on removing all information about the cluster from the cluster nodes entirely, thereby destroying the cluster permanently, see Section 4.6, "Removing the Cluster Configuration" . 4.4.5. Standby Mode The following command puts the specified node into standby mode. The specified node is no longer able to host resources. 
Any resources currently active on the node will be moved to another node. If you specify the --all , this command puts all nodes into standby mode. You can use this command when updating a resource's packages. You can also use this command when testing a configuration, to simulate recovery without actually shutting down a node. The following command removes the specified node from standby mode. After running this command, the specified node is then able to host resources. If you specify the --all , this command removes all nodes from standby mode. Note that when you execute the pcs cluster standby command, this prevents resources from running on the indicated node. When you execute the pcs cluster unstandby command, this allows resources to run on the indicated node. This does not necessarily move the resources back to the indicated node; where the resources can run at that point depends on how you have configured your resources initially. For information on resource constraints, see Chapter 7, Resource Constraints . | [
"pcs cluster stop [--all] [ node ] [...]",
"pcs cluster kill",
"pcs cluster enable [--all] [ node ] [...]",
"pcs cluster disable [--all] [ node ] [...]",
"yum install -y pcs fence-agents-all",
"firewall-cmd --permanent --add-service=high-availability firewall-cmd --add-service=high-availability",
"passwd hacluster Changing password for user hacluster. New password: Retype new password: passwd: all authentication tokens updated successfully.",
"systemctl start pcsd.service systemctl enable pcsd.service",
"pcs cluster auth newnode.example.com Username: hacluster Password: newnode.example.com: Authorized",
"pcs cluster node add newnode.example.com",
"pcs cluster start Starting Cluster pcs cluster enable",
"pcs cluster node remove node",
"pcs cluster standby node | --all",
"pcs cluster unstandby node | --all"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-clusternodemanage-haar |
Chapter 4. Clair on OpenShift Container Platform | Chapter 4. Clair on OpenShift Container Platform To set up Clair v4 (Clair) on a Red Hat Quay deployment on OpenShift Container Platform, it is recommended to use the Red Hat Quay Operator. By default, the Red Hat Quay Operator installs or upgrades a Clair deployment along with your Red Hat Quay deployment and configures Clair automatically. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/vulnerability_reporting_with_clair_on_red_hat_quay/clair-quay-operator-overview
Dynamic plugins reference | Dynamic plugins reference Red Hat Developer Hub 1.4 Red Hat Customer Content Services | [
"======= Skipping disabled dynamic plugin ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic disabled: false"
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html-single/dynamic_plugins_reference/index |
Chapter 23. OVN-Kubernetes network plugin | Chapter 23. OVN-Kubernetes network plugin 23.1. About the OVN-Kubernetes network plugin The OpenShift Container Platform cluster uses a virtualized network for pod and service networks. Part of Red Hat OpenShift Networking, the OVN-Kubernetes network plugin is the default network provider for OpenShift Container Platform. OVN-Kubernetes is based on Open Virtual Network (OVN) and provides an overlay-based networking implementation. A cluster that uses the OVN-Kubernetes plugin also runs Open vSwitch (OVS) on each node. OVN configures OVS on each node to implement the declared network configuration. Note OVN-Kubernetes is the default networking solution for OpenShift Container Platform and single-node OpenShift deployments. OVN-Kubernetes, which arose from the OVS project, uses many of the same constructs, such as open flow rules, to determine how packets travel through the network. For more information, see the Open Virtual Network website . OVN-Kubernetes is a series of daemons for OVS that translate virtual network configurations into OpenFlow rules. OpenFlow is a protocol for communicating with network switches and routers, providing a means for remotely controlling the flow of network traffic on a network device, allowing network administrators to configure, manage, and monitor the flow of network traffic. OVN-Kubernetes provides advanced functionality that is not available with OpenFlow alone. OVN supports distributed virtual routing, distributed logical switches, access control, DHCP and DNS. OVN implements distributed virtual routing within logic flows, which equate to open flows. For example, if a pod sends out a DHCP broadcast request on the network, a logic flow rule matches that packet and responds with a gateway, a DNS server, an IP address, and so on. OVN-Kubernetes runs a daemon on each node. There are daemon sets for the databases and for the OVN controller that run on every node. The OVN controller programs the Open vSwitch daemon on the nodes to support the network provider features: egress IPs, firewalls, routers, hybrid networking, IPsec encryption, IPv6, network policy, network policy logs, hardware offloading and multicast. 23.1.1. OVN-Kubernetes purpose The OVN-Kubernetes network plugin is an open-source, fully-featured Kubernetes CNI plugin that uses Open Virtual Network (OVN) to manage network traffic flows. OVN is a community developed, vendor-agnostic network virtualization solution. The OVN-Kubernetes network plugin: Uses OVN (Open Virtual Network) to manage network traffic flows. OVN is a community developed, vendor-agnostic network virtualization solution. Implements Kubernetes network policy support, including ingress and egress rules. Uses the Geneve (Generic Network Virtualization Encapsulation) protocol rather than VXLAN to create an overlay network between nodes. The OVN-Kubernetes network plugin provides the following advantages over OpenShift SDN. Full support for IPv6 single-stack and IPv4/IPv6 dual-stack networking on supported platforms Support for hybrid clusters with both Linux and Microsoft Windows workloads Optional IPsec encryption of intra-cluster communications Offload of network data processing from host CPU to compatible network cards and data processing units (DPUs) 23.1.2.
Supported network plugin feature matrix Red Hat OpenShift Networking offers two options for the network plugin, OpenShift SDN and OVN-Kubernetes, for the network plugin. The following table summarizes the current feature support for both network plugins: Table 23.1. Default CNI network plugin feature comparison Feature OpenShift SDN OVN-Kubernetes Egress IPs Supported Supported Egress firewall Supported Supported [1] Egress router Supported Supported [2] Hybrid networking Not supported Supported IPsec encryption for intra-cluster communication Not supported Supported IPv4 single-stack Supported Supported IPv6 single-stack Not supported Supported [3] IPv4/IPv6 dual-stack Not Supported Supported [4] IPv6/IPv4 dual-stack Not supported Supported [5] Kubernetes network policy Supported Supported Kubernetes network policy logs Not supported Supported Hardware offloading Not supported Supported Multicast Supported Supported Egress firewall is also known as egress network policy in OpenShift SDN. This is not the same as network policy egress. Egress router for OVN-Kubernetes supports only redirect mode. IPv6 single-stack networking on a bare-metal platform. IPv4/IPv6 dual-stack networking on bare-metal, IBM Power(R), and IBM Z(R) platforms. IPv6/IPv4 dual-stack networking on bare-metal and IBM Power(R) platforms. 23.1.3. OVN-Kubernetes IPv6 and dual-stack limitations The OVN-Kubernetes network plugin has the following limitations: For clusters configured for dual-stack networking, both IPv4 and IPv6 traffic must use the same network interface as the default gateway. If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state. If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml , the status field contains more than one message about the default gateway, as shown in the following output: I1006 16:09:50.985852 60651 helper_linux.go:73] Found default gateway interface br-ex 192.168.127.1 I1006 16:09:50.985923 60651 helper_linux.go:73] Found default gateway interface ens4 fe80::5054:ff:febe:bcd4 F1006 16:09:50.985939 60651 ovnkube.go:130] multiple gateway interfaces detected: br-ex ens4 The only resolution is to reconfigure the host networking so that both IP families use the same network interface for the default gateway. For clusters configured for dual-stack networking, both the IPv4 and IPv6 routing tables must contain the default gateway. If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state. If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml , the status field contains more than one message about the default gateway, as shown in the following output: I0512 19:07:17.589083 108432 helper_linux.go:74] Found default gateway interface br-ex 192.168.123.1 F0512 19:07:17.589141 108432 ovnkube.go:133] failed to get default gateway interface The only resolution is to reconfigure the host networking so that both IP families contain the default gateway. 23.1.4. Session affinity Session affinity is a feature that applies to Kubernetes Service objects. You can use session affinity if you want to ensure that each time you connect to a <service_VIP>:<Port>, the traffic is always load balanced to the same back end. For more information, including how to set session affinity based on a client's IP address, see Session affinity . 
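To illustrate the session affinity behavior described above, the following is a minimal sketch of a Service manifest that pins each client IP address to the same back end; the service name, selector, and ports are hypothetical placeholders, and the timeoutSeconds field is the stickiness timeout discussed in the next subsection.

# Sketch: create a Service with client IP based session affinity.
# The name, selector, and ports are placeholders; timeoutSeconds controls how long
# the client-to-backend mapping is kept after the last packet from that client.
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: web-sticky
spec:
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
EOF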
Stickiness timeout for session affinity The OVN-Kubernetes network plugin for OpenShift Container Platform calculates the stickiness timeout for a session from a client based on the last packet. For example, if you run a curl command 10 times, the sticky session timer starts from the tenth packet, not the first. As a result, if the client is continuously contacting the service, then the session never times out. The timeout starts when the service has not received a packet for the amount of time set by the timeoutSeconds parameter. Additional resources Configuring an egress firewall for a project About network policy Logging network policy events Enabling multicast for a project Configuring IPsec encryption Network [operator.openshift.io/v1] 23.2. OVN-Kubernetes architecture 23.2.1. Introduction to OVN-Kubernetes architecture The following diagram shows the OVN-Kubernetes architecture. Figure 23.1. OVN-Kubernetes architecture The key components are: Cloud Management System (CMS) - A platform specific client for OVN that provides a CMS specific plugin for OVN integration. The plugin translates the cloud management system's concept of the logical network configuration, stored in the CMS configuration database in a CMS-specific format, into an intermediate representation understood by OVN. OVN Northbound database ( nbdb ) - Stores the logical network configuration passed by the CMS plugin. OVN Southbound database ( sbdb ) - Stores the physical and logical network configuration state for the Open vSwitch (OVS) system on each node, including tables that bind them. ovn-northd - This is the intermediary client between nbdb and sbdb . It translates the logical network configuration in terms of conventional network concepts, taken from the nbdb , into logical data path flows in the sbdb below it. The container name is northd and it runs in the ovnkube-master pods. ovn-controller - This is the OVN agent that interacts with OVS and hypervisors, for any information or update that is needed for sbdb . The ovn-controller reads logical flows from the sbdb , translates them into OpenFlow flows and sends them to the node's OVS daemon. The container name is ovn-controller and it runs in the ovnkube-node pods. The OVN northbound database has the logical network configuration passed down to it by the cloud management system (CMS). The OVN northbound database contains the current desired state of the network, presented as a collection of logical ports, logical switches, logical routers, and more. The ovn-northd ( northd container) connects to the OVN northbound database and the OVN southbound database. It translates the logical network configuration in terms of conventional network concepts, taken from the OVN northbound database, into logical data path flows in the OVN southbound database. The OVN southbound database has physical and logical representations of the network and binding tables that link them together. Every node in the cluster is represented in the southbound database, and you can see the ports that are connected to it. It also contains all the logic flows. The logic flows are shared with the ovn-controller process that runs on each node, and the ovn-controller turns them into OpenFlow rules to program Open vSwitch . The Kubernetes control plane nodes each contain an ovnkube-master pod which hosts containers for the OVN northbound and southbound databases. All OVN northbound databases form a Raft cluster and all southbound databases form a separate Raft cluster.
At any given time a single ovnkube-master is the leader and the other ovnkube-master pods are followers. 23.2.2. Listing all resources in the OVN-Kubernetes project Finding the resources and containers that run in the OVN-Kubernetes project is important to help you understand the OVN-Kubernetes networking implementation. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift CLI ( oc ) installed. Procedure Run the following command to get all resources, endpoints, and ConfigMaps in the OVN-Kubernetes project: USD oc get all,ep,cm -n openshift-ovn-kubernetes Example output NAME READY STATUS RESTARTS AGE pod/ovnkube-master-9g7zt 6/6 Running 1 (48m ago) 57m pod/ovnkube-master-lqs4v 6/6 Running 0 57m pod/ovnkube-master-vxhtq 6/6 Running 0 57m pod/ovnkube-node-9k9kc 5/5 Running 0 57m pod/ovnkube-node-jg52r 5/5 Running 0 51m pod/ovnkube-node-k8wf7 5/5 Running 0 57m pod/ovnkube-node-tlwk6 5/5 Running 0 47m pod/ovnkube-node-xsvnk 5/5 Running 0 57m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/ovn-kubernetes-master ClusterIP None <none> 9102/TCP 57m service/ovn-kubernetes-node ClusterIP None <none> 9103/TCP,9105/TCP 57m service/ovnkube-db ClusterIP None <none> 9641/TCP,9642/TCP 57m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/ovnkube-master 3 3 3 3 3 beta.kubernetes.io/os=linux,node-role.kubernetes.io/master= 57m daemonset.apps/ovnkube-node 5 5 5 5 5 beta.kubernetes.io/os=linux 57m NAME ENDPOINTS AGE endpoints/ovn-kubernetes-master 10.0.132.11:9102,10.0.151.18:9102,10.0.192.45:9102 57m endpoints/ovn-kubernetes-node 10.0.132.11:9105,10.0.143.72:9105,10.0.151.18:9105 + 7 more... 57m endpoints/ovnkube-db 10.0.132.11:9642,10.0.151.18:9642,10.0.192.45:9642 + 3 more... 57m NAME DATA AGE configmap/control-plane-status 1 55m configmap/kube-root-ca.crt 1 57m configmap/openshift-service-ca.crt 1 57m configmap/ovn-ca 1 57m configmap/ovn-kubernetes-master 0 55m configmap/ovnkube-config 1 57m configmap/signer-ca 1 57m There are three ovnkube-masters that run on the control plane nodes, and two daemon sets used to deploy the ovnkube-master and ovnkube-node pods. There is one ovnkube-node pod for each node in the cluster. In this example, there are 5, and since there is one ovnkube-node per node in the cluster, there are five nodes in the cluster. The ovnkube-config ConfigMap has the OpenShift Container Platform OVN-Kubernetes configurations started by online-master and ovnkube-node . The ovn-kubernetes-master ConfigMap has the information of the current online master leader. List all the containers in the ovnkube-master pods by running the following command: USD oc get pods ovnkube-master-9g7zt \ -o jsonpath='{.spec.containers[*].name}' -n openshift-ovn-kubernetes Expected output northd nbdb kube-rbac-proxy sbdb ovnkube-master ovn-dbchecker The ovnkube-master pod is made up of several containers. It is responsible for hosting the northbound database ( nbdb container), the southbound database ( sbdb container), watching for cluster events for pods, egressIP, namespaces, services, endpoints, egress firewall, and network policy and writing them to the northbound database ( ovnkube-master pod), as well as managing pod subnet allocation to nodes. 
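Because there is one ovnkube-node pod per node, as noted above, you can cross-check the pod count against the node count. The following is a minimal sketch of that check; it only assumes the app=ovnkube-node label that is used elsewhere in this chapter.

# The two counts should match: one ovnkube-node pod per cluster node
echo "nodes:        $(oc get nodes --no-headers | wc -l)"
echo "ovnkube-node: $(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers | wc -l)"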
List all the containers in the ovnkube-node pods by running the following command: USD oc get pods ovnkube-node-jg52r \ -o jsonpath='{.spec.containers[*].name}' -n openshift-ovn-kubernetes Expected output ovn-controller ovn-acl-logging kube-rbac-proxy kube-rbac-proxy-ovn-metrics ovnkube-node The ovnkube-node pod has a container ( ovn-controller ) that resides on each OpenShift Container Platform node. Each node's ovn-controller connects the OVN northbound to the OVN southbound database to learn about the OVN configuration. The ovn-controller connects southbound to ovs-vswitchd as an OpenFlow controller, for control over network traffic, and to the local ovsdb-server to allow it to monitor and control Open vSwitch configuration. 23.2.3. Listing the OVN-Kubernetes northbound database contents To understand logic flow rules you need to examine the northbound database and understand what objects are there to see how they are translated into logic flow rules. The up to date information is present on the OVN Raft leader and this procedure describes how to find the Raft leader and subsequently query it to list the OVN northbound database contents. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift CLI ( oc ) installed. Procedure Find the OVN Raft leader for the northbound database. Note The Raft leader stores the most up to date information. List the pods by running the following command: USD oc get po -n openshift-ovn-kubernetes Example output NAME READY STATUS RESTARTS AGE ovnkube-master-7j97q 6/6 Running 2 (148m ago) 149m ovnkube-master-gt4ms 6/6 Running 1 (140m ago) 147m ovnkube-master-mk6p6 6/6 Running 0 148m ovnkube-node-8qvtr 5/5 Running 0 149m ovnkube-node-fqdc9 5/5 Running 0 149m ovnkube-node-tlfwv 5/5 Running 0 149m ovnkube-node-wlwkn 5/5 Running 0 142m Choose one of the master pods at random and run the following command: USD oc exec -n openshift-ovn-kubernetes ovnkube-master-7j97q \ -- /usr/bin/ovn-appctl -t /var/run/ovn/ovnnb_db.ctl \ --timeout=3 cluster/status OVN_Northbound Example output Defaulted container "northd" out of: northd, nbdb, kube-rbac-proxy, sbdb, ovnkube-master, ovn-dbchecker 1c57 Name: OVN_Northbound Cluster ID: c48a (c48aa5c0-a704-4c77-a066-24fe99d9b338) Server ID: 1c57 (1c57b6fc-2849-49b7-8679-fbf18bafe339) Address: ssl:10.0.147.219:9643 Status: cluster member Role: follower 1 Term: 5 Leader: 2b4f 2 Vote: unknown Election timer: 10000 Log: [2, 3018] Entries not yet committed: 0 Entries not yet applied: 0 Connections: ->0000 ->0000 <-8844 <-2b4f Disconnections: 0 Servers: 1c57 (1c57 at ssl:10.0.147.219:9643) (self) 8844 (8844 at ssl:10.0.163.212:9643) last msg 8928047 ms ago 2b4f (2b4f at ssl:10.0.242.240:9643) last msg 620 ms ago 3 1 This pod is identified as a follower 2 The leader is identified as 2b4f 3 The 2b4f is on IP address 10.0.242.240 Find the ovnkube-master pod running on IP Address 10.0.242.240 using the following command: USD oc get po -o wide -n openshift-ovn-kubernetes | grep 10.0.242.240 | grep -v ovnkube-node Example output ovnkube-master-gt4ms 6/6 Running 1 (143m ago) 150m 10.0.242.240 ip-10-0-242-240.ec2.internal <none> <none> The ovnkube-master-gt4ms pod runs on IP Address 10.0.242.240. Run the following command to show all the objects in the northbound database: USD oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms \ -c northd -- ovn-nbctl show The output is too long to list here. The list includes the NAT rules, logical switches, load balancers and so on. 
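If you need to repeat the Raft leader lookup from the start of this procedure, you can script it rather than reading cluster/status by hand. The following is a minimal sketch that prints the Raft role of each ovnkube-master pod; the pod label and container name follow the examples above, and the grep on the Role: line is an assumption based on the cluster/status output format shown earlier.

# Print the northbound database Raft role (leader or follower) for every ovnkube-master pod
for pod in $(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-master \
    -o jsonpath='{.items[*].metadata.name}'); do
  role=$(oc exec -n openshift-ovn-kubernetes "$pod" -c northd -- \
    /usr/bin/ovn-appctl -t /var/run/ovn/ovnnb_db.ctl --timeout=3 \
    cluster/status OVN_Northbound 2>/dev/null | grep '^Role:')
  echo "$pod: $role"
done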
Run the following command to display the options available with the command ovn-nbctl : USD oc exec -n openshift-ovn-kubernetes -it ovnkube-master-mk6p6 \ -c northd ovn-nbctl --help You can narrow down and focus on specific components by using some of the following commands: Run the following command to show the list of logical routers: USD oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms \ -c northd -- ovn-nbctl lr-list Example output f971f1f3-5112-402f-9d1e-48f1d091ff04 (GR_ip-10-0-145-205.ec2.internal) 69c992d8-a4cf-429e-81a3-5361209ffe44 (GR_ip-10-0-147-219.ec2.internal) 7d164271-af9e-4283-b84a-48f2a44851cd (GR_ip-10-0-163-212.ec2.internal) 111052e3-c395-408b-97b2-8dd0a20a29a5 (GR_ip-10-0-165-9.ec2.internal) ed50ce33-df5d-48e8-8862-2df6a59169a0 (GR_ip-10-0-209-170.ec2.internal) f44e2a96-8d1e-4a4d-abae-ed8728ac6851 (GR_ip-10-0-242-240.ec2.internal) ef3d0057-e557-4b1a-b3c6-fcc3463790b0 (ovn_cluster_router) Note From this output you can see there is router on each node plus an ovn_cluster_router . Run the following command to show the list of logical switches: USD oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms \ -c northd -- ovn-nbctl ls-list Example output 82808c5c-b3bc-414a-bb59-8fec4b07eb14 (ext_ip-10-0-145-205.ec2.internal) 3d22444f-0272-4c51-afc6-de9e03db3291 (ext_ip-10-0-147-219.ec2.internal) bf73b9df-59ab-4c58-a456-ce8205b34ac5 (ext_ip-10-0-163-212.ec2.internal) bee1e8d0-ec87-45eb-b98b-63f9ec213e5e (ext_ip-10-0-165-9.ec2.internal) 812f08f2-6476-4abf-9a78-635f8516f95e (ext_ip-10-0-209-170.ec2.internal) f65e710b-32f9-482b-8eab-8d96a44799c1 (ext_ip-10-0-242-240.ec2.internal) 84dad700-afb8-4129-86f9-923a1ddeace9 (ip-10-0-145-205.ec2.internal) 1b7b448b-e36c-4ca3-9f38-4a2cf6814bfd (ip-10-0-147-219.ec2.internal) d92d1f56-2606-4f23-8b6a-4396a78951de (ip-10-0-163-212.ec2.internal) 6864a6b2-de15-4de3-92d8-f95014b6f28f (ip-10-0-165-9.ec2.internal) c26bf618-4d7e-4afd-804f-1a2cbc96ec6d (ip-10-0-209-170.ec2.internal) ab9a4526-44ed-4f82-ae1c-e20da04947d9 (ip-10-0-242-240.ec2.internal) a8588aba-21da-4276-ba0f-9d68e88911f0 (join) Note From this output you can see there is an ext switch for each node plus switches with the node name itself and a join switch. 
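To drill into one of the switches listed above, you can also list its logical ports with ovn-nbctl lsp-list , which appears in the command reference table later in this section. The following is a minimal sketch that reuses the join switch name from the example output and the same pod name as the previous steps; substitute a switch name or UUID and a pod name from your own cluster.

# List the logical ports attached to the "join" switch shown in the ls-list output above
oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms \
  -c northd -- ovn-nbctl lsp-list join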
Run the following command to show the list of load balancers: USD oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms \ -c northd -- ovn-nbctl lb-list Example output UUID LB PROTO VIP IPs f0fb50f9-4968-4b55-908c-616bae4db0a2 Service_default/ tcp 172.30.0.1:443 10.0.147.219:6443,10.0.163.212:6443,169.254.169.2:6443 0dc42012-4f5b-432e-ae01-2cc4bfe81b00 Service_default/ tcp 172.30.0.1:443 10.0.147.219:6443,169.254.169.2:6443,10.0.242.240:6443 f7fff5d5-5eff-4a40-98b1-3a4ba8f7f69c Service_default/ tcp 172.30.0.1:443 169.254.169.2:6443,10.0.163.212:6443,10.0.242.240:6443 12fe57a0-50a4-4a1b-ac10-5f288badee07 Service_default/ tcp 172.30.0.1:443 10.0.147.219:6443,10.0.163.212:6443,10.0.242.240:6443 3f137fbf-0b78-4875-ba44-fbf89f254cf7 Service_openshif tcp 172.30.23.153:443 10.130.0.14:8443 174199fe-0562-4141-b410-12094db922a7 Service_openshif tcp 172.30.69.51:50051 10.130.0.84:50051 5ee2d4bd-c9e2-4d16-a6df-f54cd17c9ac3 Service_openshif tcp 172.30.143.87:9001 10.0.145.205:9001,10.0.147.219:9001,10.0.163.212:9001,10.0.165.9:9001,10.0.209.170:9001,10.0.242.240:9001 a056ae3d-83f8-45bc-9c80-ef89bce7b162 Service_openshif tcp 172.30.164.74:443 10.0.147.219:6443,10.0.163.212:6443,10.0.242.240:6443 bac51f3d-9a6f-4f5e-ac02-28fd343a332a Service_openshif tcp 172.30.0.10:53 10.131.0.6:5353 tcp 172.30.0.10:9154 10.131.0.6:9154 48105bbc-51d7-4178-b975-417433f9c20a Service_openshif tcp 172.30.26.159:2379 10.0.147.219:2379,169.254.169.2:2379,10.0.242.240:2379 tcp 172.30.26.159:9979 10.0.147.219:9979,169.254.169.2:9979,10.0.242.240:9979 7de2b8fc-342a-415f-ac13-1a493f4e39c0 Service_openshif tcp 172.30.53.219:443 10.128.0.7:8443 tcp 172.30.53.219:9192 10.128.0.7:9192 2cef36bc-d720-4afb-8d95-9350eff1d27a Service_openshif tcp 172.30.81.66:443 10.128.0.23:8443 365cb6fb-e15e-45a4-a55b-21868b3cf513 Service_openshif tcp 172.30.96.51:50051 10.130.0.19:50051 41691cbb-ec55-4cdb-8431-afce679c5e8d Service_openshif tcp 172.30.98.218:9099 169.254.169.2:9099 82df10ba-8143-400b-977a-8f5f416a4541 Service_openshif tcp 172.30.26.159:2379 10.0.147.219:2379,10.0.163.212:2379,169.254.169.2:2379 tcp 172.30.26.159:9979 10.0.147.219:9979,10.0.163.212:9979,169.254.169.2:9979 debe7f3a-39a8-490e-bc0a-ebbfafdffb16 Service_openshif tcp 172.30.23.244:443 10.128.0.48:8443,10.129.0.27:8443,10.130.0.45:8443 8a749239-02d9-4dc2-8737-716528e0da7b Service_openshif tcp 172.30.124.255:8443 10.128.0.14:8443 880c7c78-c790-403d-a3cb-9f06592717a3 Service_openshif tcp 172.30.0.10:53 10.130.0.20:5353 tcp 172.30.0.10:9154 10.130.0.20:9154 d2f39078-6751-4311-a161-815bbaf7f9c7 Service_openshif tcp 172.30.26.159:2379 169.254.169.2:2379,10.0.163.212:2379,10.0.242.240:2379 tcp 172.30.26.159:9979 169.254.169.2:9979,10.0.163.212:9979,10.0.242.240:9979 30948278-602b-455c-934a-28e64c46de12 Service_openshif tcp 172.30.157.35:9443 10.130.0.43:9443 2cc7e376-7c02-4a82-89e8-dfa1e23fb003 Service_openshif tcp 172.30.159.212:17698 10.128.0.48:17698,10.129.0.27:17698,10.130.0.45:17698 e7d22d35-61c2-40c2-bc30-265cff8ed18d Service_openshif tcp 172.30.143.87:9001 10.0.145.205:9001,10.0.147.219:9001,10.0.163.212:9001,10.0.165.9:9001,10.0.209.170:9001,169.254.169.2:9001 75164e75-e0c5-40fb-9636-bfdbf4223a02 Service_openshif tcp 172.30.150.68:1936 10.129.4.8:1936,10.131.0.10:1936 tcp 172.30.150.68:443 10.129.4.8:443,10.131.0.10:443 tcp 172.30.150.68:80 10.129.4.8:80,10.131.0.10:80 7bc4ee74-dccf-47e9-9149-b011f09aff39 Service_openshif tcp 172.30.164.74:443 10.0.147.219:6443,10.0.163.212:6443,169.254.169.2:6443 0db59e74-1cc6-470c-bf44-57c520e0aa8f Service_openshif tcp 
10.0.163.212:31460 tcp 10.0.163.212:32361 c300e134-018c-49af-9f84-9deb1d0715f8 Service_openshif tcp 172.30.42.244:50051 10.130.0.47:50051 5e352773-429b-4881-afb3-a13b7ba8b081 Service_openshif tcp 172.30.244.66:443 10.129.0.8:8443,10.130.0.8:8443 54b82d32-1939-4465-a87d-f26321442a7a Service_openshif tcp 172.30.12.9:8443 10.128.0.35:8443 Note From this truncated output you can see there are many OVN-Kubernetes load balancers. Load balancers in OVN-Kubernetes are representations of services. 23.2.4. Command line arguments for ovn-nbctl to examine northbound database contents The following table describes the command line arguments that can be used with ovn-nbctl to examine the contents of the northbound database. Table 23.2. Command line arguments to examine northbound database contents Argument Description ovn-nbctl show An overview of the northbound database contents. ovn-nbctl show <switch_or_router> Show the details associated with the specified switch or router. ovn-nbctl lr-list Show the logical routers. ovn-nbctl lrp-list <router> Using the router information from ovn-nbctl lr-list to show the router ports. ovn-nbctl lr-nat-list <router> Show network address translation details for the specified router. ovn-nbctl ls-list Show the logical switches ovn-nbctl lsp-list <switch> Using the switch information from ovn-nbctl ls-list to show the switch port. ovn-nbctl lsp-get-type <port> Get the type for the logical port. ovn-nbctl lb-list Show the load balancers. 23.2.5. Listing the OVN-Kubernetes southbound database contents Logic flow rules are stored in the southbound database that is a representation of your infrastructure. The up to date information is present on the OVN Raft leader and this procedure describes how to find the Raft leader and query it to list the OVN southbound database contents. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift CLI ( oc ) installed. Procedure Find the OVN Raft leader for the southbound database. Note The Raft leader stores the most up to date information. 
List the pods by running the following command: USD oc get po -n openshift-ovn-kubernetes Example output NAME READY STATUS RESTARTS AGE ovnkube-master-7j97q 6/6 Running 2 (134m ago) 135m ovnkube-master-gt4ms 6/6 Running 1 (126m ago) 133m ovnkube-master-mk6p6 6/6 Running 0 134m ovnkube-node-8qvtr 5/5 Running 0 135m ovnkube-node-bqztb 5/5 Running 0 117m ovnkube-node-fqdc9 5/5 Running 0 135m ovnkube-node-tlfwv 5/5 Running 0 135m ovnkube-node-wlwkn 5/5 Running 0 128m Choose one of the master pods at random and run the following command to find the OVN southbound Raft leader: USD oc exec -n openshift-ovn-kubernetes ovnkube-master-7j97q \ -- /usr/bin/ovn-appctl -t /var/run/ovn/ovnsb_db.ctl \ --timeout=3 cluster/status OVN_Southbound Example output Defaulted container "northd" out of: northd, nbdb, kube-rbac-proxy, sbdb, ovnkube-master, ovn-dbchecker 1930 Name: OVN_Southbound Cluster ID: f772 (f77273c0-7986-42dd-bd3c-a9f18e25701f) Server ID: 1930 (1930f4b7-314b-406f-9dcb-b81fe2729ae1) Address: ssl:10.0.147.219:9644 Status: cluster member Role: follower 1 Term: 3 Leader: 7081 2 Vote: unknown Election timer: 16000 Log: [2, 2423] Entries not yet committed: 0 Entries not yet applied: 0 Connections: ->0000 ->7145 <-7081 <-7145 Disconnections: 0 Servers: 7081 (7081 at ssl:10.0.163.212:9644) last msg 59 ms ago 3 1930 (1930 at ssl:10.0.147.219:9644) (self) 7145 (7145 at ssl:10.0.242.240:9644) last msg 7871735 ms ago 1 This pod is identified as a follower 2 The leader is identified as 7081 3 The 7081 is on IP address 10.0.163.212 Find the ovnkube-master pod running on IP Address 10.0.163.212 using the following command: USD oc get po -o wide -n openshift-ovn-kubernetes | grep 10.0.163.212 | grep -v ovnkube-node Example output ovnkube-master-mk6p6 6/6 Running 0 136m 10.0.163.212 ip-10-0-163-212.ec2.internal <none> <none> The ovnkube-master-mk6p6 pod runs on IP Address 10.0.163.212. Run the following command to show all the information stored in the southbound database: USD oc exec -n openshift-ovn-kubernetes -it ovnkube-master-mk6p6 \ -c northd -- ovn-sbctl show Example output Chassis "8ca57b28-9834-45f0-99b0-96486c22e1be" hostname: ip-10-0-156-16.ec2.internal Encap geneve ip: "10.0.156.16" options: {csum="true"} Port_Binding k8s-ip-10-0-156-16.ec2.internal Port_Binding etor-GR_ip-10-0-156-16.ec2.internal Port_Binding jtor-GR_ip-10-0-156-16.ec2.internal Port_Binding openshift-ingress-canary_ingress-canary-hsblx Port_Binding rtoj-GR_ip-10-0-156-16.ec2.internal Port_Binding openshift-monitoring_prometheus-adapter-658fc5967-9l46x Port_Binding rtoe-GR_ip-10-0-156-16.ec2.internal Port_Binding openshift-multus_network-metrics-daemon-77nvz Port_Binding openshift-ingress_router-default-64fd8c67c7-df598 Port_Binding openshift-dns_dns-default-ttpcq Port_Binding openshift-monitoring_alertmanager-main-0 Port_Binding openshift-e2e-loki_loki-promtail-g2pbh Port_Binding openshift-network-diagnostics_network-check-target-m6tn4 Port_Binding openshift-monitoring_thanos-querier-75b5cf8dcb-qf8qj Port_Binding cr-rtos-ip-10-0-156-16.ec2.internal Port_Binding openshift-image-registry_image-registry-7b7bc44566-mp9b8 This detailed output shows the chassis and the ports that are attached to the chassis which in this case are all of the router ports and anything that runs like host networking. Any pods communicate out to the wider network using source network address translation (SNAT). Their IP address is translated into the IP address of the node that the pod is running on and then sent out into the network. 
In addition to the chassis information the southbound database has all the logic flows and those logic flows are then sent to the ovn-controller running on each of the nodes. The ovn-controller translates the logic flows into open flow rules and ultimately programs OpenvSwitch so that your pods can then follow open flow rules and make it out of the network. Run the following command to display the options available with the command ovn-sbctl : USD oc exec -n openshift-ovn-kubernetes -it ovnkube-master-mk6p6 \ -c northd -- ovn-sbctl --help 23.2.6. Command line arguments for ovn-sbctl to examine southbound database contents The following table describes the command line arguments that can be used with ovn-sbctl to examine the contents of the southbound database. Table 23.3. Command line arguments to examine southbound database contents Argument Description ovn-sbctl show Overview of the southbound database contents. ovn-sbctl list Port_Binding <port> List the contents of southbound database for a the specified port . ovn-sbctl dump-flows List the logical flows. 23.2.7. OVN-Kubernetes logical architecture OVN is a network virtualization solution. It creates logical switches and routers. These switches and routers are interconnected to create any network topologies. When you run ovnkube-trace with the log level set to 2 or 5 the OVN-Kubernetes logical components are exposed. The following diagram shows how the routers and switches are connected in OpenShift Container Platform. Figure 23.2. OVN-Kubernetes router and switch components The key components involved in packet processing are: Gateway routers Gateway routers sometimes called L3 gateway routers, are typically used between the distributed routers and the physical network. Gateway routers including their logical patch ports are bound to a physical location (not distributed), or chassis. The patch ports on this router are known as l3gateway ports in the ovn-southbound database ( ovn-sbdb ). Distributed logical routers Distributed logical routers and the logical switches behind them, to which virtual machines and containers attach, effectively reside on each hypervisor. Join local switch Join local switches are used to connect the distributed router and gateway routers. It reduces the number of IP addresses needed on the distributed router. Logical switches with patch ports Logical switches with patch ports are used to virtualize the network stack. They connect remote logical ports through tunnels. Logical switches with localnet ports Logical switches with localnet ports are used to connect OVN to the physical network. They connect remote logical ports by bridging the packets to directly connected physical L2 segments using localnet ports. Patch ports Patch ports represent connectivity between logical switches and logical routers and between peer logical routers. A single connection has a pair of patch ports at each such point of connectivity, one on each side. l3gateway ports l3gateway ports are the port binding entries in the ovn-sbdb for logical patch ports used in the gateway routers. They are called l3gateway ports rather than patch ports just to portray the fact that these ports are bound to a chassis just like the gateway router itself. localnet ports localnet ports are present on the bridged logical switches that allows a connection to a locally accessible network from each ovn-controller instance. This helps model the direct connectivity to the physical network from the logical switches. 
A logical switch can only have a single localnet port attached to it. 23.2.7.1. Installing network-tools on local host Install network-tools on your local host to make a collection of tools available for debugging OpenShift Container Platform cluster network issues. Procedure Clone the network-tools repository onto your workstation with the following command: USD git clone [email protected]:openshift/network-tools.git Change into the directory for the repository you just cloned: USD cd network-tools Optional: List all available commands: USD ./debug-scripts/network-tools -h 23.2.7.2. Running network-tools Get information about the logical switches and routers by running network-tools . Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster as a user with cluster-admin privileges. You have installed network-tools on local host. Procedure List the routers by running the following command: USD ./debug-scripts/network-tools ovn-db-run-command ovn-nbctl lr-list Example output Leader pod is ovnkube-master-vslqm 5351ddd1-f181-4e77-afc6-b48b0a9df953 (GR_helix13.lab.eng.tlv2.redhat.com) ccf9349e-1948-4df8-954e-39fb0c2d4d06 (GR_helix14.lab.eng.tlv2.redhat.com) e426b918-75a8-4220-9e76-20b7758f92b7 (GR_hlxcl7-master-0.hlxcl7.lab.eng.tlv2.redhat.com) dded77c8-0cc3-4b99-8420-56cd2ae6a840 (GR_hlxcl7-master-1.hlxcl7.lab.eng.tlv2.redhat.com) 4f6747e6-e7ba-4e0c-8dcd-94c8efa51798 (GR_hlxcl7-master-2.hlxcl7.lab.eng.tlv2.redhat.com) 52232654-336e-4952-98b9-0b8601e370b4 (ovn_cluster_router) List the localnet ports by running the following command: USD ./debug-scripts/network-tools ovn-db-run-command \ ovn-sbctl find Port_Binding type=localnet Example output Leader pod is ovnkube-master-vslqm _uuid : 3de79191-cca8-4c28-be5a-a228f0f9ebfc additional_chassis : [] additional_encap : [] chassis : [] datapath : 3f1a4928-7ff5-471f-9092-fe5f5c67d15c encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : br-ex_helix13.lab.eng.tlv2.redhat.com mac : [unknown] nat_addresses : [] options : {network_name=physnet} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 2 type : localnet up : false virtual_parent : [] _uuid : dbe21daf-9594-4849-b8f0-5efbfa09a455 additional_chassis : [] additional_encap : [] chassis : [] datapath : db2a6067-fe7c-4d11-95a7-ff2321329e11 encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : br-ex_hlxcl7-master-2.hlxcl7.lab.eng.tlv2.redhat.com mac : [unknown] nat_addresses : [] options : {network_name=physnet} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 2 type : localnet up : false virtual_parent : [] [...] 
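You can also drill into one of the routers returned by the lr-list step above; ovn-nbctl lrp-list shows the ports on a given router and is listed in the ovn-nbctl reference table earlier in this chapter. The following is a minimal sketch that reuses the ovn_cluster_router name from the example output; it is an optional aside rather than part of the original procedure.

# Show the router ports on the cluster router listed by the lr-list step above
./debug-scripts/network-tools ovn-db-run-command ovn-nbctl lrp-list ovn_cluster_router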
List the l3gateway ports by running the following command: USD ./debug-scripts/network-tools ovn-db-run-command \ ovn-sbctl find Port_Binding type=l3gateway Example output Leader pod is ovnkube-master-vslqm _uuid : 9314dc80-39e1-4af7-9cc0-ae8a9708ed59 additional_chassis : [] additional_encap : [] chassis : 336a923d-99e8-4e71-89a6-12564fde5760 datapath : db2a6067-fe7c-4d11-95a7-ff2321329e11 encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : etor-GR_hlxcl7-master-2.hlxcl7.lab.eng.tlv2.redhat.com mac : ["52:54:00:3e:95:d3"] nat_addresses : ["52:54:00:3e:95:d3 10.46.56.77"] options : {l3gateway-chassis="7eb1f1c3-87c2-4f68-8e89-60f5ca810971", peer=rtoe-GR_hlxcl7-master-2.hlxcl7.lab.eng.tlv2.redhat.com} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 1 type : l3gateway up : true virtual_parent : [] _uuid : ad7eb303-b411-4e9f-8d36-d07f1f268e27 additional_chassis : [] additional_encap : [] chassis : f41453b8-29c5-4f39-b86b-e82cf344bce4 datapath : 082e7a60-d9c7-464b-b6ec-117d3426645a encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : etor-GR_helix14.lab.eng.tlv2.redhat.com mac : ["34:48:ed:f3:e2:2c"] nat_addresses : ["34:48:ed:f3:e2:2c 10.46.56.14"] options : {l3gateway-chassis="2e8abe3a-cb94-4593-9037-f5f9596325e2", peer=rtoe-GR_helix14.lab.eng.tlv2.redhat.com} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 1 type : l3gateway up : true virtual_parent : [] [...] List the patch ports by running the following command: USD ./debug-scripts/network-tools ovn-db-run-command \ ovn-sbctl find Port_Binding type=patch Example output Leader pod is ovnkube-master-vslqm _uuid : c48b1380-ff26-4965-a644-6bd5b5946c61 additional_chassis : [] additional_encap : [] chassis : [] datapath : 72734d65-fae1-4bd9-a1ee-1bf4e085a060 encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : jtor-ovn_cluster_router mac : [router] nat_addresses : [] options : {peer=rtoj-ovn_cluster_router} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 4 type : patch up : false virtual_parent : [] _uuid : 5df51302-f3cd-415b-a059-ac24389938f7 additional_chassis : [] additional_encap : [] chassis : [] datapath : 0551c90f-e891-4909-8e9e-acc7909e06d0 encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : rtos-hlxcl7-master-1.hlxcl7.lab.eng.tlv2.redhat.com mac : ["0a:58:0a:82:00:01 10.130.0.1/23"] nat_addresses : [] options : {chassis-redirect-port=cr-rtos-hlxcl7-master-1.hlxcl7.lab.eng.tlv2.redhat.com, peer=stor-hlxcl7-master-1.hlxcl7.lab.eng.tlv2.redhat.com} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 4 type : patch up : false virtual_parent : [] [...] 23.2.8. Additional resources How to list OVN database contents with ovn-kubernetes in Red Hat OpenShift Container Platform 4.x? Tracing Openflow with ovnkube-trace OVN architecture Raft (algorithm) ovn-nbctl linux manual page ovn-sbctl linux manual page 23.3. Troubleshooting OVN-Kubernetes OVN-Kubernetes has many sources of built-in health checks and logs. 23.3.1. Monitoring OVN-Kubernetes health by using readiness probes The ovnkube-master and ovnkube-node pods have containers configured with readiness probes. Prerequisites Access to the OpenShift CLI ( oc ). 
You have access to the cluster with cluster-admin privileges. You have installed jq . Procedure Review the details of the ovnkube-master readiness probe by running the following command: USD oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-master \ -o json | jq '.items[0].spec.containers[] | .name,.readinessProbe' The readiness probe for the northbound and southbound database containers in the ovnkube-master pod checks for the health of the Raft cluster hosting the databases. Review the details of the ovnkube-node readiness probe by running the following command: USD oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node \ -o json | jq '.items[0].spec.containers[] | .name,.readinessProbe' The ovnkube-node container in the ovnkube-node pod has a readiness probe to verify the presence of the ovn-kubernetes CNI configuration file, the absence of which would indicate that the pod is not running or is not ready to accept requests to configure pods. Show all events, including the probe failures, for the namespace by using the following command: USD oc get events -n openshift-ovn-kubernetes Show the events for just this pod: USD oc describe pod ovnkube-master-tp2z8 -n openshift-ovn-kubernetes Show the messages and statuses from the cluster network operator: USD oc get co/network -o json | jq '.status.conditions[]' Show the ready status of each container in ovnkube-master pods by running the following script: USD for p in USD(oc get pods --selector app=ovnkube-master -n openshift-ovn-kubernetes \ -o jsonpath='{range.items[*]}{" "}{.metadata.name}'); do echo === USDp ===; \ oc get pods -n openshift-ovn-kubernetes USDp -o json | jq '.status.containerStatuses[] | .name, .ready'; \ done Note The expectation is that all container statuses report as true . Failure of a readiness probe sets the status to false . Additional resources Monitoring application health by using health checks 23.3.2. Viewing OVN-Kubernetes alerts in the console The Alerting UI provides detailed information about alerts and their governing alerting rules and silences. Prerequisites You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for. Procedure (UI) In the Administrator perspective, select Observe Alerting . The three main pages in the Alerting UI in this perspective are the Alerts , Silences , and Alerting Rules pages. View the rules for OVN-Kubernetes alerts by selecting Observe Alerting Alerting Rules . 23.3.3. Viewing OVN-Kubernetes alerts in the CLI You can get information about alerts and their governing alerting rules and silences from the command line. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift CLI ( oc ) installed. You have installed jq . Procedure View active or firing alerts by running the following commands.
Set the alert manager route environment variable by running the following command: USD ALERT_MANAGER=USD(oc get route alertmanager-main -n openshift-monitoring \ -o jsonpath='{@.spec.host}') Issue a curl request to the alert manager route API with the correct authorization details requesting specific fields by running the following command: USD curl -s -k -H "Authorization: Bearer \ USD(oc create token prometheus-k8s -n openshift-monitoring)" \ https://USDALERT_MANAGER/api/v1/alerts \ | jq '.data[] | "\(.labels.severity) \(.labels.alertname) \(.labels.pod) \(.labels.container) \(.labels.endpoint) \(.labels.instance)"' View alerting rules by running the following command: USD oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -s 'http://localhost:9090/api/v1/rules' | jq '.data.groups[].rules[] | select(((.name|contains("ovn")) or (.name|contains("OVN")) or (.name|contains("Ovn")) or (.name|contains("North")) or (.name|contains("South"))) and .type=="alerting")' 23.3.4. Viewing the OVN-Kubernetes logs using the CLI You can view the logs for each of the pods in the ovnkube-master and ovnkube-node pods using the OpenShift CLI ( oc ). Prerequisites Access to the cluster as a user with the cluster-admin role. Access to the OpenShift CLI ( oc ). You have installed jq . Procedure View the log for a specific pod: USD oc logs -f <pod_name> -c <container_name> -n <namespace> where: -f Optional: Specifies that the output follows what is being written into the logs. <pod_name> Specifies the name of the pod. <container_name> Optional: Specifies the name of a container. When a pod has more than one container, you must specify the container name. <namespace> Specify the namespace the pod is running in. For example: USD oc logs ovnkube-master-7h4q7 -n openshift-ovn-kubernetes USD oc logs -f ovnkube-master-7h4q7 -n openshift-ovn-kubernetes -c ovn-dbchecker The contents of log files are printed out. Examine the most recent entries in all the containers in the ovnkube-master pods: USD for p in USD(oc get pods --selector app=ovnkube-master -n openshift-ovn-kubernetes \ -o jsonpath='{range.items[*]}{" "}{.metadata.name}'); \ do echo === USDp ===; for container in USD(oc get pods -n openshift-ovn-kubernetes USDp \ -o json | jq -r '.status.containerStatuses[] | .name');do echo ---USDcontainer---; \ oc logs -c USDcontainer USDp -n openshift-ovn-kubernetes --tail=5; done; done View the last 5 lines of every log in every container in an ovnkube-master pod using the following command: USD oc logs -l app=ovnkube-master -n openshift-ovn-kubernetes --all-containers --tail 5 23.3.5. Viewing the OVN-Kubernetes logs using the web console You can view the logs for each of the pods in the ovnkube-master and ovnkube-node pods in the web console. Prerequisites Access to the OpenShift CLI ( oc ). Procedure In the OpenShift Container Platform console, navigate to Workloads Pods or navigate to the pod through the resource you want to investigate. Select the openshift-ovn-kubernetes project from the drop-down menu. Click the name of the pod you want to investigate. Click Logs . By default for the ovnkube-master the logs associated with the northd container are displayed. Use the down-down menu to select logs for each container in turn. 23.3.5.1. Changing the OVN-Kubernetes log levels The default log level for OVN-Kubernetes is 2. To debug OVN-Kubernetes set the log level to 5. Follow this procedure to increase the log level of the OVN-Kubernetes to help you debug an issue. 
Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Run the following command to get detailed information for all pods in the OVN-Kubernetes project: USD oc get po -o wide -n openshift-ovn-kubernetes Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ovnkube-master-84nc9 6/6 Running 0 50m 10.0.134.156 ip-10-0-134-156.ec2.internal <none> <none> ovnkube-master-gmlqv 6/6 Running 0 50m 10.0.209.180 ip-10-0-209-180.ec2.internal <none> <none> ovnkube-master-nhts2 6/6 Running 1 (48m ago) 50m 10.0.147.31 ip-10-0-147-31.ec2.internal <none> <none> ovnkube-node-2cbh8 5/5 Running 0 43m 10.0.217.114 ip-10-0-217-114.ec2.internal <none> <none> ovnkube-node-6fvzl 5/5 Running 0 50m 10.0.147.31 ip-10-0-147-31.ec2.internal <none> <none> ovnkube-node-f4lzz 5/5 Running 0 24m 10.0.146.76 ip-10-0-146-76.ec2.internal <none> <none> ovnkube-node-jf67d 5/5 Running 0 50m 10.0.209.180 ip-10-0-209-180.ec2.internal <none> <none> ovnkube-node-np9mf 5/5 Running 0 40m 10.0.165.191 ip-10-0-165-191.ec2.internal <none> <none> ovnkube-node-qjldg 5/5 Running 0 50m 10.0.134.156 ip-10-0-134-156.ec2.internal <none> <none> Create a ConfigMap file similar to the following example and use a filename such as env-overrides.yaml : Example ConfigMap file kind: ConfigMap apiVersion: v1 metadata: name: env-overrides namespace: openshift-ovn-kubernetes data: ip-10-0-217-114.ec2.internal: | 1 # This sets the log level for the ovn-kubernetes node process: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for ovn-controller: OVN_LOG_LEVEL=dbg ip-10-0-209-180.ec2.internal: | # This sets the log level for the ovn-kubernetes node process: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for ovn-controller: OVN_LOG_LEVEL=dbg _master: | 2 # This sets the log level for the ovn-kubernetes master process as well as the ovn-dbchecker: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for northd, nbdb and sbdb on all masters: OVN_LOG_LEVEL=dbg 1 Specify the name of the node you want to set the debug log level on. 2 Specify _master to set the log levels of ovnkube-master components. Apply the ConfigMap file by using the following command: USD oc apply -n openshift-ovn-kubernetes -f env-overrides.yaml Example output configmap/env-overrides.yaml created Restart the ovnkube pods to apply the new log level by using the following commands: USD oc delete pod -n openshift-ovn-kubernetes \ --field-selector spec.nodeName=ip-10-0-217-114.ec2.internal -l app=ovnkube-node USD oc delete pod -n openshift-ovn-kubernetes \ --field-selector spec.nodeName=ip-10-0-209-180.ec2.internal -l app=ovnkube-node USD oc delete pod -n openshift-ovn-kubernetes -l app=ovnkube-master 23.3.6. Checking the OVN-Kubernetes pod network connectivity The connectivity check controller, in OpenShift Container Platform 4.10 and later, orchestrates connection verification checks in your cluster. These include Kubernetes API, OpenShift API and individual nodes. The results for the connection tests are stored in PodNetworkConnectivity objects in the openshift-network-diagnostics namespace. Connection tests are performed every minute in parallel. Prerequisites Access to the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. You have installed jq . 
Procedure To list the current PodNetworkConnectivityCheck objects, enter the following command: USD oc get podnetworkconnectivitychecks -n openshift-network-diagnostics View the most recent success for each connection object by using the following command: USD oc get podnetworkconnectivitychecks -n openshift-network-diagnostics \ -o json | jq '.items[]| .spec.targetEndpoint,.status.successes[0]' View the most recent failures for each connection object by using the following command: USD oc get podnetworkconnectivitychecks -n openshift-network-diagnostics \ -o json | jq '.items[]| .spec.targetEndpoint,.status.failures[0]' View the most recent outages for each connection object by using the following command: USD oc get podnetworkconnectivitychecks -n openshift-network-diagnostics \ -o json | jq '.items[]| .spec.targetEndpoint,.status.outages[0]' The connectivity check controller also logs metrics from these checks into Prometheus. View all the metrics by running the following command: USD oc exec prometheus-k8s-0 -n openshift-monitoring -- \ promtool query instant http://localhost:9090 \ '{component="openshift-network-diagnostics"}' View the latency between the source pod and the openshift api service for the last 5 minutes: USD oc exec prometheus-k8s-0 -n openshift-monitoring -- \ promtool query instant http://localhost:9090 \ '{component="openshift-network-diagnostics"}' 23.3.7. Additional resources How do I change the ovn-kubernetes loglevel in OpenShift 4? Implementation of connection health checks Verifying network connectivity for an endpoint 23.4. Tracing Openflow with ovnkube-trace OVN and OVS traffic flows can be simulated in a single utility called ovnkube-trace . The ovnkube-trace utility runs ovn-trace , ovs-appctl ofproto/trace and ovn-detrace and correlates that information in a single output. You can execute the ovnkube-trace binary from a dedicated container. For releases after OpenShift Container Platform 4.7, you can also copy the binary to a local host and execute it from that host. Note The binaries in the Quay images do not currently work for Dual IP stack or IPv6 only environments. For those environments, you must build from source. 23.4.1. Installing the ovnkube-trace on local host The ovnkube-trace tool traces packet simulations for arbitrary UDP or TCP traffic between points in an OVN-Kubernetes driven OpenShift Container Platform cluster. Copy the ovnkube-trace binary to your local host making it available to run against the cluster. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. 
Procedure Create a pod variable by using the following command: USD POD=USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-master -o name | head -1 | awk -F '/' '{print USDNF}') Run the following command on your local host to copy the binary from the ovnkube-master pods: USD oc cp -n openshift-ovn-kubernetes USDPOD:/usr/bin/ovnkube-trace ovnkube-trace Make ovnkube-trace executable by running the following command: USD chmod +x ovnkube-trace Display the options available with ovnkube-trace by running the following command: USD ./ovnkube-trace -help Expected output I0111 15:05:27.973305 204872 ovs.go:90] Maximum command line arguments set to: 191102 Usage of ./ovnkube-trace: -dst string dest: destination pod name -dst-ip string destination IP address (meant for tests to external targets) -dst-namespace string k8s namespace of dest pod (default "default") -dst-port string dst-port: destination port (default "80") -kubeconfig string absolute path to the kubeconfig file -loglevel string loglevel: klog level (default "0") -ovn-config-namespace string namespace used by ovn-config itself -service string service: destination service name -skip-detrace skip ovn-detrace command -src string src: source pod name -src-namespace string k8s namespace of source pod (default "default") -tcp use tcp transport protocol -udp use udp transport protocol The command-line arguments supported are familiar Kubernetes constructs, such as namespaces, pods, services so you do not need to find the MAC address, the IP address of the destination nodes, or the ICMP type. The log levels are: 0 (minimal output) 2 (more verbose output showing results of trace commands) 5 (debug output) 23.4.2. Running ovnkube-trace Run ovn-trace to simulate packet forwarding within an OVN logical network. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You have installed ovnkube-trace on local host Example: Testing that DNS resolution works from a deployed pod This example illustrates how to test the DNS resolution from a deployed pod to the core DNS pod that runs in the cluster. Procedure Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80 List the pods running in the openshift-dns namespace: oc get pods -n openshift-dns Example output NAME READY STATUS RESTARTS AGE dns-default-467qw 2/2 Running 0 49m dns-default-6prvx 2/2 Running 0 53m dns-default-fkqr8 2/2 Running 0 53m dns-default-qv2rg 2/2 Running 0 49m dns-default-s29vr 2/2 Running 0 49m dns-default-vdsbn 2/2 Running 0 53m node-resolver-6thtt 1/1 Running 0 53m node-resolver-7ksdn 1/1 Running 0 49m node-resolver-8sthh 1/1 Running 0 53m node-resolver-c5ksw 1/1 Running 0 50m node-resolver-gbvdp 1/1 Running 0 53m node-resolver-sxhkd 1/1 Running 0 50m Run the following ovn-kube-trace command to verify DNS resolution is working: USD ./ovnkube-trace \ -src-namespace default \ 1 -src web \ 2 -dst-namespace openshift-dns \ 3 -dst dns-default-467qw \ 4 -udp -dst-port 53 \ 5 -loglevel 0 6 1 Namespace of the source pod 2 Source pod name 3 Namespace of destination pod 4 Destination pod name 5 Use the udp transport protocol. Port 53 is the port the DNS service uses. 
6 Set the log level to 0 (0 is minimal and 5 is debug) Expected output I0116 10:19:35.601303 17900 ovs.go:90] Maximum command line arguments set to: 191102 ovn-trace source pod to destination pod indicates success from web to dns-default-467qw ovn-trace destination pod to source pod indicates success from dns-default-467qw to web ovs-appctl ofproto/trace source pod to destination pod indicates success from web to dns-default-467qw ovs-appctl ofproto/trace destination pod to source pod indicates success from dns-default-467qw to web ovn-detrace source pod to destination pod indicates success from web to dns-default-467qw ovn-detrace destination pod to source pod indicates success from dns-default-467qw to web The output indicates success from the deployed pod to the DNS port and also indicates success in the return direction, so you know that bidirectional traffic is supported on UDP port 53 when the web pod performs DNS resolution against CoreDNS. If, for example, that did not work and you wanted to see the ovn-trace , ovs-appctl ofproto/trace , and ovn-detrace output and more debug information, increase the log level to 2 and run the command again as follows: USD ./ovnkube-trace \ -src-namespace default \ -src web \ -dst-namespace openshift-dns \ -dst dns-default-467qw \ -udp -dst-port 53 \ -loglevel 2 The output from this increased log level is too long to list here. In a failure situation, the output of this command shows which flow is dropping that traffic. For example, an egress or ingress network policy that does not allow that traffic might be configured on the cluster. Example: Verifying by using debug output a configured default deny This example illustrates how to identify, by using the debug output, that an ingress default deny policy blocks traffic. Procedure Create the following YAML that defines a deny-by-default policy to deny ingress from all pods in all namespaces. Save the YAML in the deny-by-default.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: default spec: podSelector: {} ingress: [] Apply the policy by entering the following command: USD oc apply -f deny-by-default.yaml Example output networkpolicy.networking.k8s.io/deny-by-default created Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80 Run the following command to create the prod namespace: USD oc create namespace prod Run the following command to label the prod namespace: USD oc label namespace/prod purpose=production Run the following command to deploy an alpine image in the prod namespace and start a shell: USD oc run test-6459 --namespace=prod --rm -i -t --image=alpine -- sh Open another terminal session.
In this new terminal session run ovn-trace to verify the failure in communication between the source pod test-6459 running in namespace prod and destination pod running in the default namespace: USD ./ovnkube-trace \ -src-namespace prod \ -src test-6459 \ -dst-namespace default \ -dst web \ -tcp -dst-port 80 \ -loglevel 0 Expected output I0116 14:20:47.380775 50822 ovs.go:90] Maximum command line arguments set to: 191102 ovn-trace source pod to destination pod indicates failure from test-6459 to web Increase the log level to 2 to expose the reason for the failure by running the following command: USD ./ovnkube-trace \ -src-namespace prod \ -src test-6459 \ -dst-namespace default \ -dst web \ -tcp -dst-port 80 \ -loglevel 2 Expected output ct_lb_mark /* default (use --ct to customize) */ ------------------------------------------------ 3. ls_out_acl_hint (northd.c:6092): !ct.new && ct.est && !ct.rpl && ct_mark.blocked == 0, priority 4, uuid 32d45ad4 reg0[8] = 1; reg0[10] = 1; ; 4. ls_out_acl (northd.c:6435): reg0[10] == 1 && (outport == @a16982411286042166782_ingressDefaultDeny), priority 2000, uuid f730a887 1 ct_commit { ct_mark.blocked = 1; }; 1 Ingress traffic is blocked due to the default deny policy being in place Create a policy that allows traffic from all pods in a particular namespaces with a label purpose=production . Save the YAML in the web-allow-prod.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-prod namespace: default spec: podSelector: matchLabels: app: web policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production Apply the policy by entering the following command: USD oc apply -f web-allow-prod.yaml Run ovnkube-trace to verify that traffic is now allowed by entering the following command: USD ./ovnkube-trace \ -src-namespace prod \ -src test-6459 \ -dst-namespace default \ -dst web \ -tcp -dst-port 80 \ -loglevel 0 Expected output I0116 14:25:44.055207 51695 ovs.go:90] Maximum command line arguments set to: 191102 ovn-trace source pod to destination pod indicates success from test-6459 to web ovn-trace destination pod to source pod indicates success from web to test-6459 ovs-appctl ofproto/trace source pod to destination pod indicates success from test-6459 to web ovs-appctl ofproto/trace destination pod to source pod indicates success from web to test-6459 ovn-detrace source pod to destination pod indicates success from test-6459 to web ovn-detrace destination pod to source pod indicates success from web to test-6459 In the open shell run the following command: wget -qO- --timeout=2 http://web.default Expected output <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 23.4.3. Additional resources Tracing Openflow with ovnkube-trace utility ovnkube-trace 23.5. Migrating from the OpenShift SDN network plugin As a cluster administrator, you can migrate to the OVN-Kubernetes network plugin from the OpenShift SDN network plugin. 
You can use the offline migration method for migrating from the OpenShift SDN network plugin to the OVN-Kubernetes plugin. The offline migration method is a manual process that includes some downtime. Additional resources About the OVN-Kubernetes network plugin 23.5.1. Migration to the OVN-Kubernetes network plugin Migrating to the OVN-Kubernetes network plugin is a manual process that includes some downtime during which your cluster is unreachable. Important Before you migrate your OpenShift Container Platform cluster to use the OVN-Kubernetes network plugin, update your cluster to the latest z-stream release so that all the latest bug fixes apply to your cluster. Although a rollback procedure is provided, the migration is intended to be a one-way process. A migration to the OVN-Kubernetes network plugin is supported on the following platforms: Bare metal hardware Amazon Web Services (AWS) Google Cloud Platform (GCP) IBM Cloud(R) Microsoft Azure Red Hat OpenStack Platform (RHOSP) Red Hat Virtualization (RHV) VMware vSphere Important Migrating to or from the OVN-Kubernetes network plugin is not supported for managed OpenShift cloud services such as Red Hat OpenShift Dedicated, Azure Red Hat OpenShift (ARO), and Red Hat OpenShift Service on AWS (ROSA). Migrating from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin is not supported on Nutanix. 23.5.1.1. Considerations for migrating to the OVN-Kubernetes network plugin If you have more than 150 nodes in your OpenShift Container Platform cluster, then open a support case for consultation on your migration to the OVN-Kubernetes network plugin. The subnets assigned to nodes and the IP addresses assigned to individual pods are not preserved during the migration. While the OVN-Kubernetes network plugin implements many of the capabilities present in the OpenShift SDN network plugin, the configuration is not the same. If your cluster uses any of the following OpenShift SDN network plugin capabilities, you must manually configure the same capability in the OVN-Kubernetes network plugin: Namespace isolation Egress router pods If your cluster or surrounding network uses any part of the 100.64.0.0/16 address range, you must choose another unused IP range by specifying the v4InternalSubnet spec under the spec.defaultNetwork.ovnKubernetesConfig object definition. OVN-Kubernetes uses the IP range 100.64.0.0/16 internally by default. The following sections highlight the differences in configuration between the aforementioned capabilities in the OVN-Kubernetes and OpenShift SDN network plugins. Primary network interface The OpenShift SDN plugin allows application of the NodeNetworkConfigurationPolicy (NNCP) custom resource (CR) to the primary interface on a node. The OVN-Kubernetes network plugin does not have this capability. If you have an NNCP applied to the primary interface, you must delete the NNCP before migrating to the OVN-Kubernetes network plugin. Deleting the NNCP does not remove the configuration from the primary interface, but with OVN-Kubernetes, the Kubernetes NMState cannot manage this configuration. Instead, the configure-ovs.sh shell script manages the primary interface and the configuration attached to this interface. Namespace isolation OVN-Kubernetes supports only the network policy isolation mode. Important For a cluster using OpenShift SDN that is configured in either the multitenant or subnet isolation mode, you can still migrate to the OVN-Kubernetes network plugin.
Note that after the migration operation, multitenant isolation mode is dropped, so you must manually configure network policies to achieve the same level of project-level isolation for pods and services. Egress IP addresses OpenShift SDN supports two different Egress IP modes: In the automatically assigned approach, an egress IP address range is assigned to a node. In the manually assigned approach, a list of one or more egress IP addresses is assigned to a node. The migration process supports migrating Egress IP configurations that use the automatically assigned mode. The differences in configuring an egress IP address between OVN-Kubernetes and OpenShift SDN are described in the following table: Table 23.4. Differences in egress IP address configuration OVN-Kubernetes: Create an EgressIPs object and add an annotation on a Node object. OpenShift SDN: Patch a NetNamespace object and patch a HostSubnet object. For more information on using egress IP addresses in OVN-Kubernetes, see "Configuring an egress IP address". Egress network policies The difference in configuring an egress network policy, also known as an egress firewall, between OVN-Kubernetes and OpenShift SDN is described in the following table: Table 23.5. Differences in egress network policy configuration OVN-Kubernetes: Create an EgressFirewall object in a namespace. OpenShift SDN: Create an EgressNetworkPolicy object in a namespace. Note Because the name of an EgressFirewall object can only be set to default , after the migration all migrated EgressNetworkPolicy objects are named default , regardless of what the name was under OpenShift SDN. If you subsequently roll back to OpenShift SDN, all EgressNetworkPolicy objects are named default because the prior name is lost. For more information on using an egress firewall in OVN-Kubernetes, see "Configuring an egress firewall for a project". Egress router pods OVN-Kubernetes supports egress router pods in redirect mode. OVN-Kubernetes does not support egress router pods in HTTP proxy mode or DNS proxy mode. When you deploy an egress router with the Cluster Network Operator, you cannot specify a node selector to control which node is used to host the egress router pod. Multicast The difference between enabling multicast traffic on OVN-Kubernetes and OpenShift SDN is described in the following table: Table 23.6. Differences in multicast configuration OVN-Kubernetes: Add an annotation on a Namespace object. OpenShift SDN: Add an annotation on a NetNamespace object. For more information on using multicast in OVN-Kubernetes, see "Enabling multicast for a project". Network policies OVN-Kubernetes fully supports the Kubernetes NetworkPolicy API in the networking.k8s.io/v1 API group. No changes are necessary in your network policies when migrating from OpenShift SDN. Additional resources Understanding update channels and releases Asynchronous errata updates 23.5.1.2. How the migration process works The following table summarizes the migration process by segmenting between the user-initiated steps in the process and the actions that the migration performs in response. Table 23.7. Migrating to OVN-Kubernetes from OpenShift SDN User-initiated steps Migration activity Set the migration field of the Network.operator.openshift.io custom resource (CR) named cluster to OVNKubernetes . Make sure the migration field is null before setting it to a value. Cluster Network Operator (CNO) Updates the status of the Network.config.openshift.io CR named cluster accordingly.
Machine Config Operator (MCO) Rolls out an update to the systemd configuration necessary for OVN-Kubernetes; the MCO updates a single machine per pool at a time by default, causing the total time the migration takes to increase with the size of the cluster. Update the networkType field of the Network.config.openshift.io CR. CNO Performs the following actions: Destroys the OpenShift SDN control plane pods. Deploys the OVN-Kubernetes control plane pods. Updates the Multus objects to reflect the new network plugin. Reboot each node in the cluster. Cluster As nodes reboot, the cluster assigns IP addresses to pods on the OVN-Kubernetes cluster network. If a rollback to OpenShift SDN is required, the following table describes the process. Important You must wait until the migration process from OpenShift SDN to OVN-Kubernetes network plugin is successful before initiating a rollback. Table 23.8. Performing a rollback to OpenShift SDN User-initiated steps Migration activity Suspend the MCO to ensure that it does not interrupt the migration. The MCO stops. Set the migration field of the Network.operator.openshift.io custom resource (CR) named cluster to OpenShiftSDN . Make sure the migration field is null before setting it to a value. CNO Updates the status of the Network.config.openshift.io CR named cluster accordingly. Update the networkType field. CNO Performs the following actions: Destroys the OVN-Kubernetes control plane pods. Deploys the OpenShift SDN control plane pods. Updates the Multus objects to reflect the new network plugin. Reboot each node in the cluster. Cluster As nodes reboot, the cluster assigns IP addresses to pods on the OpenShift-SDN network. Enable the MCO after all nodes in the cluster reboot. MCO Rolls out an update to the systemd configuration necessary for OpenShift SDN; the MCO updates a single machine per pool at a time by default, so the total time the migration takes increases with the size of the cluster. 23.5.2. Migrating to the OVN-Kubernetes network plugin As a cluster administrator, you can change the network plugin for your cluster to OVN-Kubernetes. During the migration, you must reboot every node in your cluster. Important While performing the migration, your cluster is unavailable and workloads might be interrupted. Perform the migration only when an interruption in service is acceptable. Prerequisites You have a cluster configured with the OpenShift SDN CNI network plugin in the network policy isolation mode. You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have a recent backup of the etcd database. You can manually reboot each node. You checked that your cluster is in a known good state without any errors. You created a security group rule that allows User Datagram Protocol (UDP) packets on port 6081 for all nodes on all cloud platforms. Procedure To backup the configuration for the cluster network, enter the following command: USD oc get Network.config.openshift.io cluster -o yaml > cluster-openshift-sdn.yaml Verify that the OVN_SDN_MIGRATION_TIMEOUT environment variable is set and is equal to 0s by running the following command: #!/bin/bash if [ -n "USDOVN_SDN_MIGRATION_TIMEOUT" ] && [ "USDOVN_SDN_MIGRATION_TIMEOUT" = "0s" ]; then unset OVN_SDN_MIGRATION_TIMEOUT fi #loops the timeout command of the script to repeatedly check the cluster Operators until all are available. 
co_timeout=USD{OVN_SDN_MIGRATION_TIMEOUT:-1200s} timeout "USDco_timeout" bash <<EOT until oc wait co --all --for='condition=AVAILABLE=True' --timeout=10s && \ oc wait co --all --for='condition=PROGRESSING=False' --timeout=10s && \ oc wait co --all --for='condition=DEGRADED=False' --timeout=10s; do sleep 10 echo "Some ClusterOperators Degraded=False,Progressing=True,or Available=False"; done EOT Remove the configuration from the Cluster Network Operator (CNO) configuration object by running the following command: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{"spec":{"migration":null}}' Delete the NodeNetworkConfigurationPolicy (NNCP) custom resource (CR) that defines the primary network interface for the OpenShift SDN network plugin by completing the following steps: Check that the existing NNCP CR bonded the primary interface to your cluster by entering the following command: USD oc get nncp Example output NAME STATUS REASON bondmaster0 Available SuccessfullyConfigured Network Manager stores the connection profile for the bonded primary interface in the /etc/NetworkManager/system-connections system path. Remove the NNCP from your cluster: USD oc delete nncp <nncp_manifest_filename> To prepare all the nodes for the migration, set the migration field on the CNO configuration object by running the following command: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OVNKubernetes" } } }' Note This step does not deploy OVN-Kubernetes immediately. Instead, specifying the migration field triggers the Machine Config Operator (MCO) to apply new machine configs to all the nodes in the cluster in preparation for the OVN-Kubernetes deployment. Check that the reboot is finished by running the following command: USD oc get mcp Check that all cluster Operators are available by running the following command: USD oc get co Alternatively: You can disable automatic migration of several OpenShift SDN capabilities to the OVN-Kubernetes equivalents: Egress IPs Egress firewall Multicast To disable automatic migration of the configuration for any of the previously noted OpenShift SDN features, specify the following keys: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OVNKubernetes", "features": { "egressIP": <bool>, "egressFirewall": <bool>, "multicast": <bool> } } } }' where: bool : Specifies whether to enable migration of the feature. The default is true . Optional: You can customize the following settings for OVN-Kubernetes to meet your network infrastructure requirements: Maximum transmission unit (MTU). Consider the following before customizing the MTU for this optional step: If you use the default MTU, and you want to keep the default MTU during migration, this step can be ignored. If you used a custom MTU, and you want to keep the custom MTU during migration, you must declare the custom MTU value in this step. This step does not work if you want to change the MTU value during migration. Instead, you must first follow the instructions for "Changing the cluster MTU". You can then keep the custom MTU value by performing this procedure and declaring the custom MTU value in this step. Note OpenShift-SDN and OVN-Kubernetes have different overlay overhead. MTU values should be selected by following the guidelines found on the "MTU value selection" page. 
Geneve (Generic Network Virtualization Encapsulation) overlay network port OVN-Kubernetes IPv4 internal subnet To customize either of the previously noted settings, enter and customize the following command. If you do not need to change the default value, omit the key from the patch. USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "ovnKubernetesConfig":{ "mtu":<mtu>, "genevePort":<port>, "v4InternalSubnet":"<ipv4_subnet>" }}}}' where: mtu The MTU for the Geneve overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 100 less than the smallest node MTU value. port The UDP port for the Geneve overlay network. If a value is not specified, the default is 6081 . The port cannot be the same as the VXLAN port that is used by OpenShift SDN. The default value for the VXLAN port is 4789 . ipv4_subnet An IPv4 address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is 100.64.0.0/16 . Example patch command to update mtu field USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "ovnKubernetesConfig":{ "mtu":1200 }}}}' As the MCO updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get mcp A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Note By default, the MCO updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster. Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command: USD oc get machineconfig <config_name> -o yaml | grep ExecStart where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. The machine config must include the following update to the systemd configuration: ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes If a node is stuck in the NotReady state, investigate the machine config daemon pod logs and resolve any errors. 
To list the pods, enter the following command: USD oc get pod -n openshift-machine-config-operator Example output NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h The names for the config daemon pods are in the following format: machine-config-daemon-<seq> . The <seq> value is a random five-character alphanumeric sequence. Display the pod log for the first machine config daemon pod shown in the output by entering the following command: USD oc logs <pod> -n openshift-machine-config-operator where pod is the name of a machine config daemon pod. Resolve any errors shown in the log output from the command. To start the migration, configure the OVN-Kubernetes network plugin by using one of the following commands: To specify the network provider without changing the cluster network IP address block, enter the following command: USD oc patch Network.config.openshift.io cluster \ --type='merge' --patch '{ "spec": { "networkType": "OVNKubernetes" } }' To specify a different cluster network IP address block, enter the following command: USD oc patch Network.config.openshift.io cluster \ --type='merge' --patch '{ "spec": { "clusterNetwork": [ { "cidr": "<cidr>", "hostPrefix": <prefix> } ], "networkType": "OVNKubernetes" } }' where cidr is a CIDR block and prefix is the slice of the CIDR block apportioned to each node in your cluster. You cannot use any CIDR block that overlaps with the 100.64.0.0/16 CIDR block because the OVN-Kubernetes network provider uses this block internally. Important You cannot change the service network address block during the migration. Verify that the Multus daemon set rollout is complete before continuing with subsequent steps: USD oc -n openshift-multus rollout status daemonset/multus The name of the Multus pods is in the form of multus-<xxxxx> where <xxxxx> is a random sequence of letters. It might take several moments for the pods to restart. Example output Waiting for daemon set "multus" rollout to finish: 1 out of 6 new pods have been updated... ... Waiting for daemon set "multus" rollout to finish: 5 of 6 updated pods are available... daemon set "multus" successfully rolled out To complete changing the network plugin, reboot each node in your cluster. You can reboot the nodes in your cluster with either of the following approaches: Important The following scripts reboot all of the nodes in the cluster at the same time. This can cause your cluster to be unstable. Another option is to reboot your nodes manually one at a time. Rebooting nodes one-by-one causes considerable downtime in a cluster with many nodes. Cluster Operators will not work correctly before you reboot the nodes.
With the oc rsh command, you can use a bash script similar to the following: #!/bin/bash readarray -t POD_NODES <<< "USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1" "USD7}')" for i in "USD{POD_NODES[@]}" do read -r POD NODE <<< "USDi" until oc rsh -n openshift-machine-config-operator "USDPOD" chroot /rootfs shutdown -r +1 do echo "cannot reboot node USDNODE, retry" && sleep 3 done done With the ssh command, you can use a bash script similar to the following. The script assumes that you have configured sudo to not prompt for a password. #!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}') do echo "reboot node USDip" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done Confirm that the migration succeeded: To confirm that the network plugin is OVN-Kubernetes, enter the following command. The value of status.networkType must be OVNKubernetes . USD oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}' To confirm that the cluster nodes are in the Ready state, enter the following command: USD oc get nodes To confirm that your pods are not in an error state, enter the following command: USD oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}' If pods on a node are in an error state, reboot that node. To confirm that all of the cluster Operators are not in an abnormal state, enter the following command: USD oc get co The status of every cluster Operator must be the following: AVAILABLE="True" , PROGRESSING="False" , DEGRADED="False" . If a cluster Operator is not available or degraded, check the logs for the cluster Operator for more information. Complete the following steps only if the migration succeeds and your cluster is in a good state: To remove the migration configuration from the CNO configuration object, enter the following command: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": null } }' To remove custom configuration for the OpenShift SDN network provider, enter the following command: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "defaultNetwork": { "openshiftSDNConfig": null } } }' To remove the OpenShift SDN network provider namespace, enter the following command: USD oc delete namespace openshift-sdn steps Optional: After cluster migration, you can convert your IPv4 single-stack cluster to a dual-network cluster network that supports IPv4 and IPv6 address families. For more information, see "Converting to IPv4/IPv6 dual-stack networking". 23.5.3. Additional resources Configuration parameters for the OVN-Kubernetes network plugin Backing up etcd About network policy Changing the cluster MTU MTU value selection Converting to IPv4/IPv6 dual-stack networking OVN-Kubernetes capabilities Configuring an egress IP address Configuring an egress firewall for a project OVN-Kubernetes egress firewall blocks process to deploy application as DeploymentConfig Enabling multicast for a project OpenShift SDN capabilities Configuring egress IPs for a project Configuring an egress firewall for a project Enabling multicast for a project Network [operator.openshift.io/v1 ] 23.6. Rolling back to the OpenShift SDN network provider As a cluster administrator, you can rollback to the OpenShift SDN from the OVN-Kubernetes network plugin only after the migration to the OVN-Kubernetes network plugin is completed and successful. 23.6.1. 
Migrating to the OpenShift SDN network plugin Cluster administrators can roll back to the OpenShift SDN Container Network Interface (CNI) network plugin by using the offline migration method. During the migration you must manually reboot every node in your cluster. With the offline migration method, there is some downtime, during which your cluster is unreachable. Important You must wait until the migration process from OpenShift SDN to OVN-Kubernetes network plugin is successful before initiating a rollback. Prerequisites Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. A cluster installed on infrastructure configured with the OVN-Kubernetes network plugin. A recent backup of the etcd database is available. A reboot can be triggered manually for each node. The cluster is in a known good state, without any errors. Procedure Stop all of the machine configuration pools managed by the Machine Config Operator (MCO): Stop the master configuration pool by entering the following command in your CLI: USD oc patch MachineConfigPool master --type='merge' --patch \ '{ "spec": { "paused": true } }' Stop the worker machine configuration pool by entering the following command in your CLI: USD oc patch MachineConfigPool worker --type='merge' --patch \ '{ "spec":{ "paused": true } }' To prepare for the migration, set the migration field to null by entering the following command in your CLI: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": null } }' Check that the migration status is empty for the Network.config.openshift.io object by entering the following command in your CLI. Empty command output indicates that the object is not in a migration operation. USD oc get Network.config cluster -o jsonpath='{.status.migration}' Apply the patch to the Network.operator.openshift.io object to set the network plugin back to OpenShift SDN by entering the following command in your CLI: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OpenShiftSDN" } } }' Important If you applied the patch to the Network.config.openshift.io object before the patch operation finalizes on the Network.operator.openshift.io object, the Cluster Network Operator (CNO) enters into a degradation state and this causes a slight delay until the CNO recovers from the degraded state. Confirm that the migration status of the network plugin for the Network.config.openshift.io cluster object is OpenShiftSDN by entering the following command in your CLI: USD oc get Network.config cluster -o jsonpath='{.status.migration.networkType}' Apply the patch to the Network.config.openshift.io object to set the network plugin back to OpenShift SDN by entering the following command in your CLI: USD oc patch Network.config.openshift.io cluster --type='merge' \ --patch '{ "spec": { "networkType": "OpenShiftSDN" } }' Optional: Disable automatic migration of several OVN-Kubernetes capabilities to the OpenShift SDN equivalents: Egress IPs Egress firewall Multicast To disable automatic migration of the configuration for any of the previously noted OpenShift SDN features, specify the following keys: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OpenShiftSDN", "features": { "egressIP": <bool>, "egressFirewall": <bool>, "multicast": <bool> } } } }' where: bool : Specifies whether to enable migration of the feature. 
The default is true . Optional: You can customize the following settings for OpenShift SDN to meet your network infrastructure requirements: Maximum transmission unit (MTU) VXLAN port To customize either or both of the previously noted settings, customize and enter the following command in your CLI. If you do not need to change the default value, omit the key from the patch. USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "openshiftSDNConfig":{ "mtu":<mtu>, "vxlanPort":<port> }}}}' mtu The MTU for the VXLAN overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 50 less than the smallest node MTU value. port The UDP port for the VXLAN overlay network. If a value is not specified, the default is 4789 . The port cannot be the same as the Geneve port that is used by OVN-Kubernetes. The default value for the Geneve port is 6081 . Example patch command USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "openshiftSDNConfig":{ "mtu":1200 }}}}' Reboot each node in your cluster. You can reboot the nodes in your cluster with either of the following approaches: With the oc rsh command, you can use a bash script similar to the following: #!/bin/bash readarray -t POD_NODES <<< "USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1" "USD7}')" for i in "USD{POD_NODES[@]}" do read -r POD NODE <<< "USDi" until oc rsh -n openshift-machine-config-operator "USDPOD" chroot /rootfs shutdown -r +1 do echo "cannot reboot node USDNODE, retry" && sleep 3 done done With the ssh command, you can use a bash script similar to the following. The script assumes that you have configured sudo to not prompt for a password. #!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}') do echo "reboot node USDip" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done Wait until the Multus daemon set rollout completes. Run the following command to see your rollout status: USD oc -n openshift-multus rollout status daemonset/multus The name of the Multus pods is in the form of multus-<xxxxx> where <xxxxx> is a random sequence of letters. It might take several moments for the pods to restart. Example output Waiting for daemon set "multus" rollout to finish: 1 out of 6 new pods have been updated... ... Waiting for daemon set "multus" rollout to finish: 5 of 6 updated pods are available... daemon set "multus" successfully rolled out After the nodes in your cluster have rebooted and the multus pods are rolled out, start all of the machine configuration pools by running the following commands:: Start the master configuration pool: USD oc patch MachineConfigPool master --type='merge' --patch \ '{ "spec": { "paused": false } }' Start the worker configuration pool: USD oc patch MachineConfigPool worker --type='merge' --patch \ '{ "spec": { "paused": false } }' As the MCO updates machines in each config pool, it reboots each node. By default the MCO updates a single machine per pool at a time, so the time that the migration requires to complete grows with the size of the cluster. 
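Optional: To follow the progress while the MCO updates each machine config pool, you can watch the pool status until every pool reports UPDATED as True . This watch command is a convenience only and is not a required part of the rollback: USD oc get mcp -w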
Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command in your CLI: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command in your CLI: USD oc get machineconfig <config_name> -o yaml where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. Confirm that the migration succeeded: To confirm that the network plugin is OpenShift SDN, enter the following command in your CLI. The value of status.networkType must be OpenShiftSDN . USD oc get Network.config/cluster -o jsonpath='{.status.networkType}{"\n"}' To confirm that the cluster nodes are in the Ready state, enter the following command in your CLI: USD oc get nodes If a node is stuck in the NotReady state, investigate the machine config daemon pod logs and resolve any errors. To list the pods, enter the following command in your CLI: USD oc get pod -n openshift-machine-config-operator Example output NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h The names for the config daemon pods are in the following format: machine-config-daemon-<seq> . The <seq> value is a random five character alphanumeric sequence. To display the pod log for each machine config daemon pod shown in the output, enter the following command in your CLI: USD oc logs <pod> -n openshift-machine-config-operator where pod is the name of a machine config daemon pod. Resolve any errors in the logs shown by the output from the command. To confirm that your pods are not in an error state, enter the following command in your CLI: USD oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}' If pods on a node are in an error state, reboot that node. 
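Optional: To surface pods that are not in the Running phase more quickly, you can filter on the pod phase. Note that this filter also lists pods that completed successfully, so review the STATUS column before rebooting any node based on this output: USD oc get pods --all-namespaces --field-selector=status.phase!=Running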
Complete the following steps only if the migration succeeds and your cluster is in a good state: To remove the migration configuration from the Cluster Network Operator configuration object, enter the following command in your CLI: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": null } }' To remove the OVN-Kubernetes configuration, enter the following command in your CLI: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "defaultNetwork": { "ovnKubernetesConfig":null } } }' To remove the OVN-Kubernetes network provider namespace, enter the following command in your CLI: USD oc delete namespace openshift-ovn-kubernetes 23.7. Converting to IPv4/IPv6 dual-stack networking As a cluster administrator, you can convert your IPv4 single-stack cluster to a dual-network cluster network that supports IPv4 and IPv6 address families. After converting to dual-stack, all newly created pods are dual-stack enabled. Note A dual-stack network is supported on clusters provisioned on bare metal, IBM Power, IBM Z infrastructure, and single node OpenShift clusters. Note While using dual-stack networking, you cannot use IPv4-mapped IPv6 addresses, such as ::FFFF:198.51.100.1 , where IPv6 is required. 23.7.1. Converting to a dual-stack cluster network As a cluster administrator, you can convert your single-stack cluster network to a dual-stack cluster network. Note After converting to dual-stack networking only newly created pods are assigned IPv6 addresses. Any pods created before the conversion must be recreated to receive an IPv6 address. Important Before proceeding, make sure your OpenShift cluster uses version 4.12.5 or later. Otherwise, the conversion can fail due to the bug ovnkube node pod crashed after converting to a dual-stack cluster network . Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. Your cluster uses the OVN-Kubernetes network plugin. The cluster nodes have IPv6 addresses. You have configured an IPv6-enabled router based on your infrastructure. Procedure To specify IPv6 address blocks for the cluster and service networks, create a file containing the following YAML: - op: add path: /spec/clusterNetwork/- value: 1 cidr: fd01::/48 hostPrefix: 64 - op: add path: /spec/serviceNetwork/- value: fd02::/112 2 1 Specify an object with the cidr and hostPrefix fields. The host prefix must be 64 or greater. The IPv6 CIDR prefix must be large enough to accommodate the specified host prefix. 2 Specify an IPv6 CIDR with a prefix of 112 . Kubernetes uses only the lowest 16 bits. For a prefix of 112 , IP addresses are assigned from 112 to 128 bits. To patch the cluster network configuration, enter the following command: USD oc patch network.config.openshift.io cluster \ --type='json' --patch-file <file>.yaml where: file Specifies the name of the file you created in the step. Example output network.config.openshift.io/cluster patched Verification Complete the following step to verify that the cluster network recognizes the IPv6 address blocks that you specified in the procedure. Display the network configuration: USD oc describe network Example output Status: Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Cidr: fd01::/48 Host Prefix: 64 Cluster Network MTU: 1400 Network Type: OVNKubernetes Service Network: 172.30.0.0/16 fd02::/112 23.7.2. 
Converting to a single-stack cluster network As a cluster administrator, you can convert your dual-stack cluster network to a single-stack cluster network. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. Your cluster uses the OVN-Kubernetes network plugin. The cluster nodes have IPv6 addresses. You have enabled dual-stack networking. Procedure Edit the networks.config.openshift.io custom resource (CR) by running the following command: USD oc edit networks.config.openshift.io Remove the IPv6 specific configuration that you have added to the cidr and hostPrefix fields in the procedure. 23.8. Logging for egress firewall and network policy rules As a cluster administrator, you can configure audit logging for your cluster and enable logging for one or more namespaces. OpenShift Container Platform produces audit logs for both egress firewalls and network policies. Note Audit logging is available for only the OVN-Kubernetes network plugin . 23.8.1. Audit logging The OVN-Kubernetes network plugin uses Open Virtual Network (OVN) ACLs to manage egress firewalls and network policies. Audit logging exposes allow and deny ACL events. You can configure the destination for audit logs, such as a syslog server or a UNIX domain socket. Regardless of any additional configuration, an audit log is always saved to /var/log/ovn/acl-audit-log.log on each OVN-Kubernetes pod in the cluster. You can enable audit logging for each namespace by annotating each namespace configuration with a k8s.ovn.org/acl-logging section. In the k8s.ovn.org/acl-logging section, you must specify allow , deny , or both values to enable audit logging for a namespace. Note A network policy does not support setting the Pass action set as a rule. The ACL-logging implementation logs access control list (ACL) events for a network. You can view these logs to analyze any potential security issues. Example namespace annotation kind: Namespace apiVersion: v1 metadata: name: example1 annotations: k8s.ovn.org/acl-logging: |- { "deny": "info", "allow": "info" } To view the default ACL logging configuration values, see the policyAuditConfig object in the cluster-network-03-config.yml file. If required, you can change the ACL logging configuration values for log file parameters in this file. The logging message format is compatible with syslog as defined by RFC5424. The syslog facility is configurable and defaults to local0 . The following example shows key parameters and their values outputted in a log message: Example logging message that outputs parameters and their values <timestamp>|<message_serial>|acl_log(ovn_pinctrl0)|<severity>|name="<acl_name>", verdict="<verdict>", severity="<severity>", direction="<direction>": <flow> Where: <timestamp> states the time and date for the creation of a log message. <message_serial> lists the serial number for a log message. acl_log(ovn_pinctrl0) is a literal string that prints the location of the log message in the OVN-Kubernetes plugin. <severity> sets the severity level for a log message. If you enable audit logging that supports allow and deny tasks then two severity levels show in the log message output. <name> states the name of the ACL-logging implementation in the OVN Network Bridging Database ( nbdb ) that was created by the network policy. <verdict> can be either allow or drop . <direction> can be either to-lport or from-lport to indicate that the policy was applied to traffic going to or away from a pod. 
<flow> shows packet information in a format equivalent to the OpenFlow protocol. This parameter comprises Open vSwitch (OVS) fields. The following example shows OVS fields that the flow parameter uses to extract packet information from system memory: Example of OVS fields used by the flow parameter to extract packet information <proto>,vlan_tci=0x0000,dl_src=<src_mac>,dl_dst=<source_mac>,nw_src=<source_ip>,nw_dst=<target_ip>,nw_tos=<tos_dscp>,nw_ecn=<tos_ecn>,nw_ttl=<ip_ttl>,nw_frag=<fragment>,tp_src=<tcp_src_port>,tp_dst=<tcp_dst_port>,tcp_flags=<tcp_flags> Where: <proto> states the protocol. Valid values are tcp and udp . vlan_tci=0x0000 states the VLAN header as 0 because a VLAN ID is not set for internal pod network traffic. <src_mac> specifies the source for the Media Access Control (MAC) address. <source_mac> specifies the destination for the MAC address. <source_ip> lists the source IP address. <target_ip> lists the target IP address. <tos_dscp> states Differentiated Services Code Point (DSCP) values to classify and prioritize certain network traffic over other traffic. <tos_ecn> states Explicit Congestion Notification (ECN) values that indicate any congested traffic in your network. <ip_ttl> states the Time To Live (TTL) information for a packet. <fragment> specifies what type of IP fragments or IP non-fragments to match. <tcp_src_port> shows the source port for TCP and UDP protocols. <tcp_dst_port> lists the destination port for TCP and UDP protocols. <tcp_flags> supports numerous flags such as SYN , ACK , PSH , and so on. If you need to set multiple values, each value is separated by a vertical bar ( | ). The UDP protocol does not support this parameter. Note For more information about the field descriptions, go to the OVS manual page for ovs-fields . Example ACL deny log entry for a network policy 2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name="verify-audit-logging_deny-all", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 The following table describes namespace annotation values: Table 23.9. Audit logging namespace annotation for k8s.ovn.org/acl-logging Field Description deny Blocks namespace access to any traffic that matches an ACL rule with the deny action. The field supports alert , warning , notice , info , or debug values. allow Permits namespace access to any traffic that matches an ACL rule with the allow action. The field supports alert , warning , notice , info , or debug values. pass A pass action applies to an admin network policy's ACL rule. A pass action allows either the network policy in the namespace or the baseline admin network policy rule to evaluate all incoming and outgoing traffic. A network policy does not support a pass action. 23.8.2. Audit configuration The configuration for audit logging is specified as part of the OVN-Kubernetes cluster network provider configuration. The following YAML illustrates the default values for the audit logging: Audit logging configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: "null" maxFileSize: 50 rateLimit: 20 syslogFacility: local0 The following table describes the configuration fields for audit logging. Table 23.10.
policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . 23.8.3. Configuring egress firewall and network policy auditing for a cluster As a cluster administrator, you can customize audit logging for your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To customize the audit logging configuration, enter the following command: USD oc edit network.operator.openshift.io/cluster Tip You can alternatively customize and apply the following YAML to configure audit logging: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: "null" maxFileSize: 50 rateLimit: 20 syslogFacility: local0 Verification To create a namespace with network policies complete the following steps: Create a namespace for verification: USD cat <<EOF| oc create -f - kind: Namespace apiVersion: v1 metadata: name: verify-audit-logging annotations: k8s.ovn.org/acl-logging: '{ "deny": "alert", "allow": "alert" }' EOF Example output namespace/verify-audit-logging created Enable audit logging: USD oc annotate namespace verify-audit-logging k8s.ovn.org/acl-logging='{ "deny": "alert", "allow": "alert" }' namespace/verify-audit-logging annotated Create network policies for the namespace: USD cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny-all spec: podSelector: matchLabels: policyTypes: - Ingress - Egress --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} policyTypes: - Ingress - Egress ingress: - from: - podSelector: {} egress: - to: - namespaceSelector: matchLabels: namespace: verify-audit-logging EOF Example output networkpolicy.networking.k8s.io/deny-all created networkpolicy.networking.k8s.io/allow-from-same-namespace created Create a pod for source traffic in the default namespace: USD cat <<EOF| oc create -n default -f - apiVersion: v1 kind: Pod metadata: name: client spec: containers: - name: client image: registry.access.redhat.com/rhel7/rhel-tools command: ["/bin/sh", "-c"] args: ["sleep inf"] EOF Create two pods in the verify-audit-logging namespace: USD for name in client server; do cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: v1 kind: Pod metadata: name: USD{name} spec: containers: - name: USD{name} image: registry.access.redhat.com/rhel7/rhel-tools command: ["/bin/sh", "-c"] args: ["sleep inf"] EOF done Example output pod/client created pod/server created To generate traffic and produce network policy audit log entries, complete the following steps: Obtain the IP address for pod named server in the verify-audit-logging namespace: USD POD_IP=USD(oc get pods server -n verify-audit-logging -o 
jsonpath='{.status.podIP}') Ping the IP address from the command from the pod named client in the default namespace and confirm that all packets are dropped: USD oc exec -it client -n default -- /bin/ping -c 2 USDPOD_IP Example output PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data. --- 10.128.2.55 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 2041ms Ping the IP address saved in the POD_IP shell environment variable from the pod named client in the verify-audit-logging namespace and confirm that all packets are allowed: USD oc exec -it client -n verify-audit-logging -- /bin/ping -c 2 USDPOD_IP Example output PING 10.128.0.86 (10.128.0.86) 56(84) bytes of data. 64 bytes from 10.128.0.86: icmp_seq=1 ttl=64 time=2.21 ms 64 bytes from 10.128.0.86: icmp_seq=2 ttl=64 time=0.440 ms --- 10.128.0.86 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.440/1.329/2.219/0.890 ms Display the latest entries in the network policy audit log: USD for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done Example output Defaulting container name to ovn-controller. Use 'oc describe pod/ovnkube-node-hdb8v -n openshift-ovn-kubernetes' to see all of the containers in this pod. 2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name="verify-audit-logging_deny-all", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:33:12.614Z|00006|acl_log(ovn_pinctrl0)|INFO|name="verify-audit-logging_deny-all", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:44:10.037Z|00007|acl_log(ovn_pinctrl0)|INFO|name="verify-audit-logging_allow-from-same-namespace_0", verdict=allow, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:3b,dl_dst=0a:58:0a:80:02:3a,nw_src=10.128.2.59,nw_dst=10.128.2.58,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:44:11.037Z|00008|acl_log(ovn_pinctrl0)|INFO|name="verify-audit-logging_allow-from-same-namespace_0", verdict=allow, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:3b,dl_dst=0a:58:0a:80:02:3a,nw_src=10.128.2.59,nw_dst=10.128.2.58,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 23.8.4. Enabling egress firewall and network policy audit logging for a namespace As a cluster administrator, you can enable audit logging for a namespace. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To enable audit logging for a namespace, enter the following command: USD oc annotate namespace <namespace> \ k8s.ovn.org/acl-logging='{ "deny": "alert", "allow": "notice" }' where: <namespace> Specifies the name of the namespace. 
Tip You can alternatively apply the following YAML to enable audit logging: kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: |- { "deny": "alert", "allow": "notice" } Example output namespace/verify-audit-logging annotated Verification Display the latest entries in the audit log: USD for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done Example output 2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name="verify-audit-logging_deny-all", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 23.8.5. Disabling egress firewall and network policy audit logging for a namespace As a cluster administrator, you can disable audit logging for a namespace. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To disable audit logging for a namespace, enter the following command: USD oc annotate --overwrite namespace <namespace> k8s.ovn.org/acl-logging- where: <namespace> Specifies the name of the namespace. Tip You can alternatively apply the following YAML to disable audit logging: kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: null Example output namespace/verify-audit-logging annotated 23.8.6. Additional resources About network policy Configuring an egress firewall for a project 23.9. Configuring IPsec encryption With IPsec enabled, all pod-to-pod network traffic between nodes on the OVN-Kubernetes cluster network is encrypted with IPsec Transport mode . IPsec is disabled by default. It can be enabled either during or after installing the cluster. For information about cluster installation, see OpenShift Container Platform installation overview . If you need to enable IPsec after cluster installation, you must first resize your cluster MTU to account for the overhead of the IPsec ESP IP header. The following support limitations exist for IPsec on a OpenShift Container Platform cluster: You must disable IPsec before updating to OpenShift Container Platform 4.15. After disabling IPsec, you must also delete the associated IPsec daemonsets. There is a known issue that can cause interruptions in pod-to-pod communication if you update without disabling IPsec. ( OCPBUGS-43323 ) The following documentation describes how to enable and disable IPSec after cluster installation. 23.9.1. Prerequisites You have decreased the size of the cluster MTU by 46 bytes to allow for the additional overhead of the IPsec ESP header. For more information on resizing the MTU that your cluster uses, see Changing the MTU for the cluster network . 23.9.2. Types of network traffic flows encrypted by IPsec With IPsec enabled, only the following network traffic flows between pods are encrypted: Traffic between pods on different nodes on the cluster network Traffic from a pod on the host network to a pod on the cluster network The following traffic flows are not encrypted: Traffic between pods on the same node on the cluster network Traffic between pods on the host network Traffic from a pod on the cluster network to a pod on the host network The encrypted and unencrypted flows are illustrated in the following diagram: 23.9.2.1. 
Network connectivity requirements when IPsec is enabled You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. Table 23.11. Ports used for all-machine to all-machine communications Protocol Port Description UDP 500 IPsec IKE packets 4500 IPsec NAT-T packets ESP N/A IPsec Encapsulating Security Payload (ESP) 23.9.3. Encryption protocol and IPsec mode The encrypt cipher used is AES-GCM-16-256 . The integrity check value (ICV) is 16 bytes. The key length is 256 bits. The IPsec mode used is Transport mode , a mode that encrypts end-to-end communication by adding an Encapsulated Security Payload (ESP) header to the IP header of the original packet and encrypts the packet data. OpenShift Container Platform does not currently use or support IPsec Tunnel mode for pod-to-pod communication. 23.9.4. Security certificate generation and rotation The Cluster Network Operator (CNO) generates a self-signed X.509 certificate authority (CA) that is used by IPsec for encryption. Certificate signing requests (CSRs) from each node are automatically fulfilled by the CNO. The CA is valid for 10 years. The individual node certificates are valid for 5 years and are automatically rotated after 4 1/2 years elapse. 23.9.5. Enabling IPsec encryption As a cluster administrator, you can enable IPsec encryption after cluster installation. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster as a user with cluster-admin privileges. You have reduced the size of your cluster maximum transmission unit (MTU) by 46 bytes to allow for the overhead of the IPsec ESP header. Procedure To enable IPsec encryption, enter the following command: USD oc patch networks.operator.openshift.io cluster --type=merge \ -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipsecConfig":{ }}}}}' Verification To find the names of the OVN-Kubernetes control plane pods, enter the following command: USD oc get pods -l app=ovnkube-master -n openshift-ovn-kubernetes Example output NAME READY STATUS RESTARTS AGE ovnkube-master-fvtnh 6/6 Running 0 122m ovnkube-master-hsgmm 6/6 Running 0 122m ovnkube-master-qcmdc 6/6 Running 0 122m Verify that IPsec is enabled on your cluster by entering the following command. The command output must state true to indicate that the node has IPsec enabled. USD oc -n openshift-ovn-kubernetes rsh ovnkube-master-<pod_number_sequence> \ 1 ovn-nbctl --no-leader-only get nb_global . ipsec 1 Replace <pod_number_sequence> with the random sequence of letters, fvtnh , for a data plane pod from the step. 23.9.6. Disabling IPsec encryption As a cluster administrator, you can disable IPsec encryption only if you enabled IPsec after cluster installation. Important After disabling IPsec, you must delete the associated IPsec daemonsets pods. If you do not delete these pods, you might experience issues with your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. 
Procedure To disable IPsec encryption, enter the following command: USD oc patch networks.operator.openshift.io/cluster --type=json \ -p='[{"op":"remove", "path":"/spec/defaultNetwork/ovnKubernetesConfig/ipsecConfig"}]' To find the name of the OVN-Kubernetes data plane pod that exists on the master node in your cluster, enter the following command: USD oc get pods -n openshift-ovn-kubernetes -l=app=ovnkube-master Example output ovnkube-master-5xqbf 8/8 Running 0 28m ... Verify that the master node in your cluster has IPsec disabled by entering the following command. The command output must state false to indicate that the node has IPsec disabled. USD oc -n openshift-ovn-kubernetes -c nbdb rsh ovnkube-master-<pod_number_sequence> \ 1 ovn-nbctl --no-leader-only get nb_global . ipsec 1 Replace <pod_number_sequence> with the random sequence of letters, such as 5xqbf , for the data plane pod from the step. To remove the IPsec ovn-ipsec daemonset pod from the openshift-ovn-kubernetes namespace on the node, enter the following command: USD oc delete daemonset ovn-ipsec -n openshift-ovn-kubernetes 1 1 The ovn-ipsec daemonset configures IPsec connections for east-west traffic on the node. Verify that the ovn-ipsec daemonset pod was removed from the all nodes in your cluster by entering the following command. If the command output does not list the pod, the removal operation is successful. USD oc get pods -n openshift-ovn-kubernetes -l=app=ovn-ipsec Note You might need to re-run the command for deleting the pod because sometimes the initial command attempt might not delete the pod. Optional: You can increase the size of your cluster MTU by 46 bytes because there is no longer any overhead from the IPsec ESP header in IP packets. 23.9.7. Additional resources About the OVN-Kubernetes Container Network Interface (CNI) network plugin Changing the MTU for the cluster network Network [operator.openshift.io/v1 ] API 23.10. Configuring an egress firewall for a project As a cluster administrator, you can create an egress firewall for a project that restricts egress traffic leaving your OpenShift Container Platform cluster. 23.10.1. How an egress firewall works in a project As a cluster administrator, you can use an egress firewall to limit the external hosts that some or all pods can access from within the cluster. An egress firewall supports the following scenarios: A pod can only connect to internal hosts and cannot initiate connections to the public internet. A pod can only connect to the public internet and cannot initiate connections to internal hosts that are outside the OpenShift Container Platform cluster. A pod cannot reach specified internal subnets or hosts outside the OpenShift Container Platform cluster. A pod can connect to only specific external hosts. For example, you can allow one project access to a specified IP range but deny the same access to a different project. Or you can restrict application developers from updating from Python pip mirrors, and force updates to come only from approved sources. Note Egress firewall does not apply to the host network namespace. Pods with host networking enabled are unaffected by egress firewall rules. You configure an egress firewall policy by creating an EgressFirewall custom resource (CR) object. 
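For example, the following is a minimal sketch of an EgressFirewall CR that implements the "connect to only specific external hosts" scenario. The project name project1 and the allowed range 192.0.2.0/24 are placeholder assumptions, not values taken from this document; adapt them to your environment. The sketch reuses the cat <<EOF | oc create pattern that the verification steps elsewhere in this document use:

USD cat <<EOF| oc create -n project1 -f -
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default                      # the object name must be default
spec:
  egress:
  - type: Allow                      # permit traffic to the approved range only
    to:
      cidrSelector: 192.0.2.0/24     # placeholder range; replace with your allowed hosts
  - type: Deny                       # deny all other external traffic
    to:
      cidrSelector: 0.0.0.0/0
EOF

Because the rules are evaluated in the order that they are defined, the Allow rule must come before the final Deny rule. As the Important note that follows explains, a global deny rule for 0.0.0.0/0 also blocks access to your API servers unless you add allow rules for them.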
The egress firewall matches network traffic that meets any of the following criteria: An IP address range in CIDR format A DNS name that resolves to an IP address A port number A protocol that is one of the following protocols: TCP, UDP, and SCTP Important If your egress firewall includes a deny rule for 0.0.0.0/0 , access to your OpenShift Container Platform API servers is blocked. You must add allow rules for each IP address. The following example illustrates the order of the egress firewall rules necessary to ensure API server access: apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow # ... - to: cidrSelector: 0.0.0.0/0 3 type: Deny 1 The namespace for the egress firewall. 2 The IP address range that includes your OpenShift Container Platform API servers. 3 A global deny rule prevents access to the OpenShift Container Platform API servers. To find the IP address for your API servers, run oc get ep kubernetes -n default . For more information, see BZ#1988324 . Warning Egress firewall rules do not apply to traffic that goes through routers. Any user with permission to create a Route CR object can bypass egress firewall policy rules by creating a route that points to a forbidden destination. 23.10.1.1. Limitations of an egress firewall An egress firewall has the following limitations: No project can have more than one EgressFirewall object. A maximum of one EgressFirewall object with a maximum of 8,000 rules can be defined per project. If you are using the OVN-Kubernetes network plugin with shared gateway mode in Red Hat OpenShift Networking, return ingress replies are affected by egress firewall rules. If the egress firewall rules drop the ingress reply destination IP, the traffic is dropped. Violating any of these restrictions results in a broken egress firewall for the project. Consequently, all external network traffic is dropped, which can cause security risks for your organization. An Egress Firewall resource can be created in the kube-node-lease , kube-public , kube-system , openshift and openshift- projects. 23.10.1.2. Matching order for egress firewall policy rules The egress firewall policy rules are evaluated in the order that they are defined, from first to last. The first rule that matches an egress connection from a pod applies. Any subsequent rules are ignored for that connection. 23.10.1.3. How Domain Name Server (DNS) resolution works If you use DNS names in any of your egress firewall policy rules, proper resolution of the domain names is subject to the following restrictions: Domain name updates are polled based on a time-to-live (TTL) duration. By default, the duration is 30 minutes. When the egress firewall controller queries the local name servers for a domain name, if the response includes a TTL and the TTL is less than 30 minutes, the controller sets the duration for that DNS name to the returned value. Each DNS name is queried after the TTL for the DNS record expires. The pod must resolve the domain from the same local name servers when necessary. Otherwise, the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall might not be enforced consistently.
Because the egress firewall controller and pods asynchronously poll the same local name server, the pod might obtain the updated IP address before the egress controller does, which causes a race condition. Due to this current limitation, domain name usage in EgressFirewall objects is only recommended for domains with infrequent IP address changes. Note The egress firewall always allows pods access to the external interface of the node that the pod is on for DNS resolution. If you use domain names in your egress firewall policy and your DNS resolution is not handled by a DNS server on the local node, then you must add egress firewall rules that allow access to your DNS server's IP addresses. 23.10.2. EgressFirewall custom resource (CR) object You can define one or more rules for an egress firewall. A rule is either an Allow rule or a Deny rule, with a specification for the traffic that the rule applies to. The following YAML describes an EgressFirewall CR object: EgressFirewall object apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: <name> 1 spec: egress: 2 ... 1 The name for the object must be default . 2 A collection of one or more egress network policy rules as described in the following section. 23.10.2.1. EgressFirewall rules The following YAML describes an egress firewall rule object. The user can select either an IP address range in CIDR format or a domain name. The egress stanza expects an array of one or more objects. Egress policy rule stanza egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4 ports: 5 ... 1 The type of rule. The value must be either Allow or Deny . 2 A stanza describing an egress traffic match rule that specifies the cidrSelector field or the dnsName field. You cannot use both fields in the same rule. 3 An IP address range in CIDR format. 4 A DNS domain name. 5 Optional: A stanza describing a collection of network ports and protocols for the rule. Ports stanza ports: - port: <port> 1 protocol: <protocol> 2 1 A network port, such as 80 or 443 . If you specify a value for this field, you must also specify a value for protocol . 2 A network protocol. The value must be either TCP , UDP , or SCTP . 23.10.2.2. Example EgressFirewall CR objects The following example defines several egress firewall policy rules: apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Deny to: cidrSelector: 0.0.0.0/0 1 A collection of egress firewall policy rule objects. The following example defines a policy rule that denies traffic to the host at the 172.16.1.1/32 IP address, if the traffic is using either the TCP protocol and destination port 80 or any protocol and destination port 443 . apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: - type: Deny to: cidrSelector: 172.16.1.1/32 ports: - port: 80 protocol: TCP - port: 443 23.10.3. Creating an egress firewall policy object As a cluster administrator, you can create an egress firewall policy object for a project. Important If the project already has an EgressFirewall object defined, you must edit the existing policy to make changes to the egress firewall rules. Prerequisites A cluster that uses the OVN-Kubernetes network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Create a policy rule: Create a <policy_name>.yaml file where <policy_name> describes the egress policy rules.
In the file you created, define an egress policy object. Enter the following command to create the policy object. Replace <policy_name> with the name of the policy and <project> with the project that the rule applies to. USD oc create -f <policy_name>.yaml -n <project> In the following example, a new EgressFirewall object is created in a project named project1 : USD oc create -f default.yaml -n project1 Example output egressfirewall.k8s.ovn.org/v1 created Optional: Save the <policy_name>.yaml file so that you can make changes later. 23.11. Viewing an egress firewall for a project As a cluster administrator, you can list the names of any existing egress firewalls and view the traffic rules for a specific egress firewall. 23.11.1. Viewing an EgressFirewall object You can view an EgressFirewall object in your cluster. Prerequisites A cluster using the OVN-Kubernetes network plugin. Install the OpenShift Command-line Interface (CLI), commonly known as oc . You must log in to the cluster. Procedure Optional: To view the names of the EgressFirewall objects defined in your cluster, enter the following command: USD oc get egressfirewall --all-namespaces To inspect a policy, enter the following command. Replace <policy_name> with the name of the policy to inspect. USD oc describe egressfirewall <policy_name> Example output Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0 23.12. Editing an egress firewall for a project As a cluster administrator, you can modify network traffic rules for an existing egress firewall. 23.12.1. Editing an EgressFirewall object As a cluster administrator, you can update the egress firewall for a project. Prerequisites A cluster using the OVN-Kubernetes network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Find the name of the EgressFirewall object for the project. Replace <project> with the name of the project. USD oc get -n <project> egressfirewall Optional: If you did not save a copy of the EgressFirewall object when you created the egress network firewall, enter the following command to create a copy. USD oc get -n <project> egressfirewall <name> -o yaml > <filename>.yaml Replace <project> with the name of the project. Replace <name> with the name of the object. Replace <filename> with the name of the file to save the YAML to. After making changes to the policy rules, enter the following command to replace the EgressFirewall object. Replace <filename> with the name of the file containing the updated EgressFirewall object. USD oc replace -f <filename>.yaml 23.13. Removing an egress firewall from a project As a cluster administrator, you can remove an egress firewall from a project to remove all restrictions on network traffic from the project that leaves the OpenShift Container Platform cluster. 23.13.1. Removing an EgressFirewall object As a cluster administrator, you can remove an egress firewall from a project. Prerequisites A cluster using the OVN-Kubernetes network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Find the name of the EgressFirewall object for the project. Replace <project> with the name of the project. USD oc get -n <project> egressfirewall Enter the following command to delete the EgressFirewall object. Replace <project> with the name of the project and <name> with the name of the object. 
USD oc delete -n <project> egressfirewall <name> 23.14. Configuring an egress IP address As a cluster administrator, you can configure the OVN-Kubernetes Container Network Interface (CNI) network plugin to assign one or more egress IP addresses to a namespace, or to specific pods in a namespace. Important In an installer-provisioned infrastructure cluster, do not assign egress IP addresses to the infrastructure node that already hosts the ingress VIP. For more information, see the Red Hat Knowledgebase solution POD from the egress IP enabled namespace cannot access OCP route in an IPI cluster when the egress IP is assigned to the infra node that already hosts the ingress VIP . 23.14.1. Egress IP address architectural design and implementation The OpenShift Container Platform egress IP address functionality allows you to ensure that the traffic from one or more pods in one or more namespaces has a consistent source IP address for services outside the cluster network. For example, you might have a pod that periodically queries a database that is hosted on a server outside of your cluster. To enforce access requirements for the server, a packet filtering device is configured to allow traffic only from specific IP addresses. To ensure that you can reliably allow access to the server from only that specific pod, you can configure a specific egress IP address for the pod that makes the requests to the server. An egress IP address assigned to a namespace is different from an egress router, which is used to send traffic to specific destinations. In some cluster configurations, application pods and ingress router pods run on the same node. If you configure an egress IP address for an application project in this scenario, the IP address is not used when you send a request to a route from the application project. Important Egress IP addresses must not be configured in any Linux network configuration files, such as ifcfg-eth0 . 23.14.1.1. Platform support Support for the egress IP address functionality on various platforms is summarized in the following table: Platform Supported Bare metal Yes VMware vSphere Yes Red Hat OpenStack Platform (RHOSP) Yes Amazon Web Services (AWS) Yes Google Cloud Platform (GCP) Yes Microsoft Azure Yes Important The assignment of egress IP addresses to control plane nodes with the EgressIP feature is not supported on a cluster provisioned on Amazon Web Services (AWS). ( BZ#2039656 ) 23.14.1.2. Public cloud platform considerations For clusters provisioned on public cloud infrastructure, there is a constraint on the absolute number of assignable IP addresses per node. The maximum number of assignable IP addresses per node, or the IP capacity , can be described in the following formula: IP capacity = public cloud default capacity - sum(current IP assignments) While the Egress IPs capability manages the IP address capacity per node, it is important to plan for this constraint in your deployments. For example, for a cluster installed on bare-metal infrastructure with 8 nodes you can configure 150 egress IP addresses. However, if a public cloud provider limits IP address capacity to 10 IP addresses per node, the total number of assignable IP addresses is only 80. To achieve the same IP address capacity in this example cloud provider, you would need to allocate 7 additional nodes. To confirm the IP capacity and subnets for any node in your public cloud environment, you can enter the oc get node <node_name> -o yaml command. 
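If you only want the egress IP configuration details, a minimal sketch that filters the node output down to the relevant annotation with a JSONPath expression might look like the following; the node name is a placeholder, and the exact JSONPath escaping is an assumption rather than a documented command:

USD oc get node <node_name> -o jsonpath='{.metadata.annotations.cloud\.network\.openshift\.io/egress-ipconfig}'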
The cloud.network.openshift.io/egress-ipconfig annotation includes capacity and subnet information for the node. The annotation value is an array with a single object with fields that provide the following information for the primary network interface: interface : Specifies the interface ID on AWS and Azure and the interface name on GCP. ifaddr : Specifies the subnet mask for one or both IP address families. capacity : Specifies the IP address capacity for the node. On AWS, the IP address capacity is provided per IP address family. On Azure and GCP, the IP address capacity includes both IPv4 and IPv6 addresses. Automatic attachment and detachment of egress IP addresses for traffic between nodes are available. This allows for traffic from many pods in namespaces to have a consistent source IP address to locations outside of the cluster. This also supports OpenShift SDN and OVN-Kubernetes, which is the default networking plugin in Red Hat OpenShift Networking in OpenShift Container Platform 4.12. Note The RHOSP egress IP address feature creates a Neutron reservation port called egressip-<IP address> . Using the same RHOSP user as the one used for the OpenShift Container Platform cluster installation, you can assign a floating IP address to this reservation port to have a predictable SNAT address for egress traffic. When an egress IP address on an RHOSP network is moved from one node to another, because of a node failover, for example, the Neutron reservation port is removed and recreated. This means that the floating IP association is lost and you need to manually reassign the floating IP address to the new reservation port. Note When an RHOSP cluster administrator assigns a floating IP to the reservation port, OpenShift Container Platform cannot delete the reservation port. The CloudPrivateIPConfig object cannot perform delete and move operations until an RHOSP cluster administrator unassigns the floating IP from the reservation port. The following examples illustrate the annotation from nodes on several public cloud providers. The annotations are indented for readability. Example cloud.network.openshift.io/egress-ipconfig annotation on AWS cloud.network.openshift.io/egress-ipconfig: [ { "interface":"eni-078d267045138e436", "ifaddr":{"ipv4":"10.0.128.0/18"}, "capacity":{"ipv4":14,"ipv6":15} } ] Example cloud.network.openshift.io/egress-ipconfig annotation on GCP cloud.network.openshift.io/egress-ipconfig: [ { "interface":"nic0", "ifaddr":{"ipv4":"10.0.128.0/18"}, "capacity":{"ip":14} } ] The following sections describe the IP address capacity for supported public cloud environments for use in your capacity calculation. 23.14.1.2.1. Amazon Web Services (AWS) IP address capacity limits On AWS, constraints on IP address assignments depend on the instance type configured. For more information, see IP addresses per network interface per instance type 23.14.1.2.2. Google Cloud Platform (GCP) IP address capacity limits On GCP, the networking model implements additional node IP addresses through IP address aliasing, rather than IP address assignments. However, IP address capacity maps directly to IP aliasing capacity. The following capacity limits exist for IP aliasing assignment: Per node, the maximum number of IP aliases, both IPv4 and IPv6, is 100. Per VPC, the maximum number of IP aliases is unspecified, but OpenShift Container Platform scalability testing reveals the maximum to be approximately 15,000. For more information, see Per instance quotas and Alias IP ranges overview . 23.14.1.2.3. 
Microsoft Azure IP address capacity limits On Azure, the following capacity limits exist for IP address assignment: Per NIC, the maximum number of assignable IP addresses, for both IPv4 and IPv6, is 256. Per virtual network, the maximum number of assigned IP addresses cannot exceed 65,536. For more information, see Networking limits . 23.14.1.3. Assignment of egress IPs to pods To assign one or more egress IPs to a namespace or specific pods in a namespace, the following conditions must be satisfied: At least one node in your cluster must have the k8s.ovn.org/egress-assignable: "" label. An EgressIP object exists that defines one or more egress IP addresses to use as the source IP address for traffic leaving the cluster from pods in a namespace. Important If you create EgressIP objects prior to labeling any nodes in your cluster for egress IP assignment, OpenShift Container Platform might assign every egress IP address to the first node with the k8s.ovn.org/egress-assignable: "" label. To ensure that egress IP addresses are widely distributed across nodes in the cluster, always apply the label to the nodes that you intend to host the egress IP addresses on before creating any EgressIP objects. 23.14.1.4. Assignment of egress IPs to nodes When creating an EgressIP object, the following conditions apply to nodes that are labeled with the k8s.ovn.org/egress-assignable: "" label: An egress IP address is never assigned to more than one node at a time. An egress IP address is equally balanced between available nodes that can host the egress IP address. If the spec.EgressIPs array in an EgressIP object specifies more than one IP address, the following conditions apply: No node will ever host more than one of the specified IP addresses. Traffic is balanced roughly equally between the specified IP addresses for a given namespace. If a node becomes unavailable, any egress IP addresses assigned to it are automatically reassigned, subject to the previously described conditions. When a pod matches the selector for multiple EgressIP objects, there is no guarantee which of the egress IP addresses specified in the EgressIP objects is assigned as the egress IP address for the pod. Additionally, if an EgressIP object specifies multiple egress IP addresses, there is no guarantee which of the egress IP addresses might be used. For example, if a pod matches a selector for an EgressIP object with two egress IP addresses, 10.10.20.1 and 10.10.20.2 , either might be used for each TCP connection or UDP conversation. 23.14.1.5. Architectural diagram of an egress IP address configuration The following diagram depicts an egress IP address configuration. The diagram describes four pods in two different namespaces running on three nodes in a cluster. The nodes are assigned IP addresses from the 192.168.126.0/18 CIDR block on the host network. Both Node 1 and Node 3 are labeled with k8s.ovn.org/egress-assignable: "" and thus available for the assignment of egress IP addresses. The dashed lines in the diagram depict the traffic flow from pod1, pod2, and pod3 traveling through the pod network to egress the cluster from Node 1 and Node 3. When an external service receives traffic from any of the pods selected by the example EgressIP object, the source IP address is either 192.168.126.10 or 192.168.126.102 . The traffic is balanced roughly equally between these two nodes.
The following resources from the diagram are illustrated in detail: Namespace objects The namespaces are defined in the following manifest: Namespace objects apiVersion: v1 kind: Namespace metadata: name: namespace1 labels: env: prod --- apiVersion: v1 kind: Namespace metadata: name: namespace2 labels: env: prod EgressIP object The following EgressIP object describes a configuration that selects all pods in any namespace with the env label set to prod . The egress IP addresses for the selected pods are 192.168.126.10 and 192.168.126.102 . EgressIP object apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egressips-prod spec: egressIPs: - 192.168.126.10 - 192.168.126.102 namespaceSelector: matchLabels: env: prod status: items: - node: node1 egressIP: 192.168.126.10 - node: node3 egressIP: 192.168.126.102 For the configuration in the example, OpenShift Container Platform assigns both egress IP addresses to the available nodes. The status field reflects whether and where the egress IP addresses are assigned. 23.14.2. EgressIP object The following YAML describes the API for the EgressIP object. The scope of the object is cluster-wide; it is not created in a namespace. apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: <name> 1 spec: egressIPs: 2 - <ip_address> namespaceSelector: 3 ... podSelector: 4 ... 1 The name for the EgressIPs object. 2 An array of one or more IP addresses. 3 One or more selectors for the namespaces to associate the egress IP addresses with. 4 Optional: One or more selectors for pods in the specified namespaces to associate egress IP addresses with. Applying these selectors allows for the selection of a subset of pods within a namespace. The following YAML describes the stanza for the namespace selector: Namespace selector stanza namespaceSelector: 1 matchLabels: <label_name>: <label_value> 1 One or more matching rules for namespaces. If more than one match rule is provided, all matching namespaces are selected. The following YAML describes the optional stanza for the pod selector: Pod selector stanza podSelector: 1 matchLabels: <label_name>: <label_value> 1 Optional: One or more matching rules for pods in the namespaces that match the specified namespaceSelector rules. If specified, only pods that match are selected. Others pods in the namespace are not selected. In the following example, the EgressIP object associates the 192.168.126.11 and 192.168.126.102 egress IP addresses with pods that have the app label set to web and are in the namespaces that have the env label set to prod : Example EgressIP object apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group1 spec: egressIPs: - 192.168.126.11 - 192.168.126.102 podSelector: matchLabels: app: web namespaceSelector: matchLabels: env: prod In the following example, the EgressIP object associates the 192.168.127.30 and 192.168.127.40 egress IP addresses with any pods that do not have the environment label set to development : Example EgressIP object apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group2 spec: egressIPs: - 192.168.127.30 - 192.168.127.40 namespaceSelector: matchExpressions: - key: environment operator: NotIn values: - development 23.14.3. The egressIPConfig object As a feature of egress IP, the reachabilityTotalTimeoutSeconds parameter configures the EgressIP node reachability check total timeout in seconds. If the EgressIP node cannot be reached within this timeout, the node is declared down. 
You can set a value for the reachabilityTotalTimeoutSeconds in the configuration file for the egressIPConfig object. Setting a large value might cause the EgressIP implementation to react slowly to node changes. The implementation reacts slowly for EgressIP nodes that have an issue and are unreachable. If you omit the reachabilityTotalTimeoutSeconds parameter from the egressIPConfig object, the platform chooses a reasonable default value, which is subject to change over time. The current default is 1 second. A value of 0 disables the reachability check for the EgressIP node. The following egressIPConfig object describes changing the reachabilityTotalTimeoutSeconds from the default 1 second probes to 5 second probes: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: ovnKubernetesConfig: egressIPConfig: 1 reachabilityTotalTimeoutSeconds: 5 2 gatewayConfig: routingViaHost: false genevePort: 6081 1 The egressIPConfig holds the configurations for the options of the EgressIP object. By changing these configurations, you can extend the EgressIP object. 2 The value for reachabilityTotalTimeoutSeconds accepts integer values from 0 to 60 . A value of 0 disables the reachability check of the egressIP node. Setting a value from 1 to 60 corresponds to the timeout in seconds for a probe to send the reachability check to the node. 23.14.4. Labeling a node to host egress IP addresses You can apply the k8s.ovn.org/egress-assignable="" label to a node in your cluster so that OpenShift Container Platform can assign one or more egress IP addresses to the node. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster as a cluster administrator. Procedure To label a node so that it can host one or more egress IP addresses, enter the following command: USD oc label nodes <node_name> k8s.ovn.org/egress-assignable="" 1 1 The name of the node to label. Tip You can alternatively apply the following YAML to add the label to a node: apiVersion: v1 kind: Node metadata: labels: k8s.ovn.org/egress-assignable: "" name: <node_name> 23.14.5. steps Assigning egress IPs 23.14.6. Additional resources LabelSelector meta/v1 LabelSelectorRequirement meta/v1 23.15. Assigning an egress IP address As a cluster administrator, you can assign an egress IP address for traffic leaving the cluster from a namespace or from specific pods in a namespace. 23.15.1. Assigning an egress IP address to a namespace You can assign one or more egress IP addresses to a namespace or to specific pods in a namespace. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster as a cluster administrator. Configure at least one node to host an egress IP address. Procedure Create an EgressIP object: Create a <egressips_name>.yaml file where <egressips_name> is the name of the object. In the file that you created, define an EgressIP object, as in the following example: apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-project1 spec: egressIPs: - 192.168.127.10 - 192.168.127.11 namespaceSelector: matchLabels: env: qa To create the object, enter the following command. USD oc apply -f <egressips_name>.yaml 1 1 Replace <egressips_name> with the name of the object. Example output egressips.k8s.ovn.org/<egressips_name> created Optional: Store the <egressips_name>.yaml file so that you can make changes later. Add labels to the namespace that requires egress IP addresses. 
To add a label to the namespace of an EgressIP object defined in step 1, run the following command: USD oc label ns <namespace> env=qa 1 1 Replace <namespace> with the namespace that requires egress IP addresses. Verification To show all egress IPs that are in use in your cluster, enter the following command: USD oc get egressip -o yaml Note The command oc get egressip only returns one egress IP address regardless of how many are configured. This is not a bug and is a limitation of Kubernetes. As a workaround, you can pass in the -o yaml or -o json flags to return all egress IPs addresses in use. Example output # ... spec: egressIPs: - 192.168.127.10 - 192.168.127.11 # ... 23.15.2. Additional resources Configuring egress IP addresses 23.16. Considerations for the use of an egress router pod 23.16.1. About an egress router pod The OpenShift Container Platform egress router pod redirects traffic to a specified remote server from a private source IP address that is not used for any other purpose. An egress router pod can send network traffic to servers that are set up to allow access only from specific IP addresses. Note The egress router pod is not intended for every outgoing connection. Creating large numbers of egress router pods can exceed the limits of your network hardware. For example, creating an egress router pod for every project or application could exceed the number of local MAC addresses that the network interface can handle before reverting to filtering MAC addresses in software. Important The egress router image is not compatible with Amazon AWS, Azure Cloud, or any other cloud platform that does not support layer 2 manipulations due to their incompatibility with macvlan traffic. 23.16.1.1. Egress router modes In redirect mode , an egress router pod configures iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be configured to access the service for the egress router rather than connecting directly to the destination IP. You can access the destination service and port from the application pod by using the curl command. For example: USD curl <router_service_IP> <port> Note The egress router CNI plugin supports redirect mode only. This is a difference with the egress router implementation that you can deploy with OpenShift SDN. Unlike the egress router for OpenShift SDN, the egress router CNI plugin does not support HTTP proxy mode or DNS proxy mode. 23.16.1.2. Egress router pod implementation The egress router implementation uses the egress router Container Network Interface (CNI) plugin. The plugin adds a secondary network interface to a pod. An egress router is a pod that has two network interfaces. For example, the pod can have eth0 and net1 network interfaces. The eth0 interface is on the cluster network and the pod continues to use the interface for ordinary cluster-related network traffic. The net1 interface is on a secondary network and has an IP address and gateway for that network. Other pods in the OpenShift Container Platform cluster can access the egress router service and the service enables the pods to access external services. The egress router acts as a bridge between pods and an external system. Traffic that leaves the egress router exits through a node, but the packets have the MAC address of the net1 interface from the egress router pod. 
When you add an egress router custom resource, the Cluster Network Operator creates the following objects: The network attachment definition for the net1 secondary network interface of the pod. A deployment for the egress router. If you delete an egress router custom resource, the Operator deletes the two objects in the preceding list that are associated with the egress router. 23.16.1.3. Deployment considerations An egress router pod adds an additional IP address and MAC address to the primary network interface of the node. As a result, you might need to configure your hypervisor or cloud provider to allow the additional address. Red Hat OpenStack Platform (RHOSP) If you deploy OpenShift Container Platform on RHOSP, you must allow traffic from the IP and MAC addresses of the egress router pod on your OpenStack environment. If you do not allow the traffic, then communication will fail : USD openstack port set --allowed-address \ ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid> Red Hat Virtualization (RHV) If you are using RHV , you must select No Network Filter for the Virtual network interface controller (vNIC). VMware vSphere If you are using VMware vSphere, see the VMware documentation for securing vSphere standard switches . View and change VMware vSphere default settings by selecting the host virtual switch from the vSphere Web Client. Specifically, ensure that the following are enabled: MAC Address Changes Forged Transits Promiscuous Mode Operation 23.16.1.4. Failover configuration To avoid downtime, the Cluster Network Operator deploys the egress router pod as a deployment resource. The deployment name is egress-router-cni-deployment . The pod that corresponds to the deployment has a label of app=egress-router-cni . To create a new service for the deployment, use the oc expose deployment/egress-router-cni-deployment --port <port_number> command or create a file like the following example: apiVersion: v1 kind: Service metadata: name: app-egress spec: ports: - name: tcp-8080 protocol: TCP port: 8080 - name: tcp-8443 protocol: TCP port: 8443 - name: udp-80 protocol: UDP port: 80 type: ClusterIP selector: app: egress-router-cni 23.16.2. Additional resources Deploying an egress router in redirection mode 23.17. Deploying an egress router pod in redirect mode As a cluster administrator, you can deploy an egress router pod to redirect traffic to specified destination IP addresses from a reserved source IP address. The egress router implementation uses the egress router Container Network Interface (CNI) plugin. 23.17.1. Egress router custom resource Define the configuration for an egress router pod in an egress router custom resource. The following YAML describes the fields for the configuration of an egress router in redirect mode: apiVersion: network.operator.openshift.io/v1 kind: EgressRouter metadata: name: <egress_router_name> namespace: <namespace> <.> spec: addresses: [ <.> { ip: "<egress_router>", <.> gateway: "<egress_gateway>" <.> } ] mode: Redirect redirect: { redirectRules: [ <.> { destinationIP: "<egress_destination>", port: <egress_router_port>, targetPort: <target_port>, <.> protocol: <network_protocol> <.> }, ... ], fallbackIP: "<egress_destination>" <.> } <.> Optional: The namespace field specifies the namespace to create the egress router in. If you do not specify a value in the file or on the command line, the default namespace is used. <.> The addresses field specifies the IP addresses to configure on the secondary network interface. 
<.> The ip field specifies the reserved source IP address and netmask from the physical network that the node is on to use with egress router pod. Use CIDR notation to specify the IP address and netmask. <.> The gateway field specifies the IP address of the network gateway. <.> Optional: The redirectRules field specifies a combination of egress destination IP address, egress router port, and protocol. Incoming connections to the egress router on the specified port and protocol are routed to the destination IP address. <.> Optional: The targetPort field specifies the network port on the destination IP address. If this field is not specified, traffic is routed to the same network port that it arrived on. <.> The protocol field supports TCP, UDP, or SCTP. <.> Optional: The fallbackIP field specifies a destination IP address. If you do not specify any redirect rules, the egress router sends all traffic to this fallback IP address. If you specify redirect rules, any connections to network ports that are not defined in the rules are sent by the egress router to this fallback IP address. If you do not specify this field, the egress router rejects connections to network ports that are not defined in the rules. Example egress router specification apiVersion: network.operator.openshift.io/v1 kind: EgressRouter metadata: name: egress-router-redirect spec: networkInterface: { macvlan: { mode: "Bridge" } } addresses: [ { ip: "192.168.12.99/24", gateway: "192.168.12.1" } ] mode: Redirect redirect: { redirectRules: [ { destinationIP: "10.0.0.99", port: 80, protocol: UDP }, { destinationIP: "203.0.113.26", port: 8080, targetPort: 80, protocol: TCP }, { destinationIP: "203.0.113.27", port: 8443, targetPort: 443, protocol: TCP } ] } 23.17.2. Deploying an egress router in redirect mode You can deploy an egress router to redirect traffic from its own reserved source IP address to one or more destination IP addresses. After you add an egress router, the client pods that need to use the reserved source IP address must be modified to connect to the egress router rather than connecting directly to the destination IP. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router definition. To ensure that other pods can find the IP address of the egress router pod, create a service that uses the egress router, as in the following example: apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: web-app protocol: TCP port: 8080 type: ClusterIP selector: app: egress-router-cni <.> <.> Specify the label for the egress router. The value shown is added by the Cluster Network Operator and is not configurable. After you create the service, your pods can connect to the service. The egress router pod redirects traffic to the corresponding port on the destination IP address. The connections originate from the reserved source IP address. Verification To verify that the Cluster Network Operator started the egress router, complete the following procedure: View the network attachment definition that the Operator created for the egress router: USD oc get network-attachment-definition egress-router-cni-nad The name of the network attachment definition is not configurable. Example output NAME AGE egress-router-cni-nad 18m View the deployment for the egress router pod: USD oc get deployment egress-router-cni-deployment The name of the deployment is not configurable. 
Example output NAME READY UP-TO-DATE AVAILABLE AGE egress-router-cni-deployment 1/1 1 1 18m View the status of the egress router pod: USD oc get pods -l app=egress-router-cni Example output NAME READY STATUS RESTARTS AGE egress-router-cni-deployment-575465c75c-qkq6m 1/1 Running 0 18m View the logs and the routing table for the egress router pod. Get the node name for the egress router pod: USD POD_NODENAME=USD(oc get pod -l app=egress-router-cni -o jsonpath="{.items[0].spec.nodeName}") Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug : USD oc debug node/USDPOD_NODENAME Set /host as the root directory within the debug shell. The debug pod mounts the root file system of the host in /host within the pod. By changing the root directory to /host , you can run binaries from the executable paths of the host: # chroot /host From within the chroot environment console, display the egress router logs: # cat /tmp/egress-router-log Example output 2021-04-26T12:27:20Z [debug] Called CNI ADD 2021-04-26T12:27:20Z [debug] Gateway: 192.168.12.1 2021-04-26T12:27:20Z [debug] IP Source Addresses: [192.168.12.99/24] 2021-04-26T12:27:20Z [debug] IP Destinations: [80 UDP 10.0.0.99/30 8080 TCP 203.0.113.26/30 80 8443 TCP 203.0.113.27/30 443] 2021-04-26T12:27:20Z [debug] Created macvlan interface 2021-04-26T12:27:20Z [debug] Renamed macvlan to "net1" 2021-04-26T12:27:20Z [debug] Adding route to gateway 192.168.12.1 on macvlan interface 2021-04-26T12:27:20Z [debug] deleted default route {Ifindex: 3 Dst: <nil> Src: <nil> Gw: 10.128.10.1 Flags: [] Table: 254} 2021-04-26T12:27:20Z [debug] Added new default route with gateway 192.168.12.1 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p UDP --dport 80 -j DNAT --to-destination 10.0.0.99 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p TCP --dport 8080 -j DNAT --to-destination 203.0.113.26:80 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p TCP --dport 8443 -j DNAT --to-destination 203.0.113.27:443 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat -o net1 -j SNAT --to-source 192.168.12.99 The logging file location and logging level are not configurable when you start the egress router by creating an EgressRouter object as described in this procedure. From within the chroot environment console, get the container ID: # crictl ps --name egress-router-cni-pod | awk '{print USD1}' Example output CONTAINER bac9fae69ddb6 Determine the process ID of the container. In this example, the container ID is bac9fae69ddb6 : # crictl inspect -o yaml bac9fae69ddb6 | grep 'pid:' | awk '{print USD2}' Example output 68857 Enter the network namespace of the container: # nsenter -n -t 68857 Display the routing table: # ip route In the following example output, the net1 network interface is the default route. Traffic for the cluster network uses the eth0 network interface. Traffic for the 192.168.12.0/24 network uses the net1 network interface and originates from the reserved source IP address 192.168.12.99 . The pod routes all other traffic to the gateway at IP address 192.168.12.1 . Routing for the service network is not shown. Example output default via 192.168.12.1 dev net1 10.128.10.0/23 dev eth0 proto kernel scope link src 10.128.10.18 192.168.12.0/24 dev net1 proto kernel scope link src 192.168.12.99 192.168.12.1 dev net1 23.18. Enabling multicast for a project 23.18.1. 
About multicast With IP multicast, data is broadcast to many IP addresses simultaneously. Important At this time, multicast is best used for low-bandwidth coordination or service discovery and not a high-bandwidth solution. By default, network policies affect all connections in a namespace. However, multicast is unaffected by network policies. If multicast is enabled in the same namespace as your network policies, it is always allowed, even if there is a deny-all network policy. Cluster administrators should consider the implications to the exemption of multicast from network policies before enabling it. Multicast traffic between OpenShift Container Platform pods is disabled by default. If you are using the OVN-Kubernetes network plugin, you can enable multicast on a per-project basis. 23.18.2. Enabling multicast between pods You can enable multicast between pods for your project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Run the following command to enable multicast for a project. Replace <namespace> with the namespace for the project you want to enable multicast for. USD oc annotate namespace <namespace> \ k8s.ovn.org/multicast-enabled=true Tip You can alternatively apply the following YAML to add the annotation: apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: k8s.ovn.org/multicast-enabled: "true" Verification To verify that multicast is enabled for a project, complete the following procedure: Change your current project to the project that you enabled multicast for. Replace <project> with the project name. USD oc project <project> Create a pod to act as a multicast receiver: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi8 command: ["/bin/sh", "-c"] args: ["dnf -y install socat hostname && sleep inf"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF Create a pod to act as a multicast sender: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi8 command: ["/bin/sh", "-c"] args: ["dnf -y install socat && sleep inf"] EOF In a new terminal window or tab, start the multicast listener. Get the IP address for the Pod: USD POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}') Start the multicast listener by entering the following command: USD oc exec mlistener -i -t -- \ socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname Start the multicast transmitter. Get the pod network IP address range: USD CIDR=USD(oc get Network.config.openshift.io cluster \ -o jsonpath='{.status.clusterNetwork[0].cidr}') To send a multicast message, enter the following command: USD oc exec msender -i -t -- \ /bin/bash -c "echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64" If multicast is working, the command returns the following output: mlistener 23.19. Disabling multicast for a project 23.19.1. Disabling multicast between pods You can disable multicast between pods for your project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. 
23.19. Disabling multicast for a project

23.19.1. Disabling multicast between pods

You can disable multicast between pods for your project.

Prerequisites

Install the OpenShift CLI ( oc ).
You must log in to the cluster with a user that has the cluster-admin role.

Procedure

Disable multicast by running the following command:

$ oc annotate namespace <namespace> \ 1
    k8s.ovn.org/multicast-enabled-

1 The namespace for the project you want to disable multicast for.

Tip

You can alternatively apply the following YAML to delete the annotation:

apiVersion: v1
kind: Namespace
metadata:
  name: <namespace>
  annotations:
    k8s.ovn.org/multicast-enabled: null

23.20. Tracking network flows

As a cluster administrator, you can collect information about pod network flows from your cluster to assist with the following areas:

Monitor ingress and egress traffic on the pod network.
Troubleshoot performance issues.
Gather data for capacity planning and security audits.

When you enable the collection of network flows, only metadata about the traffic is collected. Packet data is not collected, but the protocol, source address, destination address, port numbers, number of bytes, and other packet-level information is collected.

The data is collected in one or more of the following record formats:

NetFlow
sFlow
IPFIX

When you configure the Cluster Network Operator (CNO) with one or more collector IP addresses and port numbers, the Operator configures Open vSwitch (OVS) on each node to send the network flows records to each collector.

You can configure the Operator to send records to more than one type of network flow collector. For example, you can send records to NetFlow collectors and also send records to sFlow collectors. When OVS sends data to the collectors, each type of collector receives identical records. For example, if you configure two NetFlow collectors, OVS on a node sends identical records to the two collectors. If you also configure two sFlow collectors, the two sFlow collectors receive identical records. However, each collector type has a unique record format.

Collecting the network flows data and sending the records to collectors affects performance. Nodes process packets at a slower rate. If the performance impact is too great, you can delete the destinations for collectors to disable collecting network flows data and restore performance.

Note

Enabling network flow collectors might have an impact on the overall performance of the cluster network.

23.20.1. Network object configuration for tracking network flows

The fields for configuring network flows collectors in the Cluster Network Operator (CNO) are shown in the following table:

Table 23.12. Network flows configuration

Field                                        Type     Description
metadata.name                                string   The name of the CNO object. This name is always cluster.
spec.exportNetworkFlows                      object   One or more of netFlow, sFlow, or ipfix.
spec.exportNetworkFlows.netFlow.collectors   array    A list of IP address and network port pairs for up to 10 collectors.
spec.exportNetworkFlows.sFlow.collectors     array    A list of IP address and network port pairs for up to 10 collectors.
spec.exportNetworkFlows.ipfix.collectors     array    A list of IP address and network port pairs for up to 10 collectors.

After applying the following manifest to the CNO, the Operator configures Open vSwitch (OVS) on each node in the cluster to send network flows records to the NetFlow collector that is listening at 192.168.1.99:2056.

Example configuration for tracking network flows

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  exportNetworkFlows:
    netFlow:
      collectors:
        - 192.168.1.99:2056
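Because the CNO accepts more than one collector type in the same spec, you can export to NetFlow and sFlow collectors at once. The following is a sketch only, not taken from the example above: the sFlow address 192.168.1.100:6343 is a hypothetical value, and the command assumes the Network operator object is named cluster, as shown throughout this section:

# Merge-patch the CNO so that OVS exports identical flow records to both a
# NetFlow collector and an sFlow collector; addresses and ports are examples only.
oc patch network.operator cluster --type merge \
  -p '{"spec":{"exportNetworkFlows":{"netFlow":{"collectors":["192.168.1.99:2056"]},"sFlow":{"collectors":["192.168.1.100:6343"]}}}}'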
23.20.2. Adding destinations for network flows collectors

As a cluster administrator, you can configure the Cluster Network Operator (CNO) to send network flows metadata about the pod network to a network flows collector.

Prerequisites

You installed the OpenShift CLI ( oc ).
You are logged in to the cluster with a user with cluster-admin privileges.
You have a network flows collector and know the IP address and port that it listens on.

Procedure

Create a patch file that specifies the network flows collector type and the IP address and port information of the collectors:

spec:
  exportNetworkFlows:
    netFlow:
      collectors:
        - 192.168.1.99:2056

Configure the CNO with the network flows collectors:

$ oc patch network.operator cluster --type merge -p "$(cat <file_name>.yaml)"

Example output

network.operator.openshift.io/cluster patched

Verification

Verification is not typically necessary. You can run the following command to confirm that Open vSwitch (OVS) on each node is configured to send network flows records to one or more collectors.

View the Operator configuration to confirm that the exportNetworkFlows field is configured:

$ oc get network.operator cluster -o jsonpath="{.spec.exportNetworkFlows}"

Example output

{"netFlow":{"collectors":["192.168.1.99:2056"]}}

View the network flows configuration in OVS from each node:

$ for pod in $(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}');
  do
    echo;
    echo $pod;
    oc -n openshift-ovn-kubernetes exec -c ovnkube-node $pod \
      -- bash -c 'for type in ipfix sflow netflow ; do ovs-vsctl find $type ; done';
  done

Example output

ovnkube-node-xrn4p
_uuid               : a4d2aaca-5023-4f3d-9400-7275f92611f9
active_timeout      : 60
add_id_to_interface : false
engine_id           : []
engine_type         : []
external_ids        : {}
targets             : ["192.168.1.99:2056"]

ovnkube-node-z4vq9
_uuid               : 61d02fdb-9228-4993-8ff5-b27f01a29bd6
active_timeout      : 60
add_id_to_interface : false
engine_id           : []
engine_type         : []
external_ids        : {}
targets             : ["192.168.1.99:2056"]

...

23.20.3. Deleting all destinations for network flows collectors

As a cluster administrator, you can configure the Cluster Network Operator (CNO) to stop sending network flows metadata to a network flows collector.

Prerequisites

You installed the OpenShift CLI ( oc ).
You are logged in to the cluster with a user with cluster-admin privileges.

Procedure

Remove all network flows collectors:

$ oc patch network.operator cluster --type='json' \
    -p='[{"op":"remove", "path":"/spec/exportNetworkFlows"}]'

Example output

network.operator.openshift.io/cluster patched
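To confirm that the collectors were removed, you can reuse the same query from the verification steps earlier in this section. A minimal sketch, assuming you run it after the remove patch has been applied:

# Empty output indicates that the exportNetworkFlows field is no longer set,
# so OVS on the nodes stops exporting flow records.
oc get network.operator cluster -o jsonpath="{.spec.exportNetworkFlows}"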
23.20.4. Additional resources

Network [operator.openshift.io/v1]

23.21. Configuring hybrid networking

As a cluster administrator, you can configure the Red Hat OpenShift Networking OVN-Kubernetes network plugin to allow Linux and Windows nodes to host Linux and Windows workloads, respectively.

23.21.1. Configuring hybrid networking with OVN-Kubernetes

You can configure your cluster to use hybrid networking with OVN-Kubernetes. This allows a hybrid cluster that supports different node networking configurations. For example, this is necessary to run both Linux and Windows nodes in a cluster.

Important

You must configure hybrid networking with OVN-Kubernetes during the installation of your cluster. You cannot switch to hybrid networking after the installation process.

Prerequisites

You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information.

Procedure

Change to the directory that contains the installation program and create the manifests:

$ ./openshift-install create manifests --dir <installation_directory>

where:

<installation_directory>
Specifies the name of the directory that contains the install-config.yaml file for your cluster.

Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

$ cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
EOF

where:

<installation_directory>
Specifies the directory name that contains the manifests/ directory for your cluster.

Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, such as in the following example:

Specify a hybrid networking configuration

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork: 1
        - cidr: 10.132.0.0/14
          hostPrefix: 23
        hybridOverlayVXLANPort: 9898 2

1 Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR must not overlap with the clusterNetwork CIDR.
2 Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken.

Note

Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port.

Save the cluster-network-03-config.yml file and quit the text editor.

Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.

Complete any further installation configurations, and then create your cluster. Hybrid networking is enabled when the installation process is finished.

23.21.2. Additional resources

Understanding Windows container workloads
Enabling Windows container workloads
Installing a cluster on AWS with network customizations
Installing a cluster on Azure with network customizations
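After the cluster is installed, one way to confirm that the hybrid overlay settings from the procedure above were picked up is to read them back from the Network operator object. This is a sketch, not part of the documented procedure; the field path follows the example manifest shown earlier:

# Print the hybrid overlay configuration in use by the Cluster Network Operator;
# expect the hybridClusterNetwork CIDR (and VXLAN port, if set) from your manifest.
oc get network.operator cluster -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.hybridOverlayConfig}{"\n"}'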
"I1006 16:09:50.985852 60651 helper_linux.go:73] Found default gateway interface br-ex 192.168.127.1 I1006 16:09:50.985923 60651 helper_linux.go:73] Found default gateway interface ens4 fe80::5054:ff:febe:bcd4 F1006 16:09:50.985939 60651 ovnkube.go:130] multiple gateway interfaces detected: br-ex ens4",
"I0512 19:07:17.589083 108432 helper_linux.go:74] Found default gateway interface br-ex 192.168.123.1 F0512 19:07:17.589141 108432 ovnkube.go:133] failed to get default gateway interface",
"oc get all,ep,cm -n openshift-ovn-kubernetes",
"NAME READY STATUS RESTARTS AGE pod/ovnkube-master-9g7zt 6/6 Running 1 (48m ago) 57m pod/ovnkube-master-lqs4v 6/6 Running 0 57m pod/ovnkube-master-vxhtq 6/6 Running 0 57m pod/ovnkube-node-9k9kc 5/5 Running 0 57m pod/ovnkube-node-jg52r 5/5 Running 0 51m pod/ovnkube-node-k8wf7 5/5 Running 0 57m pod/ovnkube-node-tlwk6 5/5 Running 0 47m pod/ovnkube-node-xsvnk 5/5 Running 0 57m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/ovn-kubernetes-master ClusterIP None <none> 9102/TCP 57m service/ovn-kubernetes-node ClusterIP None <none> 9103/TCP,9105/TCP 57m service/ovnkube-db ClusterIP None <none> 9641/TCP,9642/TCP 57m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/ovnkube-master 3 3 3 3 3 beta.kubernetes.io/os=linux,node-role.kubernetes.io/master= 57m daemonset.apps/ovnkube-node 5 5 5 5 5 beta.kubernetes.io/os=linux 57m NAME ENDPOINTS AGE endpoints/ovn-kubernetes-master 10.0.132.11:9102,10.0.151.18:9102,10.0.192.45:9102 57m endpoints/ovn-kubernetes-node 10.0.132.11:9105,10.0.143.72:9105,10.0.151.18:9105 + 7 more... 57m endpoints/ovnkube-db 10.0.132.11:9642,10.0.151.18:9642,10.0.192.45:9642 + 3 more... 57m NAME DATA AGE configmap/control-plane-status 1 55m configmap/kube-root-ca.crt 1 57m configmap/openshift-service-ca.crt 1 57m configmap/ovn-ca 1 57m configmap/ovn-kubernetes-master 0 55m configmap/ovnkube-config 1 57m configmap/signer-ca 1 57m",
"oc get pods ovnkube-master-9g7zt -o jsonpath='{.spec.containers[*].name}' -n openshift-ovn-kubernetes",
"northd nbdb kube-rbac-proxy sbdb ovnkube-master ovn-dbchecker",
"oc get pods ovnkube-node-jg52r -o jsonpath='{.spec.containers[*].name}' -n openshift-ovn-kubernetes",
"ovn-controller ovn-acl-logging kube-rbac-proxy kube-rbac-proxy-ovn-metrics ovnkube-node",
"oc get po -n openshift-ovn-kubernetes",
"NAME READY STATUS RESTARTS AGE ovnkube-master-7j97q 6/6 Running 2 (148m ago) 149m ovnkube-master-gt4ms 6/6 Running 1 (140m ago) 147m ovnkube-master-mk6p6 6/6 Running 0 148m ovnkube-node-8qvtr 5/5 Running 0 149m ovnkube-node-fqdc9 5/5 Running 0 149m ovnkube-node-tlfwv 5/5 Running 0 149m ovnkube-node-wlwkn 5/5 Running 0 142m",
"oc exec -n openshift-ovn-kubernetes ovnkube-master-7j97q -- /usr/bin/ovn-appctl -t /var/run/ovn/ovnnb_db.ctl --timeout=3 cluster/status OVN_Northbound",
"Defaulted container \"northd\" out of: northd, nbdb, kube-rbac-proxy, sbdb, ovnkube-master, ovn-dbchecker 1c57 Name: OVN_Northbound Cluster ID: c48a (c48aa5c0-a704-4c77-a066-24fe99d9b338) Server ID: 1c57 (1c57b6fc-2849-49b7-8679-fbf18bafe339) Address: ssl:10.0.147.219:9643 Status: cluster member Role: follower 1 Term: 5 Leader: 2b4f 2 Vote: unknown Election timer: 10000 Log: [2, 3018] Entries not yet committed: 0 Entries not yet applied: 0 Connections: ->0000 ->0000 <-8844 <-2b4f Disconnections: 0 Servers: 1c57 (1c57 at ssl:10.0.147.219:9643) (self) 8844 (8844 at ssl:10.0.163.212:9643) last msg 8928047 ms ago 2b4f (2b4f at ssl:10.0.242.240:9643) last msg 620 ms ago 3",
"oc get po -o wide -n openshift-ovn-kubernetes | grep 10.0.242.240 | grep -v ovnkube-node",
"ovnkube-master-gt4ms 6/6 Running 1 (143m ago) 150m 10.0.242.240 ip-10-0-242-240.ec2.internal <none> <none>",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms -c northd -- ovn-nbctl show",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-master-mk6p6 -c northd ovn-nbctl --help",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms -c northd -- ovn-nbctl lr-list",
"f971f1f3-5112-402f-9d1e-48f1d091ff04 (GR_ip-10-0-145-205.ec2.internal) 69c992d8-a4cf-429e-81a3-5361209ffe44 (GR_ip-10-0-147-219.ec2.internal) 7d164271-af9e-4283-b84a-48f2a44851cd (GR_ip-10-0-163-212.ec2.internal) 111052e3-c395-408b-97b2-8dd0a20a29a5 (GR_ip-10-0-165-9.ec2.internal) ed50ce33-df5d-48e8-8862-2df6a59169a0 (GR_ip-10-0-209-170.ec2.internal) f44e2a96-8d1e-4a4d-abae-ed8728ac6851 (GR_ip-10-0-242-240.ec2.internal) ef3d0057-e557-4b1a-b3c6-fcc3463790b0 (ovn_cluster_router)",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms -c northd -- ovn-nbctl ls-list",
"82808c5c-b3bc-414a-bb59-8fec4b07eb14 (ext_ip-10-0-145-205.ec2.internal) 3d22444f-0272-4c51-afc6-de9e03db3291 (ext_ip-10-0-147-219.ec2.internal) bf73b9df-59ab-4c58-a456-ce8205b34ac5 (ext_ip-10-0-163-212.ec2.internal) bee1e8d0-ec87-45eb-b98b-63f9ec213e5e (ext_ip-10-0-165-9.ec2.internal) 812f08f2-6476-4abf-9a78-635f8516f95e (ext_ip-10-0-209-170.ec2.internal) f65e710b-32f9-482b-8eab-8d96a44799c1 (ext_ip-10-0-242-240.ec2.internal) 84dad700-afb8-4129-86f9-923a1ddeace9 (ip-10-0-145-205.ec2.internal) 1b7b448b-e36c-4ca3-9f38-4a2cf6814bfd (ip-10-0-147-219.ec2.internal) d92d1f56-2606-4f23-8b6a-4396a78951de (ip-10-0-163-212.ec2.internal) 6864a6b2-de15-4de3-92d8-f95014b6f28f (ip-10-0-165-9.ec2.internal) c26bf618-4d7e-4afd-804f-1a2cbc96ec6d (ip-10-0-209-170.ec2.internal) ab9a4526-44ed-4f82-ae1c-e20da04947d9 (ip-10-0-242-240.ec2.internal) a8588aba-21da-4276-ba0f-9d68e88911f0 (join)",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms -c northd -- ovn-nbctl lb-list",
"UUID LB PROTO VIP IPs f0fb50f9-4968-4b55-908c-616bae4db0a2 Service_default/ tcp 172.30.0.1:443 10.0.147.219:6443,10.0.163.212:6443,169.254.169.2:6443 0dc42012-4f5b-432e-ae01-2cc4bfe81b00 Service_default/ tcp 172.30.0.1:443 10.0.147.219:6443,169.254.169.2:6443,10.0.242.240:6443 f7fff5d5-5eff-4a40-98b1-3a4ba8f7f69c Service_default/ tcp 172.30.0.1:443 169.254.169.2:6443,10.0.163.212:6443,10.0.242.240:6443 12fe57a0-50a4-4a1b-ac10-5f288badee07 Service_default/ tcp 172.30.0.1:443 10.0.147.219:6443,10.0.163.212:6443,10.0.242.240:6443 3f137fbf-0b78-4875-ba44-fbf89f254cf7 Service_openshif tcp 172.30.23.153:443 10.130.0.14:8443 174199fe-0562-4141-b410-12094db922a7 Service_openshif tcp 172.30.69.51:50051 10.130.0.84:50051 5ee2d4bd-c9e2-4d16-a6df-f54cd17c9ac3 Service_openshif tcp 172.30.143.87:9001 10.0.145.205:9001,10.0.147.219:9001,10.0.163.212:9001,10.0.165.9:9001,10.0.209.170:9001,10.0.242.240:9001 a056ae3d-83f8-45bc-9c80-ef89bce7b162 Service_openshif tcp 172.30.164.74:443 10.0.147.219:6443,10.0.163.212:6443,10.0.242.240:6443 bac51f3d-9a6f-4f5e-ac02-28fd343a332a Service_openshif tcp 172.30.0.10:53 10.131.0.6:5353 tcp 172.30.0.10:9154 10.131.0.6:9154 48105bbc-51d7-4178-b975-417433f9c20a Service_openshif tcp 172.30.26.159:2379 10.0.147.219:2379,169.254.169.2:2379,10.0.242.240:2379 tcp 172.30.26.159:9979 10.0.147.219:9979,169.254.169.2:9979,10.0.242.240:9979 7de2b8fc-342a-415f-ac13-1a493f4e39c0 Service_openshif tcp 172.30.53.219:443 10.128.0.7:8443 tcp 172.30.53.219:9192 10.128.0.7:9192 2cef36bc-d720-4afb-8d95-9350eff1d27a Service_openshif tcp 172.30.81.66:443 10.128.0.23:8443 365cb6fb-e15e-45a4-a55b-21868b3cf513 Service_openshif tcp 172.30.96.51:50051 10.130.0.19:50051 41691cbb-ec55-4cdb-8431-afce679c5e8d Service_openshif tcp 172.30.98.218:9099 169.254.169.2:9099 82df10ba-8143-400b-977a-8f5f416a4541 Service_openshif tcp 172.30.26.159:2379 10.0.147.219:2379,10.0.163.212:2379,169.254.169.2:2379 tcp 172.30.26.159:9979 10.0.147.219:9979,10.0.163.212:9979,169.254.169.2:9979 debe7f3a-39a8-490e-bc0a-ebbfafdffb16 Service_openshif tcp 172.30.23.244:443 10.128.0.48:8443,10.129.0.27:8443,10.130.0.45:8443 8a749239-02d9-4dc2-8737-716528e0da7b Service_openshif tcp 172.30.124.255:8443 10.128.0.14:8443 880c7c78-c790-403d-a3cb-9f06592717a3 Service_openshif tcp 172.30.0.10:53 10.130.0.20:5353 tcp 172.30.0.10:9154 10.130.0.20:9154 d2f39078-6751-4311-a161-815bbaf7f9c7 Service_openshif tcp 172.30.26.159:2379 169.254.169.2:2379,10.0.163.212:2379,10.0.242.240:2379 tcp 172.30.26.159:9979 169.254.169.2:9979,10.0.163.212:9979,10.0.242.240:9979 30948278-602b-455c-934a-28e64c46de12 Service_openshif tcp 172.30.157.35:9443 10.130.0.43:9443 2cc7e376-7c02-4a82-89e8-dfa1e23fb003 Service_openshif tcp 172.30.159.212:17698 10.128.0.48:17698,10.129.0.27:17698,10.130.0.45:17698 e7d22d35-61c2-40c2-bc30-265cff8ed18d Service_openshif tcp 172.30.143.87:9001 10.0.145.205:9001,10.0.147.219:9001,10.0.163.212:9001,10.0.165.9:9001,10.0.209.170:9001,169.254.169.2:9001 75164e75-e0c5-40fb-9636-bfdbf4223a02 Service_openshif tcp 172.30.150.68:1936 10.129.4.8:1936,10.131.0.10:1936 tcp 172.30.150.68:443 10.129.4.8:443,10.131.0.10:443 tcp 172.30.150.68:80 10.129.4.8:80,10.131.0.10:80 7bc4ee74-dccf-47e9-9149-b011f09aff39 Service_openshif tcp 172.30.164.74:443 10.0.147.219:6443,10.0.163.212:6443,169.254.169.2:6443 0db59e74-1cc6-470c-bf44-57c520e0aa8f Service_openshif tcp 10.0.163.212:31460 tcp 10.0.163.212:32361 c300e134-018c-49af-9f84-9deb1d0715f8 Service_openshif tcp 172.30.42.244:50051 10.130.0.47:50051 5e352773-429b-4881-afb3-a13b7ba8b081 
Service_openshif tcp 172.30.244.66:443 10.129.0.8:8443,10.130.0.8:8443 54b82d32-1939-4465-a87d-f26321442a7a Service_openshif tcp 172.30.12.9:8443 10.128.0.35:8443",
"oc get po -n openshift-ovn-kubernetes",
"NAME READY STATUS RESTARTS AGE ovnkube-master-7j97q 6/6 Running 2 (134m ago) 135m ovnkube-master-gt4ms 6/6 Running 1 (126m ago) 133m ovnkube-master-mk6p6 6/6 Running 0 134m ovnkube-node-8qvtr 5/5 Running 0 135m ovnkube-node-bqztb 5/5 Running 0 117m ovnkube-node-fqdc9 5/5 Running 0 135m ovnkube-node-tlfwv 5/5 Running 0 135m ovnkube-node-wlwkn 5/5 Running 0 128m",
"oc exec -n openshift-ovn-kubernetes ovnkube-master-7j97q -- /usr/bin/ovn-appctl -t /var/run/ovn/ovnsb_db.ctl --timeout=3 cluster/status OVN_Southbound",
"Defaulted container \"northd\" out of: northd, nbdb, kube-rbac-proxy, sbdb, ovnkube-master, ovn-dbchecker 1930 Name: OVN_Southbound Cluster ID: f772 (f77273c0-7986-42dd-bd3c-a9f18e25701f) Server ID: 1930 (1930f4b7-314b-406f-9dcb-b81fe2729ae1) Address: ssl:10.0.147.219:9644 Status: cluster member Role: follower 1 Term: 3 Leader: 7081 2 Vote: unknown Election timer: 16000 Log: [2, 2423] Entries not yet committed: 0 Entries not yet applied: 0 Connections: ->0000 ->7145 <-7081 <-7145 Disconnections: 0 Servers: 7081 (7081 at ssl:10.0.163.212:9644) last msg 59 ms ago 3 1930 (1930 at ssl:10.0.147.219:9644) (self) 7145 (7145 at ssl:10.0.242.240:9644) last msg 7871735 ms ago",
"oc get po -o wide -n openshift-ovn-kubernetes | grep 10.0.163.212 | grep -v ovnkube-node",
"ovnkube-master-mk6p6 6/6 Running 0 136m 10.0.163.212 ip-10-0-163-212.ec2.internal <none> <none>",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-master-mk6p6 -c northd -- ovn-sbctl show",
"Chassis \"8ca57b28-9834-45f0-99b0-96486c22e1be\" hostname: ip-10-0-156-16.ec2.internal Encap geneve ip: \"10.0.156.16\" options: {csum=\"true\"} Port_Binding k8s-ip-10-0-156-16.ec2.internal Port_Binding etor-GR_ip-10-0-156-16.ec2.internal Port_Binding jtor-GR_ip-10-0-156-16.ec2.internal Port_Binding openshift-ingress-canary_ingress-canary-hsblx Port_Binding rtoj-GR_ip-10-0-156-16.ec2.internal Port_Binding openshift-monitoring_prometheus-adapter-658fc5967-9l46x Port_Binding rtoe-GR_ip-10-0-156-16.ec2.internal Port_Binding openshift-multus_network-metrics-daemon-77nvz Port_Binding openshift-ingress_router-default-64fd8c67c7-df598 Port_Binding openshift-dns_dns-default-ttpcq Port_Binding openshift-monitoring_alertmanager-main-0 Port_Binding openshift-e2e-loki_loki-promtail-g2pbh Port_Binding openshift-network-diagnostics_network-check-target-m6tn4 Port_Binding openshift-monitoring_thanos-querier-75b5cf8dcb-qf8qj Port_Binding cr-rtos-ip-10-0-156-16.ec2.internal Port_Binding openshift-image-registry_image-registry-7b7bc44566-mp9b8",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-master-mk6p6 -c northd -- ovn-sbctl --help",
"git clone [email protected]:openshift/network-tools.git",
"cd network-tools",
"./debug-scripts/network-tools -h",
"./debug-scripts/network-tools ovn-db-run-command ovn-nbctl lr-list",
"Leader pod is ovnkube-master-vslqm 5351ddd1-f181-4e77-afc6-b48b0a9df953 (GR_helix13.lab.eng.tlv2.redhat.com) ccf9349e-1948-4df8-954e-39fb0c2d4d06 (GR_helix14.lab.eng.tlv2.redhat.com) e426b918-75a8-4220-9e76-20b7758f92b7 (GR_hlxcl7-master-0.hlxcl7.lab.eng.tlv2.redhat.com) dded77c8-0cc3-4b99-8420-56cd2ae6a840 (GR_hlxcl7-master-1.hlxcl7.lab.eng.tlv2.redhat.com) 4f6747e6-e7ba-4e0c-8dcd-94c8efa51798 (GR_hlxcl7-master-2.hlxcl7.lab.eng.tlv2.redhat.com) 52232654-336e-4952-98b9-0b8601e370b4 (ovn_cluster_router)",
"./debug-scripts/network-tools ovn-db-run-command ovn-sbctl find Port_Binding type=localnet",
"Leader pod is ovnkube-master-vslqm _uuid : 3de79191-cca8-4c28-be5a-a228f0f9ebfc additional_chassis : [] additional_encap : [] chassis : [] datapath : 3f1a4928-7ff5-471f-9092-fe5f5c67d15c encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : br-ex_helix13.lab.eng.tlv2.redhat.com mac : [unknown] nat_addresses : [] options : {network_name=physnet} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 2 type : localnet up : false virtual_parent : [] _uuid : dbe21daf-9594-4849-b8f0-5efbfa09a455 additional_chassis : [] additional_encap : [] chassis : [] datapath : db2a6067-fe7c-4d11-95a7-ff2321329e11 encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : br-ex_hlxcl7-master-2.hlxcl7.lab.eng.tlv2.redhat.com mac : [unknown] nat_addresses : [] options : {network_name=physnet} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 2 type : localnet up : false virtual_parent : [] [...]",
"./debug-scripts/network-tools ovn-db-run-command ovn-sbctl find Port_Binding type=l3gateway",
"Leader pod is ovnkube-master-vslqm _uuid : 9314dc80-39e1-4af7-9cc0-ae8a9708ed59 additional_chassis : [] additional_encap : [] chassis : 336a923d-99e8-4e71-89a6-12564fde5760 datapath : db2a6067-fe7c-4d11-95a7-ff2321329e11 encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : etor-GR_hlxcl7-master-2.hlxcl7.lab.eng.tlv2.redhat.com mac : [\"52:54:00:3e:95:d3\"] nat_addresses : [\"52:54:00:3e:95:d3 10.46.56.77\"] options : {l3gateway-chassis=\"7eb1f1c3-87c2-4f68-8e89-60f5ca810971\", peer=rtoe-GR_hlxcl7-master-2.hlxcl7.lab.eng.tlv2.redhat.com} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 1 type : l3gateway up : true virtual_parent : [] _uuid : ad7eb303-b411-4e9f-8d36-d07f1f268e27 additional_chassis : [] additional_encap : [] chassis : f41453b8-29c5-4f39-b86b-e82cf344bce4 datapath : 082e7a60-d9c7-464b-b6ec-117d3426645a encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : etor-GR_helix14.lab.eng.tlv2.redhat.com mac : [\"34:48:ed:f3:e2:2c\"] nat_addresses : [\"34:48:ed:f3:e2:2c 10.46.56.14\"] options : {l3gateway-chassis=\"2e8abe3a-cb94-4593-9037-f5f9596325e2\", peer=rtoe-GR_helix14.lab.eng.tlv2.redhat.com} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 1 type : l3gateway up : true virtual_parent : [] [...]",
"./debug-scripts/network-tools ovn-db-run-command ovn-sbctl find Port_Binding type=patch",
"Leader pod is ovnkube-master-vslqm _uuid : c48b1380-ff26-4965-a644-6bd5b5946c61 additional_chassis : [] additional_encap : [] chassis : [] datapath : 72734d65-fae1-4bd9-a1ee-1bf4e085a060 encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : jtor-ovn_cluster_router mac : [router] nat_addresses : [] options : {peer=rtoj-ovn_cluster_router} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 4 type : patch up : false virtual_parent : [] _uuid : 5df51302-f3cd-415b-a059-ac24389938f7 additional_chassis : [] additional_encap : [] chassis : [] datapath : 0551c90f-e891-4909-8e9e-acc7909e06d0 encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : rtos-hlxcl7-master-1.hlxcl7.lab.eng.tlv2.redhat.com mac : [\"0a:58:0a:82:00:01 10.130.0.1/23\"] nat_addresses : [] options : {chassis-redirect-port=cr-rtos-hlxcl7-master-1.hlxcl7.lab.eng.tlv2.redhat.com, peer=stor-hlxcl7-master-1.hlxcl7.lab.eng.tlv2.redhat.com} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 4 type : patch up : false virtual_parent : [] [...]",
"oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-master -o json | jq '.items[0].spec.containers[] | .name,.readinessProbe'",
"oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-master -o json | jq '.items[0].spec.containers[] | .name,.readinessProbe'",
"oc get events -n openshift-ovn-kubernetes",
"oc describe pod ovnkube-master-tp2z8 -n openshift-ovn-kubernetes",
"oc get co/network -o json | jq '.status.conditions[]'",
"for p in USD(oc get pods --selector app=ovnkube-master -n openshift-ovn-kubernetes -o jsonpath='{range.items[*]}{\" \"}{.metadata.name}'); do echo === USDp ===; get pods -n openshift-ovn-kubernetes USDp -o json | jq '.status.containerStatuses[] | .name, .ready'; done",
"ALERT_MANAGER=USD(oc get route alertmanager-main -n openshift-monitoring -o jsonpath='{@.spec.host}')",
"curl -s -k -H \"Authorization: Bearer USD(oc create token prometheus-k8s -n openshift-monitoring)\" https://USDALERT_MANAGER/api/v1/alerts | jq '.data[] | \"\\(.labels.severity) \\(.labels.alertname) \\(.labels.pod) \\(.labels.container) \\(.labels.endpoint) \\(.labels.instance)\"'",
"oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -s 'http://localhost:9090/api/v1/rules' | jq '.data.groups[].rules[] | select(((.name|contains(\"ovn\")) or (.name|contains(\"OVN\")) or (.name|contains(\"Ovn\")) or (.name|contains(\"North\")) or (.name|contains(\"South\"))) and .type==\"alerting\")'",
"oc logs -f <pod_name> -c <container_name> -n <namespace>",
"oc logs ovnkube-master-7h4q7 -n openshift-ovn-kubernetes",
"oc logs -f ovnkube-master-7h4q7 -n openshift-ovn-kubernetes -c ovn-dbchecker",
"for p in USD(oc get pods --selector app=ovnkube-master -n openshift-ovn-kubernetes -o jsonpath='{range.items[*]}{\" \"}{.metadata.name}'); do echo === USDp ===; for container in USD(oc get pods -n openshift-ovn-kubernetes USDp -o json | jq -r '.status.containerStatuses[] | .name');do echo ---USDcontainer---; logs -c USDcontainer USDp -n openshift-ovn-kubernetes --tail=5; done; done",
"oc logs -l app=ovnkube-master -n openshift-ovn-kubernetes --all-containers --tail 5",
"oc get po -o wide -n openshift-ovn-kubernetes",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ovnkube-master-84nc9 6/6 Running 0 50m 10.0.134.156 ip-10-0-134-156.ec2.internal <none> <none> ovnkube-master-gmlqv 6/6 Running 0 50m 10.0.209.180 ip-10-0-209-180.ec2.internal <none> <none> ovnkube-master-nhts2 6/6 Running 1 (48m ago) 50m 10.0.147.31 ip-10-0-147-31.ec2.internal <none> <none> ovnkube-node-2cbh8 5/5 Running 0 43m 10.0.217.114 ip-10-0-217-114.ec2.internal <none> <none> ovnkube-node-6fvzl 5/5 Running 0 50m 10.0.147.31 ip-10-0-147-31.ec2.internal <none> <none> ovnkube-node-f4lzz 5/5 Running 0 24m 10.0.146.76 ip-10-0-146-76.ec2.internal <none> <none> ovnkube-node-jf67d 5/5 Running 0 50m 10.0.209.180 ip-10-0-209-180.ec2.internal <none> <none> ovnkube-node-np9mf 5/5 Running 0 40m 10.0.165.191 ip-10-0-165-191.ec2.internal <none> <none> ovnkube-node-qjldg 5/5 Running 0 50m 10.0.134.156 ip-10-0-134-156.ec2.internal <none> <none>",
"kind: ConfigMap apiVersion: v1 metadata: name: env-overrides namespace: openshift-ovn-kubernetes data: ip-10-0-217-114.ec2.internal: | 1 # This sets the log level for the ovn-kubernetes node process: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for ovn-controller: OVN_LOG_LEVEL=dbg ip-10-0-209-180.ec2.internal: | # This sets the log level for the ovn-kubernetes node process: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for ovn-controller: OVN_LOG_LEVEL=dbg _master: | 2 # This sets the log level for the ovn-kubernetes master process as well as the ovn-dbchecker: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for northd, nbdb and sbdb on all masters: OVN_LOG_LEVEL=dbg",
"oc apply -n openshift-ovn-kubernetes -f env-overrides.yaml",
"configmap/env-overrides.yaml created",
"oc delete pod -n openshift-ovn-kubernetes --field-selector spec.nodeName=ip-10-0-217-114.ec2.internal -l app=ovnkube-node",
"oc delete pod -n openshift-ovn-kubernetes --field-selector spec.nodeName=ip-10-0-209-180.ec2.internal -l app=ovnkube-node",
"oc delete pod -n openshift-ovn-kubernetes -l app=ovnkube-master",
"oc get podnetworkconnectivitychecks -n openshift-network-diagnostics",
"oc get podnetworkconnectivitychecks -n openshift-network-diagnostics -o json | jq '.items[]| .spec.targetEndpoint,.status.successes[0]'",
"oc get podnetworkconnectivitychecks -n openshift-network-diagnostics -o json | jq '.items[]| .spec.targetEndpoint,.status.failures[0]'",
"oc get podnetworkconnectivitychecks -n openshift-network-diagnostics -o json | jq '.items[]| .spec.targetEndpoint,.status.outages[0]'",
"oc exec prometheus-k8s-0 -n openshift-monitoring -- promtool query instant http://localhost:9090 '{component=\"openshift-network-diagnostics\"}'",
"oc exec prometheus-k8s-0 -n openshift-monitoring -- promtool query instant http://localhost:9090 '{component=\"openshift-network-diagnostics\"}'",
"POD=USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-master -o name | head -1 | awk -F '/' '{print USDNF}')",
"oc cp -n openshift-ovn-kubernetes USDPOD:/usr/bin/ovnkube-trace ovnkube-trace",
"chmod +x ovnkube-trace",
"./ovnkube-trace -help",
"I0111 15:05:27.973305 204872 ovs.go:90] Maximum command line arguments set to: 191102 Usage of ./ovnkube-trace: -dst string dest: destination pod name -dst-ip string destination IP address (meant for tests to external targets) -dst-namespace string k8s namespace of dest pod (default \"default\") -dst-port string dst-port: destination port (default \"80\") -kubeconfig string absolute path to the kubeconfig file -loglevel string loglevel: klog level (default \"0\") -ovn-config-namespace string namespace used by ovn-config itself -service string service: destination service name -skip-detrace skip ovn-detrace command -src string src: source pod name -src-namespace string k8s namespace of source pod (default \"default\") -tcp use tcp transport protocol -udp use udp transport protocol",
"oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80",
"get pods -n openshift-dns",
"NAME READY STATUS RESTARTS AGE dns-default-467qw 2/2 Running 0 49m dns-default-6prvx 2/2 Running 0 53m dns-default-fkqr8 2/2 Running 0 53m dns-default-qv2rg 2/2 Running 0 49m dns-default-s29vr 2/2 Running 0 49m dns-default-vdsbn 2/2 Running 0 53m node-resolver-6thtt 1/1 Running 0 53m node-resolver-7ksdn 1/1 Running 0 49m node-resolver-8sthh 1/1 Running 0 53m node-resolver-c5ksw 1/1 Running 0 50m node-resolver-gbvdp 1/1 Running 0 53m node-resolver-sxhkd 1/1 Running 0 50m",
"./ovnkube-trace -src-namespace default \\ 1 -src web \\ 2 -dst-namespace openshift-dns \\ 3 -dst dns-default-467qw \\ 4 -udp -dst-port 53 \\ 5 -loglevel 0 6",
"I0116 10:19:35.601303 17900 ovs.go:90] Maximum command line arguments set to: 191102 ovn-trace source pod to destination pod indicates success from web to dns-default-467qw ovn-trace destination pod to source pod indicates success from dns-default-467qw to web ovs-appctl ofproto/trace source pod to destination pod indicates success from web to dns-default-467qw ovs-appctl ofproto/trace destination pod to source pod indicates success from dns-default-467qw to web ovn-detrace source pod to destination pod indicates success from web to dns-default-467qw ovn-detrace destination pod to source pod indicates success from dns-default-467qw to web",
"./ovnkube-trace -src-namespace default -src web -dst-namespace openshift-dns -dst dns-default-467qw -udp -dst-port 53 -loglevel 2",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: default spec: podSelector: {} ingress: []",
"oc apply -f deny-by-default.yaml",
"networkpolicy.networking.k8s.io/deny-by-default created",
"oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80",
"oc create namespace prod",
"oc label namespace/prod purpose=production",
"oc run test-6459 --namespace=prod --rm -i -t --image=alpine -- sh",
"./ovnkube-trace -src-namespace prod -src test-6459 -dst-namespace default -dst web -tcp -dst-port 80 -loglevel 0",
"I0116 14:20:47.380775 50822 ovs.go:90] Maximum command line arguments set to: 191102 ovn-trace source pod to destination pod indicates failure from test-6459 to web",
"./ovnkube-trace -src-namespace prod -src test-6459 -dst-namespace default -dst web -tcp -dst-port 80 -loglevel 2",
"ct_lb_mark /* default (use --ct to customize) */ ------------------------------------------------ 3. ls_out_acl_hint (northd.c:6092): !ct.new && ct.est && !ct.rpl && ct_mark.blocked == 0, priority 4, uuid 32d45ad4 reg0[8] = 1; reg0[10] = 1; next; 4. ls_out_acl (northd.c:6435): reg0[10] == 1 && (outport == @a16982411286042166782_ingressDefaultDeny), priority 2000, uuid f730a887 1 ct_commit { ct_mark.blocked = 1; };",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-prod namespace: default spec: podSelector: matchLabels: app: web policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production",
"oc apply -f web-allow-prod.yaml",
"./ovnkube-trace -src-namespace prod -src test-6459 -dst-namespace default -dst web -tcp -dst-port 80 -loglevel 0",
"I0116 14:25:44.055207 51695 ovs.go:90] Maximum command line arguments set to: 191102 ovn-trace source pod to destination pod indicates success from test-6459 to web ovn-trace destination pod to source pod indicates success from web to test-6459 ovs-appctl ofproto/trace source pod to destination pod indicates success from test-6459 to web ovs-appctl ofproto/trace destination pod to source pod indicates success from web to test-6459 ovn-detrace source pod to destination pod indicates success from test-6459 to web ovn-detrace destination pod to source pod indicates success from web to test-6459",
"wget -qO- --timeout=2 http://web.default",
"<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>",
"oc get Network.config.openshift.io cluster -o yaml > cluster-openshift-sdn.yaml",
"#!/bin/bash if [ -n \"USDOVN_SDN_MIGRATION_TIMEOUT\" ] && [ \"USDOVN_SDN_MIGRATION_TIMEOUT\" = \"0s\" ]; then unset OVN_SDN_MIGRATION_TIMEOUT fi #loops the timeout command of the script to repeatedly check the cluster Operators until all are available. co_timeout=USD{OVN_SDN_MIGRATION_TIMEOUT:-1200s} timeout \"USDco_timeout\" bash <<EOT until oc wait co --all --for='condition=AVAILABLE=True' --timeout=10s && oc wait co --all --for='condition=PROGRESSING=False' --timeout=10s && oc wait co --all --for='condition=DEGRADED=False' --timeout=10s; do sleep 10 echo \"Some ClusterOperators Degraded=False,Progressing=True,or Available=False\"; done EOT",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{\"spec\":{\"migration\":null}}'",
"oc get nncp",
"NAME STATUS REASON bondmaster0 Available SuccessfullyConfigured",
"oc delete nncp <nncp_manifest_filename>",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OVNKubernetes\" } } }'",
"oc get mcp",
"oc get co",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OVNKubernetes\", \"features\": { \"egressIP\": <bool>, \"egressFirewall\": <bool>, \"multicast\": <bool> } } } }'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":<mtu>, \"genevePort\":<port>, \"v4InternalSubnet\":\"<ipv4_subnet>\" }}}}'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":1200 }}}}'",
"oc get mcp",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep ExecStart",
"ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes",
"oc get pod -n openshift-machine-config-operator",
"NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h",
"oc logs <pod> -n openshift-machine-config-operator",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"networkType\": \"OVNKubernetes\" } }'",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"clusterNetwork\": [ { \"cidr\": \"<cidr>\", \"hostPrefix\": <prefix> } ], \"networkType\": \"OVNKubernetes\" } }'",
"oc -n openshift-multus rollout status daemonset/multus",
"Waiting for daemon set \"multus\" rollout to finish: 1 out of 6 new pods have been updated Waiting for daemon set \"multus\" rollout to finish: 5 of 6 updated pods are available daemon set \"multus\" successfully rolled out",
"#!/bin/bash readarray -t POD_NODES <<< \"USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1\" \"USD7}')\" for i in \"USD{POD_NODES[@]}\" do read -r POD NODE <<< \"USDi\" until oc rsh -n openshift-machine-config-operator \"USDPOD\" chroot /rootfs shutdown -r +1 do echo \"cannot reboot node USDNODE, retry\" && sleep 3 done done",
"#!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"InternalIP\")].address}') do echo \"reboot node USDip\" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done",
"oc get network.config/cluster -o jsonpath='{.status.networkType}{\"\\n\"}'",
"oc get nodes",
"oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'",
"oc get co",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"defaultNetwork\": { \"openshiftSDNConfig\": null } } }'",
"oc delete namespace openshift-sdn",
"oc patch MachineConfigPool master --type='merge' --patch '{ \"spec\": { \"paused\": true } }'",
"oc patch MachineConfigPool worker --type='merge' --patch '{ \"spec\":{ \"paused\": true } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'",
"oc get Network.config cluster -o jsonpath='{.status.migration}'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OpenShiftSDN\" } } }'",
"oc get Network.config cluster -o jsonpath='{.status.migration.networkType}'",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"networkType\": \"OpenShiftSDN\" } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OpenShiftSDN\", \"features\": { \"egressIP\": <bool>, \"egressFirewall\": <bool>, \"multicast\": <bool> } } } }'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"openshiftSDNConfig\":{ \"mtu\":<mtu>, \"vxlanPort\":<port> }}}}'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"openshiftSDNConfig\":{ \"mtu\":1200 }}}}'",
"#!/bin/bash readarray -t POD_NODES <<< \"USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1\" \"USD7}')\" for i in \"USD{POD_NODES[@]}\" do read -r POD NODE <<< \"USDi\" until oc rsh -n openshift-machine-config-operator \"USDPOD\" chroot /rootfs shutdown -r +1 do echo \"cannot reboot node USDNODE, retry\" && sleep 3 done done",
"#!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"InternalIP\")].address}') do echo \"reboot node USDip\" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done",
"oc -n openshift-multus rollout status daemonset/multus",
"Waiting for daemon set \"multus\" rollout to finish: 1 out of 6 new pods have been updated Waiting for daemon set \"multus\" rollout to finish: 5 of 6 updated pods are available daemon set \"multus\" successfully rolled out",
"oc patch MachineConfigPool master --type='merge' --patch '{ \"spec\": { \"paused\": false } }'",
"oc patch MachineConfigPool worker --type='merge' --patch '{ \"spec\": { \"paused\": false } }'",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml",
"oc get Network.config/cluster -o jsonpath='{.status.networkType}{\"\\n\"}'",
"oc get nodes",
"oc get pod -n openshift-machine-config-operator",
"NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h",
"oc logs <pod> -n openshift-machine-config-operator",
"oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"defaultNetwork\": { \"ovnKubernetesConfig\":null } } }'",
"oc delete namespace openshift-ovn-kubernetes",
"- op: add path: /spec/clusterNetwork/- value: 1 cidr: fd01::/48 hostPrefix: 64 - op: add path: /spec/serviceNetwork/- value: fd02::/112 2",
"oc patch network.config.openshift.io cluster --type='json' --patch-file <file>.yaml",
"network.config.openshift.io/cluster patched",
"oc describe network",
"Status: Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Cidr: fd01::/48 Host Prefix: 64 Cluster Network MTU: 1400 Network Type: OVNKubernetes Service Network: 172.30.0.0/16 fd02::/112",
"oc edit networks.config.openshift.io",
"kind: Namespace apiVersion: v1 metadata: name: example1 annotations: k8s.ovn.org/acl-logging: |- { \"deny\": \"info\", \"allow\": \"info\" }",
"<timestamp>|<message_serial>|acl_log(ovn_pinctrl0)|<severity>|name=\"<acl_name>\", verdict=\"<verdict>\", severity=\"<severity>\", direction=\"<direction>\": <flow>",
"<proto>,vlan_tci=0x0000,dl_src=<src_mac>,dl_dst=<source_mac>,nw_src=<source_ip>,nw_dst=<target_ip>,nw_tos=<tos_dscp>,nw_ecn=<tos_ecn>,nw_ttl=<ip_ttl>,nw_frag=<fragment>,tp_src=<tcp_src_port>,tp_dst=<tcp_dst_port>,tcp_flags=<tcp_flags>",
"2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_deny-all\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: \"null\" maxFileSize: 50 rateLimit: 20 syslogFacility: local0",
"oc edit network.operator.openshift.io/cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: \"null\" maxFileSize: 50 rateLimit: 20 syslogFacility: local0",
"cat <<EOF| oc create -f - kind: Namespace apiVersion: v1 metadata: name: verify-audit-logging annotations: k8s.ovn.org/acl-logging: '{ \"deny\": \"alert\", \"allow\": \"alert\" }' EOF",
"namespace/verify-audit-logging created",
"oc annotate namespace verify-audit-logging k8s.ovn.org/acl-logging='{ \"deny\": \"alert\", \"allow\": \"alert\" }'",
"namespace/verify-audit-logging annotated",
"cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny-all spec: podSelector: matchLabels: policyTypes: - Ingress - Egress --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} policyTypes: - Ingress - Egress ingress: - from: - podSelector: {} egress: - to: - namespaceSelector: matchLabels: namespace: verify-audit-logging EOF",
"networkpolicy.networking.k8s.io/deny-all created networkpolicy.networking.k8s.io/allow-from-same-namespace created",
"cat <<EOF| oc create -n default -f - apiVersion: v1 kind: Pod metadata: name: client spec: containers: - name: client image: registry.access.redhat.com/rhel7/rhel-tools command: [\"/bin/sh\", \"-c\"] args: [\"sleep inf\"] EOF",
"for name in client server; do cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: v1 kind: Pod metadata: name: USD{name} spec: containers: - name: USD{name} image: registry.access.redhat.com/rhel7/rhel-tools command: [\"/bin/sh\", \"-c\"] args: [\"sleep inf\"] EOF done",
"pod/client created pod/server created",
"POD_IP=USD(oc get pods server -n verify-audit-logging -o jsonpath='{.status.podIP}')",
"oc exec -it client -n default -- /bin/ping -c 2 USDPOD_IP",
"PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data. --- 10.128.2.55 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 2041ms",
"oc exec -it client -n verify-audit-logging -- /bin/ping -c 2 USDPOD_IP",
"PING 10.128.0.86 (10.128.0.86) 56(84) bytes of data. 64 bytes from 10.128.0.86: icmp_seq=1 ttl=64 time=2.21 ms 64 bytes from 10.128.0.86: icmp_seq=2 ttl=64 time=0.440 ms --- 10.128.0.86 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.440/1.329/2.219/0.890 ms",
"for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done",
"Defaulting container name to ovn-controller. Use 'oc describe pod/ovnkube-node-hdb8v -n openshift-ovn-kubernetes' to see all of the containers in this pod. 2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_deny-all\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:33:12.614Z|00006|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_deny-all\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:44:10.037Z|00007|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_allow-from-same-namespace_0\", verdict=allow, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:3b,dl_dst=0a:58:0a:80:02:3a,nw_src=10.128.2.59,nw_dst=10.128.2.58,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:44:11.037Z|00008|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_allow-from-same-namespace_0\", verdict=allow, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:3b,dl_dst=0a:58:0a:80:02:3a,nw_src=10.128.2.59,nw_dst=10.128.2.58,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0",
"oc annotate namespace <namespace> k8s.ovn.org/acl-logging='{ \"deny\": \"alert\", \"allow\": \"notice\" }'",
"kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: |- { \"deny\": \"alert\", \"allow\": \"notice\" }",
"namespace/verify-audit-logging annotated",
"for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done",
"2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_deny-all\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0",
"oc annotate --overwrite namespace <namespace> k8s.ovn.org/acl-logging-",
"kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: null",
"namespace/verify-audit-logging annotated",
"oc patch networks.operator.openshift.io cluster --type=merge -p '{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\":{\"ipsecConfig\":{ }}}}}'",
"oc get pods -l app=ovnkube-master -n openshift-ovn-kubernetes",
"NAME READY STATUS RESTARTS AGE ovnkube-master-fvtnh 6/6 Running 0 122m ovnkube-master-hsgmm 6/6 Running 0 122m ovnkube-master-qcmdc 6/6 Running 0 122m",
"oc -n openshift-ovn-kubernetes rsh ovnkube-master-<pod_number_sequence> \\ 1 ovn-nbctl --no-leader-only get nb_global . ipsec",
"oc patch networks.operator.openshift.io/cluster --type=json -p='[{\"op\":\"remove\", \"path\":\"/spec/defaultNetwork/ovnKubernetesConfig/ipsecConfig\"}]'",
"oc get pods -n openshift-ovn-kubernetes -l=app=ovnkube-master",
"ovnkube-master-5xqbf 8/8 Running 0 28m",
"oc -n openshift-ovn-kubernetes -c nbdb rsh ovnkube-master-<pod_number_sequence> \\ 1 ovn-nbctl --no-leader-only get nb_global . ipsec",
"oc delete daemonset ovn-ipsec -n openshift-ovn-kubernetes 1",
"oc get pods -n openshift-ovn-kubernetes -l=app=ovn-ipsec",
"apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow - to: cidrSelector: 0.0.0.0/0 3 type: Deny",
"apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: <name> 1 spec: egress: 2",
"egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4 ports: 5",
"ports: - port: <port> 1 protocol: <protocol> 2",
"apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Deny to: cidrSelector: 0.0.0.0/0",
"apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: - type: Deny to: cidrSelector: 172.16.1.1/32 ports: - port: 80 protocol: TCP - port: 443",
"oc create -f <policy_name>.yaml -n <project>",
"oc create -f default.yaml -n project1",
"egressfirewall.k8s.ovn.org/v1 created",
"oc get egressfirewall --all-namespaces",
"oc describe egressfirewall <policy_name>",
"Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0",
"oc get -n <project> egressfirewall",
"oc get -n <project> egressfirewall <name> -o yaml > <filename>.yaml",
"oc replace -f <filename>.yaml",
"oc get -n <project> egressfirewall",
"oc delete -n <project> egressfirewall <name>",
"IP capacity = public cloud default capacity - sum(current IP assignments)",
"cloud.network.openshift.io/egress-ipconfig: [ { \"interface\":\"eni-078d267045138e436\", \"ifaddr\":{\"ipv4\":\"10.0.128.0/18\"}, \"capacity\":{\"ipv4\":14,\"ipv6\":15} } ]",
"cloud.network.openshift.io/egress-ipconfig: [ { \"interface\":\"nic0\", \"ifaddr\":{\"ipv4\":\"10.0.128.0/18\"}, \"capacity\":{\"ip\":14} } ]",
"apiVersion: v1 kind: Namespace metadata: name: namespace1 labels: env: prod --- apiVersion: v1 kind: Namespace metadata: name: namespace2 labels: env: prod",
"apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egressips-prod spec: egressIPs: - 192.168.126.10 - 192.168.126.102 namespaceSelector: matchLabels: env: prod status: items: - node: node1 egressIP: 192.168.126.10 - node: node3 egressIP: 192.168.126.102",
"apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: <name> 1 spec: egressIPs: 2 - <ip_address> namespaceSelector: 3 podSelector: 4",
"namespaceSelector: 1 matchLabels: <label_name>: <label_value>",
"podSelector: 1 matchLabels: <label_name>: <label_value>",
"apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group1 spec: egressIPs: - 192.168.126.11 - 192.168.126.102 podSelector: matchLabels: app: web namespaceSelector: matchLabels: env: prod",
"apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group2 spec: egressIPs: - 192.168.127.30 - 192.168.127.40 namespaceSelector: matchExpressions: - key: environment operator: NotIn values: - development",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: ovnKubernetesConfig: egressIPConfig: 1 reachabilityTotalTimeoutSeconds: 5 2 gatewayConfig: routingViaHost: false genevePort: 6081",
"oc label nodes <node_name> k8s.ovn.org/egress-assignable=\"\" 1",
"apiVersion: v1 kind: Node metadata: labels: k8s.ovn.org/egress-assignable: \"\" name: <node_name>",
"apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-project1 spec: egressIPs: - 192.168.127.10 - 192.168.127.11 namespaceSelector: matchLabels: env: qa",
"oc apply -f <egressips_name>.yaml 1",
"egressips.k8s.ovn.org/<egressips_name> created",
"oc label ns <namespace> env=qa 1",
"oc get egressip -o yaml",
"spec: egressIPs: - 192.168.127.10 - 192.168.127.11",
"curl <router_service_IP> <port>",
"openstack port set --allowed-address ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid>",
"apiVersion: v1 kind: Service metadata: name: app-egress spec: ports: - name: tcp-8080 protocol: TCP port: 8080 - name: tcp-8443 protocol: TCP port: 8443 - name: udp-80 protocol: UDP port: 80 type: ClusterIP selector: app: egress-router-cni",
"apiVersion: network.operator.openshift.io/v1 kind: EgressRouter metadata: name: <egress_router_name> namespace: <namespace> <.> spec: addresses: [ <.> { ip: \"<egress_router>\", <.> gateway: \"<egress_gateway>\" <.> } ] mode: Redirect redirect: { redirectRules: [ <.> { destinationIP: \"<egress_destination>\", port: <egress_router_port>, targetPort: <target_port>, <.> protocol: <network_protocol> <.> }, ], fallbackIP: \"<egress_destination>\" <.> }",
"apiVersion: network.operator.openshift.io/v1 kind: EgressRouter metadata: name: egress-router-redirect spec: networkInterface: { macvlan: { mode: \"Bridge\" } } addresses: [ { ip: \"192.168.12.99/24\", gateway: \"192.168.12.1\" } ] mode: Redirect redirect: { redirectRules: [ { destinationIP: \"10.0.0.99\", port: 80, protocol: UDP }, { destinationIP: \"203.0.113.26\", port: 8080, targetPort: 80, protocol: TCP }, { destinationIP: \"203.0.113.27\", port: 8443, targetPort: 443, protocol: TCP } ] }",
"apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: web-app protocol: TCP port: 8080 type: ClusterIP selector: app: egress-router-cni <.>",
"oc get network-attachment-definition egress-router-cni-nad",
"NAME AGE egress-router-cni-nad 18m",
"oc get deployment egress-router-cni-deployment",
"NAME READY UP-TO-DATE AVAILABLE AGE egress-router-cni-deployment 1/1 1 1 18m",
"oc get pods -l app=egress-router-cni",
"NAME READY STATUS RESTARTS AGE egress-router-cni-deployment-575465c75c-qkq6m 1/1 Running 0 18m",
"POD_NODENAME=USD(oc get pod -l app=egress-router-cni -o jsonpath=\"{.items[0].spec.nodeName}\")",
"oc debug node/USDPOD_NODENAME",
"chroot /host",
"cat /tmp/egress-router-log",
"2021-04-26T12:27:20Z [debug] Called CNI ADD 2021-04-26T12:27:20Z [debug] Gateway: 192.168.12.1 2021-04-26T12:27:20Z [debug] IP Source Addresses: [192.168.12.99/24] 2021-04-26T12:27:20Z [debug] IP Destinations: [80 UDP 10.0.0.99/30 8080 TCP 203.0.113.26/30 80 8443 TCP 203.0.113.27/30 443] 2021-04-26T12:27:20Z [debug] Created macvlan interface 2021-04-26T12:27:20Z [debug] Renamed macvlan to \"net1\" 2021-04-26T12:27:20Z [debug] Adding route to gateway 192.168.12.1 on macvlan interface 2021-04-26T12:27:20Z [debug] deleted default route {Ifindex: 3 Dst: <nil> Src: <nil> Gw: 10.128.10.1 Flags: [] Table: 254} 2021-04-26T12:27:20Z [debug] Added new default route with gateway 192.168.12.1 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p UDP --dport 80 -j DNAT --to-destination 10.0.0.99 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p TCP --dport 8080 -j DNAT --to-destination 203.0.113.26:80 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p TCP --dport 8443 -j DNAT --to-destination 203.0.113.27:443 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat -o net1 -j SNAT --to-source 192.168.12.99",
"crictl ps --name egress-router-cni-pod | awk '{print USD1}'",
"CONTAINER bac9fae69ddb6",
"crictl inspect -o yaml bac9fae69ddb6 | grep 'pid:' | awk '{print USD2}'",
"68857",
"nsenter -n -t 68857",
"ip route",
"default via 192.168.12.1 dev net1 10.128.10.0/23 dev eth0 proto kernel scope link src 10.128.10.18 192.168.12.0/24 dev net1 proto kernel scope link src 192.168.12.99 192.168.12.1 dev net1",
"oc annotate namespace <namespace> k8s.ovn.org/multicast-enabled=true",
"apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: k8s.ovn.org/multicast-enabled: \"true\"",
"oc project <project>",
"cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi8 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat hostname && sleep inf\"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF",
"cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi8 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat && sleep inf\"] EOF",
"POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}')",
"oc exec mlistener -i -t -- socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname",
"CIDR=USD(oc get Network.config.openshift.io cluster -o jsonpath='{.status.clusterNetwork[0].cidr}')",
"oc exec msender -i -t -- /bin/bash -c \"echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64\"",
"mlistener",
"oc annotate namespace <namespace> \\ 1 k8s.ovn.org/multicast-enabled-",
"apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: k8s.ovn.org/multicast-enabled: null",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: exportNetworkFlows: netFlow: collectors: - 192.168.1.99:2056",
"spec: exportNetworkFlows: netFlow: collectors: - 192.168.1.99:2056",
"oc patch network.operator cluster --type merge -p \"USD(cat <file_name>.yaml)\"",
"network.operator.openshift.io/cluster patched",
"oc get network.operator cluster -o jsonpath=\"{.spec.exportNetworkFlows}\"",
"{\"netFlow\":{\"collectors\":[\"192.168.1.99:2056\"]}}",
"for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node -o jsonpath='{[email protected][*]}{.metadata.name}{\"\\n\"}{end}'); do ; echo; echo USDpod; oc -n openshift-ovn-kubernetes exec -c ovnkube-node USDpod -- bash -c 'for type in ipfix sflow netflow ; do ovs-vsctl find USDtype ; done'; done",
"ovnkube-node-xrn4p _uuid : a4d2aaca-5023-4f3d-9400-7275f92611f9 active_timeout : 60 add_id_to_interface : false engine_id : [] engine_type : [] external_ids : {} targets : [\"192.168.1.99:2056\"] ovnkube-node-z4vq9 _uuid : 61d02fdb-9228-4993-8ff5-b27f01a29bd6 active_timeout : 60 add_id_to_interface : false engine_id : [] engine_type : [] external_ids : {} targets : [\"192.168.1.99:2056\"]-",
"oc patch network.operator cluster --type='json' -p='[{\"op\":\"remove\", \"path\":\"/spec/exportNetworkFlows\"}]'",
"network.operator.openshift.io/cluster patched",
"./openshift-install create manifests --dir <installation_directory>",
"cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/networking/ovn-kubernetes-network-plugin |
Chapter 8. Recertification workflow | Chapter 8. Recertification workflow You must recertify your cloud application image on every major release of Red Hat Enterprise Linux that is included in the image, and it is recommended that you also recertify the image on each minor release. To recertify an image, you must create a new certification request. Run the certification tests and proceed with the rest of the workflow as documented. | null | https://docs.redhat.com/en/documentation/red_hat_certified_cloud_and_service_provider_certification/2025/html/red_hat_certified_cloud_and_service_provider_certification_for_red_hat_enterprise_linux_for_sap_images_workflow_guide/con-recertification-workflow_cloud-wf-configure-system-using-cli
7.47. eclipse-nls | 7.47. eclipse-nls 7.47.1. RHBA-2013:0357 - eclipse-nls bug fix and enhancement update Updated eclipse-nls packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The eclipse-nls packages provide Native Language Support langpacks for the Eclipse IDE that contain translations into many languages. Note The eclipse-nls packages have been upgraded to upstream version 3.6.0.v20120721114722, which updates the language packs and provides a number of bug fixes and enhancements over the previous version. (BZ# 692358 ) All users of eclipse-nls are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/eclipse-nls
Networking | Networking OpenShift Container Platform 4.18 Configuring and managing cluster networking Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/networking/index |
Chapter 1. Installing and upgrading | Chapter 1. Installing and upgrading Before you install, review the required hardware and system configuration for each product. You can install online on Linux with a supported version of Red Hat OpenShift Container Platform. You must have a supported version of OpenShift Container Platform. For example, you can use Red Hat OpenShift Service on AWS, or Red Hat OpenShift Dedicated. Deprecated: Red Hat Advanced Cluster Management 2.8 and earlier versions are no longer supported. The documentation might remain available, but without any Errata or other updates. Best practice: Upgrade to the most recent version. FIPS notice: If you do not specify your own ciphers in spec.ingress.sslCiphers , then the multiclusterhub-operator provides a default list of ciphers. If you upgrade and want FIPS compliance, remove the following two ciphers from the multiclusterhub resource: ECDHE-ECDSA-CHACHA20-POLY1305 and ECDHE-RSA-CHACHA20-POLY1305 . The documentation references the earliest supported OpenShift Container Platform version, unless a specific component or function is introduced and tested only on a more recent version of OpenShift Container Platform. For full support information, see the Red Hat Advanced Cluster Management 2.11 Support Matrix and the Lifecycle and update policies for Red Hat Advanced Cluster Management for Kubernetes . Installing Red Hat Advanced Cluster Management for Kubernetes sets up a multi-node cluster production environment. You can install Red Hat Advanced Cluster Management for Kubernetes in either standard or high-availability configurations. View the following documentation for more information about the installation procedure: Installing while connected online Configuring infrastructure nodes for Red Hat Advanced Cluster Management Install on disconnected networks MultiClusterHub advanced configuration Sizing your cluster Performance and scalability Upgrading Upgrading in a disconnected network environment Uninstalling 1.1. Performance and scalability Red Hat Advanced Cluster Management for Kubernetes is tested to determine certain scalability and performance data. The major areas that are tested are cluster scalability and search performance. You can use this information as you plan your environment. Note: Data is based on the results from a lab environment at the time of testing. Red Hat Advanced Cluster Management is tested by using a three node hub cluster on bare metal machines. At testing, there is a sufficient amount of resource capacity (CPU, memory, and disk) to find software component limits. Your results might vary, depending on your environment, network speed, and changes to the product. Maximum number of managed clusters Search scalability Observability scalability Backup and restore scalability 1.1.1. Maximum number of managed clusters The maximum number of clusters that Red Hat Advanced Cluster Management can manage varies based on several factors, including: Number of resources in the cluster, which depends on factors like the number of policies and applications that are deployed. Configuration of the hub cluster, such as how many pods are used for scaling. The managed clusters are single-node OpenShift virtual machines hosted on Red Hat Enterprise Linux hypervisors. Virtual machines are used to achieve high-density counts of clusters per single bare metal machine in the testbed. Sushy-emulator is used with libvirt for virtual machines to have an accessible bare metal cluster by using Redfish APIs. 
The following operators are a part of the test installation: Topology Aware Lifecycle Manager, Local Storage Operator, and Red Hat OpenShift GitOps. The following table shows the lab environment scaling information: Table 1.1. Table for environment scaling Node Count Operating system Hardware CPU cores Memory Disks Hub cluster control plane 3 OpenShift Container Platform Bare metal 112 512 GiB 446 GB SSD, 2.9 TB NVMe, 2 x 1.8 TB SSD Managed cluster 3500 single-node OpenShift Virtual machine 8 18 GiB 120 GB 1.1.2. Search scalability The scalability of the Search component depends on the performance of the data store. The query run time is an important variable when analyzing the search performance. 1.1.2.1. Query run time considerations There are some things that can affect the time that it takes to run and return results from a query. Consider the following items when planning and configuring your environment: Searching for a keyword is not efficient. If you search for RedHat and you manage a large number of clusters, it might take a longer time to receive search results. The first search takes longer than later searches because it takes additional time to gather user role-based access control rules. The length of time to complete a request is proportional to the number of namespaces and resources the user is authorized to access. Note: If you save and share a Search query with another user, returned results depend on access level for that user. For more information on role access, see Using RBAC to define and apply permissions in the OpenShift Container Platform documentation. The worst performance is observed for a request by a non-administrator user with access to all of the namespaces, or all of the managed clusters. 1.1.3. Observability scalability You need to plan your environment if you want to enable and use the observability service. The resource consumption that follows is for the OpenShift Container Platform project, where observability components are installed. Values that you plan to use are sums for all observability components. Note: Data is based on the results from a lab environment at the time of testing. Your results might vary, depending on your environment, network speed, and changes to the product. 1.1.3.1. Sample observability environment In the sample environment, hub clusters and managed clusters are located in Amazon Web Services cloud platform and have the following topology and configuration: Node Flavor vCPU RAM (GiB) Disk type Disk size (GiB) Count Region Master node m5.4xlarge 16 64 gp2 100 3 sa-east-1 Worker node m5.4xlarge 16 64 gp2 100 3 sa-east-1 The observability deployment is configured for high availability environments. With a high availability environment, each Kubernetes deployment has two instances, and each StatefulSet has three instances. During the sample test, a different number of managed clusters are simulated to push metrics, and each test lasts for 24 hours. See the following throughput: 1.1.3.2. Write throughput Pods Interval (minute) Time series per min 400 1 83000 1.1.3.3. CPU usage (millicores) CPU usage is stable during testing: Size CPU Usage 10 clusters 400 20 clusters 800 1.1.3.4. RSS and working set memory View the following descriptions of the RSS and working set memory: Memory usage RSS: From the metrics container_memory_rss and remains stable during the test. Memory usage working set: From the metrics container_memory_working_set_bytes , increases along with the test.
The following results are from a 24-hour test: Size Memory usage RSS Memory usage working set 10 clusters 9.84 4.93 20 clusters 13.10 8.76 1.1.3.5. Persistent volume for thanos-receive component Important: Metrics are stored in thanos-receive until retention time (four days) is reached. Other components do not require as much volume as thanos-receive components. Disk usage increases along with the test. Data represents disk usage after one day, so the final disk usage is multiplied by four. See the following disk usage: Size Disk usage (GiB) 10 clusters 2 20 clusters 3 1.1.3.6. Network transfer During tests, network transfer provides stability. See the sizes and network transfer values: Size Inbound network transfer Outbound network transfer 10 clusters 6.55 MBs per second 5.80 MBs per second 20 clusters 13.08 MBs per second 10.9 MBs per second 1.1.3.7. Amazon Simple Storage Service (S3) Total usage in Amazon Simple Storage Service (S3) increases. The metrics data is stored in S3 until default retention time (five days) is reached. See the following disk usages: Size Disk usage (GiB) 10 clusters 16.2 20 clusters 23.8 1.1.4. Backup and restore scalability The tests performed on a large scaled environment show the following data for backup and restore: Table 1.2. Table of run times for managed cluster backups Backups Duration Number of resources Backup memory credentials 2m5s 18272 resources 55MiB backups size managed clusters 3m22s 58655 resources 38MiB backups size resources 1m34s 1190 resources 1.7MiB backups size generic/user 2m56s 0 resources 16.5KiB backups size The total backup time is 10m . Table 1.3. Table of run time for restoring passive hub cluster Backups Duration Number of resources credentials 47m8s 18272 resources resources 3m10s 1190 resources generic/user backup 0m 0 resources Total restore time is 50m18s . The number of backup files is pruned by using the veleroTtl parameter option that is set when the BackupSchedule is created. Any backups with a creation time older than the specified TTL (time to live) are expired and automatically deleted from the storage location by Velero. apiVersion: cluster.open-cluster-management.io/v1beta1 kind: BackupSchedule metadata: name: schedule-acm namespace: open-cluster-management-backup spec: veleroSchedule: 0 */1 * * * veleroTtl: 120h 1.1.5. Sizing your cluster Each Red Hat Advanced Cluster Management for Kubernetes cluster is unique and the following guidelines give sample deployment sizes for you. Recommendations are classified by size and purpose. Red Hat Advanced Cluster Management applies the following dimensions for sizing and placement of supporting services: Availability zones isolate potential fault domains across the cluster. Typical clusters have near equal worker node capacity in three or more availability zones. vCPU reservations and limits establish vCPU capacity on a worker node to assign to a container. A vCPU is equal to a Kubernetes compute unit. For more information, see Kubernetes Meaning of CPU . Memory reservations and limits establish memory capacity on a worker node to assign to a container. Persistent data is managed by the product and stored in the etcd cluster that is used by Kubernetes. Important: For OpenShift Container Platform, distribute the master nodes of the cluster across three availability zones. 1.1.5.1. Product environment Note: The following requirements are not minimum requirements. Table 1.4.
Product environment Node type Availability zones etcd Total reserved memory Total reserved CPU Master 3 3 Per OpenShift Container Platform sizing guidelines Per OpenShift Container Platform sizing guidelines Worker or infrastructure 3 1 12 GB 6 In addition to Red Hat Advanced Cluster Management, the OpenShift Container Platform cluster runs additional services to support cluster features. 1.1.5.1.1. OpenShift Container Platform on additional services Availability zones isolate potential fault domains across the cluster. Table 1.5. Additional services Service Node count Availability zones Instance size vCPU Memory Storage size Resources OpenShift Container Platform on Amazon Web Services 3 3 m5.xlarge 4 16 GB 120 GB See Installing a cluster on AWS with customizations in the OpenShift Container Platform product documentation for more information. Also learn more about machine types . OpenShift Container Platform on Google Cloud Platform 3 3 N1-standard-4 (0.95-6.5 GB) 4 15 GB 120 GB See the Google Cloud Platform product documentation for more information about quotas. Also learn more about machine types . OpenShift Container Platform on Microsoft Azure 3 3 Standard_D4_v3 4 16 GB 120 GB See Configuring an Azure account in the OpenShift Container Platform documentation for more details. OpenShift Container Platform on VMware vSphere 3 3 4 (2 cores per socket) 16 GB 120 GB See Installing on vSphere in the OpenShift Container Platform documentation for more details. OpenShift Container Platform on IBM Z systems 3 3 10 16 GB 100 GB See Installing a cluster on IBM Z systems in the OpenShift Container Platform documentation for more information. IBM Z systems provide the ability to configure simultaneous multithreading (SMT), which extends the number of vCPUs that can run on each core. If you configured SMT, One physical core (IFL) provides two logical cores (threads). The hypervisor can provide two or more vCPUs. One vCPU is equal to one physical core when simultaneous multithreading (SMT), or hyper-threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. For more information about SMT, see Simultaneous multithreading . OpenShift Container Platform on IBM Power systems 3 3 16 16 GB 120 GB See Installing a cluster on Power systems in the OpenShift Container Platform documentation for more information. IBM Power systems provide the ability to configure simultaneous multithreading (SMT), which extends the number of vCPUs that can run on each core. If you configured SMT, your SMT level determines how you satisfy the 16 vCPU requirement. The most common configurations are: Two cores running on SMT-8 (the default configuration for systems that are running IBM Power VM) provides the required 16 vCPUs. Four cores running on SMT-4 provides the required 16 vCPUs. For more information about SMT, see Simultaneous multithreading . OpenShift Container Platform on-premises 3 4 16 GB 120 GB See Configuring a three-node cluster in the OpenShift Container Platform documentation for more details. A Red Hat Advanced Cluster Management for Kubernetes hub cluster can be installed and supported on OpenShift Container Platform bare metal. The hub cluster can run on a compact bare metal topology, in which there are 3 schedulable control plane nodes, and 0 additional workers. 1.1.5.1.2. Creating and managing single node OpenShift Container Platform clusters View Installing on a single node to learn about the requirements. 
Since each cluster is unique, the following guidelines provide only sample deployment requirements that are classified by size and purpose. Availability zones isolate potential fault domains across the cluster. Typical clusters have an equal worker node capacity in three or more availability zones. High availability is not supported. Important: For OpenShift Container Platform, distribute the master nodes of the cluster across three availability zones. See example requirements for creating and managing 3500 single node OpenShift Container Platform clusters. See the minimum requirements for using Red Hat Advanced Cluster Management to create single-node OpenShift clusters (230 and more provisioned at the same time), and manage those single-node OpenShift clusters with a hub cluster: Table 1.6. Master (schedulable) Node count Memory (peak cluster usage) Memory (single node min-max) CPU cluster CPU single node 3 289 GB 64 GB - 110 GB 90 44 1.2. Installing while connected online You install Red Hat Advanced Cluster Management for Kubernetes through Operator Lifecycle Manager, which manages the installation, upgrade, and removal of the components that encompass the Red Hat Advanced Cluster Management hub cluster. Required access: Cluster administrator. OpenShift Container Platform Dedicated environment required access: You must have cluster-admin permissions. By default dedicated-admin role does not have the required permissions to create namespaces in the OpenShift Container Platform Dedicated environment. By default, the hub cluster components are installed on worker nodes of your OpenShift Container Platform cluster without any additional configuration. You can install the hub cluster on worker nodes by using the OpenShift Container Platform OperatorHub web console interface, or by using the OpenShift Container Platform CLI. If you have configured your OpenShift Container Platform cluster with infrastructure nodes, you can install the hub cluster on those infrastructure nodes by using the OpenShift Container Platform CLI with additional resource parameters. See the Installing the Red Hat Advanced Cluster Management hub cluster on infrastructure node section for more details. If you plan to import Kubernetes clusters that were not created by OpenShift Container Platform or Red Hat Advanced Cluster Management, you need to configure an image pull secret. For information on how to configure advanced configurations, see options in the MultiClusterHub advanced configuration section of the documentation. Prerequisites Confirm your OpenShift Container Platform installation Installing from the OperatorHub web console interface Installing from the OpenShift Container Platform CLI 1.2.1. Prerequisites Before you install Red Hat Advanced Cluster Management, see the following requirements: Your Red Hat OpenShift Container Platform cluster must have access to the Red Hat Advanced Cluster Management operator in the OperatorHub catalog from the OpenShift Container Platform console. You need access to the catalog.redhat.com . You need a supported OpenShift Container Platform and the OpenShift Container Platform CLI. See OpenShift Container Platform installing . Your OpenShift Container Platform command line interface (CLI) must be configured to run oc commands. See Getting started with the CLI for information about installing and configuring the OpenShift Container Platform CLI. Your OpenShift Container Platform permissions must allow you to create a namespace. Without a namespace, installation will fail. 
You must have an Internet connection to access the dependencies for the operator. Important: To install in an OpenShift Container Platform Dedicated environment, see the following requirements: You must have the OpenShift Container Platform Dedicated environment configured and running. You must have cluster-admin authority to the OpenShift Container Platform Dedicated environment where you are installing the hub cluster. To import, you must use the stable-2.0 channel of the klusterlet operator for 2.11. 1.2.2. Confirm your OpenShift Container Platform installation You must have a supported OpenShift Container Platform version, including the registry and storage services, installed and working. For more information about installing OpenShift Container Platform, see the OpenShift Container Platform documentation. Verify that a Red Hat Advanced Cluster Management hub cluster is not already installed on your OpenShift Container Platform cluster. Red Hat Advanced Cluster Management allows only one single Red Hat Advanced Cluster Management hub cluster installation on each OpenShift Container Platform cluster. Continue with the following steps if there is no Red Hat Advanced Cluster Management hub cluster installed: To ensure that the OpenShift Container Platform cluster is set up correctly, access the OpenShift Container Platform web console with the following command: See the following example output: Open the URL in your browser and check the result. If the console URL displays console-openshift-console.router.default.svc.cluster.local , set the value for openshift_master_default_subdomain when you install OpenShift Container Platform. See the following example of a URL: https://console-openshift-console.apps.new-coral.purple-chesterfield.com . You can proceed to install Red Hat Advanced Cluster Management from the console or the CLI. Both procedures are documented. 1.2.3. Installing from the OperatorHub web console interface Best practice: From the Administrator view in your OpenShift Container Platform navigation, install the OperatorHub web console interface that is provided with OpenShift Container Platform. Select Operators > OperatorHub to access the list of available operators, and select the Advanced Cluster Management for Kubernetes operator. On the Operator subscription page, select the options for your installation: Namespace information: The Red Hat Advanced Cluster Management hub cluster must be installed in its own namespace, or project. By default, the OperatorHub console installation process creates a namespace titled open-cluster-management . Best practice: Continue to use the open-cluster-management namespace if it is available. If there is already a namespace named open-cluster-management , choose a different namespace. Channel: The channel that you select corresponds to the release that you are installing. When you select the channel, it installs the identified release, and establishes that the future Errata updates within that release are obtained. Approval strategy for updates: The approval strategy identifies the human interaction that is required for applying updates to the channel or release to which you subscribed. Select Automatic to ensure any updates within that release are automatically applied. Select Manual to receive a notification when an update is available. If you have concerns about when the updates are applied, this might be best practice for you.
Important: To upgrade to the next minor release, you must return to the OperatorHub page and select a new channel for the more current release. Select Install to apply your changes and create the operator. Create the MultiClusterHub custom resource. In the OpenShift Container Platform console navigation, select Installed Operators > Advanced Cluster Management for Kubernetes . Select the MultiClusterHub tab. Select Create MultiClusterHub . Update the default values in the YAML file. See options in the MultiClusterHub advanced configuration section of the documentation. The following example shows the default template. Confirm that namespace is your project namespace. See the sample: apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> Select Create to initialize the custom resource. It can take up to 10 minutes for the Red Hat Advanced Cluster Management hub cluster to build and start. After the Red Hat Advanced Cluster Management hub cluster is created, the MultiClusterHub resource status displays Running from the MultiClusterHub tab of the Red Hat Advanced Cluster Management operator details. To gain access to the console, see the Accessing your console topic. 1.2.4. Installing from the OpenShift Container Platform CLI Create a Red Hat Advanced Cluster Management hub cluster namespace where the operator requirements are contained. Run the following command, where namespace is the name for your Red Hat Advanced Cluster Management hub cluster namespace. The value for namespace might be referred to as Project in the OpenShift Container Platform environment: Switch your project namespace to the one that you created. Replace namespace with the name of the Red Hat Advanced Cluster Management hub cluster namespace that you created in step 1. Create a YAML file to configure an OperatorGroup resource. Each namespace can have only one operator group. Replace default with the name of your operator group. Replace namespace with the name of your project namespace. See the following sample: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <default> namespace: <namespace> spec: targetNamespaces: - <namespace> Run the following command to create the OperatorGroup resource. Replace operator-group with the name of the operator group YAML file that you created: Create a YAML file to configure an OpenShift Container Platform subscription. Your file is similar to the following sample, replacing release-2.x with the current release: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: acm-operator-subscription spec: sourceNamespace: openshift-marketplace source: redhat-operators channel: release-2.x installPlanApproval: Automatic name: advanced-cluster-management Note: For installing the Red Hat Advanced Cluster Management hub cluster on infrastructure nodes, see Configuring infrastructure nodes for Red Hat Advanced Cluster Management . Run the following command to create the OpenShift Container Platform subscription. Replace subscription with the name of the subscription file that you created: Create a YAML file to configure the MultiClusterHub custom resource. Your default template should look similar to the following example.
Replace namespace with the name of your project namespace: apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: {} Note: For installing the Red Hat Advanced Cluster Management hub cluster on infrastructure nodes, see Configuring infrastructure nodes for Red Hat Advanced Cluster Management . Run the following command to create the MultiClusterHub custom resource. Replace custom-resource with the name of your custom resource file: If this step fails with the following error, the resources are still being created and applied. Run the command again in a few minutes when the resources are created: Run the following command to get the custom resource. It can take up to 10 minutes for the MultiClusterHub custom resource status to display as Running in the status.phase field after you run the command: If you are reinstalling Red Hat Advanced Cluster Management and the pods do not start, see Troubleshooting reinstallation failure for steps to work around this problem. Notes: A ServiceAccount with a ClusterRoleBinding automatically gives cluster administrator privileges to Red Hat Advanced Cluster Management and to any user credentials with access to the namespace where you install Red Hat Advanced Cluster Management. The installation also creates a namespace called local-cluster that is reserved for the Red Hat Advanced Cluster Management hub cluster when it is managed by itself. There cannot be an existing namespace called local-cluster . For security reasons, do not release access to the local-cluster namespace to any user who does not already have cluster-administrator access. You can now configure your OpenShift Container Platform cluster to contain infrastructure nodes to run approved management components. Running components on infrastructure nodes avoids allocating OpenShift Container Platform subscription quota for the nodes that are running those management components. See Configuring infrastructure nodes for Red Hat Advanced Cluster Management for that procedure. 1.3. Configuring infrastructure nodes for Red Hat Advanced Cluster Management Configure your OpenShift Container Platform cluster to contain infrastructure nodes to run approved Red Hat Advanced Cluster Management management components. Running components on infrastructure nodes avoids allocating OpenShift Container Platform subscription quota for the nodes that are running Red Hat Advanced Cluster Management management components. After adding infrastructure nodes to your OpenShift Container Platform cluster, follow the Installing from the OpenShift Container Platform CLI instructions and add configurations to the Operator Lifecycle Manager subscription and MultiClusterHub custom resource. 1.3.1. Configuring infrastructure nodes to the OpenShift Container Platform cluster Follow the procedures that are described in Creating infrastructure machine sets in the OpenShift Container Platform documentation. Infrastructure nodes are configured with Kubernetes taints and labels to keep non-management workloads from running on them.
To be compatible with the infrastructure node enablement provided by Red Hat Advanced Cluster Management, ensure your infrastructure nodes have the following taints and labels applied: metadata: labels: node-role.kubernetes.io/infra: "" spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/infra Add the following additional configuration before applying the Operator Lifecycle Manager Subscription: spec: config: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra effect: NoSchedule operator: Exists Add the following additional configuration before you apply the MultiClusterHub custom resource: spec: nodeSelector: node-role.kubernetes.io/infra: "" 1.4. Install in disconnected network environments You might need to install Red Hat Advanced Cluster Management for Kubernetes on disconnected Red Hat OpenShift Container Platform clusters. To install on a disconnected hub cluster, perform the following steps in addition to the usual install or upgrade steps that are for the connected network environment. Required access: You need cluster administration access for all installation and upgrade tasks. See the following sections: Prerequisites Confirm your OpenShift Container Platform installation Configure Operator Lifecycle Manager Configure image content source policies Install the Red Hat Advanced Cluster Management for Kubernetes operator and hub 1.4.1. Prerequisites You must meet the following requirements before you install Red Hat Advanced Cluster Management for Kubernetes: Since you are installing in a disconnected network environment, you need access to a local image registry to store mirrored Operator Lifecycle Manager catalogs and operator images. You probably already set up a local image registry when installing the OpenShift Container Platform cluster in this environment, so you should be able to use the same local image registry. You must have a workstation that has access to both the Internet and your local mirror registry. A supported Red Hat OpenShift Container Platform version must be deployed in your environment, and you must be logged in with the command line interface (CLI). See the OpenShift Container Platform version 4.11 install documentation for information on installing Red Hat OpenShift Container Platform. See Getting started with the CLI for information about installing and configuring oc commands with the Red Hat OpenShift CLI. Review Sizing your cluster to learn about setting up capacity for your hub cluster. 1.4.2. Confirm your OpenShift Container Platform installation While you are connected, run the oc -n openshift-console get route command to access the OpenShift Container Platform web console. See the following example output: openshift-console console console-openshift-console.apps.new-coral.purple-chesterfield.com console https reencrypt/Redirect None Open the URL in your browser and check the result. If the console URL displays console-openshift-console.router.default.svc.cluster.local , set the value for openshift_master_default_subdomain when you install OpenShift Container Platform. 1.4.3. Confirm availability of a local image registry Best practice: Use your existing mirror registry for the Operator Lifecycle Manager operator related content. Installing Red Hat Advanced Cluster Management for Kubernetes in a disconnected environment involves the use of a local mirror image registry. 
Because you have already completed the installation of the OpenShift Container Platform cluster in your disconnected environment, you already set up a mirror registry for use during the Red Hat OpenShift Container Platform cluster installation. If you do not already have a local image registry, create one by completing the procedure that is described in Mirroring images for a disconnected installation of the Red Hat OpenShift Container Platform documentation. 1.4.4. Configure Operator Lifecycle Manager Because Red Hat Advanced Cluster Management for Kubernetes is packaged as an operator, installing is completed by using Operator Lifecycle Manager. In disconnected environments, Operator Lifecycle Manager cannot access the standard operator sources for Red Hat provided operators because they are hosted on image registries that are not accessible from a disconnected cluster. Instead, a cluster administrator can enable the installation and upgrade of operators in a disconnected environment by using mirrored image registries and operator catalogs. To prepare your disconnected cluster for installing Red Hat Advanced Cluster Management for Kubernetes, follow the procedure that is described in Using Operator Lifecycle Manager on restricted networks in the OpenShift Container Platform documentation. 1.4.4.1. Additional requirements When you complete the procedures, note the following requirements that are also specific to Red Hat Advanced Cluster Management for Kubernetes: 1.4.4.1.1. Include operator packages in mirror catalog Include the required operator packages in your mirror catalog. Red Hat provides the Red Hat Advanced Cluster Management for Kubernetes operator in the Red Hat operators catalog, which is delivered by the registry.redhat.io/redhat/redhat-operator-index index image. When you prepare your mirror of this catalog index image, you can choose to either mirror the entire catalog as provided by Red Hat, or you can mirror a subset that contains only the operator packages that you intend to use. If you are creating a full mirror catalog, no special considerations are needed as all of the packages required to install Red Hat Advanced Cluster Management for Kubernetes are included. However, if you are creating a partial or filtered mirrored catalog, for which you identify particular packages to be included, you need to include the following package names in your list: advanced-cluster-management multicluster-engine Use one of the two mirroring procedures.
If you are creating the mirrored catalog or registry by using the OPM utility, opm index prune , include the following package names in the value of the -p option as displayed in the following example, with the current version replacing 4.x : opm index prune \ -f registry.redhat.io/redhat/redhat-operator-index:v4.x \ -p advanced-cluster-management,multicluster-engine \ -t myregistry.example.com:5000/mirror/my-operator-index:v4.x If you are populating the mirrored catalog or registry by using the oc-mirror plug-in instead, include the following package names in the packages list section of your ImageSetConfiguration , as displayed in the following example, with the current version replacing 4.x : kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: registry: imageURL: myregistry.example.com:5000/mirror/oc-mirror-metadata mirror: platform: channels: - name: stable-4.x type: ocp operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11 packages: - name: advanced-cluster-management - name: multicluster-engine additionalImages: [] helm: {} 1.4.4.1.2. Configure to use your mirror registry When you have populated a local mirror registry with the earlier packages that are required for installing Red Hat Advanced Cluster Management for Kubernetes, complete the steps that are described in the topic Using Operator Lifecycle Manager on restricted networks to make your mirror registry and catalog available on your disconnected cluster, which includes the following steps: Disabling the default OperatorHub sources Mirroring the Operator catalog Adding a catalog source for your mirrored catalog 1.4.4.1.3. Find the catalog source name As described in the procedures in the Red Hat OpenShift Container Platform documentation, you need to add a CatalogSource resource to your disconnected cluster. Important: Take note of the value of the metadata.name field, which you will need later. Add the CatalogSource resource into the openshift-marketplace namespace by using a YAML file similar to the following example, replacing 4.x with the current version: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-mirror-catalog-source namespace: openshift-marketplace spec: image: myregistry.example.com:5000/mirror/my-operator-index:v4.x sourceType: grpc You need the metadata.name field value for the annotation in the MulticlusterHub resource that you will create later. 1.4.5. Verify required packages are available Operator Lifecycle Manager polls catalog sources for available packages on a regular timed interval. After Operator Lifecycle Manager polls the catalog source for your mirrored catalog, you can verify that the required packages are available from on your disconnected cluster by querying the available PackageManifest resources. Run the following command, directed at your disconnected cluster: oc -n openshift-marketplace get packagemanifests The list that is displayed should include entries showing that the following packages are supplied by the catalog source for your mirror catalog: advanced-cluster-management multicluster-engine 1.4.6. Configure image content source policies In order to have your cluster obtain container images for the Red Hat Advanced Cluster Management for Kubernetes operator from your mirror registry, rather than from the internet-hosted registries, you must configure an ImageContentSourcePolicy on your disconnected cluster to redirect image references to your mirror registry. 
If you mirrored your catalog using the oc adm catalog mirror command, the needed image content source policy configuration is in the imageContentSourcePolicy.yaml file inside of the manifests-* directory that is created by that command. If you used the oc-mirror plug-in to mirror your catalog instead, the imageContentSourcePolicy.yaml file is within the oc-mirror-workspace/results-* directory created by the oc-mirror plug-in. In either case, you can apply the policies to your disconnected cluster by using an oc apply or oc replace command such as: oc replace -f ./<path>/imageContentSourcePolicy.yaml The required image content source policy statements can vary based on how you created your mirror registry, but are similar to this example: apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: labels: operators.openshift.org/catalog: "true" name: operator-0 spec: repositoryDigestMirrors: - mirrors: - myregistry.example.com:5000/rhacm2 source: registry.redhat.io/rhacm2 - mirrors: - myregistry.example.com:5000/multicluster-engine source: registry.redhat.io/multicluster-engine - mirrors: - myregistry.example.com:5000/openshift4 source: registry.redhat.io/openshift4 - mirrors: - myregistry.example.com:5000/redhat source: registry.redhat.io/redhat 1.4.7. Install the Red Hat Advanced Cluster Management for Kubernetes operator and hub cluster After you have configured Operator Lifecycle Manager and Red Hat OpenShift Container Platform as previously described, you can install Red Hat Advanced Cluster Management for Kubernetes by using either the OperatorHub console or a CLI. Follow the same guidance described in the Installing while connected online topic. Important: Creating the MulticlusterHub resource is the beginning of the installation process of your hub cluster. Because operator installation on a cluster requires the use of a non-default catalog source for the mirror catalog, a special annotation is needed in the MulticlusterHub resource to provide the name of the mirror catalog source to the operator. The following example displays the required mce-subscription-spec annotation: apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: namespace: open-cluster-management name: hub annotations: installer.open-cluster-management.io/mce-subscription-spec: '{"source": "my-mirror-catalog-source"}' spec: {} The mce-subscription-spec annotation is required because multicluster engine operator is automatically installed during the Red Hat Advanced Cluster Management installation. If you are creating the resource with a CLI, include the mce-subscription-spec annotation in the YAML that you apply with the oc apply command to create the MulticlusterHub resource. If you create the resource by using the OperatorHub console, switch to the YAML view and insert the annotation as previously displayed. Important: There is no field in the OperatorHub console for the annotation in the Field view panel to create the MulticlusterHub . 1.5. MultiClusterHub advanced configuration Red Hat Advanced Cluster Management for Kubernetes is installed by using an operator that deploys all of the required components. Some of the listed components are enabled by default. If a component is disabled , that resource is not deployed to the cluster until it is enabled. The operator works to deploy the following components: Table 1.7.
Table list of the deployed components Name Description Enabled app-lifecycle Unifies and simplifies options for constructing and deploying applications and application updates. True cluster-backup Provides backup and restore support for all hub cluster resources such as managed clusters, applications, and policies. False cluster-lifecycle Provides cluster management capabilities for OpenShift Container Platform and Red Hat Advanced Cluster Management hub clusters. True cluster-permission Automatically distributes RBAC resources to managed clusters and manage the lifecycle of those resources. True console Enables Red Hat Advanced Cluster Management web console plug-in. True grc Enables the security enhancement for you to define policies for your clusters. True insights Identifies existing or potential problems in your clusters. True multicluster-observability Enables monitoring to gain further insights into the health of your managed clusters. True search Provides visibility into your Kubernetes resources across all of your clusters. True submariner-addon Enables direct networking and service discovery between two or more managed clusters in your environment, either on-premises or in the cloud. True volsync Supports asynchronous replication of persistent volumes within a cluster, or across clusters with storage types that are not otherwise compatible for replication. True When you install Red Hat Advanced Cluster Management on to the cluster, not all of the listed components are enabled by default. You can further configure Red Hat Advanced Cluster Management during or after installation by adding one or more attributes to the MultiClusterHub custom resource. Continue reading for information about the attributes that you can add. 1.5.1. Console and component configuration The following example displays the spec.overrides default template that you can use to enable or disable the component: apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> 1 spec: overrides: components: - name: <name> 2 enabled: true Replace namespace with the name of your project. Replace name with the name of the component. Alternatively, you can run the following command. Replace namespace with the name of your project and name with the name of the component: Note: When the console component is disabled, the Red Hat OpenShift Container Platform console is disabled. 1.5.2. Custom Image Pull Secret If you plan to import Kubernetes clusters that were not created by OpenShift Container Platform or Red Hat Advanced Cluster Management, generate a secret that has your OpenShift Container Platform pull secret information to access the entitled content from the distribution registry. The secret requirements for OpenShift Container Platform clusters are automatically resolved by OpenShift Container Platform and Red Hat Advanced Cluster Management, so you do not have to create the secret if you are not importing other types of Kubernetes clusters to be managed. Your OpenShift Container Platform pull secret is associated with your Red Hat Customer Portal ID, and is the same across all Kubernetes providers. Important: These secrets are namespace-specific, so make sure that you are in the namespace that you use for your hub cluster. Go to cloud.redhat.com/openshift/install/pull-secret to download the OpenShift Container Platform pull secret file. Click Download pull secret . 
Run the following command to create your secret: Replace secret with the name of the secret that you want to create. Replace namespace with your project namespace, as the secrets are namespace-specific. Replace path-to-pull-secret with the path to your OpenShift Container Platform pull secret that you downloaded. The following example displays the spec.imagePullSecret template to use if you want to use a custom pull secret. Replace secret with the name of your pull secret: apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: imagePullSecret: <secret> 1.5.3. availabilityConfig The Red Hat Advanced Cluster Management hub cluster has two availabilities: High and Basic . By default, the hub cluster has an availability of High , which gives hub cluster components a replicaCount of 2 . This provides better support in cases of failover but consumes more resources than the Basic availability, which gives components a replicaCount of 1 . Important: Set spec.availabilityConfig to Basic if you are using multicluster engine operator on a single-node OpenShift cluster. The following example shows the spec.availabilityConfig template with Basic availability: apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: availabilityConfig: "Basic" 1.5.4. nodeSelector You can define a set of node selectors in the Red Hat Advanced Cluster Management hub cluster to install to specific nodes on your cluster. The following example shows spec.nodeSelector to assign Red Hat Advanced Cluster Management pods to nodes with the label node-role.kubernetes.io/infra : apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: nodeSelector: node-role.kubernetes.io/infra: "" 1.5.5. tolerations You can define a list of tolerations to allow the Red Hat Advanced Cluster Management hub cluster to tolerate specific taints defined on the cluster. The following example shows a spec.tolerations that matches a node-role.kubernetes.io/infra taint: apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: tolerations: - key: node-role.kubernetes.io/infra effect: NoSchedule operator: Exists The infra-node toleration is set on pods by default without specifying any tolerations in the configuration. Customizing tolerations in the configuration replaces this default. 1.5.6. disableHubSelfManagement By default, the Red Hat Advanced Cluster Management hub cluster is automatically imported and managed by itself. This managed hub cluster is named, local-cluster . The setting that specifies whether a hub cluster manages itself is in the multiclusterengine custom resource. Changing this setting in Red Hat Advanced Cluster Management automatically changes the setting in the multiclusterengine custom resource. Note: On a Red Hat Advanced Cluster Management hub cluster that is managing a multicluster engine operator cluster, any earlier manual configurations are replaced by this action. If you do not want the Red Hat Advanced Cluster Management hub cluster to manage itself, you need to change the setting for spec.disableHubSelfManagement from false to true . If the setting is not included in the YAML file that defines the custom resource, you need to add it. The hub cluster can only be managed with this option. 
Setting this option to true and attempting to manage the hub manually leads to unexpected behavior. The following example shows the default template to use if you want to disable the hub cluster self-management feature. Replace namespace with the name of your project: apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: disableHubSelfManagement: true To enable the default local-cluster , return the setting to false , or remove this setting. 1.5.7. disableUpdateClusterImageSets If you want to ensure that you use the same release image for all of your clusters, you can create your own custom list of release images that are available when you create a cluster. See the following instructions in Maintaining a custom list of release images when connected to manage your available release images and to set the spec.disableUpdateClusterImageSets attribute, which stops the custom image list from being overwritten. The following example shows the default template that disables updates to the cluster image set. Replace namespace with the name of your project: apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: disableUpdateClusterImageSets: true 1.5.8. customCAConfigmap (Deprecated) By default, Red Hat OpenShift Container Platform uses the Ingress Operator to create an internal CA. The following example shows the default template used to provide a customized OpenShift Container Platform default ingress CA certificate to Red Hat Advanced Cluster Management. Replace namespace with the name of your project. Replace the spec.customCAConfigmap value with the name of your ConfigMap : apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: customCAConfigmap: <configmap> 1.5.9. sslCiphers (Deprecated) By default, the Red Hat Advanced Cluster Management hub cluster includes the full list of supported SSL ciphers. The following example shows the default spec.ingress.sslCiphers template that is used to list sslCiphers for the management ingress. Replace namespace with the name of your project: apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: ingress: sslCiphers: - "ECDHE-ECDSA-AES128-GCM-SHA256" - "ECDHE-RSA-AES128-GCM-SHA256" 1.5.10. ClusterBackup The enableClusterBackup field is no longer supported and is replaced by this component. The following example shows the spec.overrides default template used to enable ClusterBackup . Replace namespace with the name of your project: apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: overrides: components: - name: cluster-backup enabled: true Alternatively, you can run the following command. Replace namespace with the name of your project. 1.6. Upgrading You control your Red Hat Advanced Cluster Management for Kubernetes upgrades by using the operator subscription settings in the Red Hat OpenShift Container Platform console. Important: Upgrades are only supported from the immediate version. You can upgrade to the available feature release, but you cannot skip a release during upgrade. The Operator Lifecycle Manager operatorcondition helps control how versions are upgraded. 
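For example, you can query that resource directly to confirm whether Operator Lifecycle Manager currently allows an upgrade. The following command is a minimal sketch and assumes the default open-cluster-management installation namespace; the exact OperatorCondition resource name varies with the installed operator version, so list the resources first if you are not sure which one applies:
oc -n open-cluster-management get operatorcondition -o yaml
Look for a condition with the type Upgradeable in the output. A status of False indicates that an operation is still in progress and that the upgrade is blocked until it completes.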
When you initially deploy Red Hat Advanced Cluster Management by using the operator, you make the following selections:

Channel: Channel corresponds to the version of the product that you are installing. The initial channel setting is often the most current channel that was available at the time of installation.

Approval: Approval specifies whether approval is required for updates within the channel, or if they are done automatically. If set to Automatic, then minor release (Errata) updates in the selected channel are deployed without administrator intervention. If set to Manual, then each update to the minor release (Errata) within the channel requires an administrator to approve the update.

Required access: OpenShift Container Platform administrator

You also use these settings when you upgrade to the latest version of Red Hat Advanced Cluster Management by using the operator. Complete the following steps to upgrade your operator: Important: You cannot revert back to an earlier version after upgrading to a later version in the channel selection. You must uninstall the operator and reinstall it with the earlier version to use a previous version. Log in to your OpenShift Container Platform operator hub. In the OpenShift Container Platform navigation, select Operators > Installed operators. Select the Red Hat Advanced Cluster Management for Kubernetes operator. Select the Subscription tab to edit the subscription settings. Ensure that the Upgrade Status is labeled Up to date. This status indicates that the operator is at the latest level that is available in the selected channel. If the Upgrade Status indicates that there is an upgrade pending, complete the following steps to update it to the latest minor release that is available in the channel: Click the Manual setting in the Approval field to edit the value. Select Automatic to enable automatic updates. Select Save to commit your change. Wait for the automatic updates to be applied to the operator. The updates automatically add the required updates to the latest version in the selected channel. When all of the updates are complete, the Upgrade Status field indicates Up to date. Note: It can take up to 10 minutes for the MultiClusterHub custom resource to finish upgrading. You can check whether the upgrade is still in progress by entering the following command: While it is upgrading, the Status field shows Updating. After upgrading is complete, the Status field shows Running. Now that the Upgrade Status is Up to date, click the value in the Channel field to edit it. Select the channel for the next available feature release, but do not attempt to skip a channel. Important: The Operator Lifecycle Manager operatorcondition resource checks for upgrades during the current upgrade process and prevents skipping versions. You can check that same resource status to see whether the upgradable status is true or false. Select Save to save your changes. Wait for the automatic upgrade to complete. After the upgrade to the feature release completes, the updates to the latest patch releases within the channel are deployed. If you have to upgrade to a later feature release, repeat steps 7-9 until your operator is at the latest level of the desired channel. Make sure that all of the patch releases are deployed for your final channel. Optional: You can set your Approval setting to Manual, if you want your future updates within the channel to require manual approvals.
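You can approximate the same console steps from the command line. The following sketch is illustrative only and is not a substitute for the documented procedure; it assumes the subscription is named acm-operator-subscription in the open-cluster-management namespace, as in the installation examples, and that release-2.x stands for the channel that you are moving to:

# Switch the subscription to automatic approval so that pending updates in the current channel are applied
oc patch subscription acm-operator-subscription -n open-cluster-management --type merge -p '{"spec":{"installPlanApproval":"Automatic"}}'

# Move to the next feature release channel; do not skip a channel
oc patch subscription acm-operator-subscription -n open-cluster-management --type merge -p '{"spec":{"channel":"release-2.x"}}'

# Watch the MultiClusterHub status until it changes from Updating back to Running
oc get mch -n open-cluster-management -w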
For more information about upgrading your operator, see Operators in the OpenShift Container Platform documentation.

1.6.1. Managing cluster pools with an upgrade

If you are Managing cluster pools (Technology Preview), you need further configuration to stop automatic management of these cluster pools after upgrade. Set cluster.open-cluster-management.io/createmanagedcluster: "false" in the ClusterClaim metadata.annotations. All existing cluster claims are automatically imported when the product is upgraded unless you change this setting.

1.7. Upgrading in a disconnected network environment

See the steps and information to upgrade Red Hat Advanced Cluster Management for Kubernetes in a disconnected network environment. Note: This information follows the upgrading procedure in Upgrading. Review that procedure, then see the following information: During your installation, or upgrade, you might encounter important information that is related to the interdependency between the Red Hat Advanced Cluster Management and multicluster engine operator. See Install in disconnected network environments for considerations during install or upgrade. As is the case for upgrading in a connected network environment, the upgrade process is started by changing the upgrade channel in your Operator Lifecycle Manager subscription for Red Hat Advanced Cluster Management for Kubernetes to the upgrade channel for the new release. However, because of the special characteristics of the disconnected environment, you need to address the following mirroring requirements before changing the update channel to start the upgrade process:

Ensure that required packages are updated in your mirror catalog. During installation, or during an update, you created a mirror catalog and a registry that contains operator packages and images that are needed to install Red Hat Advanced Cluster Management for Kubernetes in a disconnected network environment. To upgrade, you need to update your mirror catalog and registry to pick up the updated versions of the operator packages. Similar to your installation actions, you need to ensure that your mirror catalog and registry include the following operator packages in the list of operators to be included or updated: advanced-cluster-management and multicluster-engine.

Verify your MulticlusterHub resource instance. During installation or an update, you created an instance of the MulticlusterHub resource, and due to the disconnected environment, you added a mce-subscription-spec annotation to that resource. If your procedures for updating your mirror catalog and registry resulted in the updated catalog being available on the OpenShift Container Platform cluster through a CatalogSource with the same name as the one that you previously used, you do not need to update your MulticlusterHub resource to update the mce-subscription-spec annotation. However, if your procedures for updating your mirrored catalog and registry resulted in a newly named CatalogSource being created, update the mce-subscription-spec annotation in your MulticlusterHub resource to reflect the new catalog source name.

1.7.1. Upgrade with catalog mirroring

Red Hat Advanced Cluster Management uses the related multicluster engine operator functionality to provide foundational services that were delivered as part of the product. Red Hat Advanced Cluster Management automatically installs and manages the required multicluster engine operator and MulticlusterEngine resource instance as part of the hub cluster installation and upgrade.
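Before you change the update channel, you can confirm that both required operator packages are visible from your mirror catalog and check which catalog source your MulticlusterHub annotation currently points to. This is a hedged sketch, not part of the documented procedure; it assumes your CatalogSource is named my-mirror-catalog-source in the openshift-marketplace namespace, as in the earlier mirroring examples:

# Confirm that the mirror catalog serves both operator packages that the upgrade needs
oc -n openshift-marketplace get packagemanifests -l catalog=my-mirror-catalog-source | grep -E 'advanced-cluster-management|multicluster-engine'

# Check the annotations on the MulticlusterHub resource, including the mce-subscription-spec annotation
oc get multiclusterhub multiclusterhub -n <namespace> -o jsonpath='{.metadata.annotations}{"\n"}'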
In connected network environments, the cluster administrator can install or upgrade Red Hat Advanced Cluster Management without special mirror catalogs and catalog sources. However, because installation of any Operator Lifecycle Manager operator in a disconnected environment involves the use of special mirror catalogs and catalog sources, as described in the earlier sections, some additional steps are necessary after installation.

Update your procedures for populating the mirror catalog. If your mirroring procedures created a full copy of the Red Hat Operators catalog when you installed Red Hat Advanced Cluster Management, no special mirroring updates are required. Refresh your catalog to pick up the updated content for the new operator releases. However, if your procedures populated a mirror catalog that is a filtered catalog, you need to update your mirroring procedures to ensure that the multicluster-engine operator package is included in the mirror catalog, in addition to the advanced-cluster-management package. See the Include required operator packages in your mirror catalog topic, which provides examples of the options to use when populating the mirror catalog. Update the operator-package lists that are used in your procedures to match these new requirements.

Update your MulticlusterHub resource instance. As described in the Install in disconnected network environments topic, you need a new annotation on the MulticlusterHub resource when the hub cluster is installed or upgraded in a disconnected environment. Best practice: Update your MulticlusterHub resource instance to include the required annotation before you change the Operator Lifecycle Manager update channel in your Operator Lifecycle Manager subscription to the advanced-cluster-management operator package to start the upgrade. This update allows the upgrade to proceed without delay. Use the oc edit command to update your MulticlusterHub resource to add the mce-subscription-spec annotation as displayed in the following example:

metadata:
  annotations:
    installer.open-cluster-management.io/mce-subscription-spec: '{"source": "<my-mirror-catalog-source>"}'

Replace <my-mirror-catalog-source> in the example with the name of the CatalogSource resource located in the openshift-marketplace namespace for your mirror catalog. Important: If you begin an upgrade before you add the annotation, the upgrade begins but stalls when the operator attempts to install a subscription to multicluster-engine in the background. The status of the MulticlusterHub resource continues to display upgrading during this time. To resolve this issue, run oc edit to add the mce-subscription-spec annotation as shown previously.

1.8. Uninstalling

When you uninstall Red Hat Advanced Cluster Management for Kubernetes, you see two different levels of the uninstall process: A custom resource removal and a complete operator uninstall. The uninstall process can take up to 20 minutes. The first level is the custom resource removal, which is the most basic type of uninstall that removes the custom resource of the MultiClusterHub instance, but leaves other required operator resources. This level of uninstall is helpful if you plan to reinstall using the same settings and components. The second level is a more complete uninstall that removes most operator components, excluding components such as custom resource definitions. When you continue with this step, it removes all of the components and subscriptions that were not removed with the custom resource removal.
After this uninstall, you must reinstall the operator before reinstalling the custom resource.

1.8.1. Prerequisites

Before you uninstall the Red Hat Advanced Cluster Management hub cluster, you must detach all of the clusters that are managed by that hub cluster. Detach all clusters that are still managed by the hub cluster, then try to uninstall again. If you use Discovery, you might see the following error when you attempt to uninstall: Cannot delete MultiClusterHub resource because DiscoveryConfig resource(s) exist To disable Discovery, complete the following steps: From the console, navigate to the Discovered Clusters table and click Disable cluster discovery. Confirm that you want to remove the service. You can also use the terminal. Run the following command to disable Discovery: oc delete discoveryconfigs --all --all-namespaces If you have agent service configurations attached, you might see the following message: Cannot delete MultiClusterHub resource because AgentServiceConfig resource(s) exist To disable and remove the AgentServiceConfig resource by using the command line interface, complete the following steps: Log in to your hub cluster. Delete the AgentServiceConfig custom resource by entering the following command: oc delete agentserviceconfig --all If you have managed clusters attached, you might see the following message. Note: This does not include the local-cluster, which is your self-managed hub cluster: Cannot delete MultiClusterHub resource because ManagedCluster resource(s) exist For more information about detaching clusters, see the Removing a cluster from management section by selecting the information for your provider in Cluster creation introduction. If you have Observability, you might see the following message: Cannot delete MultiClusterHub resource because MultiClusterObservability resource(s) exist To disable and remove the MultiClusterObservability resource by using the terminal, see the following procedure: Log in to your hub cluster. Delete the MultiClusterObservability custom resource by entering the following command: oc delete mco observability To remove the MultiClusterObservability custom resource by using the console, see the following procedure: If the MultiClusterObservability custom resource is installed, select the tab for MultiClusterObservability. Select the Options menu for the MultiClusterObservability custom resource. Select Delete MultiClusterObservability. When you delete the resource, the pods in the open-cluster-management-observability namespace on the Red Hat Advanced Cluster Management hub cluster, and the pods in the open-cluster-management-addon-observability namespace on all managed clusters, are removed. Note: Your object storage is not affected after you remove the observability service.

1.8.2. Removing resources by using commands

If you have not already done so, ensure that your OpenShift Container Platform CLI is configured to run oc commands. See Getting started with the OpenShift CLI in the OpenShift Container Platform documentation for more information about how to configure the oc commands. Change to your project namespace by entering the following command. Replace namespace with the name of your project namespace: oc project <namespace> Enter the following command to remove the MultiClusterHub custom resource: oc delete multiclusterhub --all To view the progress, enter the following command: oc get mch -o yaml Remove any potential remaining artifacts by running the clean-up script. Run this clean-up script if you plan to reinstall with an older version of Red Hat Advanced Cluster Management on the same cluster.
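Before you run the clean-up script, you can list the kinds of resources that it targets to see what actually remains on the cluster. This is a minimal, optional sketch and is not part of the documented procedure; resource types that were already removed return a not found error, which is harmless here:

# List leftover hub resources that the clean-up script removes
oc get multiclusterhub --all-namespaces
oc get clusterimageset
oc get crd | grep open-cluster-management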
Copy the following script into a file:

#!/bin/bash
ACM_NAMESPACE=<namespace>
oc delete mch --all -n $ACM_NAMESPACE
oc delete apiservice v1.admission.cluster.open-cluster-management.io v1.admission.work.open-cluster-management.io
oc delete clusterimageset --all
oc delete clusterrole multiclusterengines.multicluster.openshift.io-v1-admin multiclusterengines.multicluster.openshift.io-v1-crdview multiclusterengines.multicluster.openshift.io-v1-edit multiclusterengines.multicluster.openshift.io-v1-view open-cluster-management:addons:application-manager open-cluster-management:admin-aggregate open-cluster-management:cert-policy-controller-hub open-cluster-management:cluster-manager-admin-aggregate open-cluster-management:config-policy-controller-hub open-cluster-management:edit-aggregate open-cluster-management:policy-framework-hub open-cluster-management:view-aggregate
oc delete crd klusterletaddonconfigs.agent.open-cluster-management.io placementbindings.policy.open-cluster-management.io policies.policy.open-cluster-management.io userpreferences.console.open-cluster-management.io discoveredclusters.discovery.open-cluster-management.io discoveryconfigs.discovery.open-cluster-management.io
oc delete mutatingwebhookconfiguration ocm-mutating-webhook managedclustermutators.admission.cluster.open-cluster-management.io multicluster-observability-operator
oc delete validatingwebhookconfiguration channels.apps.open.cluster.management.webhook.validator application-webhook-validator multiclusterhub-operator-validating-webhook ocm-validating-webhook multicluster-observability-operator multiclusterengines.multicluster.openshift.io

Replace <namespace> in the script with the name of the namespace where Red Hat Advanced Cluster Management was installed. Important: Ensure that you specify the correct namespace, as the namespace is cleaned out and deleted. Run the script to remove any possible artifacts that remain from the installation. If there are no remaining artifacts, a message is returned that no resources were found. Note: If you plan to reinstall the same Red Hat Advanced Cluster Management version, you can skip the steps in this procedure and reinstall the custom resource. Proceed for a complete operator uninstall. Enter the following commands to delete the Red Hat Advanced Cluster Management ClusterServiceVersion and Subscription in the namespace where it is installed. Replace the 2.x.0 value with the current major or minor release:

oc get csv
NAME                                 DISPLAY                                      VERSION   REPLACES   PHASE
advanced-cluster-management.v2.x.0   Advanced Cluster Management for Kubernetes   2.x.0                Succeeded

oc delete clusterserviceversion advanced-cluster-management.v2.x.0

oc get sub
NAME                        PACKAGE                       SOURCE                CHANNEL
acm-operator-subscription   advanced-cluster-management   acm-custom-registry   release-2.x

oc delete sub acm-operator-subscription

Note: The name of the subscription and version of the CSV might differ.

1.8.3. Deleting the components by using the console

When you use the Red Hat OpenShift Container Platform console to uninstall, you remove the operator. Complete the following steps to uninstall by using the console: In the OpenShift Container Platform console navigation, select Operators > Installed Operators > Advanced Cluster Management for Kubernetes. Remove the MultiClusterHub custom resource. Select the tab for MultiClusterHub. Select the Options menu for the MultiClusterHub custom resource. Select Delete MultiClusterHub.
Run the clean-up script according to the procedure in Removing resources by using commands. Note: If you plan to reinstall the same Red Hat Advanced Cluster Management version, you can skip the rest of the steps in this procedure and reinstall the custom resource. Navigate to Installed Operators. Remove the Red Hat Advanced Cluster Management operator by selecting the Options menu and selecting Uninstall operator. | [
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: BackupSchedule metadata: name:schedule-acm namespace:open-cluster-management-backup spec: veleroSchedule:0 */1 * * * veleroTtl:120h",
"-n openshift-console get route",
"openshift-console console console-openshift-console.apps.new-coral.purple-chesterfield.com console https reencrypt/Redirect None",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace>",
"create namespace <namespace>",
"project <namespace>",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <default> namespace: <namespace> spec: targetNamespaces: - <namespace>",
"apply -f <path-to-file>/<operator-group>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: acm-operator-subscription spec: sourceNamespace: openshift-marketplace source: redhat-operators channel: release-2.x installPlanApproval: Automatic name: advanced-cluster-management",
"apply -f <path-to-file>/<subscription>.yaml",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: {}",
"apply -f <path-to-file>/<custom-resource>.yaml",
"error: unable to recognize \"./mch.yaml\": no matches for kind \"MultiClusterHub\" in version \"operator.open-cluster-management.io/v1\"",
"get mch -o=jsonpath='{.items[0].status.phase}'",
"metadata: labels: node-role.kubernetes.io/infra: \"\" spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/infra",
"spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra effect: NoSchedule operator: Exists",
"spec: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"openshift-console console console-openshift-console.apps.new-coral.purple-chesterfield.com console https reencrypt/Redirect None",
"opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.x -p advanced-cluster-management,multicluster-engine -t myregistry.example.com:5000/mirror/my-operator-index:v4.x",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: registry: imageURL: myregistry.example.com:5000/mirror/oc-mirror-metadata mirror: platform: channels: - name: stable-4.x type: ocp operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11 packages: - name: advanced-cluster-management - name: multicluster-engine additionalImages: [] helm: {}",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-mirror-catalog-source namespace: openshift-marketplace spec: image: myregistry.example.com:5000/mirror/my-operator-index:v4.x sourceType: grpc",
"-n openshift-marketplace get packagemanifests",
"replace -f ./<path>/imageContentSourcePolicy.yaml",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: labels: operators.openshift.org/catalog: \"true\" name: operator-0 spec: repositoryDigestMirrors: - mirrors: - myregistry.example.com:5000/rhacm2 source: registry.redhat.io/rhacm2 - mirrors: - myregistry.example.com:5000/multicluster-engine source: registry.redhat.io/multicluster-engine - mirrors: - myregistry.example.com:5000/openshift4 source: registry.redhat.io/openshift4 - mirrors: - myregistry.example.com:5000/redhat source: registry.redhat.io/redhat",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: namespace: open-cluster-management name: hub annotations: installer.open-cluster-management.io/mce-subscription-spec: '{\"source\": \"my-mirror-catalog-source\"}' spec: {}",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> 1 spec: overrides: components: - name: <name> 2 enabled: true",
"patch MultiClusterHub multiclusterhub -n <namespace> --type=json -p='[{\"op\": \"add\", \"path\": \"/spec/overrides/components/-\",\"value\":{\"name\":\"<name>\",\"enabled\":true}}]'",
"create secret generic <secret> -n <namespace> --from-file=.dockerconfigjson=<path-to-pull-secret> --type=kubernetes.io/dockerconfigjson",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: imagePullSecret: <secret>",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: availabilityConfig: \"Basic\"",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: tolerations: - key: node-role.kubernetes.io/infra effect: NoSchedule operator: Exists",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: disableHubSelfManagement: true",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: disableUpdateClusterImageSets: true",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: customCAConfigmap: <configmap>",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: ingress: sslCiphers: - \"ECDHE-ECDSA-AES128-GCM-SHA256\" - \"ECDHE-RSA-AES128-GCM-SHA256\"",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: overrides: components: - name: cluster-backup enabled: true",
"patch MultiClusterHub multiclusterhub -n <namespace> --type=json -p='[{\"op\": \"add\", \"path\": \"/spec/overrides/components/-\",\"value\":{\"name\":\"cluster-backup\",\"enabled\":true}}]'",
"get mch",
"metadata: annotations: installer.open-cluster-management.io/mce-subscription-spec: '{\"source\": \"<my-mirror-catalog-source>\"}'",
"Cannot delete MultiClusterHub resource because DiscoveryConfig resource(s) exist",
"delete discoveryconfigs --all --all-namespaces",
"Cannot delete MultiClusterHub resource because AgentServiceConfig resource(s) exist",
"delete agentserviceconfig --all",
"Cannot delete MultiClusterHub resource because ManagedCluster resource(s) exist",
"Cannot delete MultiClusterHub resource because MultiClusterObservability resource(s) exist",
"delete mco observability",
"project <namespace>",
"delete multiclusterhub --all",
"get mch -o yaml",
"#!/bin/bash ACM_NAMESPACE=<namespace> delete mch --all -n USDACM_NAMESPACE delete apiservice v1.admission.cluster.open-cluster-management.io v1.admission.work.open-cluster-management.io delete clusterimageset --all delete clusterrole multiclusterengines.multicluster.openshift.io-v1-admin multiclusterengines.multicluster.openshift.io-v1-crdview multiclusterengines.multicluster.openshift.io-v1-edit multiclusterengines.multicluster.openshift.io-v1-view open-cluster-management:addons:application-manager open-cluster-management:admin-aggregate open-cluster-management:cert-policy-controller-hub open-cluster-management:cluster-manager-admin-aggregate open-cluster-management:config-policy-controller-hub open-cluster-management:edit-aggregate open-cluster-management:policy-framework-hub open-cluster-management:view-aggregate delete crd klusterletaddonconfigs.agent.open-cluster-management.io placementbindings.policy.open-cluster-management.io policies.policy.open-cluster-management.io userpreferences.console.open-cluster-management.io discoveredclusters.discovery.open-cluster-management.io discoveryconfigs.discovery.open-cluster-management.io delete mutatingwebhookconfiguration ocm-mutating-webhook managedclustermutators.admission.cluster.open-cluster-management.io multicluster-observability-operator delete validatingwebhookconfiguration channels.apps.open.cluster.management.webhook.validator application-webhook-validator multiclusterhub-operator-validating-webhook ocm-validating-webhook multicluster-observability-operator multiclusterengines.multicluster.openshift.io",
"get csv NAME DISPLAY VERSION REPLACES PHASE advanced-cluster-management.v2.x.0 Advanced Cluster Management for Kubernetes 2.x.0 Succeeded delete clusterserviceversion advanced-cluster-management.v2.x.0 get sub NAME PACKAGE SOURCE CHANNEL acm-operator-subscription advanced-cluster-management acm-custom-registry release-2.x delete sub acm-operator-subscription"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/install/installing |
Chapter 23. Logging Tapset | Chapter 23. Logging Tapset This family of functions is used to send simple message strings to various destinations. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/logging-dot-stp |