Chapter 5. Uninstalling OpenShift sandboxed containers
Chapter 5. Uninstalling OpenShift sandboxed containers You can uninstall OpenShift sandboxed containers by using either the OpenShift Container Platform web console or OpenShift CLI ( oc ). Both procedures are explained below. 5.1. Uninstalling OpenShift sandboxed containers using the web console Use the OpenShift Container Platform web console to delete the relevant OpenShift sandboxed containers pods, resources, and namespace. 5.1.1. Deleting OpenShift sandboxed containers pods using the web console To uninstall OpenShift sandboxed containers, you must first delete all running pods that use kata as the runtimeClass . Prerequisites You have OpenShift Container Platform 4.10 installed on your cluster. You have access to the cluster as a user with the cluster-admin role. You have a list of the pods that use kata as the runtimeClass . Procedure From the Administrator perspective, navigate to Workloads Pods . Search for the pod that you want to delete using the Search by name field. Click the pod name to open it. On the Details page, check that kata is displayed for Runtime class . Click the Actions menu and select Delete Pod . Click Delete in the confirmation window. Additional resources You can retrieve a list of running pods that use kata as the runtimeClass from the OpenShift CLI. For details, see Deleting OpenShift sandboxed containers pods . 5.1.2. Deleting the KataConfig custom resource using the web console Deleting the KataConfig custom resource (CR) removes and uninstalls the kata runtime and its related resources from your cluster. Important Deleting the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. Factors that impede reboot time are as follows: A larger OpenShift Container Platform deployment with a greater number of worker nodes. Activation of the BIOS and Diagnostics utility. Deployment on a hard drive rather than on an SSD. Deployment on physical nodes such as bare metal, rather than on virtual nodes. A slow CPU or network. Prerequisites You have OpenShift Container Platform 4.10 installed on your cluster. You have access to the cluster as a user with the cluster-admin role. You have no running pods that use kata as the runtimeClass . Procedure From the Administrator perspective, navigate to Operators Installed Operators . Search for the OpenShift sandboxed containers Operator using the Search by name field. Click the Operator to open it, and then select the KataConfig tab. Click the Options menu for the KataConfig resource, and then select Delete KataConfig . Click Delete in the confirmation window. Wait for the Kata runtime and resources to uninstall and for the worker nodes to reboot before continuing to the step. 5.1.3. Deleting the OpenShift sandboxed containers Operator using the web console Deleting the OpenShift sandboxed containers Operator removes the catalog subscription, Operator group, and cluster service version (CSV) for that Operator. Prerequisites You have OpenShift Container Platform 4.10 installed on your cluster. You have access to the cluster as a user with the cluster-admin role. Procedure From the Administrator perspective, navigate to Operators Installed Operators . Search for the OpenShift sandboxed containers Operator using the Search by name field. Click the Options menu for the Operator and select Uninstall Operator . Click Uninstall in the confirmation window. 5.1.4. 
Deleting the OpenShift sandboxed containers namespace using the web console After you run the preceding commands, your cluster is restored to the state it was in prior to the installation process. You can now revoke namespace access to the Operator by deleting the openshift-sandboxed-containers-operator namespace. Prerequisites You have OpenShift Container Platform 4.10 installed on your cluster. You have access to the cluster as a user with the cluster-admin role. Procedure From the Administrator perspective, navigate to Administration Namespaces . Search for the openshift-sandboxed-containers-operator namespace using the Search by name field. Click the Options menu for the namespace and select Delete Namespace . Note If the Delete Namespace option is not available, you do not have permission to delete the namespace. In the Delete Namespace pane, enter openshift-sandboxed-containers-operator and click Delete . Click Delete . 5.1.5. Deleting the KataConfig custom resource definition using the web console The KataConfig custom resource definition (CRD) lets you define the KataConfig CR. To complete the uninstall process, delete the KataConfig CRD from your cluster. Prerequisites You have OpenShift Container Platform 4.10 installed on your cluster. You have access to the cluster as a user with the cluster-admin role. You have removed the KataConfig CR from your cluster. You have removed the OpenShift sandboxed containers Operator from your cluster. Procedure From the Administrator perspective, navigate to Administration CustomResourceDefinitions . Search for KataConfig using the Search by name field. Click the Options menu for the KataConfig CRD, and then select Delete CustomResourceDefinition . Click Delete in the confirmation window. Wait for the KataConfig CRD to disappear from the list. This can take several minutes. 5.2. Uninstalling OpenShift sandboxed containers using the CLI You can uninstall OpenShift sandboxed containers by using the OpenShift Container Platform command-line interface (CLI) . Follow the steps below in the order that they are presented. 5.2.1. Deleting OpenShift sandboxed containers pods using the CLI To uninstall OpenShift sandboxed containers, you must first delete all running pods that use kata as the runtimeClass . Prerequisites You have installed the OpenShift CLI ( oc ). You have the command-line JSON processor ( jq ) installed. Procedure Search for pods that use kata as the runtimeClass by running the following command: $ oc get pods -A -o json | jq -r '.items[] | select(.spec.runtimeClassName == "kata").metadata.name' To delete each pod, run the following command: $ oc delete pod <pod-name> 5.2.2. Deleting the KataConfig custom resource using the CLI Remove and uninstall the kata runtime and all its related resources, such as CRI-O config and RuntimeClass , from your cluster. The deletion typically takes between ten and forty minutes, depending on the size of the deployment. Prerequisites You have OpenShift Container Platform 4.10 installed on your cluster. You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. Important Deleting the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. Factors that impede reboot time are as follows: A larger OpenShift Container Platform deployment with a greater number of worker nodes. Activation of the BIOS and Diagnostics utility. Deployment on a hard drive rather than on an SSD. 
Deployment on physical nodes such as bare metal, rather than on virtual nodes. A slow CPU or network. Procedure Delete the KataConfig custom resource by running the following command: $ oc delete kataconfig <KataConfig_CR_Name> The OpenShift sandboxed containers Operator removes all resources that were initially created to enable the runtime on your cluster. Important During deletion, the CLI stops responding until all worker nodes reboot. Wait for the process to complete before performing the verification or continuing to the next procedure. Verification To verify that the KataConfig custom resource is deleted, run the following command: $ oc get kataconfig <KataConfig_CR_Name> Example output No KataConfig instances exist 5.2.3. Deleting the OpenShift sandboxed containers Operator using the CLI Remove the OpenShift sandboxed containers Operator from your cluster by deleting the Operator subscription, Operator group, cluster service version (CSV), and namespace. Prerequisites You have OpenShift Container Platform 4.10 installed on your cluster. You have installed the OpenShift CLI ( oc ). You have installed the command-line JSON processor ( jq ). You have access to the cluster as a user with the cluster-admin role. Procedure Fetch the cluster service version (CSV) name for OpenShift sandboxed containers from the subscription by running the following command: CSV_NAME=$(oc get csv -n openshift-sandboxed-containers-operator -o=custom-columns=:metadata.name) Delete the OpenShift sandboxed containers Operator subscription from Operator Lifecycle Manager (OLM) by running the following command: $ oc delete subscription sandboxed-containers-operator -n openshift-sandboxed-containers-operator Delete the CSV name for OpenShift sandboxed containers by running the following command: $ oc delete csv ${CSV_NAME} -n openshift-sandboxed-containers-operator Fetch the OpenShift sandboxed containers Operator group name by running the following command: $ OG_NAME=$(oc get operatorgroup -n openshift-sandboxed-containers-operator -o=jsonpath={..name}) Delete the OpenShift sandboxed containers Operator group name by running the following command: $ oc delete operatorgroup ${OG_NAME} -n openshift-sandboxed-containers-operator Delete the OpenShift sandboxed containers namespace by running the following command: $ oc delete namespace openshift-sandboxed-containers-operator 5.2.4. Deleting the KataConfig custom resource definition using the CLI The KataConfig custom resource definition (CRD) lets you define the KataConfig CR. Delete the KataConfig CRD from your cluster. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have removed the KataConfig CR from your cluster. You have removed the OpenShift sandboxed containers Operator from your cluster. Procedure Delete the KataConfig CRD by running the following command: $ oc delete crd kataconfigs.kataconfiguration.openshift.io Verification To verify that the KataConfig CRD is deleted, run the following command: $ oc get crd kataconfigs.kataconfiguration.openshift.io Example output Unknown CR KataConfig
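The CLI procedures in this chapter can also be chained into a single script. The following is a minimal sketch that only reuses the commands documented above; the KataConfig CR name example-kataconfig is a hypothetical placeholder, and the pod deletion adds the -n flag so that pods outside the current project are removed as well.

#!/usr/bin/env bash
# Sketch: uninstall OpenShift sandboxed containers from the CLI.
# Assumes cluster-admin access and that oc and jq are installed.
set -euo pipefail

NS=openshift-sandboxed-containers-operator

# 1. Delete every running pod that uses kata as the runtimeClass.
oc get pods -A -o json \
  | jq -r '.items[] | select(.spec.runtimeClassName == "kata") | "\(.metadata.namespace) \(.metadata.name)"' \
  | while read -r ns pod; do
      oc delete pod "${pod}" -n "${ns}"
    done

# 2. Delete the KataConfig CR; the CLI blocks until the worker nodes finish rebooting.
oc delete kataconfig example-kataconfig

# 3. Remove the Operator subscription, CSV, Operator group, and namespace.
CSV_NAME=$(oc get csv -n "${NS}" -o=custom-columns=:metadata.name)
oc delete subscription sandboxed-containers-operator -n "${NS}"
oc delete csv ${CSV_NAME} -n "${NS}"
OG_NAME=$(oc get operatorgroup -n "${NS}" -o=jsonpath={..name})
oc delete operatorgroup "${OG_NAME}" -n "${NS}"
oc delete namespace "${NS}"

# 4. Delete the KataConfig CRD to complete the uninstall.
oc delete crd kataconfigs.kataconfiguration.openshift.io

Each numbered step corresponds to a procedure in Section 5.2, so the same ordering applies when the commands are run interactively.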
[ "oc get pods -A -o json | jq -r '.items[] | select(.spec.runtimeClassName == \"kata\").metadata.name'", "oc delete pod <pod-name>", "oc delete kataconfig <KataConfig_CR_Name>", "oc get kataconfig <KataConfig_CR_Name>", "No KataConfig instances exist", "CSV_NAME=USD(oc get csv -n openshift-sandboxed-containers-operator -o=custom-columns=:metadata.name)", "oc delete subscription sandboxed-containers-operator -n openshift-sandboxed-containers-operator", "oc delete csv USD{CSV_NAME} -n openshift-sandboxed-containers-operator", "OG_NAME=USD(oc get operatorgroup -n openshift-sandboxed-containers-operator -o=jsonpath={..name})", "oc delete operatorgroup USD{OG_NAME} -n openshift-sandboxed-containers-operator", "oc delete namespace openshift-sandboxed-containers-operator", "oc delete crd kataconfigs.kataconfiguration.openshift.io", "oc get crd kataconfigs.kataconfiguration.openshift.io", "Unknown CR KataConfig" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/sandboxed_containers_support_for_openshift/uninstalling-sandboxed-containers
Chapter 2. Accessing the web console
Chapter 2. Accessing the web console The OpenShift Container Platform web console is a user interface accessible from a web browser. Developers can use the web console to visualize, browse, and manage the contents of projects. 2.1. Prerequisites JavaScript must be enabled to use the web console. For the best experience, use a web browser that supports WebSockets . Review the OpenShift Container Platform 4.x Tested Integrations page before you create the supporting infrastructure for your cluster. 2.2. Understanding and accessing the web console The web console runs as a pod on the master. The static assets required to run the web console are served by the pod. After OpenShift Container Platform is successfully installed using openshift-install create cluster , find the URL for the web console and login credentials for your installed cluster in the CLI output of the installation program. For example: Example output INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided> Use those details to log in and access the web console. For existing clusters that you did not install, you can use oc whoami --show-console to see the web console URL. Additional resources Enabling feature sets using the web console
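If you did not install the cluster yourself, a quick way to recover the console URL from a terminal is shown in the following sketch; the kubeconfig path and credentials are placeholders taken from the installer output format above, not real values.

# Point oc at the credentials written by the installation program (example path).
export KUBECONFIG=<your working directory>/auth/kubeconfig

# Print the web console URL for the cluster that oc is currently logged in to.
oc whoami --show-console

# Alternatively, log in explicitly with the kubeadmin credentials from the installer output.
oc login -u kubeadmin -p <provided>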
[ "INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/web_console/web-console
Chapter 3. Custom Inventory Scripts
Chapter 3. Custom Inventory Scripts Note Inventory scripts have been discontinued. For more information, see Export old inventory scripts in the Automation controller User Guide . If you use custom inventory scripts, migrate to sourcing these scripts from a project. For more information, see Inventory File Importing , and Inventory sources in the Automation controller User Guide . If you are setting up an inventory file, see Editing the Red Hat Ansible Automation Platform installer inventory file and find examples specific to your setup. If you are migrating to execution environments, see: Upgrading to execution environments . Creating and consuming execution environments . Automation mesh design patterns . Mesh Topology in the Ansible Automation Platform Upgrade and Migration Guide to validate your topology. For more information about automation mesh on a VM-based installation, see the Red Hat Ansible Automation Platform automation mesh guide for VM-based installations . For more information about automation mesh on an operator-based installation, see the Red Hat Ansible Automation Platform automation mesh for operator-based installations . If you already have a mesh topology set up and want to view node type, node health, and specific details about each node, see Topology Viewer .
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_administration_guide/assembly-custom-inventory-scripts
8.241. xorg-x11-drv-qxl
8.241. xorg-x11-drv-qxl 8.241.1. RHBA-2013:1650 - xorg-x11-drv-qxl bug fix update Updated xorg-x11-drv-qxl packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The xorg-x11-drv-qxl packages provide an X11 video driver for the QEMU QXL video accelerator. This driver makes it possible to use Red Hat Enterprise Linux 6 as a guest operating system under the KVM kernel module and the QEMU multi-platform emulator, using the SPICE protocol. Bug Fixes BZ# 929037 When the user tried to start a guest with Red Hat Enterprise Linux 6 on a host with Red Hat Enterprise Linux 5, the QEMU QXL video accelerator failed with a segmentation fault. As a consequence, the guest was not able to start the system GUI. This update applies a patch to fix this bug and the guest now starts correctly. BZ# 951000 When using multiple QXL devices with the Xinerama extension, or multiple QXL devices while each being a separate screen, an attempt to set a resolution higher than 1024 x 768 pixels in the xorg.conf file failed with an error. With this update, the underlying source code has been modified and the resolution can now be set as expected. Users of xorg-x11-drv-qxl are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/xorg-x11-drv-qxl
Chapter 4. Advisories related to this release
Chapter 4. Advisories related to this release The following advisories are issued to document bug fixes and CVE fixes included in this release: RHSA-2024:4571 RHSA-2024:4572 RHSA-2024:4573 Revised on 2024-07-23 11:32:54 UTC
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.4/openjdk-2104-advisory_openjdk
Storage APIs
Storage APIs OpenShift Container Platform 4.13 Reference guide for storage APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/storage_apis/index
Chapter 1. OpenShift Container Platform CI/CD overview
Chapter 1. OpenShift Container Platform CI/CD overview OpenShift Container Platform is an enterprise-ready Kubernetes platform for developers, which enables organizations to automate the application delivery process through DevOps practices, such as continuous integration (CI) and continuous delivery (CD). To meet your organizational needs, the OpenShift Container Platform provides the following CI/CD solutions: OpenShift Builds OpenShift Pipelines OpenShift GitOps 1.1. OpenShift Builds With OpenShift Builds, you can create cloud-native apps by using a declarative build process. You can define the build process in a YAML file that you use to create a BuildConfig object. This definition includes attributes such as build triggers, input parameters, and source code. When deployed, the BuildConfig object typically builds a runnable image and pushes it to a container image registry. OpenShift Builds provides the following extensible support for build strategies: Docker build Source-to-image (S2I) build Custom build For more information, see Understanding image builds . 1.2. OpenShift Pipelines OpenShift Pipelines provides a Kubernetes-native CI/CD framework to design and run each step of the CI/CD pipeline in its own container. Each step can scale independently to meet on-demand pipelines with predictable outcomes. For more information, see Understanding OpenShift Pipelines . 1.3. OpenShift GitOps OpenShift GitOps is an Operator that uses Argo CD as the declarative GitOps engine. It enables GitOps workflows across multicluster OpenShift and Kubernetes infrastructure. Using OpenShift GitOps, administrators can consistently configure and deploy Kubernetes-based infrastructure and applications across clusters and development lifecycles. For more information, see Understanding OpenShift GitOps . 1.4. Jenkins Jenkins automates the process of building, testing, and deploying applications and projects. OpenShift Developer Tools provides a Jenkins image that integrates directly with the OpenShift Container Platform. Jenkins can be deployed on OpenShift by using the Samples Operator templates or a certified Helm chart.
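To make the declarative build definition described in Section 1.1 concrete, the following is a minimal sketch of a source-to-image BuildConfig created from the command line; the application name, Git repository, and builder image are hypothetical placeholders rather than values from this guide.

# Create a minimal S2I BuildConfig (all names are example placeholders).
oc create -f - <<'EOF'
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: example-app
spec:
  source:
    git:
      uri: https://github.com/example/example-app.git   # hypothetical repository
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:latest                              # hypothetical builder image
  output:
    to:
      kind: ImageStreamTag
      name: example-app:latest
  triggers:
  - type: ConfigChange
EOF

# Trigger a build and stream its logs.
oc start-build example-app --follow

The output image stream tag is assumed to exist already; in practice the BuildConfig is usually generated together with its image streams, for example by oc new-app.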
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/cicd/ci-cd-overview
Chapter 1. Edge clusters
Chapter 1. Edge clusters Edge clusters are a solution for cost-efficient object storage configurations. Red Hat supports the following minimum configurations of a Red Hat Ceph Storage cluster: A three-node cluster with two replicas for SSDs. A four-node cluster with three replicas for HDDs. A four-node cluster with an EC pool in a 2+2 configuration. A four-node cluster with an 8+6 CRUSH MSR configuration. With smaller clusters, the utilization goes down because of the amount of usage and the loss of resiliency.
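As an illustration of the 2+2 erasure-coded layout listed above, the following sketch creates an EC profile and a pool that uses it; the profile and pool names are hypothetical, and the parameters that suit your cluster may differ.

# Define a 2+2 erasure-code profile (k=2 data chunks, m=2 coding chunks).
ceph osd erasure-code-profile set ec-2-2 k=2 m=2 crush-failure-domain=host

# Create an erasure-coded pool that uses the profile (pool name is an example).
ceph osd pool create example-ec-pool erasure ec-2-2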
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/edge_guide/edge-clusters_edge
Chapter 6. Tuned [tuned.openshift.io/v1]
Chapter 6. Tuned [tuned.openshift.io/v1] Description Tuned is a collection of rules that allows cluster-wide deployment of node-level sysctls and more flexibility to add custom tuning specified by user needs. These rules are translated and passed to all containerized Tuned daemons running in the cluster in the format that the daemons understand. The responsibility for applying the node-level tuning then lies with the containerized Tuned daemons. More info: https://github.com/openshift/cluster-node-tuning-operator Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of Tuned. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status status object TunedStatus is the status for a Tuned resource. 6.1.1. .spec Description spec is the specification of the desired behavior of Tuned. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status Type object Property Type Description managementState string managementState indicates whether the registry instance represented by this config instance is under operator management or not. Valid values are Force, Managed, Unmanaged, and Removed. profile array Tuned profiles. profile[] object A Tuned profile. recommend array Selection logic for all Tuned profiles. recommend[] object Selection logic for a single Tuned profile. 6.1.2. .spec.profile Description Tuned profiles. Type array 6.1.3. .spec.profile[] Description A Tuned profile. Type object Required data name Property Type Description data string Specification of the Tuned profile to be consumed by the Tuned daemon. name string Name of the Tuned profile to be used in the recommend section. 6.1.4. .spec.recommend Description Selection logic for all Tuned profiles. Type array 6.1.5. .spec.recommend[] Description Selection logic for a single Tuned profile. Type object Required priority profile Property Type Description machineConfigLabels object (string) MachineConfigLabels specifies the labels for a MachineConfig. The MachineConfig is created automatically to apply additional host settings (e.g. kernel boot parameters) profile 'Profile' needs and can only be applied by creating a MachineConfig. This involves finding all MachineConfigPools with machineConfigSelector matching the MachineConfigLabels and setting the profile 'Profile' on all nodes that match the MachineConfigPools' nodeSelectors. match array Rules governing application of a Tuned profile connected by logical OR operator. match[] object Rules governing application of a Tuned profile. operand object Optional operand configuration. priority integer Tuned profile priority. Highest priority is 0. profile string Name of the Tuned profile to recommend. 6.1.6. 
.spec.recommend[].match Description Rules governing application of a Tuned profile connected by logical OR operator. Type array 6.1.7. .spec.recommend[].match[] Description Rules governing application of a Tuned profile. Type object Required label Property Type Description label string Node or Pod label name. match array (undefined) Additional rules governing application of the tuned profile connected by logical AND operator. type string Match type: [node/pod]. If omitted, "node" is assumed. value string Node or Pod label value. If omitted, the presence of label name is enough to match. 6.1.8. .spec.recommend[].operand Description Optional operand configuration. Type object Property Type Description debug boolean turn debugging on/off for the TuneD daemon: true/false (default is false) tunedConfig object Global configuration for the TuneD daemon as defined in tuned-main.conf 6.1.9. .spec.recommend[].operand.tunedConfig Description Global configuration for the TuneD daemon as defined in tuned-main.conf Type object Property Type Description reapply_sysctl boolean turn reapply_sysctl functionality on/off for the TuneD daemon: true/false 6.1.10. .status Description TunedStatus is the status for a Tuned resource. Type object 6.2. API endpoints The following API endpoints are available: /apis/tuned.openshift.io/v1/tuneds GET : list objects of kind Tuned /apis/tuned.openshift.io/v1/namespaces/{namespace}/tuneds DELETE : delete collection of Tuned GET : list objects of kind Tuned POST : create a Tuned /apis/tuned.openshift.io/v1/namespaces/{namespace}/tuneds/{name} DELETE : delete a Tuned GET : read the specified Tuned PATCH : partially update the specified Tuned PUT : replace the specified Tuned 6.2.1. /apis/tuned.openshift.io/v1/tuneds Table 6.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind Tuned Table 6.2. HTTP responses HTTP code Reponse body 200 - OK TunedList schema 401 - Unauthorized Empty 6.2.2. /apis/tuned.openshift.io/v1/namespaces/{namespace}/tuneds Table 6.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 6.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Tuned Table 6.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. 
Table 6.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Tuned Table 6.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.8. HTTP responses HTTP code Reponse body 200 - OK TunedList schema 401 - Unauthorized Empty HTTP method POST Description create a Tuned Table 6.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.10. Body parameters Parameter Type Description body Tuned schema Table 6.11. HTTP responses HTTP code Reponse body 200 - OK Tuned schema 201 - Created Tuned schema 202 - Accepted Tuned schema 401 - Unauthorized Empty 6.2.3. /apis/tuned.openshift.io/v1/namespaces/{namespace}/tuneds/{name} Table 6.12. Global path parameters Parameter Type Description name string name of the Tuned namespace string object name and auth scope, such as for teams and projects Table 6.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Tuned Table 6.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.15. Body parameters Parameter Type Description body DeleteOptions schema Table 6.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Tuned Table 6.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.18. HTTP responses HTTP code Reponse body 200 - OK Tuned schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Tuned Table 6.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.20. Body parameters Parameter Type Description body Patch schema Table 6.21. HTTP responses HTTP code Reponse body 200 - OK Tuned schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Tuned Table 6.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.23. Body parameters Parameter Type Description body Tuned schema Table 6.24. HTTP responses HTTP code Reponse body 200 - OK Tuned schema 201 - Created Tuned schema 401 - Unauthorized Empty
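To make the spec fields described earlier in this chapter concrete, the following is a minimal sketch that creates a Tuned object through this API; the profile name, sysctl setting, and node label are hypothetical examples, not tuning recommendations.

# Create an example Tuned CR (profile content and label are placeholders).
oc create -f - <<'EOF'
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: example-tuned
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - name: example-profile
    data: |
      [main]
      summary=Example profile that adjusts one sysctl value
      include=openshift-node
      [sysctl]
      vm.dirty_ratio=10
  recommend:
  - match:
    - label: example-node-label    # nodes carrying this label receive the profile
      type: node
    priority: 20
    profile: example-profile
EOF

The recommend entry uses the match, priority, and profile properties from the .spec.recommend[] schema above; deleting the object again maps to the DELETE endpoint listed in Section 6.2.3.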
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/node_apis/tuned-tuned-openshift-io-v1
Chapter 12. ca
Chapter 12. ca This chapter describes the commands under the ca command. 12.1. ca get Retrieve a CA by providing its URI. Usage: Table 12.1. Positional arguments Value Summary URI The uri reference for the ca. Table 12.2. Command arguments Value Summary -h, --help Show this help message and exit Table 12.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 12.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 12.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 12.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 12.2. ca list List CAs. Usage: Table 12.7. Command arguments Value Summary -h, --help Show this help message and exit --limit LIMIT, -l LIMIT Specify the limit to the number of items to list per page (default: 10; maximum: 100) --offset OFFSET, -o OFFSET Specify the page offset (default: 0) --name NAME, -n NAME Specify the ca name (default: none) Table 12.8. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 12.9. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 12.10. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 12.11. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
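The following usage sketch combines the arguments documented above; the URI value is a placeholder.

# List the first 20 CAs and format the output as JSON.
openstack ca list --limit 20 -f json

# Retrieve a single CA by its URI reference (placeholder value).
openstack ca get <ca-uri>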
[ "openstack ca get [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] URI", "openstack ca list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--limit LIMIT] [--offset OFFSET] [--name NAME]" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/ca
23.15. Timekeeping
23.15. Timekeeping The guest virtual machine clock is typically initialized from the host physical machine clock. Most operating systems expect the hardware clock to be kept in UTC, which is the default setting. Accurate timekeeping on guest virtual machines is a key challenge for virtualization platforms. Different hypervisors attempt to handle the problem of timekeeping in a variety of ways. libvirt provides hypervisor-independent configuration settings for time management, using the <clock> and <timer> elements in the domain XML. The domain XML can be edited using the virsh edit command. For details, see Section 20.22, "Editing a Guest Virtual Machine's XML Configuration Settings" . ... <clock offset='localtime'> <timer name='rtc' tickpolicy='catchup' track='guest'> <catchup threshold='123' slew='120' limit='10000'/> </timer> <timer name='pit' tickpolicy='delay'/> </clock> ... Figure 23.25. Timekeeping The components of this section of the domain XML are as follows: Table 23.11. Timekeeping elements State Description <clock> The <clock> element is used to determine how the guest virtual machine clock is synchronized with the host physical machine clock. The offset attribute takes four possible values, allowing for fine grained control over how the guest virtual machine clock is synchronized to the host physical machine. Note that hypervisors are not required to support all policies across all time sources utc - Synchronizes the clock to UTC when booted. utc mode can be converted to variable mode, which can be controlled by using the adjustment attribute. If the value is reset , the conversion is not done. A numeric value forces the conversion to variable mode using the value as the initial adjustment. The default adjustment is hypervisor-specific. localtime - Synchronizes the guest virtual machine clock with the host physical machine's configured timezone when booted. The adjustment attribute behaves the same as in utc mode. timezone - Synchronizes the guest virtual machine clock to the requested time zone. variable - Gives the guest virtual machine clock an arbitrary offset applied relative to UTC or localtime , depending on the basis attribute. The delta relative to UTC (or localtime ) is specified in seconds, using the adjustment attribute. The guest virtual machine is free to adjust the RTC over time and expect that it will be honored at reboot. This is in contrast to utc and localtime mode (with the optional attribute adjustment='reset' ), where the RTC adjustments are lost at each reboot. In addition, the basis attribute can be either utc (default) or localtime . The clock element may have zero or more <timer> elements. <timer> See Note <present> Specifies whether a particular timer is available to the guest virtual machine. Can be set to yes or no . Note A <clock> element can have zero or more <timer> elements as children. The <timer> element specifies a time source used for guest virtual machine clock synchronization. In each <timer> element only the name is required, and all other attributes are optional: name - Selects which timer is being modified. The following values are acceptable: kvmclock , pit , or rtc . track - Specifies the timer track. The following values are acceptable: boot , guest , or wall . track is only valid for name="rtc" . tickpolicy - Determines what happens when the deadline for injecting a tick to the guest virtual machine is missed. The following values can be assigned: delay - Continues to deliver ticks at the normal rate. 
The guest virtual machine time will be delayed due to the late tick. catchup - Delivers ticks at a higher rate in order to catch up with the missed tick. The guest virtual machine time is not delayed once catch up is complete. In addition, there can be three optional attributes, each a positive integer: threshold, slew, and limit. merge - Merges the missed tick(s) into one tick and injects them. The guest virtual machine time may be delayed, depending on how the merge is done. discard - Throws away the missed tick(s) and continues with future injection at its default interval setting. The guest virtual machine time may be delayed, unless there is an explicit statement for handling lost ticks. Note The value utc is set as the clock offset in a virtual machine by default. However, if the guest virtual machine clock is run with the localtime value, the clock offset needs to be changed to a different value in order to have the guest virtual machine clock synchronized with the host physical machine clock. Example 23.1. Always synchronize to UTC Example 23.2. Always synchronize to the host physical machine timezone Example 23.3. Synchronize to an arbitrary time zone Example 23.4. Synchronize to UTC + arbitrary offset
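A short sketch of inspecting and editing this configuration from the host, using the virsh commands referenced above; the guest name is a placeholder.

# Print the current <clock> element of a guest (guest name is an example).
virsh dumpxml example-guest | grep -A 4 '<clock'

# Open the domain XML in an editor to change the offset or timer policies.
virsh edit example-guest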
[ "<clock offset='localtime'> <timer name='rtc' tickpolicy='catchup' track='guest'> <catchup threshold='123' slew='120' limit='10000'/> </timer> <timer name='pit' tickpolicy='delay'/> </clock>", "<clock offset=\"utc\" />", "<clock offset=\"localtime\" />", "<clock offset=\"timezone\" timezone=\"Europe/Paris\" />", "<clock offset=\"variable\" adjustment=\"123456\" />" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-manipulating_the_domain_xml-time_keeping
Chapter 10. Variables of the postfix role in System Roles
Chapter 10. Variables of the postfix role in System Roles The postfix role variables allow the user to install, configure, and start the postfix Mail Transfer Agent (MTA). The following role variables are defined in this section: postfix_conf : It includes key/value pairs of all the supported postfix configuration parameters. By default, postfix_conf does not have a value. If your scenario requires removing any existing configuration and applying the desired configuration on top of a clean postfix installation, specify the : replaced option within the postfix_conf dictionary: An example with the : replaced option: postfix_check : It determines if a check is executed before starting postfix to verify the configuration changes. The default value is true. For example: postfix_backup : It determines whether a single backup copy of the configuration is created. By default, the postfix_backup value is false. To overwrite any backup, run the following command: If the postfix_backup value is changed to true , you must also set the postfix_backup_multiple value to false. For example: postfix_backup_multiple : It determines if the role will make a timestamped backup copy of the configuration. To keep multiple backup copies, run the following command: By default, the value of postfix_backup_multiple is true. The postfix_backup_multiple:true setting overrides postfix_backup . If you want to use postfix_backup , you must set postfix_backup_multiple:false . postfix_manage_firewall : Integrates the postfix role with the firewall role to manage port access. By default, the variable is set to false . If you want to automatically manage port access from the postfix role, set the variable to true . postfix_manage_selinux : Integrates the postfix role with the selinux role to manage port access. By default, the variable is set to false . If you want to automatically manage port access from the postfix role, set the variable to true . Important The configuration parameters cannot be removed. Before running the postfix role, set postfix_conf to all the required configuration parameters and use the file module to remove /etc/postfix/main.cf . 10.1. Additional resources /usr/share/doc/rhel-system-roles/postfix/README.md
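A minimal playbook sketch that applies several of the variables described above through the rhel-system-roles.postfix role; the host group, inventory file, and relay host are placeholders.

# Write an example playbook (values are placeholders), then run it.
cat > postfix-example.yml <<'EOF'
- name: Configure postfix with the RHEL system role
  hosts: mailservers
  vars:
    postfix_conf:
      relayhost: example.com
    postfix_check: true
    postfix_backup: true
    postfix_backup_multiple: false
  roles:
    - rhel-system-roles.postfix
EOF

ansible-playbook -i inventory postfix-example.yml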
[ "postfix_conf: relayhost: example.com", "postfix_conf: relayhost: example.com previous: replaced", "postfix_check: true", "*cp /etc/postfix/main.cf /etc/postfix/main.cf.backup*", "postfix_backup: true postfix_backup_multiple: false", "*cp /etc/postfix/main.cf /etc/postfix/main.cf.USD(date -Isec)*" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/automating_system_administration_by_using_rhel_system_roles_in_rhel_7.9/assembly_postfix-role-variables-in-system-roles_automating-system-administration-by-using-rhel-system-roles
Chapter 3. Setting up a new instance using the web console
Chapter 3. Setting up a new instance using the web console If you prefer a browser-based interface to set up Directory Server, you can use the Directory Server web console. 3.1. Prerequisites The server meets the requirements of the latest Red Hat Directory Server version as described in the Red Hat Directory Server 12 Release Notes . You installed the Directory Server packages as described in Installing the Directory Server packages . 3.2. Using the web console to set up a new Directory Server instance This section describes how to use the web console to set up a new Directory Server instance. Prerequisites The cockpit web console package is installed. The cockpit.socket systemd unit is enabled and started. You opened port 9090 in the local firewall to allow access to the web console. Procedure Use a browser to connect to the web console running on port 9090 on the Directory Server host: https://server.example.com:9090 Log in as the root user or as a user with sudo privileges. Select the Red Hat Directory Server entry. Create a new instance: If no instance exists on the server, click the Create New Instance button. If the server already runs existing instances, select Actions and click Create New Instance . Complete the fields of the Create New Server Instance form: Instance Name : Sets the name of the instance. Note that you cannot change the name of an instance after it has been created. Port : Sets the port number of the LDAP protocol. The port must not be in use by another instance or service. The default port is 389. Secure Port : Sets the port number of the LDAPS protocol. The port must not be in use by another instance or service. The default port is 636. Create Self-Signed TLS Certificate DB : Enables TLS encryption in the instance, and creates a self-signed certificate. For increased security, Red Hat recommends that you create the new instance with the self-signed certificate and TLS enabled. Note that you can replace the self-signed certificate with a certificate issued by a Certificate Authority (CA) at a later date. Directory Manager DN : Sets the distinguished name (DN) of the administrative user of the instance. The default value is cn=Directory Manager . Directory Manager Password : Sets the password of the administrative user of the instance. Confirm Password : Must be set to the same value as in the Directory Manager Password field. Create Database : Select this field to automatically create a suffix during instance creation. Important If you do not create a suffix during instance creation, you must create it later manually before you can store data in this instance. If you enabled this option, fill the additional fields: Database Suffix : Sets the suffix for the back end. Database Name : Sets the name of the back end database. Database Initialization : Set this field to Create Suffix Entry . Click Create Instance . The new instance starts and is configured to start automatically when the system boots. Open the required ports in the firewall: # firewall-cmd --permanent --add-port={389/tcp,636/tcp} Reload the firewall configuration: # firewall-cmd --reload Additional resources Enabling TLS-encrypted connections to Directory Server
[ "https:// server.example.com :9090", "firewall-cmd --permanent --add-port={389/tcp,636/tcp}", "firewall-cmd --reload" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/installing_red_hat_directory_server/assembly_setting-up-a-new-instance-using-the-web-console_installing-rhds
Chapter 57. overcloud
Chapter 57. overcloud This chapter describes the commands under the overcloud command. 57.1. overcloud admin authorize Deploy the ssh keys needed by Mistral. Usage: Table 57.1. Optional Arguments Value Summary -h, --help Show this help message and exit --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) --overcloud-ssh-user OVERCLOUD_SSH_USER User for ssh access to overcloud nodes --overcloud-ssh-key OVERCLOUD_SSH_KEY Key path for ssh access to overcloud nodes. When undefined, the key will be autodetected. --overcloud-ssh-network OVERCLOUD_SSH_NETWORK Network name to use for ssh access to overcloud nodes. --overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT Timeout for the ssh enable process to finish. --overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT Timeout to wait for the ssh port to become active. 57.2. overcloud cell export Export cell information used as import of another cell Usage: Table 57.2. Positional Arguments Value Summary <cell name> Name of the stack used for the additional cell. Table 57.3. Optional Arguments Value Summary -h, --help Show this help message and exit --control-plane-stack <control plane stack> Name of the environment main heat stack to export information from. (default=Env: OVERCLOUD_STACK_NAME) --cell-stack <cell stack>, -e <cell stack> Name of the controller cell heat stack to export information from. Used in case of: control plane stack cell controller stack multiple compute stacks --output-file <output file>, -o <output file> Name of the output file for the cell data export. it will default to "<name>.yaml" --force-overwrite, -f Overwrite output file if it exists. 57.3. overcloud config download Download Overcloud Config Usage: Table 57.4. Optional Arguments Value Summary -h, --help Show this help message and exit --name NAME The name of the plan, which is used for the object storage container, workflow environment and orchestration stack names. --config-dir CONFIG_DIR The directory where the configuration files will be pushed --config-type CONFIG_TYPE Type of object config to be extracted from the deployment, defaults to all keys available --no-preserve-config If specified, will delete and recreate the --config-dir if it already exists. Default is to use the existing dir location and overwrite files. Files in --config-dir not from the stack will be preserved by default. 57.4. overcloud container image build Build overcloud container images with kolla-build. Usage: Table 57.5. Optional Arguments Value Summary -h, --help Show this help message and exit --config-file <yaml config file> Yaml config file specifying the images to build. may be specified multiple times. Order is preserved, and later files will override some options in files. Other options will append. If not specified, the default set of containers will be built. --kolla-config-file <config file> Path to a kolla config file to use. multiple config files can be specified, with values in later files taking precedence. By default, the tripleo kolla conf file /usr/share/tripleo-common/container-images/tripleo_kolla_config_overrides.conf is added. --list-images Show the images which would be built instead of building them. --list-dependencies Show the image build dependencies instead of building them. --exclude <container-name> Name of a container to match against the list of containers to be built to skip. Can be specified multiple times. --use-buildah Use buildah instead of docker to build the images with Kolla. 
--work-dir <container builds directory> Tripleo container builds directory, storing configs and logs for each image and its dependencies. 57.5. overcloud container image prepare Generate files defining the images, tags and registry. Usage: Table 57.6. Optional Arguments Value Summary -h, --help Show this help message and exit --template-file <yaml template file> Yaml template file which the images config file will be built from. Default: /usr/share/tripleo-common/container- images/overcloud_containers.yaml.j2 --push-destination <location> Location of image registry to push images to. if specified, a push_destination will be set for every image entry. --tag <tag> Override the default tag substitution. if --tag-from- label is specified, start discovery with this tag. Default: 16.0 --tag-from-label <image label> Use the value of the specified label(s) to discover the tag. Labels can be combined in a template format, for example: {version}-{release} --namespace <namespace> Override the default namespace substitution. Default: registry.redhat.io/rhosp-rhel8 --prefix <prefix> Override the default name prefix substitution. Default: openstack- --suffix <suffix> Override the default name suffix substitution. Default: --set <variable=value> Set the value of a variable in the template, even if it has no dedicated argument such as "--suffix". --exclude <regex> Pattern to match against resulting imagename entries to exclude from the final output. Can be specified multiple times. --include <regex> Pattern to match against resulting imagename entries to include in final output. Can be specified multiple times, entries not matching any --include will be excluded. --exclude is ignored if --include is used. --output-images-file <file path> File to write resulting image entries to, as well as stdout. Any existing file will be overwritten. --environment-file <file path>, -e <file path> Environment files specifying which services are containerized. Entries will be filtered to only contain images used by containerized services. (Can be specified more than once.) --environment-directory <HEAT ENVIRONMENT DIRECTORY> Environment file directories that are automatically added to the update command. Entries will be filtered to only contain images used by containerized services. Can be specified more than once. Files in directories are loaded in ascending sort order. --output-env-file <file path> File to write heat environment file which specifies all image parameters. Any existing file will be overwritten. --roles-file ROLES_FILE, -r ROLES_FILE Roles file, overrides the default roles_data.yaml in the t-h-t templates directory used for deployment. May be an absolute path or the path relative to the templates dir. --modify-role MODIFY_ROLE Name of ansible role to run between every image upload pull and push. --modify-vars MODIFY_VARS Ansible variable file containing variables to use when invoking the role --modify-role. 57.6. overcloud container image tag discover Discover the versioned tag for an image. Usage: Table 57.7. Optional Arguments Value Summary -h, --help Show this help message and exit --image <container image> Fully qualified name of the image to discover the tag for (Including registry and stable tag). --tag-from-label <image label> Use the value of the specified label(s) to discover the tag. Labels can be combined in a template format, for example: {version}-{release} 57.7. overcloud container image upload Push overcloud container images to registries. Usage: Table 57.8. 
Optional Arguments Value Summary -h, --help Show this help message and exit --config-file <yaml config file> Yaml config file specifying the image build. may be specified multiple times. Order is preserved, and later files will override some options in files. Other options will append. --cleanup <full, partial, none> Cleanup behavior for local images left after upload. The default full will attempt to delete all local images. partial will leave images required for deployment on this host. none will do no cleanup. 57.8. overcloud credentials Create the overcloudrc files Usage: Table 57.9. Positional Arguments Value Summary plan The name of the plan you want to create rc files for. Table 57.10. Optional Arguments Value Summary -h, --help Show this help message and exit --directory [DIRECTORY] The directory to create the rc files. defaults to the current directory. 57.9. overcloud delete Delete overcloud stack and plan Usage: Table 57.11. Positional Arguments Value Summary stack Name or id of heat stack to delete(default=env: OVERCLOUD_STACK_NAME) Table 57.12. Optional Arguments Value Summary -h, --help Show this help message and exit -y, --yes Skip yes/no prompt (assume yes). 57.10. overcloud deploy Deploy Overcloud Usage: Table 57.13. Optional Arguments Value Summary --templates [TEMPLATES] The directory containing the heat templates to deploy --stack STACK Stack name to create or update --timeout <TIMEOUT>, -t <TIMEOUT> Deployment timeout in minutes. --control-scale CONTROL_SCALE New number of control nodes. (deprecated. use an environment file and set the parameter ControllerCount. This option will be removed in the "U" release.) --compute-scale COMPUTE_SCALE New number of compute nodes. (deprecated. use an environment file and set the parameter ComputeCount. This option will be removed in the "U" release.) --ceph-storage-scale CEPH_STORAGE_SCALE New number of ceph storage nodes. (deprecated. use an environment file and set the parameter CephStorageCount. This option will be removed in the "U" release.) --block-storage-scale BLOCK_STORAGE_SCALE New number of cinder storage nodes. (deprecated. use an environment file and set the parameter BlockStorageCount. This option will be removed in the "U" release.) --swift-storage-scale SWIFT_STORAGE_SCALE New number of swift storage nodes. (deprecated. use an environment file and set the parameter ObjectStorageCount. This option will be removed in the "U" release.) --control-flavor CONTROL_FLAVOR Nova flavor to use for control nodes. (deprecated. use an environment file and set the parameter OvercloudControlFlavor. This option will be removed in the "U" release.) --compute-flavor COMPUTE_FLAVOR Nova flavor to use for compute nodes. (deprecated. use an environment file and set the parameter OvercloudComputeFlavor. This option will be removed in the "U" release.) --ceph-storage-flavor CEPH_STORAGE_FLAVOR Nova flavor to use for ceph storage nodes. (DEPRECATED. Use an environment file and set the parameter OvercloudCephStorageFlavor. This option will be removed in the "U" release.) --block-storage-flavor BLOCK_STORAGE_FLAVOR Nova flavor to use for cinder storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudBlockStorageFlavor. This option will be removed in the "U" release.) --swift-storage-flavor SWIFT_STORAGE_FLAVOR Nova flavor to use for swift storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudSwiftStorageFlavor. This option will be removed in the "U" release.) 
--libvirt-type {kvm,qemu} Libvirt domain type. --ntp-server NTP_SERVER The NTP server for overcloud nodes. --no-proxy NO_PROXY A comma separated list of hosts that should not be proxied. --overcloud-ssh-user OVERCLOUD_SSH_USER User for ssh access to overcloud nodes --overcloud-ssh-key OVERCLOUD_SSH_KEY Key path for ssh access to overcloud nodes. When undefined, the key will be autodetected. --overcloud-ssh-network OVERCLOUD_SSH_NETWORK Network name to use for ssh access to overcloud nodes. --overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT Timeout for the ssh enable process to finish. --overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT Timeout to wait for the ssh port to become active. --environment-file <HEAT ENVIRONMENT FILE>, -e <HEAT ENVIRONMENT FILE> Environment files to be passed to the heat stack-create or heat stack-update command. (Can be specified more than once.) --environment-directory <HEAT ENVIRONMENT DIRECTORY> Environment file directories that are automatically added to the heat stack-create or heat stack-update commands. Can be specified more than once. Files in directories are loaded in ascending sort order. --roles-file ROLES_FILE, -r ROLES_FILE Roles file, overrides the default roles_data.yaml in the --templates directory. May be an absolute path or the path relative to --templates --networks-file NETWORKS_FILE, -n NETWORKS_FILE Networks file, overrides the default network_data.yaml in the --templates directory --plan-environment-file PLAN_ENVIRONMENT_FILE, -p PLAN_ENVIRONMENT_FILE Plan environment file, overrides the default plan-environment.yaml in the --templates directory --no-cleanup Don't clean up temporary files, just log their location --update-plan-only Only update the plan. do not perform the actual deployment. NOTE: Will move to a discrete command in a future release. --validation-errors-nonfatal Allow the deployment to continue in spite of validation errors. Note that attempting deployment while errors exist is likely to fail. --validation-warnings-fatal Exit if there are warnings from the configuration pre-checks. --disable-validations Deprecated. disable the pre-deployment validations entirely. These validations are the built-in pre-deployment validations. To enable external validations from tripleo-validations, use the --run-validations flag. These validations are now run via the external validations in tripleo-validations. --inflight-validations Activate in-flight validations during the deploy. in-flight validations provide a robust way to ensure deployed services are running right after their activation. Defaults to False. --dry-run Only run validations, but do not apply any changes. --run-validations Run external validations from the tripleo-validations project. --skip-postconfig Skip the overcloud post-deployment configuration. --force-postconfig Force the overcloud post-deployment configuration. --skip-deploy-identifier Skip generation of a unique identifier for the DeployIdentifier parameter. The software configuration deployment steps will only be triggered if there is an actual change to the configuration. This option should be used with caution, and only if there is confidence that the software configuration does not need to be run, such as when scaling out certain roles. --answers-file ANSWERS_FILE Path to a yaml file with arguments and parameters. --disable-password-generation Disable password generation. --deployed-server Use pre-provisioned overcloud nodes. 
Removes baremetal, compute, and image services requirements from the undercloud node. Must only be used with the --disable-validations. --config-download Run deployment via config-download mechanism. this is now the default, and this CLI option may be removed in the future. --no-config-download, --stack-only Disable the config-download workflow and only create the stack and associated OpenStack resources. No software configuration will be applied. --config-download-only Disable the stack create/update, and only run the config-download workflow to apply the software configuration. --output-dir OUTPUT_DIR Directory to use for saved output when using --config-download. The directory must be writeable by the mistral user. When not specified, the default server side value will be used (/var/lib/mistral/<execution id>). --override-ansible-cfg OVERRIDE_ANSIBLE_CFG Path to ansible configuration file. the configuration in the file will override any configuration used by config-download by default. --config-download-timeout CONFIG_DOWNLOAD_TIMEOUT Timeout (in minutes) to use for config-download steps. If unset, will default to however much time is leftover from the --timeout parameter after the stack operation. --deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER The path to python interpreter to use for the deployment actions. This may need to be used if deploying on a python2 host from a python3 system or vice versa. -b <baremetal_deployment.yaml>, --baremetal-deployment <baremetal_deployment.yaml> Configuration file describing the baremetal deployment 57.11. overcloud execute Execute a Heat software config on the servers. Usage: Table 57.14. Positional Arguments Value Summary file_in None Table 57.15. Optional Arguments Value Summary -h, --help Show this help message and exit -s SERVER_NAME, --server_name SERVER_NAME Nova server_name or partial name to match. -g GROUP, --group GROUP Heat software config "group" type. defaults to "script". 57.12. overcloud export Export stack information used as import of another stack Usage: Table 57.16. Optional Arguments Value Summary -h, --help Show this help message and exit --stack <stack> Name of the environment main heat stack to export information from. (default=Env: OVERCLOUD_STACK_NAME) --output-file <output file>, -o <output file> Name of the output file for the stack data export. it will default to "<name>.yaml" --force-overwrite, -f Overwrite output file if it exists. --config-download-dir CONFIG_DOWNLOAD_DIR Directory to search for config-download export data. Defaults to /var/lib/mistral/<stack> --no-password-excludes Don't exclude certain passwords from the password export. Defaults to False in that some passwords will be excluded that are not typically necessary. 57.13. overcloud external-update run Run external minor update Ansible playbook This will run the external minor update Ansible playbook, executing tasks from the undercloud. The update playbooks are made available after completion of the overcloud update prepare command. Usage: Table 57.17. Optional Arguments Value Summary -h, --help Show this help message and exit --static-inventory STATIC_INVENTORY Path to an existing ansible inventory to use. if not specified, one will be generated in ~/tripleo-ansible-inventory.yaml --ssh-user SSH_USER Deprecated: only tripleo-admin should be used as ssh user. --tags TAGS A string specifying the tag or comma separated list of tags to be passed as --tags to ansible-playbook. 
--skip-tags SKIP_TAGS A string specifying the tag or comma separated list of tags to be passed as --skip-tags to ansible-playbook. --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) -e EXTRA_VARS, --extra-vars EXTRA_VARS Set additional variables as key=value or yaml/json --no-workflow Run ansible-playbook directly via system command instead of running Ansible via the TripleO mistral workflows. 57.14. overcloud external-upgrade run Run external major upgrade Ansible playbook This will run the external major upgrade Ansible playbook, executing tasks from the undercloud. The upgrade playbooks are made available after completion of the overcloud upgrade prepare command. Usage: Table 57.18. Optional Arguments Value Summary -h, --help Show this help message and exit --static-inventory STATIC_INVENTORY Path to an existing ansible inventory to use. if not specified, one will be generated in ~/tripleo-ansible-inventory.yaml --ssh-user SSH_USER Deprecated: only tripleo-admin should be used as ssh user. --tags TAGS A string specifying the tag or comma separated list of tags to be passed as --tags to ansible-playbook. --skip-tags SKIP_TAGS A string specifying the tag or comma separated list of tags to be passed as --skip-tags to ansible-playbook. --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) -e EXTRA_VARS, --extra-vars EXTRA_VARS Set additional variables as key=value or yaml/json --no-workflow Run ansible-playbook directly via system command instead of running Ansible via the TripleO mistral workflows. 57.15. overcloud failures Get deployment failures Usage: Table 57.19. Optional Arguments Value Summary -h, --help Show this help message and exit --plan PLAN, --stack PLAN Name of the stack/plan. (default: overcloud) 57.16. overcloud ffwd-upgrade converge Converge the fast-forward upgrade on Overcloud Nodes This is the last step for completion of a fast forward upgrade. The main task is updating the plan and stack to unblock future stack updates. For the ffwd upgrade workflow we have set and used the config-download Software/Structured Deployment for the OS::TripleO and OS::Heat resources. This unsets those back to their default values. Usage: Table 57.20. Optional Arguments Value Summary --templates [TEMPLATES] The directory containing the heat templates to deploy --stack STACK Stack name to create or update --timeout <TIMEOUT>, -t <TIMEOUT> Deployment timeout in minutes. --control-scale CONTROL_SCALE New number of control nodes. (deprecated. use an environment file and set the parameter ControllerCount. This option will be removed in the "U" release.) --compute-scale COMPUTE_SCALE New number of compute nodes. (deprecated. use an environment file and set the parameter ComputeCount. This option will be removed in the "U" release.) --ceph-storage-scale CEPH_STORAGE_SCALE New number of ceph storage nodes. (deprecated. use an environment file and set the parameter CephStorageCount. This option will be removed in the "U" release.) --block-storage-scale BLOCK_STORAGE_SCALE New number of cinder storage nodes. (deprecated. use an environment file and set the parameter BlockStorageCount. This option will be removed in the "U" release.) --swift-storage-scale SWIFT_STORAGE_SCALE New number of swift storage nodes. (deprecated. use an environment file and set the parameter ObjectStorageCount. This option will be removed in the "U" release.) --control-flavor CONTROL_FLAVOR Nova flavor to use for control nodes. (deprecated. 
use an environment file and set the parameter OvercloudControlFlavor. This option will be removed in the "U" release.) --compute-flavor COMPUTE_FLAVOR Nova flavor to use for compute nodes. (deprecated. use an environment file and set the parameter OvercloudComputeFlavor. This option will be removed in the "U" release.) --ceph-storage-flavor CEPH_STORAGE_FLAVOR Nova flavor to use for ceph storage nodes. (DEPRECATED. Use an environment file and set the parameter OvercloudCephStorageFlavor. This option will be removed in the "U" release.) --block-storage-flavor BLOCK_STORAGE_FLAVOR Nova flavor to use for cinder storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudBlockStorageFlavor. This option will be removed in the "U" release.) --swift-storage-flavor SWIFT_STORAGE_FLAVOR Nova flavor to use for swift storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudSwiftStorageFlavor. This option will be removed in the "U" release.) --libvirt-type {kvm,qemu} Libvirt domain type. --ntp-server NTP_SERVER The ntp for overcloud nodes. --no-proxy NO_PROXY A comma separated list of hosts that should not be proxied. --overcloud-ssh-user OVERCLOUD_SSH_USER User for ssh access to overcloud nodes --overcloud-ssh-key OVERCLOUD_SSH_KEY Key path for ssh access to overcloud nodes. Whenundefined the key will be autodetected. --overcloud-ssh-network OVERCLOUD_SSH_NETWORK Network name to use for ssh access to overcloud nodes. --overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT Timeout for the ssh enable process to finish. --overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT Timeout for to wait for the ssh port to become active. --environment-file <HEAT ENVIRONMENT FILE>, -e <HEAT ENVIRONMENT FILE> Environment files to be passed to the heat stack- create or heat stack-update command. (Can be specified more than once.) --environment-directory <HEAT ENVIRONMENT DIRECTORY> Environment file directories that are automatically added to the heat stack-create or heat stack-update commands. Can be specified more than once. Files in directories are loaded in ascending sort order. --roles-file ROLES_FILE, -r ROLES_FILE Roles file, overrides the default roles_data.yaml in the --templates directory. May be an absolute path or the path relative to --templates --networks-file NETWORKS_FILE, -n NETWORKS_FILE Networks file, overrides the default network_data.yaml in the --templates directory --plan-environment-file PLAN_ENVIRONMENT_FILE, -p PLAN_ENVIRONMENT_FILE Plan environment file, overrides the default plan- environment.yaml in the --templates directory --no-cleanup Don't cleanup temporary files, just log their location --update-plan-only Only update the plan. do not perform the actual deployment. NOTE: Will move to a discrete command in a future release. --validation-errors-nonfatal Allow the deployment to continue in spite of validation errors. Note that attempting deployment while errors exist is likely to fail. --validation-warnings-fatal Exit if there are warnings from the configuration pre- checks. --disable-validations Deprecated. disable the pre-deployment validations entirely. These validations are the built-in pre- deployment validations. To enable external validations from tripleo-validations, use the --run-validations flag. These validations are now run via the external validations in tripleo-validations. --inflight-validations Activate in-flight validations during the deploy. 
in- flight validations provide a robust way to ensure deployed services are running right after their activation. Defaults to False. --dry-run Only run validations, but do not apply any changes. --run-validations Run external validations from the tripleo-validations project. --skip-postconfig Skip the overcloud post-deployment configuration. --force-postconfig Force the overcloud post-deployment configuration. --skip-deploy-identifier Skip generation of a unique identifier for the DeployIdentifier parameter. The software configuration deployment steps will only be triggered if there is an actual change to the configuration. This option should be used with Caution, and only if there is confidence that the software configuration does not need to be run, such as when scaling out certain roles. --answers-file ANSWERS_FILE Path to a yaml file with arguments and parameters. --disable-password-generation Disable password generation. --deployed-server Use pre-provisioned overcloud nodes. removes baremetal,compute and image services requirements from theundercloud node. Must only be used with the-- disable-validations. --config-download Run deployment via config-download mechanism. this is now the default, and this CLI options may be removed in the future. --no-config-download, --stack-only Disable the config-download workflow and only create the stack and associated OpenStack resources. No software configuration will be applied. --config-download-only Disable the stack create/update, and only run the config-download workflow to apply the software configuration. --output-dir OUTPUT_DIR Directory to use for saved output when using --config- download. The directory must be writeable by the mistral user. When not specified, the default server side value will be used (/var/lib/mistral/<execution id>. --override-ansible-cfg OVERRIDE_ANSIBLE_CFG Path to ansible configuration file. the configuration in the file will override any configuration used by config-download by default. --config-download-timeout CONFIG_DOWNLOAD_TIMEOUT Timeout (in minutes) to use for config-download steps. If unset, will default to however much time is leftover from the --timeout parameter after the stack operation. --deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER The path to python interpreter to use for the deployment actions. This may need to be used if deploying on a python2 host from a python3 system or vice versa. -b <baremetal_deployment.yaml>, --baremetal-deployment <baremetal_deployment.yaml> Configuration file describing the baremetal deployment --yes Use --yes to skip the confirmation required before any ffwd-upgrade operation. Use this with caution! 57.17. overcloud ffwd-upgrade prepare Run heat stack update for overcloud nodes to refresh heat stack outputs. The heat stack outputs are what we use later on to generate ansible playbooks which deliver the ffwd upgrade workflow. This is used as the first step for a fast forward upgrade of your overcloud. Usage: Table 57.21. Optional Arguments Value Summary --templates [TEMPLATES] The directory containing the heat templates to deploy --stack STACK Stack name to create or update --timeout <TIMEOUT>, -t <TIMEOUT> Deployment timeout in minutes. --control-scale CONTROL_SCALE New number of control nodes. (deprecated. use an environment file and set the parameter ControllerCount. This option will be removed in the "U" release.) --compute-scale COMPUTE_SCALE New number of compute nodes. (deprecated. use an environment file and set the parameter ComputeCount. 
This option will be removed in the "U" release.) --ceph-storage-scale CEPH_STORAGE_SCALE New number of ceph storage nodes. (deprecated. use an environment file and set the parameter CephStorageCount. This option will be removed in the "U" release.) --block-storage-scale BLOCK_STORAGE_SCALE New number of cinder storage nodes. (deprecated. use an environment file and set the parameter BlockStorageCount. This option will be removed in the "U" release.) --swift-storage-scale SWIFT_STORAGE_SCALE New number of swift storage nodes. (deprecated. use an environment file and set the parameter ObjectStorageCount. This option will be removed in the "U" release.) --control-flavor CONTROL_FLAVOR Nova flavor to use for control nodes. (deprecated. use an environment file and set the parameter OvercloudControlFlavor. This option will be removed in the "U" release.) --compute-flavor COMPUTE_FLAVOR Nova flavor to use for compute nodes. (deprecated. use an environment file and set the parameter OvercloudComputeFlavor. This option will be removed in the "U" release.) --ceph-storage-flavor CEPH_STORAGE_FLAVOR Nova flavor to use for ceph storage nodes. (DEPRECATED. Use an environment file and set the parameter OvercloudCephStorageFlavor. This option will be removed in the "U" release.) --block-storage-flavor BLOCK_STORAGE_FLAVOR Nova flavor to use for cinder storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudBlockStorageFlavor. This option will be removed in the "U" release.) --swift-storage-flavor SWIFT_STORAGE_FLAVOR Nova flavor to use for swift storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudSwiftStorageFlavor. This option will be removed in the "U" release.) --libvirt-type {kvm,qemu} Libvirt domain type. --ntp-server NTP_SERVER The ntp for overcloud nodes. --no-proxy NO_PROXY A comma separated list of hosts that should not be proxied. --overcloud-ssh-user OVERCLOUD_SSH_USER User for ssh access to overcloud nodes --overcloud-ssh-key OVERCLOUD_SSH_KEY Key path for ssh access to overcloud nodes. Whenundefined the key will be autodetected. --overcloud-ssh-network OVERCLOUD_SSH_NETWORK Network name to use for ssh access to overcloud nodes. --overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT Timeout for the ssh enable process to finish. --overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT Timeout for to wait for the ssh port to become active. --environment-file <HEAT ENVIRONMENT FILE>, -e <HEAT ENVIRONMENT FILE> Environment files to be passed to the heat stack- create or heat stack-update command. (Can be specified more than once.) --environment-directory <HEAT ENVIRONMENT DIRECTORY> Environment file directories that are automatically added to the heat stack-create or heat stack-update commands. Can be specified more than once. Files in directories are loaded in ascending sort order. --roles-file ROLES_FILE, -r ROLES_FILE Roles file, overrides the default roles_data.yaml in the --templates directory. May be an absolute path or the path relative to --templates --networks-file NETWORKS_FILE, -n NETWORKS_FILE Networks file, overrides the default network_data.yaml in the --templates directory --plan-environment-file PLAN_ENVIRONMENT_FILE, -p PLAN_ENVIRONMENT_FILE Plan environment file, overrides the default plan- environment.yaml in the --templates directory --no-cleanup Don't cleanup temporary files, just log their location --update-plan-only Only update the plan. do not perform the actual deployment. 
NOTE: Will move to a discrete command in a future release. --validation-errors-nonfatal Allow the deployment to continue in spite of validation errors. Note that attempting deployment while errors exist is likely to fail. --validation-warnings-fatal Exit if there are warnings from the configuration pre- checks. --disable-validations Deprecated. disable the pre-deployment validations entirely. These validations are the built-in pre- deployment validations. To enable external validations from tripleo-validations, use the --run-validations flag. These validations are now run via the external validations in tripleo-validations. --inflight-validations Activate in-flight validations during the deploy. in- flight validations provide a robust way to ensure deployed services are running right after their activation. Defaults to False. --dry-run Only run validations, but do not apply any changes. --run-validations Run external validations from the tripleo-validations project. --skip-postconfig Skip the overcloud post-deployment configuration. --force-postconfig Force the overcloud post-deployment configuration. --skip-deploy-identifier Skip generation of a unique identifier for the DeployIdentifier parameter. The software configuration deployment steps will only be triggered if there is an actual change to the configuration. This option should be used with Caution, and only if there is confidence that the software configuration does not need to be run, such as when scaling out certain roles. --answers-file ANSWERS_FILE Path to a yaml file with arguments and parameters. --disable-password-generation Disable password generation. --deployed-server Use pre-provisioned overcloud nodes. removes baremetal,compute and image services requirements from theundercloud node. Must only be used with the-- disable-validations. --config-download Run deployment via config-download mechanism. this is now the default, and this CLI options may be removed in the future. --no-config-download, --stack-only Disable the config-download workflow and only create the stack and associated OpenStack resources. No software configuration will be applied. --config-download-only Disable the stack create/update, and only run the config-download workflow to apply the software configuration. --output-dir OUTPUT_DIR Directory to use for saved output when using --config- download. The directory must be writeable by the mistral user. When not specified, the default server side value will be used (/var/lib/mistral/<execution id>. --override-ansible-cfg OVERRIDE_ANSIBLE_CFG Path to ansible configuration file. the configuration in the file will override any configuration used by config-download by default. --config-download-timeout CONFIG_DOWNLOAD_TIMEOUT Timeout (in minutes) to use for config-download steps. If unset, will default to however much time is leftover from the --timeout parameter after the stack operation. --deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER The path to python interpreter to use for the deployment actions. This may need to be used if deploying on a python2 host from a python3 system or vice versa. -b <baremetal_deployment.yaml>, --baremetal-deployment <baremetal_deployment.yaml> Configuration file describing the baremetal deployment --yes Use --yes to skip the confirmation required before any ffwd-upgrade operation. Use this with caution! 57.18. overcloud ffwd-upgrade run Run fast forward upgrade ansible playbooks on Overcloud nodes This will run the fast_forward_upgrade_playbook.yaml ansible playbook. 
This playbook was generated when you ran the ffwd-upgrade prepare command. Running 'ffwd-upgrade run' is the second step in the ffwd upgrade workflow. Usage: Table 57.22. Optional Arguments Value Summary -h, --help Show this help message and exit --yes Use --yes to skip the confirmation required before any ffwd-upgrade operation. Use this with caution! --static-inventory STATIC_INVENTORY Path to an existing ansible inventory to use. if not specified, one will be generated in ~/tripleo-ansible-inventory.yaml --ssh-user SSH_USER Deprecated: only tripleo-admin should be used as ssh user. --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) --no-workflow Run ansible-playbook directly via system command instead of running Ansible via the TripleO mistral workflows. 57.19. overcloud generate fencing Generate fencing parameters Usage: Table 57.23. Positional Arguments Value Summary instackenv None Table 57.24. Optional Arguments Value Summary -h, --help Show this help message and exit -a FENCE_ACTION, --action FENCE_ACTION Deprecated: this option is ignored. --delay DELAY Wait delay seconds before fencing is started --ipmi-lanplus Deprecated: this is the default. --ipmi-no-lanplus Do not use lanplus. defaults to: false --ipmi-cipher IPMI_CIPHER Ciphersuite to use (same as the ipmitool -c parameter). --ipmi-level IPMI_LEVEL Privilege level on ipmi device. valid levels: callback, user, operator, administrator. --output OUTPUT Write parameters to a file 57.20. overcloud image build Build images for the overcloud Usage: Table 57.25. Optional Arguments Value Summary -h, --help Show this help message and exit --config-file <yaml config file> Yaml config file specifying the image build. may be specified multiple times. Order is preserved, and later files will override some options in files. Other options will append. --image-name <image name> Name of image to build. may be specified multiple times. If unspecified, will build all images in given YAML files. --no-skip Skip build if cached image exists. --output-directory OUTPUT_DIRECTORY Output directory for images. defaults to $TRIPLEO_ROOT, or the current directory if unset. 57.21. overcloud image upload Make existing image files available for overcloud deployment. Usage: Table 57.26. Optional Arguments Value Summary -h, --help Show this help message and exit --image-path IMAGE_PATH Path to directory containing image files --os-image-name OS_IMAGE_NAME Openstack disk image filename --ironic-python-agent-name IPA_NAME Openstack ironic-python-agent (agent) image filename --http-boot HTTP_BOOT Root directory for the ironic-python-agent image. if uploading images for multiple architectures/platforms, vary this argument such that a distinct folder is created for each architecture/platform. --update-existing Update images if they already exist --whole-disk When set, the overcloud-full image to be uploaded will be considered as a whole disk one --architecture ARCHITECTURE Architecture type for these images, x86_64, i386 and ppc64le are common options. This option should match at least one arch value in instackenv.json --platform PLATFORM Platform type for these images. platform is a sub-category of architecture. For example you may have generic images for x86_64 but offer images specific to SandyBridge (SNB). 
--image-type {os,ironic-python-agent} If specified, allows to restrict the image type to upload (os for the overcloud image or ironic-python- agent for the ironic-python-agent one) --progress Show progress bar for upload files action --local Copy files locally, even if there is an image service endpoint --local-path LOCAL_PATH Root directory for image file copy destination when there is no image endpoint, or when --local is specified 57.22. overcloud netenv validate Validate the network environment file. Usage: Table 57.27. Optional Arguments Value Summary -h, --help Show this help message and exit -f NETENV, --file NETENV Path to the network environment file 57.23. overcloud node bios configure Apply BIOS configuration on given nodes Usage: Table 57.28. Positional Arguments Value Summary <node_uuid> Baremetal node uuids for the node(s) to configure bios Table 57.29. Optional Arguments Value Summary -h, --help Show this help message and exit --all-manageable Configure bios for all nodes currently in manageable state --configuration <configuration> Bios configuration (yaml/json string or file name). 57.24. overcloud node bios reset Reset BIOS configuration to factory default Usage: Table 57.30. Positional Arguments Value Summary <node_uuid> Baremetal node uuids for the node(s) to reset bios Table 57.31. Optional Arguments Value Summary -h, --help Show this help message and exit --all-manageable Reset bios on all nodes currently in manageable state 57.25. overcloud node clean Run node(s) through cleaning. Usage: Table 57.32. Positional Arguments Value Summary <node_uuid> Baremetal node uuids for the node(s) to be cleaned Table 57.33. Optional Arguments Value Summary -h, --help Show this help message and exit --all-manageable Clean all nodes currently in manageable state --provide Provide (make available) the nodes once cleaned 57.26. overcloud node configure Configure Node boot options. Usage: Table 57.34. Positional Arguments Value Summary <node_uuid> Baremetal node uuids for the node(s) to be configured Table 57.35. Optional Arguments Value Summary -h, --help Show this help message and exit --all-manageable Configure all nodes currently in manageable state --deploy-kernel DEPLOY_KERNEL Image with deploy kernel. --deploy-ramdisk DEPLOY_RAMDISK Image with deploy ramdisk. --instance-boot-option {local,netboot} Whether to set instances for booting from local hard drive (local) or network (netboot). --root-device ROOT_DEVICE Define the root device for nodes. can be either a list of device names (without /dev) to choose from or one of two strategies: largest or smallest. For it to work this command should be run after the introspection. --root-device-minimum-size ROOT_DEVICE_MINIMUM_SIZE Minimum size (in gib) of the detected root device. Used with --root-device. --overwrite-root-device-hints Whether to overwrite existing root device hints when --root-device is used. 57.27. overcloud node delete Delete overcloud nodes. Usage: Table 57.36. Positional Arguments Value Summary <node> Node id(s) to delete (otherwise specified in the --baremetal-deployment file) Table 57.37. Optional Arguments Value Summary -h, --help Show this help message and exit -b <BAREMETAL DEPLOYMENT FILE>, --baremetal-deployment <BAREMETAL DEPLOYMENT FILE> Configuration file describing the baremetal deployment --stack STACK Name or id of heat stack to scale (default=env: OVERCLOUD_STACK_NAME) --templates [TEMPLATES] The directory containing the heat templates to deploy. This argument is deprecated. 
The command now utilizes a deployment plan, which should be updated prior to running this command, should that be required. Otherwise this argument will be silently ignored. -e <HEAT ENVIRONMENT FILE>, --environment-file <HEAT ENVIRONMENT FILE> Environment files to be passed to the heat stack- create or heat stack-update command. (Can be specified more than once.) This argument is deprecated. The command now utilizes a deployment plan, which should be updated prior to running this command, should that be required. Otherwise this argument will be silently ignored. --timeout <TIMEOUT> Timeout in minutes to wait for the nodes to be deleted. Keep in mind that due to keystone session duration that timeout has an upper bound of 4 hours -y, --yes Skip yes/no prompt (assume yes). 57.28. overcloud node discover Discover overcloud nodes by polling their BMCs. Usage: Table 57.38. Optional Arguments Value Summary -h, --help Show this help message and exit --ip <ips> Ip address(es) to probe --range <range> Ip range to probe --credentials <key:value> Key/value pairs of possible credentials --port <ports> Bmc port(s) to probe --introspect Introspect the imported nodes --run-validations Run the pre-deployment validations. these external validations are from the TripleO Validations project. --provide Provide (make available) the nodes --no-deploy-image Skip setting the deploy kernel and ramdisk. --instance-boot-option {local,netboot} Whether to set instances for booting from local hard drive (local) or network (netboot). --concurrency CONCURRENCY Maximum number of nodes to introspect at once. 57.29. overcloud node import Import baremetal nodes from a JSON, YAML or CSV file. The node status will be set to manageable by default. Usage: Table 57.39. Positional Arguments Value Summary env_file None Table 57.40. Optional Arguments Value Summary -h, --help Show this help message and exit --introspect Introspect the imported nodes --run-validations Run the pre-deployment validations. these external validations are from the TripleO Validations project. --validate-only Validate the env_file and then exit without actually importing the nodes. --provide Provide (make available) the nodes --no-deploy-image Skip setting the deploy kernel and ramdisk. --instance-boot-option {local,netboot} Whether to set instances for booting from local hard drive (local) or network (netboot). --http-boot HTTP_BOOT Root directory for the ironic-python-agent image --concurrency CONCURRENCY Maximum number of nodes to introspect at once. 57.30. overcloud node introspect Introspect specified nodes or all nodes in manageable state. Usage: Table 57.41. Positional Arguments Value Summary <node_uuid> Baremetal node uuids for the node(s) to be introspected Table 57.42. Optional Arguments Value Summary -h, --help Show this help message and exit --all-manageable Introspect all nodes currently in manageable state --provide Provide (make available) the nodes once introspected --run-validations Run the pre-deployment validations. these external validations are from the TripleO Validations project. --concurrency CONCURRENCY Maximum number of nodes to introspect at once. 57.31. overcloud node provide Mark nodes as available based on UUIDs or current manageable state. Usage: Table 57.43. Positional Arguments Value Summary <node_uuid> Baremetal node uuids for the node(s) to be provided Table 57.44. Optional Arguments Value Summary -h, --help Show this help message and exit --all-manageable Provide all nodes currently in manageable state 57.32. 
overcloud node provision Provision new nodes using Ironic. Usage: Table 57.45. Positional Arguments Value Summary <baremetal_deployment.yaml> Configuration file describing the baremetal deployment Table 57.46. Optional Arguments Value Summary -h, --help Show this help message and exit -o OUTPUT, --output OUTPUT The output environment file path --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) --overcloud-ssh-user OVERCLOUD_SSH_USER User for ssh access to newly deployed nodes --overcloud-ssh-key OVERCLOUD_SSH_KEY Key path for ssh access to overcloud nodes. When undefined, the key will be autodetected. --concurrency CONCURRENCY Maximum number of nodes to provision at once. (default=20) --timeout TIMEOUT Number of seconds to wait for the node provision to complete. (default=3600) 57.33. overcloud node unprovision Unprovisions nodes using Ironic. Usage: Table 57.47. Positional Arguments Value Summary <baremetal_deployment.yaml> Configuration file describing the baremetal deployment Table 57.48. Optional Arguments Value Summary -h, --help Show this help message and exit --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) --all Unprovision every instance in the deployment -y, --yes Skip yes/no prompt (assume yes) 57.34. overcloud parameters set Set parameters for a plan Usage: Table 57.49. Positional Arguments Value Summary name The name of the plan, which is used for the swift container, Mistral environment and Heat stack names. file_in None Table 57.50. Optional Arguments Value Summary -h, --help Show this help message and exit 57.35. overcloud plan create Create a deployment plan Usage: Table 57.51. Positional Arguments Value Summary name The name of the plan, which is used for the object storage container, workflow environment and orchestration stack names. Table 57.52. Optional Arguments Value Summary -h, --help Show this help message and exit --templates TEMPLATES The directory containing the heat templates to deploy. If this or --source_url isn't provided, the templates packaged on the Undercloud will be used. --plan-environment-file PLAN_ENVIRONMENT_FILE, -p PLAN_ENVIRONMENT_FILE Plan environment file, overrides the default plan-environment.yaml in the --templates directory --disable-password-generation Disable password generation. --source-url SOURCE_URL The url of a git repository containing the heat templates to deploy. If this or --templates isn't provided, the templates packaged on the Undercloud will be used. 57.36. overcloud plan delete Delete an overcloud deployment plan. The plan will not be deleted if a stack exists with the same name. Usage: Table 57.53. Positional Arguments Value Summary <name> Name of the plan(s) to delete Table 57.54. Optional Arguments Value Summary -h, --help Show this help message and exit 57.37. overcloud plan deploy Deploy a deployment plan Usage: Table 57.55. Positional Arguments Value Summary name The name of the plan to deploy. Table 57.56. Optional Arguments Value Summary -h, --help Show this help message and exit --timeout <TIMEOUT>, -t <TIMEOUT> Deployment timeout in minutes. --run-validations Run the pre-deployment validations. these external validations are from the TripleO Validations project. 57.38. overcloud plan export Export a deployment plan Usage: Table 57.57. Positional Arguments Value Summary <name> Name of the plan to export. Table 57.58. 
Optional Arguments Value Summary -h, --help Show this help message and exit --output-file <output file>, -o <output file> Name of the output file for export. it will default to "<name>.tar.gz". --force-overwrite, -f Overwrite output file if it exists. 57.39. overcloud plan list List overcloud deployment plans. Usage: Table 57.59. Optional Arguments Value Summary -h, --help Show this help message and exit Table 57.60. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 57.61. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 57.62. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 57.63. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 57.40. overcloud profiles list List overcloud node profiles Usage: Table 57.64. Optional Arguments Value Summary -h, --help Show this help message and exit --all List all nodes, even those not available to nova. --control-scale CONTROL_SCALE New number of control nodes. --compute-scale COMPUTE_SCALE New number of compute nodes. --ceph-storage-scale CEPH_STORAGE_SCALE New number of ceph storage nodes. --block-storage-scale BLOCK_STORAGE_SCALE New number of cinder storage nodes. --swift-storage-scale SWIFT_STORAGE_SCALE New number of swift storage nodes. --control-flavor CONTROL_FLAVOR Nova flavor to use for control nodes. --compute-flavor COMPUTE_FLAVOR Nova flavor to use for compute nodes. --ceph-storage-flavor CEPH_STORAGE_FLAVOR Nova flavor to use for ceph storage nodes. --block-storage-flavor BLOCK_STORAGE_FLAVOR Nova flavor to use for cinder storage nodes --swift-storage-flavor SWIFT_STORAGE_FLAVOR Nova flavor to use for swift storage nodes Table 57.65. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 57.66. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 57.67. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 57.68. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 57.41. overcloud profiles match Assign and validate profiles on nodes Usage: Table 57.69. 
Optional Arguments Value Summary -h, --help Show this help message and exit --dry-run Only run validations, but do not apply any changes. --control-scale CONTROL_SCALE New number of control nodes. --compute-scale COMPUTE_SCALE New number of compute nodes. --ceph-storage-scale CEPH_STORAGE_SCALE New number of ceph storage nodes. --block-storage-scale BLOCK_STORAGE_SCALE New number of cinder storage nodes. --swift-storage-scale SWIFT_STORAGE_SCALE New number of swift storage nodes. --control-flavor CONTROL_FLAVOR Nova flavor to use for control nodes. --compute-flavor COMPUTE_FLAVOR Nova flavor to use for compute nodes. --ceph-storage-flavor CEPH_STORAGE_FLAVOR Nova flavor to use for ceph storage nodes. --block-storage-flavor BLOCK_STORAGE_FLAVOR Nova flavor to use for cinder storage nodes --swift-storage-flavor SWIFT_STORAGE_FLAVOR Nova flavor to use for swift storage nodes 57.42. overcloud raid create Create RAID on given nodes Usage: Table 57.70. Positional Arguments Value Summary configuration Raid configuration (yaml/json string or file name). Table 57.71. Optional Arguments Value Summary -h, --help Show this help message and exit --node NODE Nodes to create raid on (expected to be in manageable state). Can be specified multiple times. 57.43. overcloud role list List available roles (DEPRECATED). Please use "openstack overcloud roles list" instead. Usage: Table 57.72. Optional Arguments Value Summary -h, --help Show this help message and exit --roles-path <roles directory> Filesystem path containing the role yaml files. by default this is /usr/share/openstack-tripleo-heat-templates/roles 57.44. overcloud role show Show information about a given role (DEPRECATED). Please use "openstack overcloud roles show" instead. Usage: Table 57.73. Positional Arguments Value Summary <role> Role to display more information about. Table 57.74. Optional Arguments Value Summary -h, --help Show this help message and exit --roles-path <roles directory> Filesystem path containing the role yaml files. by default this is /usr/share/openstack-tripleo-heat-templates/roles 57.45. overcloud roles generate Generate roles_data.yaml file Usage: Table 57.75. Positional Arguments Value Summary <role> List of roles to use to generate the roles_data.yaml file for the deployment. NOTE: Ordering is important if no role has the "primary" and "controller" tags. If no role is tagged then the first role listed will be considered the primary role. This usually is the controller role. Table 57.76. Optional Arguments Value Summary -h, --help Show this help message and exit --roles-path <roles directory> Filesystem path containing the role yaml files. by default this is /usr/share/openstack-tripleo-heat-templates/roles -o <output file>, --output-file <output file> File to capture all output to. for example, roles_data.yaml --skip-validate Skip role metadata type validation when generating the roles_data.yaml 57.46. overcloud roles list List the current and available roles in a given plan Usage: Table 57.77. Optional Arguments Value Summary -h, --help Show this help message and exit --name NAME The name of the plan, which is used for the object storage container, workflow environment and orchestration stack names. --detail Include details about each role --current Only show the information for the roles currently enabled for the plan. Table 57.78. 
Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 57.79. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 57.80. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 57.81. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 57.47. overcloud roles show Show details for a specific role, given a plan Usage: Table 57.82. Positional Arguments Value Summary <role> Name of the role to look up. Table 57.83. Optional Arguments Value Summary -h, --help Show this help message and exit --name NAME The name of the plan, which is used for the object storage container, workflow environment and orchestration stack names. Table 57.84. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 57.85. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 57.86. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 57.87. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 57.48. overcloud status Get deployment status Usage: Table 57.88. Optional Arguments Value Summary -h, --help Show this help message and exit --plan PLAN, --stack PLAN Name of the stack/plan. (default: overcloud) 57.49. overcloud support report collect Run sosreport on selected servers. Usage: Table 57.89. Positional Arguments Value Summary server_name Nova server_name or partial name to match. for example "controller" will match all controllers for an environment. Table 57.90. Optional Arguments Value Summary -h, --help Show this help message and exit -c CONTAINER, --container CONTAINER Swift container to store logs to -o DESTINATION, --output DESTINATION Output directory for the report --skip-container-delete Do not delete the container after the files have been downloaded. Ignored if --collect-only or --download- only is provided. -t TIMEOUT, --timeout TIMEOUT Maximum time to wait for the log collection and container deletion workflows to finish. -n CONCURRENCY, --concurrency CONCURRENCY Number of parallel log collection and object deletion tasks to run. --collect-only Skip log downloads, only collect logs and put in the container --download-only Skip generation, only download from the provided container 57.50. overcloud update converge Converge the update on Overcloud nodes. 
This restores the plan and stack so that normal deployment workflow is back in place. Usage: Table 57.91. Optional Arguments Value Summary --templates [TEMPLATES] The directory containing the heat templates to deploy --stack STACK Stack name to create or update --timeout <TIMEOUT>, -t <TIMEOUT> Deployment timeout in minutes. --control-scale CONTROL_SCALE New number of control nodes. (deprecated. use an environment file and set the parameter ControllerCount. This option will be removed in the "U" release.) --compute-scale COMPUTE_SCALE New number of compute nodes. (deprecated. use an environment file and set the parameter ComputeCount. This option will be removed in the "U" release.) --ceph-storage-scale CEPH_STORAGE_SCALE New number of ceph storage nodes. (deprecated. use an environment file and set the parameter CephStorageCount. This option will be removed in the "U" release.) --block-storage-scale BLOCK_STORAGE_SCALE New number of cinder storage nodes. (deprecated. use an environment file and set the parameter BlockStorageCount. This option will be removed in the "U" release.) --swift-storage-scale SWIFT_STORAGE_SCALE New number of swift storage nodes. (deprecated. use an environment file and set the parameter ObjectStorageCount. This option will be removed in the "U" release.) --control-flavor CONTROL_FLAVOR Nova flavor to use for control nodes. (deprecated. use an environment file and set the parameter OvercloudControlFlavor. This option will be removed in the "U" release.) --compute-flavor COMPUTE_FLAVOR Nova flavor to use for compute nodes. (deprecated. use an environment file and set the parameter OvercloudComputeFlavor. This option will be removed in the "U" release.) --ceph-storage-flavor CEPH_STORAGE_FLAVOR Nova flavor to use for ceph storage nodes. (DEPRECATED. Use an environment file and set the parameter OvercloudCephStorageFlavor. This option will be removed in the "U" release.) --block-storage-flavor BLOCK_STORAGE_FLAVOR Nova flavor to use for cinder storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudBlockStorageFlavor. This option will be removed in the "U" release.) --swift-storage-flavor SWIFT_STORAGE_FLAVOR Nova flavor to use for swift storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudSwiftStorageFlavor. This option will be removed in the "U" release.) --libvirt-type {kvm,qemu} Libvirt domain type. --ntp-server NTP_SERVER The ntp for overcloud nodes. --no-proxy NO_PROXY A comma separated list of hosts that should not be proxied. --overcloud-ssh-user OVERCLOUD_SSH_USER User for ssh access to overcloud nodes --overcloud-ssh-key OVERCLOUD_SSH_KEY Key path for ssh access to overcloud nodes. Whenundefined the key will be autodetected. --overcloud-ssh-network OVERCLOUD_SSH_NETWORK Network name to use for ssh access to overcloud nodes. --overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT Timeout for the ssh enable process to finish. --overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT Timeout for to wait for the ssh port to become active. --environment-file <HEAT ENVIRONMENT FILE>, -e <HEAT ENVIRONMENT FILE> Environment files to be passed to the heat stack- create or heat stack-update command. (Can be specified more than once.) --environment-directory <HEAT ENVIRONMENT DIRECTORY> Environment file directories that are automatically added to the heat stack-create or heat stack-update commands. Can be specified more than once. Files in directories are loaded in ascending sort order. 
--roles-file ROLES_FILE, -r ROLES_FILE Roles file, overrides the default roles_data.yaml in the --templates directory. May be an absolute path or the path relative to --templates --networks-file NETWORKS_FILE, -n NETWORKS_FILE Networks file, overrides the default network_data.yaml in the --templates directory --plan-environment-file PLAN_ENVIRONMENT_FILE, -p PLAN_ENVIRONMENT_FILE Plan environment file, overrides the default plan- environment.yaml in the --templates directory --no-cleanup Don't cleanup temporary files, just log their location --update-plan-only Only update the plan. do not perform the actual deployment. NOTE: Will move to a discrete command in a future release. --validation-errors-nonfatal Allow the deployment to continue in spite of validation errors. Note that attempting deployment while errors exist is likely to fail. --validation-warnings-fatal Exit if there are warnings from the configuration pre- checks. --disable-validations Deprecated. disable the pre-deployment validations entirely. These validations are the built-in pre- deployment validations. To enable external validations from tripleo-validations, use the --run-validations flag. These validations are now run via the external validations in tripleo-validations. --inflight-validations Activate in-flight validations during the deploy. in- flight validations provide a robust way to ensure deployed services are running right after their activation. Defaults to False. --dry-run Only run validations, but do not apply any changes. --run-validations Run external validations from the tripleo-validations project. --skip-postconfig Skip the overcloud post-deployment configuration. --force-postconfig Force the overcloud post-deployment configuration. --skip-deploy-identifier Skip generation of a unique identifier for the DeployIdentifier parameter. The software configuration deployment steps will only be triggered if there is an actual change to the configuration. This option should be used with Caution, and only if there is confidence that the software configuration does not need to be run, such as when scaling out certain roles. --answers-file ANSWERS_FILE Path to a yaml file with arguments and parameters. --disable-password-generation Disable password generation. --deployed-server Use pre-provisioned overcloud nodes. removes baremetal,compute and image services requirements from theundercloud node. Must only be used with the-- disable-validations. --config-download Run deployment via config-download mechanism. this is now the default, and this CLI options may be removed in the future. --no-config-download, --stack-only Disable the config-download workflow and only create the stack and associated OpenStack resources. No software configuration will be applied. --config-download-only Disable the stack create/update, and only run the config-download workflow to apply the software configuration. --output-dir OUTPUT_DIR Directory to use for saved output when using --config- download. The directory must be writeable by the mistral user. When not specified, the default server side value will be used (/var/lib/mistral/<execution id>. --override-ansible-cfg OVERRIDE_ANSIBLE_CFG Path to ansible configuration file. the configuration in the file will override any configuration used by config-download by default. --config-download-timeout CONFIG_DOWNLOAD_TIMEOUT Timeout (in minutes) to use for config-download steps. If unset, will default to however much time is leftover from the --timeout parameter after the stack operation. 
--deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER The path to python interpreter to use for the deployment actions. This may need to be used if deploying on a python2 host from a python3 system or vice versa. -b <baremetal_deployment.yaml>, --baremetal-deployment <baremetal_deployment.yaml> Configuration file describing the baremetal deployment 57.51. overcloud update prepare Run heat stack update for overcloud nodes to refresh heat stack outputs. The heat stack outputs are what we use later on to generate ansible playbooks which deliver the minor update workflow. This is used as the first step for a minor update of your overcloud. Usage: Table 57.92. Optional Arguments Value Summary --templates [TEMPLATES] The directory containing the heat templates to deploy --stack STACK Stack name to create or update --timeout <TIMEOUT>, -t <TIMEOUT> Deployment timeout in minutes. --control-scale CONTROL_SCALE New number of control nodes. (deprecated. use an environment file and set the parameter ControllerCount. This option will be removed in the "U" release.) --compute-scale COMPUTE_SCALE New number of compute nodes. (deprecated. use an environment file and set the parameter ComputeCount. This option will be removed in the "U" release.) --ceph-storage-scale CEPH_STORAGE_SCALE New number of ceph storage nodes. (deprecated. use an environment file and set the parameter CephStorageCount. This option will be removed in the "U" release.) --block-storage-scale BLOCK_STORAGE_SCALE New number of cinder storage nodes. (deprecated. use an environment file and set the parameter BlockStorageCount. This option will be removed in the "U" release.) --swift-storage-scale SWIFT_STORAGE_SCALE New number of swift storage nodes. (deprecated. use an environment file and set the parameter ObjectStorageCount. This option will be removed in the "U" release.) --control-flavor CONTROL_FLAVOR Nova flavor to use for control nodes. (deprecated. use an environment file and set the parameter OvercloudControlFlavor. This option will be removed in the "U" release.) --compute-flavor COMPUTE_FLAVOR Nova flavor to use for compute nodes. (deprecated. use an environment file and set the parameter OvercloudComputeFlavor. This option will be removed in the "U" release.) --ceph-storage-flavor CEPH_STORAGE_FLAVOR Nova flavor to use for ceph storage nodes. (DEPRECATED. Use an environment file and set the parameter OvercloudCephStorageFlavor. This option will be removed in the "U" release.) --block-storage-flavor BLOCK_STORAGE_FLAVOR Nova flavor to use for cinder storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudBlockStorageFlavor. This option will be removed in the "U" release.) --swift-storage-flavor SWIFT_STORAGE_FLAVOR Nova flavor to use for swift storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudSwiftStorageFlavor. This option will be removed in the "U" release.) --libvirt-type {kvm,qemu} Libvirt domain type. --ntp-server NTP_SERVER The ntp for overcloud nodes. --no-proxy NO_PROXY A comma separated list of hosts that should not be proxied. --overcloud-ssh-user OVERCLOUD_SSH_USER User for ssh access to overcloud nodes --overcloud-ssh-key OVERCLOUD_SSH_KEY Key path for ssh access to overcloud nodes. Whenundefined the key will be autodetected. --overcloud-ssh-network OVERCLOUD_SSH_NETWORK Network name to use for ssh access to overcloud nodes. --overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT Timeout for the ssh enable process to finish. 
--overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT Timeout for to wait for the ssh port to become active. --environment-file <HEAT ENVIRONMENT FILE>, -e <HEAT ENVIRONMENT FILE> Environment files to be passed to the heat stack- create or heat stack-update command. (Can be specified more than once.) --environment-directory <HEAT ENVIRONMENT DIRECTORY> Environment file directories that are automatically added to the heat stack-create or heat stack-update commands. Can be specified more than once. Files in directories are loaded in ascending sort order. --roles-file ROLES_FILE, -r ROLES_FILE Roles file, overrides the default roles_data.yaml in the --templates directory. May be an absolute path or the path relative to --templates --networks-file NETWORKS_FILE, -n NETWORKS_FILE Networks file, overrides the default network_data.yaml in the --templates directory --plan-environment-file PLAN_ENVIRONMENT_FILE, -p PLAN_ENVIRONMENT_FILE Plan environment file, overrides the default plan- environment.yaml in the --templates directory --no-cleanup Don't cleanup temporary files, just log their location --update-plan-only Only update the plan. do not perform the actual deployment. NOTE: Will move to a discrete command in a future release. --validation-errors-nonfatal Allow the deployment to continue in spite of validation errors. Note that attempting deployment while errors exist is likely to fail. --validation-warnings-fatal Exit if there are warnings from the configuration pre- checks. --disable-validations Deprecated. disable the pre-deployment validations entirely. These validations are the built-in pre- deployment validations. To enable external validations from tripleo-validations, use the --run-validations flag. These validations are now run via the external validations in tripleo-validations. --inflight-validations Activate in-flight validations during the deploy. in- flight validations provide a robust way to ensure deployed services are running right after their activation. Defaults to False. --dry-run Only run validations, but do not apply any changes. --run-validations Run external validations from the tripleo-validations project. --skip-postconfig Skip the overcloud post-deployment configuration. --force-postconfig Force the overcloud post-deployment configuration. --skip-deploy-identifier Skip generation of a unique identifier for the DeployIdentifier parameter. The software configuration deployment steps will only be triggered if there is an actual change to the configuration. This option should be used with Caution, and only if there is confidence that the software configuration does not need to be run, such as when scaling out certain roles. --answers-file ANSWERS_FILE Path to a yaml file with arguments and parameters. --disable-password-generation Disable password generation. --deployed-server Use pre-provisioned overcloud nodes. removes baremetal,compute and image services requirements from theundercloud node. Must only be used with the-- disable-validations. --config-download Run deployment via config-download mechanism. this is now the default, and this CLI options may be removed in the future. --no-config-download, --stack-only Disable the config-download workflow and only create the stack and associated OpenStack resources. No software configuration will be applied. --config-download-only Disable the stack create/update, and only run the config-download workflow to apply the software configuration. 
--output-dir OUTPUT_DIR Directory to use for saved output when using --config- download. The directory must be writeable by the mistral user. When not specified, the default server side value will be used (/var/lib/mistral/<execution id>. --override-ansible-cfg OVERRIDE_ANSIBLE_CFG Path to ansible configuration file. the configuration in the file will override any configuration used by config-download by default. --config-download-timeout CONFIG_DOWNLOAD_TIMEOUT Timeout (in minutes) to use for config-download steps. If unset, will default to however much time is leftover from the --timeout parameter after the stack operation. --deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER The path to python interpreter to use for the deployment actions. This may need to be used if deploying on a python2 host from a python3 system or vice versa. -b <baremetal_deployment.yaml>, --baremetal-deployment <baremetal_deployment.yaml> Configuration file describing the baremetal deployment 57.52. overcloud update run Run minor update ansible playbooks on Overcloud nodes Usage: Table 57.93. Optional Arguments Value Summary -h, --help Show this help message and exit --limit LIMIT A string that identifies a single node or comma- separated list of nodes to be upgraded in parallel in this upgrade run invocation. For example: --limit "compute-0, compute-1, compute-5". --playbook PLAYBOOK Ansible playbook to use for the minor update. defaults to the special value all which causes all the update playbooks to be executed. That is the update_steps_playbook.yaml and then thedeploy_steps_playbook.yaml. Set this to each of those playbooks in consecutive invocations of this command if you prefer to run them manually. Note: make sure to run both those playbooks so that all services are updated and running with the target version configuration. --ssh-user SSH_USER Deprecated: only tripleo-admin should be used as ssh user. --static-inventory STATIC_INVENTORY Path to an existing ansible inventory to use. if not specified, one will be generated in ~/tripleo-ansible- inventory.yaml --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) --no-workflow Run ansible-playbook directly via system command instead of running Ansiblevia the TripleO mistral workflows. 57.53. overcloud upgrade converge Major upgrade converge - reset Heat resources in the stored plan This is the last step for completion of a overcloud major upgrade. The main task is updating the plan and stack to unblock future stack updates. For the major upgrade workflow we have set specific values for some stack Heat resources. This unsets those back to their default values. Usage: Table 57.94. Optional Arguments Value Summary --templates [TEMPLATES] The directory containing the heat templates to deploy --stack STACK Stack name to create or update --timeout <TIMEOUT>, -t <TIMEOUT> Deployment timeout in minutes. --control-scale CONTROL_SCALE New number of control nodes. (deprecated. use an environment file and set the parameter ControllerCount. This option will be removed in the "U" release.) --compute-scale COMPUTE_SCALE New number of compute nodes. (deprecated. use an environment file and set the parameter ComputeCount. This option will be removed in the "U" release.) --ceph-storage-scale CEPH_STORAGE_SCALE New number of ceph storage nodes. (deprecated. use an environment file and set the parameter CephStorageCount. This option will be removed in the "U" release.) --block-storage-scale BLOCK_STORAGE_SCALE New number of cinder storage nodes. 
(deprecated. use an environment file and set the parameter BlockStorageCount. This option will be removed in the "U" release.) --swift-storage-scale SWIFT_STORAGE_SCALE New number of swift storage nodes. (deprecated. use an environment file and set the parameter ObjectStorageCount. This option will be removed in the "U" release.) --control-flavor CONTROL_FLAVOR Nova flavor to use for control nodes. (deprecated. use an environment file and set the parameter OvercloudControlFlavor. This option will be removed in the "U" release.) --compute-flavor COMPUTE_FLAVOR Nova flavor to use for compute nodes. (deprecated. use an environment file and set the parameter OvercloudComputeFlavor. This option will be removed in the "U" release.) --ceph-storage-flavor CEPH_STORAGE_FLAVOR Nova flavor to use for ceph storage nodes. (DEPRECATED. Use an environment file and set the parameter OvercloudCephStorageFlavor. This option will be removed in the "U" release.) --block-storage-flavor BLOCK_STORAGE_FLAVOR Nova flavor to use for cinder storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudBlockStorageFlavor. This option will be removed in the "U" release.) --swift-storage-flavor SWIFT_STORAGE_FLAVOR Nova flavor to use for swift storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudSwiftStorageFlavor. This option will be removed in the "U" release.) --libvirt-type {kvm,qemu} Libvirt domain type. --ntp-server NTP_SERVER The ntp for overcloud nodes. --no-proxy NO_PROXY A comma separated list of hosts that should not be proxied. --overcloud-ssh-user OVERCLOUD_SSH_USER User for ssh access to overcloud nodes --overcloud-ssh-key OVERCLOUD_SSH_KEY Key path for ssh access to overcloud nodes. Whenundefined the key will be autodetected. --overcloud-ssh-network OVERCLOUD_SSH_NETWORK Network name to use for ssh access to overcloud nodes. --overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT Timeout for the ssh enable process to finish. --overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT Timeout for to wait for the ssh port to become active. --environment-file <HEAT ENVIRONMENT FILE>, -e <HEAT ENVIRONMENT FILE> Environment files to be passed to the heat stack- create or heat stack-update command. (Can be specified more than once.) --environment-directory <HEAT ENVIRONMENT DIRECTORY> Environment file directories that are automatically added to the heat stack-create or heat stack-update commands. Can be specified more than once. Files in directories are loaded in ascending sort order. --roles-file ROLES_FILE, -r ROLES_FILE Roles file, overrides the default roles_data.yaml in the --templates directory. May be an absolute path or the path relative to --templates --networks-file NETWORKS_FILE, -n NETWORKS_FILE Networks file, overrides the default network_data.yaml in the --templates directory --plan-environment-file PLAN_ENVIRONMENT_FILE, -p PLAN_ENVIRONMENT_FILE Plan environment file, overrides the default plan- environment.yaml in the --templates directory --no-cleanup Don't cleanup temporary files, just log their location --update-plan-only Only update the plan. do not perform the actual deployment. NOTE: Will move to a discrete command in a future release. --validation-errors-nonfatal Allow the deployment to continue in spite of validation errors. Note that attempting deployment while errors exist is likely to fail. --validation-warnings-fatal Exit if there are warnings from the configuration pre- checks. --disable-validations Deprecated. 
disable the pre-deployment validations entirely. These validations are the built-in pre- deployment validations. To enable external validations from tripleo-validations, use the --run-validations flag. These validations are now run via the external validations in tripleo-validations. --inflight-validations Activate in-flight validations during the deploy. in- flight validations provide a robust way to ensure deployed services are running right after their activation. Defaults to False. --dry-run Only run validations, but do not apply any changes. --run-validations Run external validations from the tripleo-validations project. --skip-postconfig Skip the overcloud post-deployment configuration. --force-postconfig Force the overcloud post-deployment configuration. --skip-deploy-identifier Skip generation of a unique identifier for the DeployIdentifier parameter. The software configuration deployment steps will only be triggered if there is an actual change to the configuration. This option should be used with Caution, and only if there is confidence that the software configuration does not need to be run, such as when scaling out certain roles. --answers-file ANSWERS_FILE Path to a yaml file with arguments and parameters. --disable-password-generation Disable password generation. --deployed-server Use pre-provisioned overcloud nodes. removes baremetal,compute and image services requirements from theundercloud node. Must only be used with the-- disable-validations. --config-download Run deployment via config-download mechanism. this is now the default, and this CLI options may be removed in the future. --no-config-download, --stack-only Disable the config-download workflow and only create the stack and associated OpenStack resources. No software configuration will be applied. --config-download-only Disable the stack create/update, and only run the config-download workflow to apply the software configuration. --output-dir OUTPUT_DIR Directory to use for saved output when using --config- download. The directory must be writeable by the mistral user. When not specified, the default server side value will be used (/var/lib/mistral/<execution id>. --override-ansible-cfg OVERRIDE_ANSIBLE_CFG Path to ansible configuration file. the configuration in the file will override any configuration used by config-download by default. --config-download-timeout CONFIG_DOWNLOAD_TIMEOUT Timeout (in minutes) to use for config-download steps. If unset, will default to however much time is leftover from the --timeout parameter after the stack operation. --deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER The path to python interpreter to use for the deployment actions. This may need to be used if deploying on a python2 host from a python3 system or vice versa. -b <baremetal_deployment.yaml>, --baremetal-deployment <baremetal_deployment.yaml> Configuration file describing the baremetal deployment 57.54. overcloud upgrade prepare Run heat stack update for overcloud nodes to refresh heat stack outputs. The heat stack outputs are what we use later on to generate ansible playbooks which deliver the major upgrade workflow. This is used as the first step for a major upgrade of your overcloud. Usage: Table 57.95. Optional Arguments Value Summary --templates [TEMPLATES] The directory containing the heat templates to deploy --stack STACK Stack name to create or update --timeout <TIMEOUT>, -t <TIMEOUT> Deployment timeout in minutes. --control-scale CONTROL_SCALE New number of control nodes. (deprecated. 
use an environment file and set the parameter ControllerCount. This option will be removed in the "U" release.) --compute-scale COMPUTE_SCALE New number of compute nodes. (deprecated. use an environment file and set the parameter ComputeCount. This option will be removed in the "U" release.) --ceph-storage-scale CEPH_STORAGE_SCALE New number of ceph storage nodes. (deprecated. use an environment file and set the parameter CephStorageCount. This option will be removed in the "U" release.) --block-storage-scale BLOCK_STORAGE_SCALE New number of cinder storage nodes. (deprecated. use an environment file and set the parameter BlockStorageCount. This option will be removed in the "U" release.) --swift-storage-scale SWIFT_STORAGE_SCALE New number of swift storage nodes. (deprecated. use an environment file and set the parameter ObjectStorageCount. This option will be removed in the "U" release.) --control-flavor CONTROL_FLAVOR Nova flavor to use for control nodes. (deprecated. use an environment file and set the parameter OvercloudControlFlavor. This option will be removed in the "U" release.) --compute-flavor COMPUTE_FLAVOR Nova flavor to use for compute nodes. (deprecated. use an environment file and set the parameter OvercloudComputeFlavor. This option will be removed in the "U" release.) --ceph-storage-flavor CEPH_STORAGE_FLAVOR Nova flavor to use for ceph storage nodes. (DEPRECATED. Use an environment file and set the parameter OvercloudCephStorageFlavor. This option will be removed in the "U" release.) --block-storage-flavor BLOCK_STORAGE_FLAVOR Nova flavor to use for cinder storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudBlockStorageFlavor. This option will be removed in the "U" release.) --swift-storage-flavor SWIFT_STORAGE_FLAVOR Nova flavor to use for swift storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudSwiftStorageFlavor. This option will be removed in the "U" release.) --libvirt-type {kvm,qemu} Libvirt domain type. --ntp-server NTP_SERVER The ntp for overcloud nodes. --no-proxy NO_PROXY A comma separated list of hosts that should not be proxied. --overcloud-ssh-user OVERCLOUD_SSH_USER User for ssh access to overcloud nodes --overcloud-ssh-key OVERCLOUD_SSH_KEY Key path for ssh access to overcloud nodes. Whenundefined the key will be autodetected. --overcloud-ssh-network OVERCLOUD_SSH_NETWORK Network name to use for ssh access to overcloud nodes. --overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT Timeout for the ssh enable process to finish. --overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT Timeout for to wait for the ssh port to become active. --environment-file <HEAT ENVIRONMENT FILE>, -e <HEAT ENVIRONMENT FILE> Environment files to be passed to the heat stack- create or heat stack-update command. (Can be specified more than once.) --environment-directory <HEAT ENVIRONMENT DIRECTORY> Environment file directories that are automatically added to the heat stack-create or heat stack-update commands. Can be specified more than once. Files in directories are loaded in ascending sort order. --roles-file ROLES_FILE, -r ROLES_FILE Roles file, overrides the default roles_data.yaml in the --templates directory. 
May be an absolute path or the path relative to --templates --networks-file NETWORKS_FILE, -n NETWORKS_FILE Networks file, overrides the default network_data.yaml in the --templates directory --plan-environment-file PLAN_ENVIRONMENT_FILE, -p PLAN_ENVIRONMENT_FILE Plan environment file, overrides the default plan- environment.yaml in the --templates directory --no-cleanup Don't cleanup temporary files, just log their location --update-plan-only Only update the plan. do not perform the actual deployment. NOTE: Will move to a discrete command in a future release. --validation-errors-nonfatal Allow the deployment to continue in spite of validation errors. Note that attempting deployment while errors exist is likely to fail. --validation-warnings-fatal Exit if there are warnings from the configuration pre- checks. --disable-validations Deprecated. disable the pre-deployment validations entirely. These validations are the built-in pre- deployment validations. To enable external validations from tripleo-validations, use the --run-validations flag. These validations are now run via the external validations in tripleo-validations. --inflight-validations Activate in-flight validations during the deploy. in- flight validations provide a robust way to ensure deployed services are running right after their activation. Defaults to False. --dry-run Only run validations, but do not apply any changes. --run-validations Run external validations from the tripleo-validations project. --skip-postconfig Skip the overcloud post-deployment configuration. --force-postconfig Force the overcloud post-deployment configuration. --skip-deploy-identifier Skip generation of a unique identifier for the DeployIdentifier parameter. The software configuration deployment steps will only be triggered if there is an actual change to the configuration. This option should be used with Caution, and only if there is confidence that the software configuration does not need to be run, such as when scaling out certain roles. --answers-file ANSWERS_FILE Path to a yaml file with arguments and parameters. --disable-password-generation Disable password generation. --deployed-server Use pre-provisioned overcloud nodes. removes baremetal,compute and image services requirements from theundercloud node. Must only be used with the-- disable-validations. --config-download Run deployment via config-download mechanism. this is now the default, and this CLI options may be removed in the future. --no-config-download, --stack-only Disable the config-download workflow and only create the stack and associated OpenStack resources. No software configuration will be applied. --config-download-only Disable the stack create/update, and only run the config-download workflow to apply the software configuration. --output-dir OUTPUT_DIR Directory to use for saved output when using --config- download. The directory must be writeable by the mistral user. When not specified, the default server side value will be used (/var/lib/mistral/<execution id>. --override-ansible-cfg OVERRIDE_ANSIBLE_CFG Path to ansible configuration file. the configuration in the file will override any configuration used by config-download by default. --config-download-timeout CONFIG_DOWNLOAD_TIMEOUT Timeout (in minutes) to use for config-download steps. If unset, will default to however much time is leftover from the --timeout parameter after the stack operation. --deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER The path to python interpreter to use for the deployment actions. 
This may need to be used if deploying on a python2 host from a python3 system or vice versa. -b <baremetal_deployment.yaml>, --baremetal-deployment <baremetal_deployment.yaml> Configuration file describing the baremetal deployment 57.55. overcloud upgrade run Run major upgrade ansible playbooks on Overcloud nodes. This will run the major upgrade ansible playbooks on the overcloud. By default all playbooks are executed, that is the upgrade_steps_playbook.yaml, then the deploy_steps_playbook.yaml, and then the post_upgrade_steps_playbook.yaml. The upgrade playbooks are made available after completion of the overcloud upgrade prepare command. This overcloud upgrade run command is the second step in the major upgrade workflow. Usage: Table 57.96. Optional Arguments Value Summary -h, --help Show this help message and exit --limit LIMIT A string that identifies a single node or comma-separated list of nodes to be upgraded in parallel in this upgrade run invocation. For example: --limit "compute-0, compute-1, compute-5". --playbook PLAYBOOK Ansible playbook to use for the major upgrade. Defaults to the special value all which causes all the upgrade playbooks to run. That is the upgrade_steps_playbook.yaml, then deploy_steps_playbook.yaml, and then post_upgrade_steps_playbook.yaml. Set this to each of those playbooks in consecutive invocations of this command if you prefer to run them manually. Note: you will have to run all of those playbooks so that all services are upgraded and running with the target version configuration. --static-inventory STATIC_INVENTORY Path to an existing ansible inventory to use. If not specified, one will be generated in ~/tripleo-ansible-inventory.yaml --ssh-user SSH_USER Deprecated: only tripleo-admin should be used as ssh user. --tags TAGS A string specifying the tag or comma separated list of tags to be passed as --tags to ansible-playbook. --skip-tags SKIP_TAGS A string specifying the tag or comma separated list of tags to be passed as --skip-tags to ansible-playbook. The currently supported values are validation and pre-upgrade. In particular validation is useful if you must re-run following a failed upgrade and some services cannot be started. --stack STACK Name or ID of heat stack (default=env: OVERCLOUD_STACK_NAME) --no-workflow Run ansible-playbook directly via system command instead of running Ansible via the TripleO mistral workflows.
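The upgrade commands in sections 57.53 to 57.55 are designed to run in sequence: openstack overcloud upgrade prepare first, then one or more openstack overcloud upgrade run invocations, and finally openstack overcloud upgrade converge. The following is a minimal sketch of that sequence for illustration only; the stack name, environment file, and --limit values are placeholders and must be replaced with the values used by your actual deployment:
openstack overcloud upgrade prepare --templates --stack overcloud -e <environment file>
openstack overcloud upgrade run --stack overcloud --limit "compute-0, compute-1"
openstack overcloud upgrade converge --templates --stack overcloud -e <environment file>
The minor update workflow described in sections 57.50 to 57.52 follows the same prepare, run, converge pattern with the corresponding openstack overcloud update commands.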
[ "openstack overcloud admin authorize [-h] [--stack STACK] [--overcloud-ssh-user OVERCLOUD_SSH_USER] [--overcloud-ssh-key OVERCLOUD_SSH_KEY] [--overcloud-ssh-network OVERCLOUD_SSH_NETWORK] [--overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT] [--overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT]", "openstack overcloud cell export [-h] [--control-plane-stack <control plane stack>] [--cell-stack <cell stack>] [--output-file <output file>] [--force-overwrite] <cell name>", "openstack overcloud config download [-h] [--name NAME] [--config-dir CONFIG_DIR] [--config-type CONFIG_TYPE] [--no-preserve-config]", "openstack overcloud container image build [-h] [--config-file <yaml config file>] --kolla-config-file <config file> [--list-images] [--list-dependencies] [--exclude <container-name>] [--use-buildah] [--work-dir <container builds directory>]", "openstack overcloud container image prepare [-h] [--template-file <yaml template file>] [--push-destination <location>] [--tag <tag>] [--tag-from-label <image label>] [--namespace <namespace>] [--prefix <prefix>] [--suffix <suffix>] [--set <variable=value>] [--exclude <regex>] [--include <regex>] [--output-images-file <file path>] [--environment-file <file path>] [--environment-directory <HEAT ENVIRONMENT DIRECTORY>] [--output-env-file <file path>] [--roles-file ROLES_FILE] [--modify-role MODIFY_ROLE] [--modify-vars MODIFY_VARS]", "openstack overcloud container image tag discover [-h] --image <container image> [--tag-from-label <image label>]", "openstack overcloud container image upload [-h] --config-file <yaml config file> [--cleanup <full, partial, none>]", "openstack overcloud credentials [-h] [--directory [DIRECTORY]] plan", "openstack overcloud delete [-h] [-y] [stack]", "openstack overcloud deploy [--templates [TEMPLATES]] [--stack STACK] [--timeout <TIMEOUT>] [--control-scale CONTROL_SCALE] [--compute-scale COMPUTE_SCALE] [--ceph-storage-scale CEPH_STORAGE_SCALE] [--block-storage-scale BLOCK_STORAGE_SCALE] [--swift-storage-scale SWIFT_STORAGE_SCALE] [--control-flavor CONTROL_FLAVOR] [--compute-flavor COMPUTE_FLAVOR] [--ceph-storage-flavor CEPH_STORAGE_FLAVOR] [--block-storage-flavor BLOCK_STORAGE_FLAVOR] [--swift-storage-flavor SWIFT_STORAGE_FLAVOR] [--libvirt-type {kvm,qemu}] [--ntp-server NTP_SERVER] [--no-proxy NO_PROXY] [--overcloud-ssh-user OVERCLOUD_SSH_USER] [--overcloud-ssh-key OVERCLOUD_SSH_KEY] [--overcloud-ssh-network OVERCLOUD_SSH_NETWORK] [--overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT] [--overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT] [--environment-file <HEAT ENVIRONMENT FILE>] [--environment-directory <HEAT ENVIRONMENT DIRECTORY>] [--roles-file ROLES_FILE] [--networks-file NETWORKS_FILE] [--plan-environment-file PLAN_ENVIRONMENT_FILE] [--no-cleanup] [--update-plan-only] [--validation-errors-nonfatal] [--validation-warnings-fatal] [--disable-validations] [--inflight-validations] [--dry-run] [--run-validations] [--skip-postconfig] [--force-postconfig] [--skip-deploy-identifier] [--answers-file ANSWERS_FILE] [--disable-password-generation] [--deployed-server] [--config-download] [--no-config-download] [--config-download-only] [--output-dir OUTPUT_DIR] [--override-ansible-cfg OVERRIDE_ANSIBLE_CFG] [--config-download-timeout CONFIG_DOWNLOAD_TIMEOUT] [--deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER] [-b <baremetal_deployment.yaml>]", "openstack overcloud execute [-h] [-s SERVER_NAME] [-g GROUP] file_in", "openstack overcloud export [-h] [--stack <stack>] [--output-file <output file>] 
[--force-overwrite] [--config-download-dir CONFIG_DOWNLOAD_DIR] [--no-password-excludes]", "openstack overcloud external-update run [-h] [--static-inventory STATIC_INVENTORY] [--ssh-user SSH_USER] [--tags TAGS] [--skip-tags SKIP_TAGS] [--stack STACK] [-e EXTRA_VARS] [--no-workflow]", "openstack overcloud external-upgrade run [-h] [--static-inventory STATIC_INVENTORY] [--ssh-user SSH_USER] [--tags TAGS] [--skip-tags SKIP_TAGS] [--stack STACK] [-e EXTRA_VARS] [--no-workflow]", "openstack overcloud failures [-h] [--plan PLAN]", "openstack overcloud ffwd-upgrade converge [--templates [TEMPLATES]] [--stack STACK] [--timeout <TIMEOUT>] [--control-scale CONTROL_SCALE] [--compute-scale COMPUTE_SCALE] [--ceph-storage-scale CEPH_STORAGE_SCALE] [--block-storage-scale BLOCK_STORAGE_SCALE] [--swift-storage-scale SWIFT_STORAGE_SCALE] [--control-flavor CONTROL_FLAVOR] [--compute-flavor COMPUTE_FLAVOR] [--ceph-storage-flavor CEPH_STORAGE_FLAVOR] [--block-storage-flavor BLOCK_STORAGE_FLAVOR] [--swift-storage-flavor SWIFT_STORAGE_FLAVOR] [--libvirt-type {kvm,qemu}] [--ntp-server NTP_SERVER] [--no-proxy NO_PROXY] [--overcloud-ssh-user OVERCLOUD_SSH_USER] [--overcloud-ssh-key OVERCLOUD_SSH_KEY] [--overcloud-ssh-network OVERCLOUD_SSH_NETWORK] [--overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT] [--overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT] [--environment-file <HEAT ENVIRONMENT FILE>] [--environment-directory <HEAT ENVIRONMENT DIRECTORY>] [--roles-file ROLES_FILE] [--networks-file NETWORKS_FILE] [--plan-environment-file PLAN_ENVIRONMENT_FILE] [--no-cleanup] [--update-plan-only] [--validation-errors-nonfatal] [--validation-warnings-fatal] [--disable-validations] [--inflight-validations] [--dry-run] [--run-validations] [--skip-postconfig] [--force-postconfig] [--skip-deploy-identifier] [--answers-file ANSWERS_FILE] [--disable-password-generation] [--deployed-server] [--config-download] [--no-config-download] [--config-download-only] [--output-dir OUTPUT_DIR] [--override-ansible-cfg OVERRIDE_ANSIBLE_CFG] [--config-download-timeout CONFIG_DOWNLOAD_TIMEOUT] [--deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER] [-b <baremetal_deployment.yaml>] [--yes]", "openstack overcloud ffwd-upgrade prepare [--templates [TEMPLATES]] [--stack STACK] [--timeout <TIMEOUT>] [--control-scale CONTROL_SCALE] [--compute-scale COMPUTE_SCALE] [--ceph-storage-scale CEPH_STORAGE_SCALE] [--block-storage-scale BLOCK_STORAGE_SCALE] [--swift-storage-scale SWIFT_STORAGE_SCALE] [--control-flavor CONTROL_FLAVOR] [--compute-flavor COMPUTE_FLAVOR] [--ceph-storage-flavor CEPH_STORAGE_FLAVOR] [--block-storage-flavor BLOCK_STORAGE_FLAVOR] [--swift-storage-flavor SWIFT_STORAGE_FLAVOR] [--libvirt-type {kvm,qemu}] [--ntp-server NTP_SERVER] [--no-proxy NO_PROXY] [--overcloud-ssh-user OVERCLOUD_SSH_USER] [--overcloud-ssh-key OVERCLOUD_SSH_KEY] [--overcloud-ssh-network OVERCLOUD_SSH_NETWORK] [--overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT] [--overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT] [--environment-file <HEAT ENVIRONMENT FILE>] [--environment-directory <HEAT ENVIRONMENT DIRECTORY>] [--roles-file ROLES_FILE] [--networks-file NETWORKS_FILE] [--plan-environment-file PLAN_ENVIRONMENT_FILE] [--no-cleanup] [--update-plan-only] [--validation-errors-nonfatal] [--validation-warnings-fatal] [--disable-validations] [--inflight-validations] [--dry-run] [--run-validations] [--skip-postconfig] [--force-postconfig] [--skip-deploy-identifier] [--answers-file ANSWERS_FILE] [--disable-password-generation] 
[--deployed-server] [--config-download] [--no-config-download] [--config-download-only] [--output-dir OUTPUT_DIR] [--override-ansible-cfg OVERRIDE_ANSIBLE_CFG] [--config-download-timeout CONFIG_DOWNLOAD_TIMEOUT] [--deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER] [-b <baremetal_deployment.yaml>] [--yes]", "openstack overcloud ffwd-upgrade run [-h] [--yes] [--static-inventory STATIC_INVENTORY] [--ssh-user SSH_USER] [--stack STACK] [--no-workflow]", "openstack overcloud generate fencing [-h] [-a FENCE_ACTION] [--delay DELAY] [--ipmi-lanplus] [--ipmi-no-lanplus] [--ipmi-cipher IPMI_CIPHER] [--ipmi-level IPMI_LEVEL] [--output OUTPUT] instackenv", "openstack overcloud image build [-h] [--config-file <yaml config file>] [--image-name <image name>] [--no-skip] [--output-directory OUTPUT_DIRECTORY]", "openstack overcloud image upload [-h] [--image-path IMAGE_PATH] [--os-image-name OS_IMAGE_NAME] [--ironic-python-agent-name IPA_NAME] [--http-boot HTTP_BOOT] [--update-existing] [--whole-disk] [--architecture ARCHITECTURE] [--platform PLATFORM] [--image-type {os,ironic-python-agent}] [--progress] [--local] [--local-path LOCAL_PATH]", "openstack overcloud netenv validate [-h] [-f NETENV]", "openstack overcloud node bios configure [-h] [--all-manageable] [--configuration <configuration>] [<node_uuid> [<node_uuid> ...]]", "openstack overcloud node bios reset [-h] [--all-manageable] [<node_uuid> [<node_uuid> ...]]", "openstack overcloud node clean [-h] [--all-manageable] [--provide] [<node_uuid> [<node_uuid> ...]]", "openstack overcloud node configure [-h] [--all-manageable] [--deploy-kernel DEPLOY_KERNEL] [--deploy-ramdisk DEPLOY_RAMDISK] [--instance-boot-option {local,netboot}] [--root-device ROOT_DEVICE] [--root-device-minimum-size ROOT_DEVICE_MINIMUM_SIZE] [--overwrite-root-device-hints] [<node_uuid> [<node_uuid> ...]]", "openstack overcloud node delete [-h] [-b <BAREMETAL DEPLOYMENT FILE>] [--stack STACK] [--templates [TEMPLATES]] [-e <HEAT ENVIRONMENT FILE>] [--timeout <TIMEOUT>] [-y] [<node> [<node> ...]]", "openstack overcloud node discover [-h] (--ip <ips> | --range <range>) --credentials <key:value> [--port <ports>] [--introspect] [--run-validations] [--provide] [--no-deploy-image] [--instance-boot-option {local,netboot}] [--concurrency CONCURRENCY]", "openstack overcloud node import [-h] [--introspect] [--run-validations] [--validate-only] [--provide] [--no-deploy-image] [--instance-boot-option {local,netboot}] [--http-boot HTTP_BOOT] [--concurrency CONCURRENCY] env_file", "openstack overcloud node introspect [-h] [--all-manageable] [--provide] [--run-validations] [--concurrency CONCURRENCY] [<node_uuid> [<node_uuid> ...]]", "openstack overcloud node provide [-h] [--all-manageable] [<node_uuid> [<node_uuid> ...]]", "openstack overcloud node provision [-h] [-o OUTPUT] [--stack STACK] [--overcloud-ssh-user OVERCLOUD_SSH_USER] [--overcloud-ssh-key OVERCLOUD_SSH_KEY] [--concurrency CONCURRENCY] [--timeout TIMEOUT] <baremetal_deployment.yaml>", "openstack overcloud node unprovision [-h] [--stack STACK] [--all] [-y] <baremetal_deployment.yaml>", "openstack overcloud parameters set [-h] name file_in", "openstack overcloud plan create [-h] [--templates TEMPLATES] [--plan-environment-file PLAN_ENVIRONMENT_FILE] [--disable-password-generation] [--source-url SOURCE_URL] name", "openstack overcloud plan delete [-h] <name> [<name> ...]", "openstack overcloud plan deploy [-h] [--timeout <TIMEOUT>] [--run-validations] name", "openstack overcloud plan export [-h] [--output-file <output file>] 
[--force-overwrite] <name>", "openstack overcloud plan list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN]", "openstack overcloud profiles list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--all] [--control-scale CONTROL_SCALE] [--compute-scale COMPUTE_SCALE] [--ceph-storage-scale CEPH_STORAGE_SCALE] [--block-storage-scale BLOCK_STORAGE_SCALE] [--swift-storage-scale SWIFT_STORAGE_SCALE] [--control-flavor CONTROL_FLAVOR] [--compute-flavor COMPUTE_FLAVOR] [--ceph-storage-flavor CEPH_STORAGE_FLAVOR] [--block-storage-flavor BLOCK_STORAGE_FLAVOR] [--swift-storage-flavor SWIFT_STORAGE_FLAVOR]", "openstack overcloud profiles match [-h] [--dry-run] [--control-scale CONTROL_SCALE] [--compute-scale COMPUTE_SCALE] [--ceph-storage-scale CEPH_STORAGE_SCALE] [--block-storage-scale BLOCK_STORAGE_SCALE] [--swift-storage-scale SWIFT_STORAGE_SCALE] [--control-flavor CONTROL_FLAVOR] [--compute-flavor COMPUTE_FLAVOR] [--ceph-storage-flavor CEPH_STORAGE_FLAVOR] [--block-storage-flavor BLOCK_STORAGE_FLAVOR] [--swift-storage-flavor SWIFT_STORAGE_FLAVOR]", "openstack overcloud raid create [-h] --node NODE configuration", "openstack overcloud role list [-h] [--roles-path <roles directory>]", "openstack overcloud role show [-h] [--roles-path <roles directory>] <role>", "openstack overcloud roles generate [-h] [--roles-path <roles directory>] [-o <output file>] [--skip-validate] <role> [<role> ...]", "openstack overcloud roles list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--name NAME] [--detail] [--current]", "openstack overcloud roles show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name NAME] <role>", "openstack overcloud status [-h] [--plan PLAN]", "openstack overcloud support report collect [-h] [-c CONTAINER] [-o DESTINATION] [--skip-container-delete] [-t TIMEOUT] [-n CONCURRENCY] [--collect-only | --download-only] server_name", "openstack overcloud update converge [--templates [TEMPLATES]] [--stack STACK] [--timeout <TIMEOUT>] [--control-scale CONTROL_SCALE] [--compute-scale COMPUTE_SCALE] [--ceph-storage-scale CEPH_STORAGE_SCALE] [--block-storage-scale BLOCK_STORAGE_SCALE] [--swift-storage-scale SWIFT_STORAGE_SCALE] [--control-flavor CONTROL_FLAVOR] [--compute-flavor COMPUTE_FLAVOR] [--ceph-storage-flavor CEPH_STORAGE_FLAVOR] [--block-storage-flavor BLOCK_STORAGE_FLAVOR] [--swift-storage-flavor SWIFT_STORAGE_FLAVOR] [--libvirt-type {kvm,qemu}] [--ntp-server NTP_SERVER] [--no-proxy NO_PROXY] [--overcloud-ssh-user OVERCLOUD_SSH_USER] [--overcloud-ssh-key OVERCLOUD_SSH_KEY] [--overcloud-ssh-network OVERCLOUD_SSH_NETWORK] [--overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT] [--overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT] [--environment-file <HEAT ENVIRONMENT FILE>] [--environment-directory <HEAT ENVIRONMENT DIRECTORY>] [--roles-file ROLES_FILE] [--networks-file NETWORKS_FILE] [--plan-environment-file PLAN_ENVIRONMENT_FILE] [--no-cleanup] [--update-plan-only] [--validation-errors-nonfatal] [--validation-warnings-fatal] [--disable-validations] [--inflight-validations] [--dry-run] 
[--run-validations] [--skip-postconfig] [--force-postconfig] [--skip-deploy-identifier] [--answers-file ANSWERS_FILE] [--disable-password-generation] [--deployed-server] [--config-download] [--no-config-download] [--config-download-only] [--output-dir OUTPUT_DIR] [--override-ansible-cfg OVERRIDE_ANSIBLE_CFG] [--config-download-timeout CONFIG_DOWNLOAD_TIMEOUT] [--deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER] [-b <baremetal_deployment.yaml>]", "openstack overcloud update prepare [--templates [TEMPLATES]] [--stack STACK] [--timeout <TIMEOUT>] [--control-scale CONTROL_SCALE] [--compute-scale COMPUTE_SCALE] [--ceph-storage-scale CEPH_STORAGE_SCALE] [--block-storage-scale BLOCK_STORAGE_SCALE] [--swift-storage-scale SWIFT_STORAGE_SCALE] [--control-flavor CONTROL_FLAVOR] [--compute-flavor COMPUTE_FLAVOR] [--ceph-storage-flavor CEPH_STORAGE_FLAVOR] [--block-storage-flavor BLOCK_STORAGE_FLAVOR] [--swift-storage-flavor SWIFT_STORAGE_FLAVOR] [--libvirt-type {kvm,qemu}] [--ntp-server NTP_SERVER] [--no-proxy NO_PROXY] [--overcloud-ssh-user OVERCLOUD_SSH_USER] [--overcloud-ssh-key OVERCLOUD_SSH_KEY] [--overcloud-ssh-network OVERCLOUD_SSH_NETWORK] [--overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT] [--overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT] [--environment-file <HEAT ENVIRONMENT FILE>] [--environment-directory <HEAT ENVIRONMENT DIRECTORY>] [--roles-file ROLES_FILE] [--networks-file NETWORKS_FILE] [--plan-environment-file PLAN_ENVIRONMENT_FILE] [--no-cleanup] [--update-plan-only] [--validation-errors-nonfatal] [--validation-warnings-fatal] [--disable-validations] [--inflight-validations] [--dry-run] [--run-validations] [--skip-postconfig] [--force-postconfig] [--skip-deploy-identifier] [--answers-file ANSWERS_FILE] [--disable-password-generation] [--deployed-server] [--config-download] [--no-config-download] [--config-download-only] [--output-dir OUTPUT_DIR] [--override-ansible-cfg OVERRIDE_ANSIBLE_CFG] [--config-download-timeout CONFIG_DOWNLOAD_TIMEOUT] [--deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER] [-b <baremetal_deployment.yaml>]", "openstack overcloud update run [-h] --limit LIMIT [--playbook PLAYBOOK] [--ssh-user SSH_USER] [--static-inventory STATIC_INVENTORY] [--stack STACK] [--no-workflow]", "openstack overcloud upgrade converge [--templates [TEMPLATES]] [--stack STACK] [--timeout <TIMEOUT>] [--control-scale CONTROL_SCALE] [--compute-scale COMPUTE_SCALE] [--ceph-storage-scale CEPH_STORAGE_SCALE] [--block-storage-scale BLOCK_STORAGE_SCALE] [--swift-storage-scale SWIFT_STORAGE_SCALE] [--control-flavor CONTROL_FLAVOR] [--compute-flavor COMPUTE_FLAVOR] [--ceph-storage-flavor CEPH_STORAGE_FLAVOR] [--block-storage-flavor BLOCK_STORAGE_FLAVOR] [--swift-storage-flavor SWIFT_STORAGE_FLAVOR] [--libvirt-type {kvm,qemu}] [--ntp-server NTP_SERVER] [--no-proxy NO_PROXY] [--overcloud-ssh-user OVERCLOUD_SSH_USER] [--overcloud-ssh-key OVERCLOUD_SSH_KEY] [--overcloud-ssh-network OVERCLOUD_SSH_NETWORK] [--overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT] [--overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT] [--environment-file <HEAT ENVIRONMENT FILE>] [--environment-directory <HEAT ENVIRONMENT DIRECTORY>] [--roles-file ROLES_FILE] [--networks-file NETWORKS_FILE] [--plan-environment-file PLAN_ENVIRONMENT_FILE] [--no-cleanup] [--update-plan-only] [--validation-errors-nonfatal] [--validation-warnings-fatal] [--disable-validations] [--inflight-validations] [--dry-run] [--run-validations] [--skip-postconfig] [--force-postconfig] [--skip-deploy-identifier] 
[--answers-file ANSWERS_FILE] [--disable-password-generation] [--deployed-server] [--config-download] [--no-config-download] [--config-download-only] [--output-dir OUTPUT_DIR] [--override-ansible-cfg OVERRIDE_ANSIBLE_CFG] [--config-download-timeout CONFIG_DOWNLOAD_TIMEOUT] [--deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER] [-b <baremetal_deployment.yaml>]", "openstack overcloud upgrade prepare [--templates [TEMPLATES]] [--stack STACK] [--timeout <TIMEOUT>] [--control-scale CONTROL_SCALE] [--compute-scale COMPUTE_SCALE] [--ceph-storage-scale CEPH_STORAGE_SCALE] [--block-storage-scale BLOCK_STORAGE_SCALE] [--swift-storage-scale SWIFT_STORAGE_SCALE] [--control-flavor CONTROL_FLAVOR] [--compute-flavor COMPUTE_FLAVOR] [--ceph-storage-flavor CEPH_STORAGE_FLAVOR] [--block-storage-flavor BLOCK_STORAGE_FLAVOR] [--swift-storage-flavor SWIFT_STORAGE_FLAVOR] [--libvirt-type {kvm,qemu}] [--ntp-server NTP_SERVER] [--no-proxy NO_PROXY] [--overcloud-ssh-user OVERCLOUD_SSH_USER] [--overcloud-ssh-key OVERCLOUD_SSH_KEY] [--overcloud-ssh-network OVERCLOUD_SSH_NETWORK] [--overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT] [--overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT] [--environment-file <HEAT ENVIRONMENT FILE>] [--environment-directory <HEAT ENVIRONMENT DIRECTORY>] [--roles-file ROLES_FILE] [--networks-file NETWORKS_FILE] [--plan-environment-file PLAN_ENVIRONMENT_FILE] [--no-cleanup] [--update-plan-only] [--validation-errors-nonfatal] [--validation-warnings-fatal] [--disable-validations] [--inflight-validations] [--dry-run] [--run-validations] [--skip-postconfig] [--force-postconfig] [--skip-deploy-identifier] [--answers-file ANSWERS_FILE] [--disable-password-generation] [--deployed-server] [--config-download] [--no-config-download] [--config-download-only] [--output-dir OUTPUT_DIR] [--override-ansible-cfg OVERRIDE_ANSIBLE_CFG] [--config-download-timeout CONFIG_DOWNLOAD_TIMEOUT] [--deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER] [-b <baremetal_deployment.yaml>]", "openstack overcloud upgrade run [-h] --limit LIMIT [--playbook PLAYBOOK] [--static-inventory STATIC_INVENTORY] [--ssh-user SSH_USER] [--tags TAGS] [--skip-tags SKIP_TAGS] [--stack STACK] [--no-workflow]" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/overcloud
Chapter 1. Using the RESP endpoint
Chapter 1. Using the RESP endpoint Data Grid Server includes an experimental module that implements the RESP3 protocol . The RESP endpoint allows Redis clients to connect to one or several Data Grid-backed RESP servers and perform cache operations. Important The RESP protocol endpoint is available as a Technology Preview feature. 1.1. Technology Previews Technology Preview features or capabilities are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using Technology Preview features or capabilities for production. These features provide early access to upcoming product features, which enables you to test functionality and provide feedback during the development process. For more information, see Red Hat Technology Preview Features Support Scope . 1.2. Enabling the RESP endpoint Add the resp-connector to Data Grid Server configuration to enable the RESP endpoint. You can enable the RESP endpoint with: Standalone Data Grid Server deployments, exactly like standalone Redis, where each server instance runs independently of the others. Clustered Data Grid Server deployments, where server instances replicate or distribute data between each other. Clustered deployments provide clients with failover capabilities. Prerequisites Install Data Grid Server. Procedure Open your Data Grid Server configuration for editing. Add cache configuration to the cache-container section if required. Cache configuration cannot enable capabilities that violate the RESP protocol. For example, specifying expiration values in a cache for the RESP endpoint results in a fatal error at startup. Tip Configure your cache with Protobuf encoding if you want to view cache entries in the Data Grid Console ( encoding media-type="application/x-protostream" ). Add an endpoint declaration to your configuration. Add the resp-connector element and specify the name of the cache to use with the RESP connector with the cache attribute. You can use only one cache with the RESP endpoint. Declare the security realm to use with the RESP endpoint with the security-realm attribute, if required. Ensure that the endpoint declaration also adds a Hot Rod and REST connector. Save the changes to your configuration. Verification When you start Data Grid Server, check for the following log message: You can now connect to the RESP endpoint with a Redis client. For example, with the Redis CLI you can add an entry to the cache as follows: RESP endpoint configuration XML <endpoints> <endpoint socket-binding="default" security-realm="default"> <resp-connector cache="mycache" /> <hotrod-connector /> <rest-connector/> </endpoint> </endpoints> JSON { "server": { "endpoints": { "endpoint": { "socket-binding": "default", "security-realm": "default", "resp-connector": { "cache": "mycache" }, "hotrod-connector": {}, "rest-connector": {} } } } } YAML server: endpoints: endpoint: socketBinding: "default" securityRealm: "default" respConnector: cache: "mycache" hotrodConnector: ~ restConnector: ~ 1.3. Redis commands The Data Grid RESP endpoint implements the following Redis commands: AUTH DECR DEL ECHO GET HELLO INCR MGET MSET PING PUBLISH QUIT RESET SET SUBSCRIBE UNSUBSCRIBE
[ "[org.infinispan.SERVER] ISPN080018: Started connector Resp (internal)", "redis-cli -p 11222 --user username --pass password", "127.0.0.1:11222> SET k v OK 127.0.0.1:11222> GET k \"v\" 127.0.0.1:11222> quit", "<endpoints> <endpoint socket-binding=\"default\" security-realm=\"default\"> <resp-connector cache=\"mycache\" /> <hotrod-connector /> <rest-connector/> </endpoint> </endpoints>", "{ \"server\": { \"endpoints\": { \"endpoint\": { \"socket-binding\": \"default\", \"security-realm\": \"default\", \"resp-connector\": { \"cache\": \"mycache\" }, \"hotrod-connector\": {}, \"rest-connector\": {} } } } }", "server: endpoints: endpoint: socketBinding: \"default\" securityRealm: \"default\" respConnector: cache: \"mycache\" hotrodConnector: ~ restConnector: ~" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/using_the_resp_protocol_endpoint_with_data_grid/resp-endpoint
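The record above demonstrates the RESP endpoint with redis-cli. As a programmatic illustration, the following is a minimal sketch using the third-party redis-py package; this client choice, the localhost:11222 address, and the username/password values are assumptions that mirror the redis-cli example rather than anything mandated by the documentation.

```python
# Minimal sketch: talk to the Data Grid RESP endpoint with redis-py.
# Assumptions: `pip install redis`, a server listening on localhost:11222,
# and credentials defined in the endpoint's security realm.
import redis

client = redis.Redis(
    host="localhost",
    port=11222,           # single-port endpoint used in the redis-cli example
    username="username",  # placeholder credentials
    password="password",
    decode_responses=True,
)

client.set("k", "v")    # SET and GET are among the implemented commands
print(client.get("k"))  # -> "v"
print(client.ping())    # PING -> True
```

Any client that authenticates with a username and password (AUTH) and stays within the command subset listed above should behave the same way against this endpoint.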
Chapter 4. Adding user preferences
Chapter 4. Adding user preferences You can change the default preferences for your profile to meet your requirements. You can set your default project, topology view (graph or list), editing medium (form or YAML), language preferences, and resource type. The changes made to the user preferences are automatically saved. 4.1. Setting user preferences You can set the default user preferences for your cluster. Procedure Log in to the OpenShift Container Platform web console using your login credentials. Use the masthead to access the user preferences under the user profile. In the General section: In the Theme field, you can set the theme that you want to work in. The console defaults to the selected theme each time you log in. In the Perspective field, you can set the default perspective you want to be logged in to. You can select the Administrator or the Developer perspective as required. If a perspective is not selected, you are logged into the perspective you last visited. In the Project field, select a project you want to work in. The console defaults to the project every time you log in. In the Topology field, you can set the topology view to default to the graph or list view. If not selected, the console defaults to the last view you used. In the Create/Edit resource method field, you can set a preference for creating or editing a resource. If both the form and YAML options are available, the console defaults to your selection. In the Language section, select Default browser language to use the default browser language settings. Otherwise, select the language that you want to use for the console. In the Notifications section, you can toggle display notifications created by users for specific projects on the Overview page or notification drawer. In the Applications section: You can view the default Resource type . For example, if the OpenShift Serverless Operator is installed, the default resource type is Serverless Deployment . Otherwise, the default resource type is Deployment . You can select another resource type to be the default resource type from the Resource Type field.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/web_console/adding-user-preferences
Chapter 15. Installing IBM Cloud Bare Metal (Classic)
Chapter 15. Installing IBM Cloud Bare Metal (Classic) 15.1. Prerequisites You can use installer-provisioned installation to install OpenShift Container Platform on IBM Cloud(R) Bare Metal (Classic) nodes. This document describes the prerequisites and procedures for installing OpenShift Container Platform on IBM Cloud nodes. Important Red Hat supports IPMI and PXE on the provisioning network only. Red Hat has not tested Redfish, virtual media, or other complementary technologies such as Secure Boot on IBM Cloud deployments. A provisioning network is required. Installer-provisioned installation of OpenShift Container Platform requires: One node with Red Hat Enterprise Linux (RHEL) 8.x installed, for running the provisioner Three control plane nodes One routable network One provisioning network Before starting an installer-provisioned installation of OpenShift Container Platform on IBM Cloud Bare Metal (Classic), address the following prerequisites and requirements. 15.1.1. Setting up IBM Cloud Bare Metal (Classic) infrastructure To deploy an OpenShift Container Platform cluster on IBM Cloud(R) Bare Metal (Classic) infrastructure, you must first provision the IBM Cloud nodes. Important Red Hat supports IPMI and PXE on the provisioning network only. Red Hat has not tested Redfish, virtual media, or other complementary technologies such as Secure Boot on IBM Cloud deployments. The provisioning network is required. You can customize IBM Cloud nodes using the IBM Cloud API. When creating IBM Cloud nodes, you must consider the following requirements. Use one data center per cluster All nodes in the OpenShift Container Platform cluster must run in the same IBM Cloud data center. Create public and private VLANs Create all nodes with a single public VLAN and a single private VLAN. Ensure subnets have sufficient IP addresses IBM Cloud public VLAN subnets use a /28 prefix by default, which provides 16 IP addresses. That is sufficient for three control plane nodes, four worker nodes, and the two IP addresses needed for the API VIP and Ingress VIP on the baremetal network. For larger clusters, you might need a smaller prefix. IBM Cloud private VLAN subnets use a /26 prefix by default, which provides 64 IP addresses. IBM Cloud Bare Metal (Classic) uses private network IP addresses to access the Baseboard Management Controller (BMC) of each node. OpenShift Container Platform creates an additional subnet for the provisioning network. Network traffic for the provisioning network subnet routes through the private VLAN. For larger clusters, you might need a smaller prefix. Table 15.1. IP addresses per prefix IP addresses Prefix 32 /27 64 /26 128 /25 256 /24 Configuring NICs OpenShift Container Platform deploys with two networks: provisioning : The provisioning network is a non-routable network used for provisioning the underlying operating system on each node that is a part of the OpenShift Container Platform cluster. baremetal : The baremetal network is a routable network. You can use any NIC order to interface with the baremetal network, provided it is not the NIC specified in the provisioningNetworkInterface configuration setting or the NIC associated with a node's bootMACAddress configuration setting for the provisioning network. While the cluster nodes can contain more than two NICs, the installation process only focuses on the first two NICs.
For example: NIC Network VLAN NIC1 provisioning <provisioning_vlan> NIC2 baremetal <baremetal_vlan> In the example, NIC1 on all control plane and worker nodes connects to the non-routable network ( provisioning ) that is only used for the installation of the OpenShift Container Platform cluster. NIC2 on all control plane and worker nodes connects to the routable baremetal network. PXE Boot order NIC1 PXE-enabled provisioning network 1 NIC2 baremetal network. 2 Note Ensure PXE is enabled on the NIC used for the provisioning network and is disabled on all other NICs. Configuring canonical names Clients access the OpenShift Container Platform cluster nodes over the baremetal network. Configure IBM Cloud subdomains or subzones where the canonical name extension is the cluster name. For example: Creating DNS entries You must create DNS A record entries resolving to unused IP addresses on the public subnet for the following: Usage Host Name IP API api.<cluster_name>.<domain> <ip> Ingress LB (apps) *.apps.<cluster_name>.<domain> <ip> Control plane and worker nodes already have DNS entries after provisioning. The following table provides an example of fully qualified domain names. The API and Nameserver addresses begin with canonical name extensions. The host names of the control plane and worker nodes are examples, so you can use any host naming convention you prefer. Usage Host Name IP API api.<cluster_name>.<domain> <ip> Ingress LB (apps) *.apps.<cluster_name>.<domain> <ip> Provisioner node provisioner.<cluster_name>.<domain> <ip> Master-0 openshift-master-0.<cluster_name>.<domain> <ip> Master-1 openshift-master-1.<cluster_name>.<domain> <ip> Master-2 openshift-master-2.<cluster_name>.<domain> <ip> Worker-0 openshift-worker-0.<cluster_name>.<domain> <ip> Worker-1 openshift-worker-1.<cluster_name>.<domain> <ip> Worker-n openshift-worker-n.<cluster_name>.<domain> <ip> OpenShift Container Platform includes functionality that uses cluster membership information to generate A records. This resolves the node names to their IP addresses. After the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS. Important After provisioning the IBM Cloud nodes, you must create a DNS entry for the api.<cluster_name>.<domain> domain name on the external DNS because removing CoreDNS causes the local entry to disappear. Failure to create a DNS record for the api.<cluster_name>.<domain> domain name in the external DNS server prevents worker nodes from joining the cluster. Network Time Protocol (NTP) Each OpenShift Container Platform node in the cluster must have access to an NTP server. OpenShift Container Platform nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL certificates that require validation, which might fail if the date and time between the nodes are not in sync. Important Define a consistent clock date and time format in each cluster node's BIOS settings, or installation might fail. Configure a DHCP server IBM Cloud Bare Metal (Classic) does not run DHCP on the public or private VLANs. After provisioning IBM Cloud nodes, you must set up a DHCP server for the public VLAN, which corresponds to OpenShift Container Platform's baremetal network. Note The IP addresses allocated to each node do not need to match the IP addresses allocated by the IBM Cloud Bare Metal (Classic) provisioning system. See the "Configuring the public subnet" section for details. 
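To complement the subnet-sizing guidance and the prefix table earlier in this section, the short sketch below uses Python's standard ipaddress module to check whether a candidate prefix leaves enough room for the nodes and the two VIPs. The example network and the reserved-address arithmetic are illustrative assumptions, not values taken from the documentation.

```python
# Sketch: sanity-check how many usable addresses a candidate prefix provides.
# The 10.196.130.0/28 network below is purely illustrative.
import ipaddress

subnet = ipaddress.ip_network("10.196.130.0/28")
usable = subnet.num_addresses - 2  # minus network and broadcast addresses

# 3 control plane nodes + 4 workers + API VIP + Ingress VIP
needed = 3 + 4 + 2

verdict = "OK" if usable >= needed else "too small; choose a smaller prefix"
print(f"{subnet}: {usable} usable addresses, {needed} needed -> {verdict}")
```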
Ensure BMC access privileges The "Remote management" page for each node on the dashboard contains the node's intelligent platform management interface (IPMI) credentials. The default IPMI privileges prevent the user from making certain boot target changes. You must change the privilege level to OPERATOR so that Ironic can make those changes. In the install-config.yaml file, add the privilegelevel parameter to the URLs used to configure each BMC. See the "Configuring the install-config.yaml file" section for additional details. For example: ipmi://<IP>:<port>?privilegelevel=OPERATOR Alternatively, contact IBM Cloud support and request that they increase the IPMI privileges to ADMINISTRATOR for each node. Create bare metal servers Create bare metal servers in the IBM Cloud dashboard by navigating to Create resource Bare Metal Servers for Classic . Alternatively, you can create bare metal servers with the ibmcloud CLI utility. For example: USD ibmcloud sl hardware create --hostname <SERVERNAME> \ --domain <DOMAIN> \ --size <SIZE> \ --os <OS-TYPE> \ --datacenter <DC-NAME> \ --port-speed <SPEED> \ --billing <BILLING> See Installing the stand-alone IBM Cloud CLI for details on installing the IBM Cloud CLI. Note IBM Cloud servers might take 3-5 hours to become available. 15.2. Setting up the environment for an OpenShift Container Platform installation 15.2.1. Preparing the provisioner node on IBM Cloud Bare Metal (Classic) infrastructure Perform the following steps to prepare the provisioner node. Procedure Log in to the provisioner node via ssh . Create a non-root user ( kni ) and provide that user with sudo privileges: # useradd kni # passwd kni # echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni # chmod 0440 /etc/sudoers.d/kni Create an ssh key for the new user: # su - kni -c "ssh-keygen -f /home/kni/.ssh/id_rsa -N ''" Log in as the new user on the provisioner node: # su - kni Use Red Hat Subscription Manager to register the provisioner node: USD sudo subscription-manager register --username=<user> --password=<pass> --auto-attach USD sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms \ --enable=rhel-8-for-x86_64-baseos-rpms Note For more information about Red Hat Subscription Manager, see Using and Configuring Red Hat Subscription Manager . 
Install the following packages: USD sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool Modify the user to add the libvirt group to the newly created user: USD sudo usermod --append --groups libvirt kni Start firewalld : USD sudo systemctl start firewalld Enable firewalld : USD sudo systemctl enable firewalld Start the http service: USD sudo firewall-cmd --zone=public --add-service=http --permanent USD sudo firewall-cmd --reload Start and enable the libvirtd service: USD sudo systemctl enable libvirtd --now Set the ID of the provisioner node: USD PRVN_HOST_ID=<ID> You can view the ID with the following ibmcloud command: USD ibmcloud sl hardware list Set the ID of the public subnet: USD PUBLICSUBNETID=<ID> You can view the ID with the following ibmcloud command: USD ibmcloud sl subnet list Set the ID of the private subnet: USD PRIVSUBNETID=<ID> You can view the ID with the following ibmcloud command: USD ibmcloud sl subnet list Set the provisioner node public IP address: USD PRVN_PUB_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryIpAddress -r) Set the CIDR for the public network: USD PUBLICCIDR=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .cidr) Set the IP address and CIDR for the public network: USD PUB_IP_CIDR=USDPRVN_PUB_IP/USDPUBLICCIDR Set the gateway for the public network: USD PUB_GATEWAY=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .gateway -r) Set the private IP address of the provisioner node: USD PRVN_PRIV_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | \ jq .primaryBackendIpAddress -r) Set the CIDR for the private network: USD PRIVCIDR=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .cidr) Set the IP address and CIDR for the private network: USD PRIV_IP_CIDR=USDPRVN_PRIV_IP/USDPRIVCIDR Set the gateway for the private network: USD PRIV_GATEWAY=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .gateway -r) Set up the bridges for the baremetal and provisioning networks: USD sudo nohup bash -c " nmcli --get-values UUID con show | xargs -n 1 nmcli con delete nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname eth1 master provisioning nmcli connection add ifname baremetal type bridge con-name baremetal nmcli con add type bridge-slave ifname eth2 master baremetal nmcli connection modify baremetal ipv4.addresses USDPUB_IP_CIDR ipv4.method manual ipv4.gateway USDPUB_GATEWAY nmcli connection modify provisioning ipv4.addresses 172.22.0.1/24,USDPRIV_IP_CIDR ipv4.method manual nmcli connection modify provisioning +ipv4.routes \"10.0.0.0/8 USDPRIV_GATEWAY\" nmcli con down baremetal nmcli con up baremetal nmcli con down provisioning nmcli con up provisioning init 6 " Note For eth1 and eth2 , substitute the appropriate interface name, as needed. 
If required, SSH back into the provisioner node: # ssh kni@provisioner.<cluster-name>.<domain> Verify the connection bridges have been properly created: USD sudo nmcli con show Example output NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eth1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eth1 bridge-slave-eth2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eth2 Create a pull-secret.txt file: USD vim pull-secret.txt In a web browser, navigate to Install on Bare Metal with user-provisioned infrastructure . In step 1, click Download pull secret . Paste the contents into the pull-secret.txt file and save the contents in the kni user's home directory. 15.2.2. Configuring the public subnet All of the OpenShift Container Platform cluster nodes must be on the public subnet. IBM Cloud(R) Bare Metal (Classic) does not provide a DHCP server on the subnet. Set it up separately on the provisioner node. You must reset the BASH variables defined when preparing the provisioner node. Rebooting the provisioner node after preparing it will delete the BASH variables previously set. Procedure Install dnsmasq : USD sudo dnf install dnsmasq Open the dnsmasq configuration file: USD sudo vi /etc/dnsmasq.conf Add the following configuration to the dnsmasq configuration file: interface=baremetal except-interface=lo bind-dynamic log-dhcp dhcp-range=<ip_addr>,<ip_addr>,<pub_cidr> 1 dhcp-option=baremetal,121,0.0.0.0/0,<pub_gateway>,<prvn_priv_ip>,<prvn_pub_ip> 2 dhcp-hostsfile=/var/lib/dnsmasq/dnsmasq.hostsfile 1 Set the DHCP range. Replace both instances of <ip_addr> with one unused IP address from the public subnet so that the dhcp-range for the baremetal network begins and ends with the same the IP address. Replace <pub_cidr> with the CIDR of the public subnet. 2 Set the DHCP option. Replace <pub_gateway> with the IP address of the gateway for the baremetal network. Replace <prvn_priv_ip> with the IP address of the provisioner node's private IP address on the provisioning network. Replace <prvn_pub_ip> with the IP address of the provisioner node's public IP address on the baremetal network. To retrieve the value for <pub_cidr> , execute: USD ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .cidr Replace <publicsubnetid> with the ID of the public subnet. To retrieve the value for <pub_gateway> , execute: USD ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .gateway -r Replace <publicsubnetid> with the ID of the public subnet. To retrieve the value for <prvn_priv_ip> , execute: USD ibmcloud sl hardware detail <id> --output JSON | \ jq .primaryBackendIpAddress -r Replace <id> with the ID of the provisioner node. To retrieve the value for <prvn_pub_ip> , execute: USD ibmcloud sl hardware detail <id> --output JSON | jq .primaryIpAddress -r Replace <id> with the ID of the provisioner node. Obtain the list of hardware for the cluster: USD ibmcloud sl hardware list Obtain the MAC addresses and IP addresses for each node: USD ibmcloud sl hardware detail <id> --output JSON | \ jq '.networkComponents[] | \ "\(.primaryIpAddress) \(.macAddress)"' | grep -v null Replace <id> with the ID of the node. Example output "10.196.130.144 00:e0:ed:6a:ca:b4" "141.125.65.215 00:e0:ed:6a:ca:b5" Make a note of the MAC address and IP address of the public network. 
Make a separate note of the MAC address of the private network, which you will use later in the install-config.yaml file. Repeat this procedure for each node until you have all the public MAC and IP addresses for the public baremetal network, and the MAC addresses of the private provisioning network. Add the MAC and IP address pair of the public baremetal network for each node into the dnsmasq.hostsfile file: USD sudo vim /var/lib/dnsmasq/dnsmasq.hostsfile Example input 00:e0:ed:6a:ca:b5,141.125.65.215,master-0 <mac>,<ip>,master-1 <mac>,<ip>,master-2 <mac>,<ip>,worker-0 <mac>,<ip>,worker-1 ... Replace <mac>,<ip> with the public MAC address and public IP address of the corresponding node name. Start dnsmasq : USD sudo systemctl start dnsmasq Enable dnsmasq so that it starts when booting the node: USD sudo systemctl enable dnsmasq Verify dnsmasq is running: USD sudo systemctl status dnsmasq Example output ● dnsmasq.service - DNS caching server. Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-10-05 05:04:14 CDT; 49s ago Main PID: 3101 (dnsmasq) Tasks: 1 (limit: 204038) Memory: 732.0K CGroup: /system.slice/dnsmasq.service └─3101 /usr/sbin/dnsmasq -k Open ports 53 and 67 with UDP protocol: USD sudo firewall-cmd --add-port 53/udp --permanent USD sudo firewall-cmd --add-port 67/udp --permanent Add provisioning to the external zone with masquerade: USD sudo firewall-cmd --change-zone=provisioning --zone=external --permanent This step ensures network address translation for IPMI calls to the management subnet. Reload the firewalld configuration: USD sudo firewall-cmd --reload 15.2.3. Retrieving the OpenShift Container Platform installer Use the stable-4.x version of the installation program and your selected architecture to deploy the generally available stable version of OpenShift Container Platform: USD export VERSION=stable-4.11 USD export RELEASE_ARCH=<architecture> USD export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}') 15.2.4. Extracting the OpenShift Container Platform installer After retrieving the installer, the step is to extract it. Procedure Set the environment variables: USD export cmd=openshift-baremetal-install USD export pullsecret_file=~/pull-secret.txt USD export extract_dir=USD(pwd) Get the oc binary: USD curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc Extract the installer: USD sudo cp oc /usr/local/bin USD oc adm release extract --registry-config "USD{pullsecret_file}" --command=USDcmd --to "USD{extract_dir}" USD{RELEASE_IMAGE} USD sudo cp openshift-baremetal-install /usr/local/bin 15.2.5. Configuring the install-config.yaml file The install-config.yaml file requires some additional details. Most of the information is teaching the installer and the resulting cluster enough about the available IBM Cloud(R) Bare Metal (Classic) hardware so that it is able to fully manage it. The material difference between installing on bare metal and installing on IBM Cloud Bare Metal (Classic) is that you must explicitly set the privilege level for IPMI in the BMC section of the install-config.yaml file. Procedure Configure install-config.yaml . Change the appropriate variables to match the environment, including pullSecret and sshKey . 
apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public-cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIP: <api_ip> ingressVIP: <wildcard_ip> provisioningNetworkInterface: <NIC1> provisioningNetworkCIDR: <CIDR> hosts: - name: openshift-master-0 role: master bmc: address: ipmi://10.196.130.145?privilegelevel=OPERATOR 1 username: root password: <password> bootMACAddress: 00:e0:ed:6a:ca:b4 2 rootDeviceHints: deviceName: "/dev/sda" - name: openshift-worker-0 role: worker bmc: address: ipmi://<out-of-band-ip>?privilegelevel=OPERATOR 3 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> 4 rootDeviceHints: deviceName: "/dev/sda" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>' 1 3 The bmc.address provides a privilegelevel configuration setting with the value set to OPERATOR . This is required for IBM Cloud Bare Metal (Classic) infrastructure. 2 4 Add the MAC address of the private provisioning network NIC for the corresponding node. Note You can use the ibmcloud command-line utility to retrieve the password. USD ibmcloud sl hardware detail <id> --output JSON | \ jq '"(.networkManagementIpAddress) (.remoteManagementAccounts[0].password)"' Replace <id> with the ID of the node. Create a directory to store the cluster configuration: USD mkdir ~/clusterconfigs Copy the install-config.yaml file into the directory: USD cp install-config.yaml ~/clusterconfig Ensure all bare metal nodes are powered off prior to installing the OpenShift Container Platform cluster: USD ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off Remove old bootstrap resources if any are left over from a deployment attempt: for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done 15.2.6. Additional install-config parameters See the following tables for the required parameters, the hosts parameter, and the bmc parameter for the install-config.yaml file. Table 15.2. Required parameters Parameters Default Description baseDomain The domain name for the cluster. For example, example.com . bootMode UEFI The boot mode for a node. Options are legacy , UEFI , and UEFISecureBoot . If bootMode is not set, Ironic sets it while inspecting the node. bootstrapExternalStaticIP The static IP address for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. bootstrapExternalStaticGateway The static IP address of the gateway for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. sshKey The sshKey configuration setting contains the key in the ~/.ssh/id_rsa.pub file required to access the control plane nodes and worker nodes. Typically, this key is from the provisioner node. pullSecret The pullSecret configuration setting contains a copy of the pull secret downloaded from the Install OpenShift on Bare Metal page when preparing the provisioner node. The name to be given to the OpenShift Container Platform cluster. For example, openshift . The public CIDR (Classless Inter-Domain Routing) of the external network. 
For example, 10.0.0.0/24 . The OpenShift Container Platform cluster requires a name be provided for worker (or compute) nodes even if there are zero nodes. Replicas sets the number of worker (or compute) nodes in the OpenShift Container Platform cluster. The OpenShift Container Platform cluster requires a name for control plane (master) nodes. Replicas sets the number of control plane (master) nodes included as part of the OpenShift Container Platform cluster. provisioningNetworkInterface The name of the network interface on nodes connected to the provisioning network. For OpenShift Container Platform 4.9 and later releases, use the bootMACAddress configuration setting to enable Ironic to identify the IP address of the NIC instead of using the provisioningNetworkInterface configuration setting to identify the name of the NIC. defaultMachinePlatform The default configuration used for machine pools without a platform configuration. apiVIP (Optional) The virtual IP address for Kubernetes API communication. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or pre-configured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the apiVIP configuration setting in the install-config.yaml file. The IP address must be from the primary IPv4 network when using dual stack networking. If not set, the installer uses api.<cluster_name>.<base_domain> to derive the IP address from the DNS. disableCertificateVerification False redfish and redfish-virtualmedia need this parameter to manage BMC addresses. The value should be True when using a self-signed certificate for BMC addresses. ingressVIP (Optional) The virtual IP address for ingress traffic. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or pre-configured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the ingressVIP configuration setting in the install-config.yaml file. The IP address must be from the primary IPv4 network when using dual stack networking. If not set, the installer uses test.apps.<cluster_name>.<base_domain> to derive the IP address from the DNS. Table 15.3. Optional Parameters Parameters Default Description provisioningDHCPRange 172.22.0.10,172.22.0.100 Defines the IP range for nodes on the provisioning network. provisioningNetworkCIDR 172.22.0.0/24 The CIDR for the network to use for provisioning. This option is required when not using the default address range on the provisioning network. clusterProvisioningIP The third IP address of the provisioningNetworkCIDR . The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the provisioning subnet. For example, 172.22.0.3 . bootstrapProvisioningIP The second IP address of the provisioningNetworkCIDR . The IP address on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP address of the provisioning subnet. For example, 172.22.0.2 or 2620:52:0:1307::2 . externalBridge baremetal The name of the bare-metal bridge of the hypervisor attached to the bare-metal network. provisioningBridge provisioning The name of the provisioning bridge on the provisioner host attached to the provisioning network. architecture Defines the host architecture for your cluster. Valid values are amd64 or arm64 . 
defaultMachinePlatform The default configuration used for machine pools without a platform configuration. bootstrapOSImage A URL to override the default operating system image for the bootstrap node. The URL must contain a SHA-256 hash of the image. For example: https://mirror.openshift.com/rhcos-<version>-qemu.qcow2.gz?sha256=<uncompressed_sha256> . provisioningNetwork The provisioningNetwork configuration setting determines whether the cluster uses the provisioning network. If it does, the configuration setting also determines if the cluster manages the network. Disabled : Set this parameter to Disabled to disable the requirement for a provisioning network. When set to Disabled , you must only use virtual media based provisioning, or bring up the cluster using the assisted installer. If Disabled and using power management, BMCs must be accessible from the bare-metal network. If Disabled , you must provide two IP addresses on the bare-metal network that are used for the provisioning services. Managed : Set this parameter to Managed , which is the default, to fully manage the provisioning network, including DHCP, TFTP, and so on. Unmanaged : Set this parameter to Unmanaged to enable the provisioning network but take care of manual configuration of DHCP. Virtual media provisioning is recommended but PXE is still available if required. httpProxy Set this parameter to the appropriate HTTP proxy used within your environment. httpsProxy Set this parameter to the appropriate HTTPS proxy used within your environment. noProxy Set this parameter to the appropriate list of exclusions for proxy usage within your environment. Hosts The hosts parameter is a list of separate bare metal assets used to build the cluster. Table 15.4. Hosts Name Default Description name The name of the BareMetalHost resource to associate with the details. For example, openshift-master-0 . role The role of the bare metal node. Either master or worker . bmc Connection details for the baseboard management controller. See the BMC addressing section for additional details. bootMACAddress The MAC address of the NIC that the host uses for the provisioning network. Ironic retrieves the IP address using the bootMACAddress configuration setting. Then, it binds to the host. Note You must provide a valid MAC address from the host if you disabled the provisioning network. networkConfig Set this optional parameter to configure the network interface of a host. See "(Optional) Configuring host network interfaces" for additional details. 15.2.7. Root device hints The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it. Table 15.5. Subfields Subfield Description deviceName A string containing a Linux device name like /dev/vda . The hint must match the actual value exactly. hctl A string containing a SCSI bus address like 0:0:0:0 . The hint must match the actual value exactly. model A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. vendor A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. 
serialNumber A string containing the device serial number. The hint must match the actual value exactly. minSizeGigabytes An integer representing the minimum size of the device in gigabytes. wwn A string containing the unique storage identifier. The hint must match the actual value exactly. wwnWithExtension A string containing the unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. wwnVendorExtension A string containing the unique vendor storage identifier. The hint must match the actual value exactly. rotational A boolean indicating whether the device should be a rotating disk (true) or not (false). Example usage - name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: "/dev/sda" 15.2.8. Creating the OpenShift Container Platform manifests Create the OpenShift Container Platform manifests. USD ./openshift-baremetal-install --dir ~/clusterconfigs create manifests INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated 15.2.9. Deploying the cluster via the OpenShift Container Platform installer Run the OpenShift Container Platform installer: USD ./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster 15.2.10. Following the installation During the deployment process, you can check the installation's overall status by issuing the tail command to the .openshift_install.log log file in the install directory folder: USD tail -f /path/to/install-dir/.openshift_install.log
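Because the privilegelevel=OPERATOR query parameter described in the install-config.yaml section above is the one IBM Cloud-specific detail that is easy to omit, a small pre-flight check can save a failed deployment. The sketch below is not part of the official tooling; it assumes the PyYAML package is installed and that the file follows the layout shown in the example above.

```python
# Sketch: confirm every BMC address in install-config.yaml carries the
# privilegelevel=OPERATOR parameter required on IBM Cloud Bare Metal (Classic).
# Assumes `pip install pyyaml` and the install-config.yaml layout shown above.
import sys
import yaml

with open("install-config.yaml") as f:
    config = yaml.safe_load(f)

hosts = config["platform"]["baremetal"]["hosts"]
missing = [h["name"] for h in hosts
           if "privilegelevel=OPERATOR" not in h["bmc"]["address"]]

if missing:
    sys.exit(f"privilegelevel=OPERATOR missing for: {', '.join(missing)}")
print(f"All {len(hosts)} BMC addresses include privilegelevel=OPERATOR")
```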
[ "<cluster_name>.<domain>", "test-cluster.example.com", "ipmi://<IP>:<port>?privilegelevel=OPERATOR", "ibmcloud sl hardware create --hostname <SERVERNAME> --domain <DOMAIN> --size <SIZE> --os <OS-TYPE> --datacenter <DC-NAME> --port-speed <SPEED> --billing <BILLING>", "useradd kni", "passwd kni", "echo \"kni ALL=(root) NOPASSWD:ALL\" | tee -a /etc/sudoers.d/kni", "chmod 0440 /etc/sudoers.d/kni", "su - kni -c \"ssh-keygen -f /home/kni/.ssh/id_rsa -N ''\"", "su - kni", "sudo subscription-manager register --username=<user> --password=<pass> --auto-attach", "sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms", "sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool", "sudo usermod --append --groups libvirt kni", "sudo systemctl start firewalld", "sudo systemctl enable firewalld", "sudo firewall-cmd --zone=public --add-service=http --permanent", "sudo firewall-cmd --reload", "sudo systemctl enable libvirtd --now", "PRVN_HOST_ID=<ID>", "ibmcloud sl hardware list", "PUBLICSUBNETID=<ID>", "ibmcloud sl subnet list", "PRIVSUBNETID=<ID>", "ibmcloud sl subnet list", "PRVN_PUB_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryIpAddress -r)", "PUBLICCIDR=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .cidr)", "PUB_IP_CIDR=USDPRVN_PUB_IP/USDPUBLICCIDR", "PUB_GATEWAY=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .gateway -r)", "PRVN_PRIV_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryBackendIpAddress -r)", "PRIVCIDR=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .cidr)", "PRIV_IP_CIDR=USDPRVN_PRIV_IP/USDPRIVCIDR", "PRIV_GATEWAY=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .gateway -r)", "sudo nohup bash -c \" nmcli --get-values UUID con show | xargs -n 1 nmcli con delete nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname eth1 master provisioning nmcli connection add ifname baremetal type bridge con-name baremetal nmcli con add type bridge-slave ifname eth2 master baremetal nmcli connection modify baremetal ipv4.addresses USDPUB_IP_CIDR ipv4.method manual ipv4.gateway USDPUB_GATEWAY nmcli connection modify provisioning ipv4.addresses 172.22.0.1/24,USDPRIV_IP_CIDR ipv4.method manual nmcli connection modify provisioning +ipv4.routes \\\"10.0.0.0/8 USDPRIV_GATEWAY\\\" nmcli con down baremetal nmcli con up baremetal nmcli con down provisioning nmcli con up provisioning init 6 \"", "ssh kni@provisioner.<cluster-name>.<domain>", "sudo nmcli con show", "NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eth1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eth1 bridge-slave-eth2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eth2", "vim pull-secret.txt", "sudo dnf install dnsmasq", "sudo vi /etc/dnsmasq.conf", "interface=baremetal except-interface=lo bind-dynamic log-dhcp dhcp-range=<ip_addr>,<ip_addr>,<pub_cidr> 1 dhcp-option=baremetal,121,0.0.0.0/0,<pub_gateway>,<prvn_priv_ip>,<prvn_pub_ip> 2 dhcp-hostsfile=/var/lib/dnsmasq/dnsmasq.hostsfile", "ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .cidr", "ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .gateway -r", "ibmcloud sl hardware detail <id> --output JSON | jq .primaryBackendIpAddress 
-r", "ibmcloud sl hardware detail <id> --output JSON | jq .primaryIpAddress -r", "ibmcloud sl hardware list", "ibmcloud sl hardware detail <id> --output JSON | jq '.networkComponents[] | \"\\(.primaryIpAddress) \\(.macAddress)\"' | grep -v null", "\"10.196.130.144 00:e0:ed:6a:ca:b4\" \"141.125.65.215 00:e0:ed:6a:ca:b5\"", "sudo vim /var/lib/dnsmasq/dnsmasq.hostsfile", "00:e0:ed:6a:ca:b5,141.125.65.215,master-0 <mac>,<ip>,master-1 <mac>,<ip>,master-2 <mac>,<ip>,worker-0 <mac>,<ip>,worker-1", "sudo systemctl start dnsmasq", "sudo systemctl enable dnsmasq", "sudo systemctl status dnsmasq", "● dnsmasq.service - DNS caching server. Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-10-05 05:04:14 CDT; 49s ago Main PID: 3101 (dnsmasq) Tasks: 1 (limit: 204038) Memory: 732.0K CGroup: /system.slice/dnsmasq.service └─3101 /usr/sbin/dnsmasq -k", "sudo firewall-cmd --add-port 53/udp --permanent", "sudo firewall-cmd --add-port 67/udp --permanent", "sudo firewall-cmd --change-zone=provisioning --zone=external --permanent", "sudo firewall-cmd --reload", "export VERSION=stable-4.11", "export RELEASE_ARCH=<architecture>", "export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}')", "export cmd=openshift-baremetal-install", "export pullsecret_file=~/pull-secret.txt", "export extract_dir=USD(pwd)", "curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc", "sudo cp oc /usr/local/bin", "oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=USDcmd --to \"USD{extract_dir}\" USD{RELEASE_IMAGE}", "sudo cp openshift-baremetal-install /usr/local/bin", "apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public-cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIP: <api_ip> ingressVIP: <wildcard_ip> provisioningNetworkInterface: <NIC1> provisioningNetworkCIDR: <CIDR> hosts: - name: openshift-master-0 role: master bmc: address: ipmi://10.196.130.145?privilegelevel=OPERATOR 1 username: root password: <password> bootMACAddress: 00:e0:ed:6a:ca:b4 2 rootDeviceHints: deviceName: \"/dev/sda\" - name: openshift-worker-0 role: worker bmc: address: ipmi://<out-of-band-ip>?privilegelevel=OPERATOR 3 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> 4 rootDeviceHints: deviceName: \"/dev/sda\" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>'", "ibmcloud sl hardware detail <id> --output JSON | jq '\"(.networkManagementIpAddress) (.remoteManagementAccounts[0].password)\"'", "mkdir ~/clusterconfigs", "cp install-config.yaml ~/clusterconfig", "ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off", "for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done", "metadata: name:", "networking: machineNetwork: - cidr:", "compute: - name: worker", "compute: replicas: 2", "controlPlane: name: master", "controlPlane: replicas: 3", "- name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin 
password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: \"/dev/sda\"", "./openshift-baremetal-install --dir ~/clusterconfigs create manifests", "INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated", "./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster", "tail -f /path/to/install-dir/.openshift_install.log" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/installing/installing-ibm-cloud-bare-metal-classic
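The "Configuring the public subnet" procedure in the record above collects one public MAC/IP pair per node and pastes it into /var/lib/dnsmasq/dnsmasq.hostsfile by hand. The following sketch automates the collection step with the same ibmcloud sl hardware detail ... --output JSON call, parsing the output with Python's json module instead of jq. The hardware IDs and node names are placeholders, and the script only prints candidate lines; as in the manual step, keep the line with the public (baremetal) address for each node.

```python
# Sketch: print dnsmasq.hostsfile candidates ("<mac>,<ip>,<name>") for each
# node, using `ibmcloud sl hardware detail <id> --output JSON` as in the
# procedure above. The IDs and names below are placeholders.
import json
import subprocess

NODES = {
    "master-0": "1234567",  # IDs come from `ibmcloud sl hardware list`
    "master-1": "1234568",
    "worker-0": "1234569",
}

for name, hw_id in NODES.items():
    result = subprocess.run(
        ["ibmcloud", "sl", "hardware", "detail", hw_id, "--output", "JSON"],
        check=True, capture_output=True, text=True,
    )
    for nic in json.loads(result.stdout).get("networkComponents", []):
        ip, mac = nic.get("primaryIpAddress"), nic.get("macAddress")
        if ip and mac:  # skip components without an address (the `grep -v null` step)
            print(f"{mac},{ip},{name}")
```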
Chapter 6. Security
Chapter 6. Security 6.1. Connecting with a user and password Red Hat build of Apache Qpid Proton DotNet can authenticate connections with a user and password. To specify the credentials used for authentication, set the User and Password fields on the ConnectionOptions object. Example: Connecting with a user and password ConnectionOptions connectionOptions = new(); connectionOptions.User = "user"; connectionOptions.Password = "pass"; 6.2. Configuring SASL authentication Client connections to remote peers may exchange SASL user name and password credentials. The presence of the user field in the connection URI controls this exchange. If user is specified, then SASL credentials are exchanged; if user is absent, then SASL credentials are not exchanged. Various SASL mechanisms are supported; see the SASL Reference. 6.3. Configuring an SSL/TLS transport Secure communication with servers is achieved using SSL/TLS. A client may be configured for the SSL/TLS handshake only, or for the SSL/TLS handshake plus client certificate authentication. See the Managing Certificates section for more information. Note TLS Server Name Indication (SNI) is handled automatically by the client library. However, SNI is signaled only for addresses that use the amqps transport scheme where the host is a fully qualified domain name or a host name. SNI is not signaled when the host is a numeric IP address.
[ "ConnectionOptions connectionOptions = new(); connectionOptions.User = \"user\"; connectionOptions.Password = \"pass\";" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_qpid_proton_dotnet/1.0/html/using_qpid_proton_dotnet/security
Chapter 242. Netty4 Component
Chapter 242. Netty4 Component Available as of Camel version 2.14 The netty4 component in Camel is a socket communication component, based on the Netty project version 4. Netty is a NIO client server framework which enables quick and easy development of network applications such as protocol servers and clients. Netty greatly simplifies and streamlines network programming such as TCP and UDP socket servers. This Camel component supports both producer and consumer endpoints. The Netty component has several options and allows fine-grained control of a number of TCP/UDP communication parameters (buffer sizes, keepAlives, tcpNoDelay, etc) and facilitates both In-Only and In-Out communication on a Camel route. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-netty4</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 242.1. URI format The URI scheme for a netty component is as follows: netty4:tcp://0.0.0.0:99999[?options] netty4:udp://remotehost:99999/[?options] This component supports producer and consumer endpoints for both TCP and UDP. You can append query options to the URI in the following format: ?option=value&option=value&... 242.2. Options The Netty4 component supports 6 options, which are listed below. Name Description Default Type maximumPoolSize (advanced) The thread pool size for the EventExecutorGroup if it is in use. The default value is 16. 16 int configuration (advanced) To use the NettyConfiguration as configuration when creating endpoints. NettyConfiguration executorService (advanced) To use the given EventExecutorGroup. EventExecutorGroup useGlobalSslContext Parameters (security) Enable usage of global SSL context parameters. false boolean sslContextParameters (security) To configure security using SSLContextParameters SSLContextParameters resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Netty4 endpoint is configured using URI syntax: with the following path and query parameters: 242.2.1. Path Parameters (3 parameters): Name Description Default Type protocol Required The protocol to use which can be tcp or udp. String host Required The hostname. For the consumer the hostname is localhost or 0.0.0.0. For the producer the hostname is the remote host to connect to String port Required The host port number int 242.2.2. Query Parameters (72 parameters): Name Description Default Type disconnect (common) Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer. false boolean keepAlive (common) Setting to ensure socket is not closed due to inactivity true boolean reuseAddress (common) Setting to facilitate socket multiplexing true boolean reuseChannel (common) This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. When using this, the channel is not returned to the connection pool until the Exchange is done; or disconnected if the disconnect option is set to true.
The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY_CHANNEL which allows you to obtain the channel during routing and use it as well. false boolean sync (common) Setting to set endpoint as one-way or request-response true boolean tcpNoDelay (common) Setting to improve TCP protocol performance true boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean broadcast (consumer) Setting to choose Multicast over UDP false boolean clientMode (consumer) If the clientMode is true, netty consumer will connect the address as a TCP client. false boolean reconnect (consumer) Used only in clientMode in consumer, the consumer will attempt to reconnect on disconnection if this is enabled true boolean reconnectInterval (consumer) Used if reconnect and clientMode is enabled. The interval in milli seconds to attempt reconnection 10000 int backlog (consumer) Allows to configure a backlog for netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000, tells the TCP stack how long the accept queue can be If this option is not configured, then the backlog depends on OS setting. int bossCount (consumer) When netty works on nio mode, it uses default bossCount parameter from Netty, which is 1. User can use this operation to override the default bossCount from Netty 1 int bossGroup (consumer) Set the BossGroup which could be used for handling the new connection of the server side across the NettyEndpoint EventLoopGroup disconnectOnNoReply (consumer) If sync is enabled then this option dictates NettyConsumer if it should disconnect where there is no reply to send back. true boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern nettyServerBootstrapFactory (consumer) To use a custom NettyServerBootstrapFactory NettyServerBootstrap Factory networkInterface (consumer) When using UDP then this option can be used to specify a network interface by its name, such as eth0 to join a multicast group. String noReplyLogLevel (consumer) If sync is enabled this option dictates NettyConsumer which logging level to use when logging a there is no reply to send back. WARN LoggingLevel serverClosedChannel ExceptionCaughtLogLevel (consumer) If the server (NettyConsumer) catches an java.nio.channels.ClosedChannelException then its logged using this logging level. This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server. DEBUG LoggingLevel serverExceptionCaughtLog Level (consumer) If the server (NettyConsumer) catches an exception then its logged using this logging level. 
WARN LoggingLevel serverInitializerFactory (consumer) To use a custom ServerInitializerFactory ServerInitializer Factory usingExecutorService (consumer) Whether to use ordered thread pool, to ensure events are processed orderly on the same channel. true boolean connectTimeout (producer) Time to wait for a socket connection to be available. Value is in milliseconds. 10000 int requestTimeout (producer) Allows to use a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milli seconds, so eg 30000 is 30 seconds. The requestTimeout is using Netty's ReadTimeoutHandler to trigger the timeout. long clientInitializerFactory (producer) To use a custom ClientInitializerFactory ClientInitializer Factory correlationManager (producer) To use a custom correlation manager to manage how request and reply messages are mapped when using request/reply with the netty producer. This should only be used if you have a way to map requests together with replies such as if there is correlation ids in both the request and reply messages. This can be used if you want to multiplex concurrent messages on the same channel (aka connection) in netty. When doing this you must have a way to correlate the request and reply messages so you can store the right reply on the inflight Camel Exchange before its continued routed. We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeout and other complexities you otherwise would need to implement as well. See also the producerPoolEnabled option for more details. NettyCamelState CorrelationManager lazyChannelCreation (producer) Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started. true boolean producerPoolEnabled (producer) Whether producer pool is enabled or not. Important: If you turn this off then a single shared connection is used for the producer, also if you are doing request/reply. That means there is a potential issue with interleaved responses if replies comes back out-of-order. Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continue processing the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as correlation manager and configure it via the correlationManager option. See also the correlationManager option for more details. true boolean producerPoolMaxActive (producer) Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit. -1 int producerPoolMaxIdle (producer) Sets the cap on the number of idle instances in the pool. 100 int producerPoolMinEvictable Idle (producer) Sets the minimum amount of time (value in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor. 300000 long producerPoolMinIdle (producer) Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects. int udpConnectionlessSending (producer) This option supports connection less udp sending which is a real fire and forget. A connected udp send receive the PortUnreachableException if no one is listen on the receiving port. 
false boolean useByteBuf (producer) If the useByteBuf is true, netty producer will turn the message body into ByteBuf before sending it out. false boolean allowSerializedHeaders (advanced) Only used for TCP when transferExchange is true. When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level. false boolean bootstrapConfiguration (advanced) To use a custom configured NettyServerBootstrapConfiguration for configuring this endpoint. NettyServerBootstrap Configuration channelGroup (advanced) To use a explicit ChannelGroup. ChannelGroup nativeTransport (advanced) Whether to use native transport instead of NIO. Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. See more details at: http://netty.io/wiki/native-transports.html false boolean options (advanced) Allows to configure additional netty options using option. as prefix. For example option.child.keepAlive=false to set the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used. Map receiveBufferSize (advanced) The TCP/UDP buffer sizes to be used during inbound communication. Size is bytes. 65536 int receiveBufferSizePredictor (advanced) Configures the buffer size predictor. See details at Jetty documentation and this mail thread. int sendBufferSize (advanced) The TCP/UDP buffer sizes to be used during outbound communication. Size is bytes. 65536 int synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean transferExchange (advanced) Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean udpByteArrayCodec (advanced) For UDP only. If enabled the using byte array codec instead of Java serialization protocol. false boolean workerCount (advanced) When netty works on nio mode, it uses default workerCount parameter from Netty, which is cpu_core_threads x 2. User can use this operation to override the default workerCount from Netty. int workerGroup (advanced) To use a explicit EventLoopGroup as the boss thread pool. For example to share a thread pool with multiple consumers or producers. By default each consumer or producer has their own worker pool with 2 x cpu count core threads. EventLoopGroup allowDefaultCodec (codec) The netty component installs a default codec if both, encoder/decoder is null and textline is false. Setting allowDefaultCodec to false prevents the netty component from installing a default codec as the first element in the filter chain. true boolean autoAppendDelimiter (codec) Whether or not to auto append missing end delimiter when sending using the textline codec. true boolean decoder (codec) Deprecated A custom ChannelHandler class that can be used to perform special marshalling of inbound payloads. ChannelHandler decoderMaxLineLength (codec) The max line length to use for the textline codec. 1024 int decoders (codec) A list of decoders to be used. 
You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. String delimiter (codec) The delimiter to use for the textline codec. Possible values are LINE and NULL. LINE TextLineDelimiter encoder (codec) Deprecated A custom ChannelHandler class that can be used to perform special marshalling of outbound payloads. ChannelHandler encoders (codec) A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. String encoding (codec) The encoding (a charset name) to use for the textline codec. If not provided, Camel will use the JVM default Charset. String textline (codec) Only used for TCP. If no codec is specified, you can use this flag to indicate a text line based codec; if not specified or the value is false, then Object Serialization is assumed over TCP. false boolean enabledProtocols (security) Which protocols to enable when using SSL TLSv1,TLSv1.1,TLSv1.2 String keyStoreFile (security) Client side certificate keystore to be used for encryption File keyStoreFormat (security) Keystore format to be used for payload encryption. Defaults to JKS if not set String keyStoreResource (security) Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String needClientAuth (security) Configures whether the server needs client authentication when using SSL. false boolean passphrase (security) Password setting to use in order to encrypt/decrypt payloads sent using SSH String securityProvider (security) Security provider to be used for payload encryption. Defaults to SunX509 if not set. String ssl (security) Setting to specify whether SSL encryption is applied to this endpoint false boolean sslClientCertHeaders (security) When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range. false boolean sslContextParameters (security) To configure security using SSLContextParameters SSLContextParameters sslHandler (security) Reference to a class that could be used to return an SSL Handler SslHandler trustStoreFile (security) Server side certificate keystore to be used for encryption File trustStoreResource (security) Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String 242.3. Spring Boot Auto-Configuration The component supports 78 options, which are listed below. Name Description Default Type camel.component.netty4.configuration.allow-default-codec The netty component installs a default codec if both, encoder/decoder is null and textline is false. Setting allowDefaultCodec to false prevents the netty component from installing a default codec as the first element in the filter chain. true Boolean camel.component.netty4.configuration.allow-serialized-headers Only used for TCP when transferExchange is true. When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level. 
false Boolean camel.component.netty4.configuration.auto-append-delimiter Whether or not to auto append missing end delimiter when sending using the textline codec. true Boolean camel.component.netty4.configuration.backlog Allows to configure a backlog for netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000, tells the TCP stack how long the accept queue can be If this option is not configured, then the backlog depends on OS setting. Integer camel.component.netty4.configuration.boss-count When netty works on nio mode, it uses default bossCount parameter from Netty, which is 1. User can use this operation to override the default bossCount from Netty 1 Integer camel.component.netty4.configuration.boss-group Set the BossGroup which could be used for handling the new connection of the server side across the NettyEndpoint EventLoopGroup camel.component.netty4.configuration.broadcast Setting to choose Multicast over UDP false Boolean camel.component.netty4.configuration.channel-group To use a explicit ChannelGroup. ChannelGroup camel.component.netty4.configuration.client-initializer-factory To use a custom ClientInitializerFactory ClientInitializer Factory camel.component.netty4.configuration.client-mode If the clientMode is true, netty consumer will connect the address as a TCP client. false Boolean camel.component.netty4.configuration.connect-timeout Time to wait for a socket connection to be available. Value is in milliseconds. 10000 Integer camel.component.netty4.configuration.correlation-manager To use a custom correlation manager to manage how request and reply messages are mapped when using request/reply with the netty producer. This should only be used if you have a way to map requests together with replies such as if there is correlation ids in both the request and reply messages. This can be used if you want to multiplex concurrent messages on the same channel (aka connection) in netty. When doing this you must have a way to correlate the request and reply messages so you can store the right reply on the inflight Camel Exchange before its continued routed. We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeout and other complexities you otherwise would need to implement as well. See also the producerPoolEnabled option for more details. NettyCamelState CorrelationManager camel.component.netty4.configuration.decoder-max-line-length The max line length to use for the textline codec. 1024 Integer camel.component.netty4.configuration.decoders A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. List camel.component.netty4.configuration.delimiter The delimiter to use for the textline codec. Possible values are LINE and NULL. TextLineDelimiter camel.component.netty4.configuration.disconnect Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer. false Boolean camel.component.netty4.configuration.disconnect-on-no-reply If sync is enabled then this option dictates NettyConsumer if it should disconnect where there is no reply to send back. 
true Boolean camel.component.netty4.configuration.enabled-protocols Which protocols to enable when using SSL TLSv1,TLSv1.1,TLSv1.2 String camel.component.netty4.configuration.encoders A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. List camel.component.netty4.configuration.encoding The encoding (a charset name) to use for the textline codec. If not provided, Camel will use the JVM default Charset. String camel.component.netty4.configuration.host The hostname. For the consumer the hostname is localhost or 0.0.0.0. For the producer the hostname is the remote host to connect to String camel.component.netty4.configuration.keep-alive Setting to ensure socket is not closed due to inactivity true Boolean camel.component.netty4.configuration.key-store-format Keystore format to be used for payload encryption. Defaults to JKS if not set String camel.component.netty4.configuration.key-store-resource Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String camel.component.netty4.configuration.lazy-channel-creation Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started. true Boolean camel.component.netty4.configuration.native-transport Whether to use native transport instead of NIO. Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. See more details at: http://netty.io/wiki/native-transports.html false Boolean camel.component.netty4.configuration.need-client-auth Configures whether the server needs client authentication when using SSL. false Boolean camel.component.netty4.configuration.netty-server-bootstrap-factory To use a custom NettyServerBootstrapFactory NettyServerBootstrap Factory camel.component.netty4.configuration.network-interface When using UDP then this option can be used to specify a network interface by its name, such as eth0 to join a multicast group. String camel.component.netty4.configuration.no-reply-log-level If sync is enabled this option dictates NettyConsumer which logging level to use when logging a there is no reply to send back. LoggingLevel camel.component.netty4.configuration.options Allows to configure additional netty options using option. as prefix. For example option.child.keepAlive=false to set the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used. Map camel.component.netty4.configuration.passphrase Password setting to use in order to encrypt/decrypt payloads sent using SSH String camel.component.netty4.configuration.port The host port number Integer camel.component.netty4.configuration.producer-pool-enabled Whether producer pool is enabled or not. Important: If you turn this off then a single shared connection is used for the producer, also if you are doing request/reply. That means there is a potential issue with interleaved responses if replies comes back out-of-order. Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continue processing the message in Camel. 
To do this you need to implement NettyCamelStateCorrelationManager as correlation manager and configure it via the correlationManager option. See also the correlationManager option for more details. true Boolean camel.component.netty4.configuration.producer-pool-max-active Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit. -1 Integer camel.component.netty4.configuration.producer-pool-max-idle Sets the cap on the number of idle instances in the pool. 100 Integer camel.component.netty4.configuration.producer-pool-min-evictable-idle Sets the minimum amount of time (value in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor. 300000 Long camel.component.netty4.configuration.producer-pool-min-idle Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects. Integer camel.component.netty4.configuration.protocol The protocol to use which can be tcp or udp. String camel.component.netty4.configuration.receive-buffer-size The TCP/UDP buffer sizes to be used during inbound communication. Size is bytes. 65536 Integer camel.component.netty4.configuration.receive-buffer-size-predictor Configures the buffer size predictor. See details at Jetty documentation and this mail thread. Integer camel.component.netty4.configuration.reconnect Used only in clientMode in consumer, the consumer will attempt to reconnect on disconnection if this is enabled true Boolean camel.component.netty4.configuration.reconnect-interval Used if reconnect and clientMode is enabled. The interval in milli seconds to attempt reconnection 10000 Integer camel.component.netty4.configuration.request-timeout Allows to use a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milli seconds, so eg 30000 is 30 seconds. The requestTimeout is using Netty's ReadTimeoutHandler to trigger the timeout. Long camel.component.netty4.configuration.reuse-address Setting to facilitate socket multiplexing true Boolean camel.component.netty4.configuration.reuse-channel This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. When using this, the channel is not returned to the connection pool until the Exchange is done; or disconnected if the disconnect option is set to true. The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY_CHANNEL which allows you to obtain the channel during routing and use it as well. false Boolean camel.component.netty4.configuration.security-provider Security provider to be used for payload encryption. Defaults to SunX509 if not set. String camel.component.netty4.configuration.send-buffer-size The TCP/UDP buffer sizes to be used during outbound communication. Size is bytes. 65536 Integer camel.component.netty4.configuration.server-closed-channel-exception-caught-log-level If the server (NettyConsumer) catches an java.nio.channels.ClosedChannelException then its logged using this logging level. This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server. 
LoggingLevel camel.component.netty4.configuration.server-exception-caught-log-level If the server (NettyConsumer) catches an exception then its logged using this logging level. LoggingLevel camel.component.netty4.configuration.server-initializer-factory To use a custom ServerInitializerFactory ServerInitializer Factory camel.component.netty4.configuration.ssl Setting to specify whether SSL encryption is applied to this endpoint false Boolean camel.component.netty4.configuration.ssl-client-cert-headers When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range. false Boolean camel.component.netty4.configuration.ssl-context-parameters To configure security using SSLContextParameters SSLContextParameters camel.component.netty4.configuration.ssl-handler Reference to a class that could be used to return an SSL Handler SslHandler camel.component.netty4.configuration.sync Setting to set endpoint as one-way or request-response true Boolean camel.component.netty4.configuration.tcp-no-delay Setting to improve TCP protocol performance true Boolean camel.component.netty4.configuration.textline Only used for TCP. If no codec is specified, you can use this flag to indicate a text line based codec; if not specified or the value is false, then Object Serialization is assumed over TCP. false Boolean camel.component.netty4.configuration.transfer-exchange Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false Boolean camel.component.netty4.configuration.trust-store-resource Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String camel.component.netty4.configuration.udp-byte-array-codec For UDP only. If enabled the using byte array codec instead of Java serialization protocol. false Boolean camel.component.netty4.configuration.udp-connectionless-sending This option supports connection less udp sending which is a real fire and forget. A connected udp send receive the PortUnreachableException if no one is listen on the receiving port. false Boolean camel.component.netty4.configuration.use-byte-buf If the useByteBuf is true, netty producer will turn the message body into ByteBuf before sending it out. false Boolean camel.component.netty4.configuration.using-executor-service Whether to use ordered thread pool, to ensure events are processed orderly on the same channel. true Boolean camel.component.netty4.configuration.worker-count When netty works on nio mode, it uses default workerCount parameter from Netty, which is cpu_core_threads x 2. User can use this operation to override the default workerCount from Netty. Integer camel.component.netty4.configuration.worker-group To use a explicit EventLoopGroup as the boss thread pool. For example to share a thread pool with multiple consumers or producers. By default each consumer or producer has their own worker pool with 2 x cpu count core threads. 
EventLoopGroup camel.component.netty4.enabled Enable netty4 component true Boolean camel.component.netty4.executor-service To use the given EventExecutorGroup. The option is a io.netty.util.concurrent.EventExecutorGroup type. String camel.component.netty4.maximum-pool-size The thread pool size for the EventExecutorGroup if its in use. The default value is 16. 16 Integer camel.component.netty4.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.netty4.ssl-context-parameters To configure security using SSLContextParameters. The option is a org.apache.camel.util.jsse.SSLContextParameters type. String camel.component.netty4.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean camel.component.netty4.configuration.client-pipeline-factory @deprecated use #setClientInitializerFactory ClientInitializer Factory camel.component.netty4.configuration.decoder A custom ChannelHandler class that can be used to perform special marshalling of inbound payloads. ChannelHandler camel.component.netty4.configuration.encoder A custom ChannelHandler class that can be used to perform special marshalling of outbound payloads. ChannelHandler camel.component.netty4.configuration.key-store-file Client side certificate keystore to be used for encryption File camel.component.netty4.configuration.server-pipeline-factory @deprecated use #setServerInitializerFactory ServerInitializer Factory camel.component.netty4.configuration.trust-store-file Server side certificate keystore to be used for encryption File 242.4. Registry based Options Codec Handlers and SSL Keystores can be enlisted in the Registry, such as in the Spring XML file. The values that could be passed in, are the following: Name Description passphrase password setting to use in order to encrypt/decrypt payloads sent using SSH keyStoreFormat keystore format to be used for payload encryption. Defaults to "JKS" if not set securityProvider Security provider to be used for payload encryption. Defaults to "SunX509" if not set. keyStoreFile deprecated: Client side certificate keystore to be used for encryption trustStoreFile deprecated: Server side certificate keystore to be used for encryption keyStoreResource Camel 2.11.1: Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with "classpath:" , "file:" , or "http:" to load the resource from different systems. trustStoreResource Camel 2.11.1: Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with "classpath:" , "file:" , or "http:" to load the resource from different systems. sslHandler Reference to a class that could be used to return an SSL Handler encoder A custom ChannelHandler class that can be used to perform special marshalling of outbound payloads. Must override io.netty.channel.ChannelInboundHandlerAdapter. encoders A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. decoder A custom ChannelHandler class that can be used to perform special marshalling of inbound payloads. Must override io.netty.channel.ChannelOutboundHandlerAdapter. decoders A list of decoders to be used. 
You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should do a lookup. Note Read below about using non-shareable encoders/decoders. 242.4.1. Using non-shareable encoders or decoders If your encoders or decoders are not shareable (e.g. they don't have the @Shareable class annotation), then your encoder/decoder must implement the org.apache.camel.component.netty.ChannelHandlerFactory interface, and return a new instance in the newChannelHandler method. This is to ensure the encoder/decoder can safely be used. If this is not the case, then the Netty component will log a WARN when an endpoint is created. The Netty component offers a org.apache.camel.component.netty.ChannelHandlerFactories factory class, that has a number of commonly used methods. 242.5. Sending Messages to/from a Netty endpoint 242.5.1. Netty Producer In Producer mode, the component provides the ability to send payloads to a socket endpoint using either TCP or UDP protocols (with optional SSL support). The producer mode supports both one-way and request-response based operations. 242.5.2. Netty Consumer In Consumer mode, the component provides the ability to: listen on a specified socket using either TCP or UDP protocols (with optional SSL support), receive requests on the socket using text/xml, binary and serialized object based payloads and send them along on a route as message exchanges. The consumer mode supports both one-way and request-response based operations. 242.6. Examples 242.6.1. A UDP Netty endpoint using Request-Reply and serialized object payload RouteBuilder builder = new RouteBuilder() { public void configure() { from("netty4:udp://0.0.0.0:5155?sync=true") .process(new Processor() { public void process(Exchange exchange) throws Exception { Poetry poetry = (Poetry) exchange.getIn().getBody(); poetry.setPoet("Dr. Sarojini Naidu"); exchange.getOut().setBody(poetry); } } } }; 242.6.2. A TCP based Netty consumer endpoint using One-way communication RouteBuilder builder = new RouteBuilder() { public void configure() { from("netty4:tcp://0.0.0.0:5150") .to("mock:result"); } }; 242.6.3. An SSL/TCP based Netty consumer endpoint using Request-Reply communication Using the JSSE Configuration Utility As of Camel 2.9, the Netty component supports SSL/TLS configuration through the Camel JSSE Configuration Utility . This utility greatly decreases the amount of component specific code you need to write and is configurable at the endpoint and component levels. The following examples demonstrate how to use the utility with the Netty component. Programmatic configuration of the component KeyStoreParameters ksp = new KeyStoreParameters(); ksp.setResource("/users/home/server/keystore.jks"); ksp.setPassword("keystorePassword"); KeyManagersParameters kmp = new KeyManagersParameters(); kmp.setKeyStore(ksp); kmp.setKeyPassword("keyPassword"); SSLContextParameters scp = new SSLContextParameters(); scp.setKeyManagers(kmp); NettyComponent nettyComponent = getContext().getComponent("netty4", NettyComponent.class); nettyComponent.setSslContextParameters(scp); Spring DSL based configuration of endpoint ... <camel:sslContextParameters id="sslContextParameters"> <camel:keyManagers keyPassword="keyPassword"> <camel:keyStore resource="/users/home/server/keystore.jks" password="keystorePassword"/> </camel:keyManagers> </camel:sslContextParameters>... ... 
<to uri="netty4:tcp://0.0.0.0:5150?sync=true&ssl=true&sslContextParameters=#sslContextParameters"/> ... [[Netty4-UsingBasicSSL/TLSconfigurationontheJettyComponent]] Using Basic SSL/TLS configuration on the Jetty Component JndiRegistry registry = new JndiRegistry(createJndiContext()); registry.bind("password", "changeit"); registry.bind("ksf", new File("src/test/resources/keystore.jks")); registry.bind("tsf", new File("src/test/resources/keystore.jks")); context.createRegistry(registry); context.addRoutes(new RouteBuilder() { public void configure() { String netty_ssl_endpoint = "netty4:tcp://0.0.0.0:5150?sync=true&ssl=true&passphrase=#password" + "&keyStoreFile=#ksf&trustStoreFile=#tsf"; String return_string = "When You Go Home, Tell Them Of Us And Say," + "For Your Tomorrow, We Gave Our Today."; from(netty_ssl_endpoint) .process(new Processor() { public void process(Exchange exchange) throws Exception { exchange.getOut().setBody(return_string); } } } }); Getting access to SSLSession and the client certificate You can get access to the javax.net.ssl.SSLSession if you eg need to get details about the client certificate. When ssl=true then the Netty4 component will store the SSLSession as a header on the Camel Message as shown below: SSLSession session = exchange.getIn().getHeader(NettyConstants.NETTY_SSL_SESSION, SSLSession.class); // get the first certificate which is client certificate javax.security.cert.X509Certificate cert = session.getPeerCertificateChain()[0]; Principal principal = cert.getSubjectDN(); Remember to set needClientAuth=true to authenticate the client, otherwise SSLSession cannot access information about the client certificate, and you may get an exception javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated . You may also get this exception if the client certificate is expired or not valid etc. Tip The option sslClientCertHeaders can be set to true which then enriches the Camel Message with headers having details about the client certificate. For example the subject name is readily available in the header CamelNettySSLClientCertSubjectName . 242.6.4. Using Multiple Codecs In certain cases it may be necessary to add chains of encoders and decoders to the netty pipeline. To add multiple codecs to a camel netty endpoint the 'encoders' and 'decoders' uri parameters should be used. Like the 'encoder' and 'decoder' parameters they are used to supply references (lists of ChannelUpstreamHandlers and ChannelDownstreamHandlers) that should be added to the pipeline. Note that if encoders is specified then the encoder param will be ignored, similarly for decoders and the decoder param. Note Read further above about using non-shareable encoders/decoders. The lists of codecs need to be added to the Camel's registry so they can be resolved when the endpoint is created. 
ChannelHandlerFactory lengthDecoder = ChannelHandlerFactories.newLengthFieldBasedFrameDecoder(1048576, 0, 4, 0, 4); StringDecoder stringDecoder = new StringDecoder(); registry.bind("length-decoder", lengthDecoder); registry.bind("string-decoder", stringDecoder); LengthFieldPrepender lengthEncoder = new LengthFieldPrepender(4); StringEncoder stringEncoder = new StringEncoder(); registry.bind("length-encoder", lengthEncoder); registry.bind("string-encoder", stringEncoder); List<ChannelHandler> decoders = new ArrayList<ChannelHandler>(); decoders.add(lengthDecoder); decoders.add(stringDecoder); List<ChannelHandler> encoders = new ArrayList<ChannelHandler>(); encoders.add(lengthEncoder); encoders.add(stringEncoder); registry.bind("encoders", encoders); registry.bind("decoders", decoders); Spring's native collections support can be used to specify the codec lists in an application context <util:list id="decoders" list-class="java.util.LinkedList"> <bean class="org.apache.camel.component.netty4.ChannelHandlerFactories" factory-method="newLengthFieldBasedFrameDecoder"> <constructor-arg value="1048576"/> <constructor-arg value="0"/> <constructor-arg value="4"/> <constructor-arg value="0"/> <constructor-arg value="4"/> </bean> <bean class="io.netty.handler.codec.string.StringDecoder"/> </util:list> <util:list id="encoders" list-class="java.util.LinkedList"> <bean class="io.netty.handler.codec.LengthFieldPrepender"> <constructor-arg value="4"/> </bean> <bean class="io.netty.handler.codec.string.StringEncoder"/> </util:list> <bean id="length-encoder" class="io.netty.handler.codec.LengthFieldPrepender"> <constructor-arg value="4"/> </bean> <bean id="string-encoder" class="io.netty.handler.codec.string.StringEncoder"/> <bean id="length-decoder" class="org.apache.camel.component.netty4.ChannelHandlerFactories" factory-method="newLengthFieldBasedFrameDecoder"> <constructor-arg value="1048576"/> <constructor-arg value="0"/> <constructor-arg value="4"/> <constructor-arg value="0"/> <constructor-arg value="4"/> </bean> <bean id="string-decoder" class="io.netty.handler.codec.string.StringDecoder"/> The bean names can then be used in netty endpoint definitions either as a comma separated list or contained in a List e.g. from("direct:multiple-codec").to("netty4:tcp://0.0.0.0:{{port}}?encoders=#encoders&sync=false"); from("netty4:tcp://0.0.0.0:{{port}}?decoders=#length-decoder,#string-decoder&sync=false").to("mock:multiple-codec"); or via XML. <camelContext id="multiple-netty-codecs-context" xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:multiple-codec"/> <to uri="netty4:tcp://0.0.0.0:5150?encoders=#encoders&amp;sync=false"/> </route> <route> <from uri="netty4:tcp://0.0.0.0:5150?decoders=#length-decoder,#string-decoder&amp;sync=false"/> <to uri="mock:multiple-codec"/> </route> </camelContext> 242.7. Closing Channel When Complete When acting as a server you sometimes want to close the channel when, for example, a client conversion is finished. You can do this by simply setting the endpoint option disconnect=true . However you can also instruct Camel on a per message basis as follows. To instruct Camel to close the channel, you should add a header with the key CamelNettyCloseChannelWhenComplete set to a boolean true value. 
For instance, the example below will close the channel after it has written the bye message back to the client: from("netty4:tcp://0.0.0.0:8080").process(new Processor() { public void process(Exchange exchange) throws Exception { String body = exchange.getIn().getBody(String.class); exchange.getOut().setBody("Bye " + body); // some condition which determines if we should close if (close) { exchange.getOut().setHeader(NettyConstants.NETTY_CLOSE_CHANNEL_WHEN_COMPLETE, true); } } }); Adding custom channel pipeline factories to gain complete control over a created pipeline 242.8. Custom pipeline Custom channel pipelines provide complete control to the user over the handler/interceptor chain by inserting custom handler(s), encoder(s) & decoder(s) without having to specify them in the Netty Endpoint URL in a very simple way. In order to add a custom pipeline, a custom channel pipeline factory must be created and registered with the context via the context registry (JNDIRegistry, or the camel-spring ApplicationContextRegistry etc). A custom pipeline factory must be constructed as follows A Producer linked channel pipeline factory must extend the abstract class ClientPipelineFactory . A Consumer linked channel pipeline factory must extend the abstract class ServerInitializerFactory . The classes should override the initChannel() method in order to insert custom handler(s), encoder(s) and decoder(s). Not overriding the initChannel() method creates a pipeline with no handlers, encoders or decoders wired to the pipeline. The example below shows how ServerInitializerFactory factory may be created 242.8.1. Using custom pipeline factory import io.netty.channel.Channel; import io.netty.channel.ChannelPipeline; import io.netty.handler.codec.DelimiterBasedFrameDecoder; import io.netty.handler.codec.Delimiters; import io.netty.handler.codec.string.StringDecoder; import io.netty.handler.codec.string.StringEncoder; import io.netty.util.CharsetUtil; import org.apache.camel.component.netty4.NettyConsumer; import org.apache.camel.component.netty4.ServerInitializerFactory; import org.apache.camel.component.netty4.handlers.ServerChannelHandler; public class SampleServerInitializerFactory extends ServerInitializerFactory { private int maxLineSize = 1024; NettyConsumer consumer; public SampleServerInitializerFactory(NettyConsumer consumer) { this.consumer = consumer; } @Override public ServerInitializerFactory createPipelineFactory(NettyConsumer consumer) { return new SampleServerInitializerFactory(consumer); } @Override protected void initChannel(Channel channel) throws Exception { ChannelPipeline channelPipeline = channel.pipeline(); channelPipeline.addLast("encoder-SD", new StringEncoder(CharsetUtil.UTF_8)); channelPipeline.addLast("decoder-DELIM", new DelimiterBasedFrameDecoder(maxLineSize, true, Delimiters.lineDelimiter())); channelPipeline.addLast("decoder-SD", new StringDecoder(CharsetUtil.UTF_8)); // here we add the default Camel ServerChannelHandler for the consumer, to allow Camel to route the message etc. 
channelPipeline.addLast("handler", new ServerChannelHandler(consumer)); } } The custom channel pipeline factory can then be added to the registry and instantiated/utilized on a camel route in the following way Registry registry = camelContext.getRegistry(); ServerInitializerFactory factory = new TestServerInitializerFactory(nettyConsumer); registry.bind("spf", factory); context.addRoutes(new RouteBuilder() { public void configure() { String netty_ssl_endpoint = "netty4:tcp://0.0.0.0:5150?serverInitializerFactory=#spf" String return_string = "When You Go Home, Tell Them Of Us And Say," + "For Your Tomorrow, We Gave Our Today."; from(netty_ssl_endpoint) .process(new Processor() { public void process(Exchange exchange) throws Exception { exchange.getOut().setBody(return_string); } } } }); 242.9. Reusing Netty boss and worker thread pools Netty has two kind of thread pools: boss and worker. By default each Netty consumer and producer has their private thread pools. If you want to reuse these thread pools among multiple consumers or producers then the thread pools must be created and enlisted in the Registry. For example using Spring XML we can create a shared worker thread pool using the NettyWorkerPoolBuilder with 2 worker threads as shown below: <!-- use the worker pool builder to help create the shared thread pool --> <bean id="poolBuilder" class="org.apache.camel.component.netty.NettyWorkerPoolBuilder"> <property name="workerCount" value="2"/> </bean> <!-- the shared worker thread pool --> <bean id="sharedPool" class="org.jboss.netty.channel.socket.nio.WorkerPool" factory-bean="poolBuilder" factory-method="build" destroy-method="shutdown"> </bean> Tip For boss thread pool there is a org.apache.camel.component.netty4.NettyServerBossPoolBuilder builder for Netty consumers, and a org.apache.camel.component.netty4.NettyClientBossPoolBuilder for the Netty producers. Then in the Camel routes we can refer to this worker pools by configuring the workerPool option in the URI as shown below: <route> <from uri="netty4:tcp://0.0.0.0:5021?textline=true&amp;sync=true&amp;workerPool=#sharedPool&amp;usingExecutorService=false"/> <to uri="log:result"/> ... </route> And if we have another route we can refer to the shared worker pool: <route> <from uri="netty4:tcp://0.0.0.0:5022?textline=true&amp;sync=true&amp;workerPool=#sharedPool&amp;usingExecutorService=false"/> <to uri="log:result"/> ... </route> and so forth. 242.10. Multiplexing concurrent messages over a single connection with request/reply When using Netty for request/reply messaging via the netty producer then by default each message is sent via a non-shared connection (pooled). This ensures that replies are automatic being able to map to the correct request thread for further routing in Camel. In other words correlation between request/reply messages happens out-of-the-box because the replies comes back on the same connection that was used for sending the request; and this connection is not shared with others. When the response comes back, the connection is returned back to the connection pool, where it can be reused by others. However if you want to multiplex concurrent request/responses on a single shared connection, then you need to turn off the connection pooling by setting producerPoolEnabled=false . Now this means there is a potential issue with interleaved responses if replies comes back out-of-order. 
Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continuing to process the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as the correlation manager and configure it via the correlationManager=#myManager option. Note We recommend extending TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeouts and other complexities you would otherwise need to implement yourself. You can find an example with the Apache Camel source code in the examples directory under the camel-example-netty-custom-correlation directory. A minimal wiring sketch is shown below.
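The following is a minimal wiring sketch, not the implementation from the Camel example project: MyCorrelationManager is a hypothetical user class that implements NettyCamelStateCorrelationManager (for example by extending TimeoutCorrelationManagerSupport as recommended above), and the registry name, host, port, and route endpoint are placeholder values. As in the other route snippets in this chapter, the code is assumed to live inside a RouteBuilder configure() method.

// Assumes a bean named "myManager" is bound in the Camel registry, for example:
//   registry.bind("myManager", new MyCorrelationManager());
// where MyCorrelationManager is a custom class implementing
// org.apache.camel.component.netty4.NettyCamelStateCorrelationManager,
// for example by extending TimeoutCorrelationManagerSupport.
from("direct:request")
    .to("netty4:tcp://localhost:5150?sync=true"
        + "&producerPoolEnabled=false"        // multiplex request/reply on a single shared connection
        + "&correlationManager=#myManager");  // custom manager maps each reply back to its request

242.11. See Also Netty HTTP MINA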
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-netty4</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "netty4:tcp://0.0.0.0:99999[?options] netty4:udp://remotehost:99999/[?options]", "netty4:protocol:host:port", "RouteBuilder builder = new RouteBuilder() { public void configure() { from(\"netty4:udp://0.0.0.0:5155?sync=true\") .process(new Processor() { public void process(Exchange exchange) throws Exception { Poetry poetry = (Poetry) exchange.getIn().getBody(); poetry.setPoet(\"Dr. Sarojini Naidu\"); exchange.getOut().setBody(poetry); } } } };", "RouteBuilder builder = new RouteBuilder() { public void configure() { from(\"netty4:tcp://0.0.0.0:5150\") .to(\"mock:result\"); } };", "KeyStoreParameters ksp = new KeyStoreParameters(); ksp.setResource(\"/users/home/server/keystore.jks\"); ksp.setPassword(\"keystorePassword\"); KeyManagersParameters kmp = new KeyManagersParameters(); kmp.setKeyStore(ksp); kmp.setKeyPassword(\"keyPassword\"); SSLContextParameters scp = new SSLContextParameters(); scp.setKeyManagers(kmp); NettyComponent nettyComponent = getContext().getComponent(\"netty4\", NettyComponent.class); nettyComponent.setSslContextParameters(scp);", "<camel:sslContextParameters id=\"sslContextParameters\"> <camel:keyManagers keyPassword=\"keyPassword\"> <camel:keyStore resource=\"/users/home/server/keystore.jks\" password=\"keystorePassword\"/> </camel:keyManagers> </camel:sslContextParameters> <to uri=\"netty4:tcp://0.0.0.0:5150?sync=true&ssl=true&sslContextParameters=#sslContextParameters\"/>", "JndiRegistry registry = new JndiRegistry(createJndiContext()); registry.bind(\"password\", \"changeit\"); registry.bind(\"ksf\", new File(\"src/test/resources/keystore.jks\")); registry.bind(\"tsf\", new File(\"src/test/resources/keystore.jks\")); context.createRegistry(registry); context.addRoutes(new RouteBuilder() { public void configure() { String netty_ssl_endpoint = \"netty4:tcp://0.0.0.0:5150?sync=true&ssl=true&passphrase=#password\" + \"&keyStoreFile=#ksf&trustStoreFile=#tsf\"; String return_string = \"When You Go Home, Tell Them Of Us And Say,\" + \"For Your Tomorrow, We Gave Our Today.\"; from(netty_ssl_endpoint) .process(new Processor() { public void process(Exchange exchange) throws Exception { exchange.getOut().setBody(return_string); } } } });", "SSLSession session = exchange.getIn().getHeader(NettyConstants.NETTY_SSL_SESSION, SSLSession.class); // get the first certificate which is client certificate javax.security.cert.X509Certificate cert = session.getPeerCertificateChain()[0]; Principal principal = cert.getSubjectDN();", "ChannelHandlerFactory lengthDecoder = ChannelHandlerFactories.newLengthFieldBasedFrameDecoder(1048576, 0, 4, 0, 4); StringDecoder stringDecoder = new StringDecoder(); registry.bind(\"length-decoder\", lengthDecoder); registry.bind(\"string-decoder\", stringDecoder); LengthFieldPrepender lengthEncoder = new LengthFieldPrepender(4); StringEncoder stringEncoder = new StringEncoder(); registry.bind(\"length-encoder\", lengthEncoder); registry.bind(\"string-encoder\", stringEncoder); List<ChannelHandler> decoders = new ArrayList<ChannelHandler>(); decoders.add(lengthDecoder); decoders.add(stringDecoder); List<ChannelHandler> encoders = new ArrayList<ChannelHandler>(); encoders.add(lengthEncoder); encoders.add(stringEncoder); registry.bind(\"encoders\", encoders); registry.bind(\"decoders\", decoders);", "<util:list id=\"decoders\" list-class=\"java.util.LinkedList\"> <bean 
class=\"org.apache.camel.component.netty4.ChannelHandlerFactories\" factory-method=\"newLengthFieldBasedFrameDecoder\"> <constructor-arg value=\"1048576\"/> <constructor-arg value=\"0\"/> <constructor-arg value=\"4\"/> <constructor-arg value=\"0\"/> <constructor-arg value=\"4\"/> </bean> <bean class=\"io.netty.handler.codec.string.StringDecoder\"/> </util:list> <util:list id=\"encoders\" list-class=\"java.util.LinkedList\"> <bean class=\"io.netty.handler.codec.LengthFieldPrepender\"> <constructor-arg value=\"4\"/> </bean> <bean class=\"io.netty.handler.codec.string.StringEncoder\"/> </util:list> <bean id=\"length-encoder\" class=\"io.netty.handler.codec.LengthFieldPrepender\"> <constructor-arg value=\"4\"/> </bean> <bean id=\"string-encoder\" class=\"io.netty.handler.codec.string.StringEncoder\"/> <bean id=\"length-decoder\" class=\"org.apache.camel.component.netty4.ChannelHandlerFactories\" factory-method=\"newLengthFieldBasedFrameDecoder\"> <constructor-arg value=\"1048576\"/> <constructor-arg value=\"0\"/> <constructor-arg value=\"4\"/> <constructor-arg value=\"0\"/> <constructor-arg value=\"4\"/> </bean> <bean id=\"string-decoder\" class=\"io.netty.handler.codec.string.StringDecoder\"/>", "from(\"direct:multiple-codec\").to(\"netty4:tcp://0.0.0.0:{{port}}?encoders=#encoders&sync=false\"); from(\"netty4:tcp://0.0.0.0:{{port}}?decoders=#length-decoder,#string-decoder&sync=false\").to(\"mock:multiple-codec\");", "<camelContext id=\"multiple-netty-codecs-context\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:multiple-codec\"/> <to uri=\"netty4:tcp://0.0.0.0:5150?encoders=#encoders&amp;sync=false\"/> </route> <route> <from uri=\"netty4:tcp://0.0.0.0:5150?decoders=#length-decoder,#string-decoder&amp;sync=false\"/> <to uri=\"mock:multiple-codec\"/> </route> </camelContext>", "from(\"netty4:tcp://0.0.0.0:8080\").process(new Processor() { public void process(Exchange exchange) throws Exception { String body = exchange.getIn().getBody(String.class); exchange.getOut().setBody(\"Bye \" + body); // some condition which determines if we should close if (close) { exchange.getOut().setHeader(NettyConstants.NETTY_CLOSE_CHANNEL_WHEN_COMPLETE, true); } } });", "import io.netty.channel.Channel; import io.netty.channel.ChannelPipeline; import io.netty.handler.codec.DelimiterBasedFrameDecoder; import io.netty.handler.codec.Delimiters; import io.netty.handler.codec.string.StringDecoder; import io.netty.handler.codec.string.StringEncoder; import io.netty.util.CharsetUtil; import org.apache.camel.component.netty4.NettyConsumer; import org.apache.camel.component.netty4.ServerInitializerFactory; import org.apache.camel.component.netty4.handlers.ServerChannelHandler; public class SampleServerInitializerFactory extends ServerInitializerFactory { private int maxLineSize = 1024; NettyConsumer consumer; public SampleServerInitializerFactory(NettyConsumer consumer) { this.consumer = consumer; } @Override public ServerInitializerFactory createPipelineFactory(NettyConsumer consumer) { return new SampleServerInitializerFactory(consumer); } @Override protected void initChannel(Channel channel) throws Exception { ChannelPipeline channelPipeline = channel.pipeline(); channelPipeline.addLast(\"encoder-SD\", new StringEncoder(CharsetUtil.UTF_8)); channelPipeline.addLast(\"decoder-DELIM\", new DelimiterBasedFrameDecoder(maxLineSize, true, Delimiters.lineDelimiter())); channelPipeline.addLast(\"decoder-SD\", new StringDecoder(CharsetUtil.UTF_8)); // here we add the default Camel 
ServerChannelHandler for the consumer, to allow Camel to route the message etc. channelPipeline.addLast(\"handler\", new ServerChannelHandler(consumer)); } }", "Registry registry = camelContext.getRegistry(); ServerInitializerFactory factory = new TestServerInitializerFactory(nettyConsumer); registry.bind(\"spf\", factory); context.addRoutes(new RouteBuilder() { public void configure() { String netty_ssl_endpoint = \"netty4:tcp://0.0.0.0:5150?serverInitializerFactory=#spf\" String return_string = \"When You Go Home, Tell Them Of Us And Say,\" + \"For Your Tomorrow, We Gave Our Today.\"; from(netty_ssl_endpoint) .process(new Processor() { public void process(Exchange exchange) throws Exception { exchange.getOut().setBody(return_string); } } } });", "<!-- use the worker pool builder to help create the shared thread pool --> <bean id=\"poolBuilder\" class=\"org.apache.camel.component.netty.NettyWorkerPoolBuilder\"> <property name=\"workerCount\" value=\"2\"/> </bean> <!-- the shared worker thread pool --> <bean id=\"sharedPool\" class=\"org.jboss.netty.channel.socket.nio.WorkerPool\" factory-bean=\"poolBuilder\" factory-method=\"build\" destroy-method=\"shutdown\"> </bean>", "<route> <from uri=\"netty4:tcp://0.0.0.0:5021?textline=true&amp;sync=true&amp;workerPool=#sharedPool&amp;usingExecutorService=false\"/> <to uri=\"log:result\"/> </route>", "<route> <from uri=\"netty4:tcp://0.0.0.0:5022?textline=true&amp;sync=true&amp;workerPool=#sharedPool&amp;usingExecutorService=false\"/> <to uri=\"log:result\"/> </route>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/netty4-component
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/net/8.0/html/getting_started_with_.net_on_rhel_8/proc_providing-feedback-on-red-hat-documentation_getting-started-with-dotnet-on-rhel-8
Chapter 7. Brokers page
Chapter 7. Brokers page The Brokers page shows all the brokers created for a Kafka cluster. For each broker, you can see its status, as well as the distribution of partitions across the brokers, including the number of partition leaders and followers. The broker status is shown as one of the following: Not Running The broker has not yet been started or has been explicitly stopped. Starting The broker is initializing and connecting to the cluster, including discovering and joining the metadata quorum. Recovery The broker has joined the cluster but is in recovery mode, replicating necessary data and metadata before becoming fully operational. It is not serving clients. Running The broker is fully operational, registered with the controller, and actively serving client requests. Pending Controlled Shutdown The broker has initiated a controlled shutdown process and will shut down gracefully once complete. Shutting Down The broker is in the process of shutting down. Client connections are being closed, and internal resources are being released. Unknown The broker's state is unknown, possibly due to an unexpected error or failure. If the broker has a rack ID, this is the ID of the rack or datacenter in which the broker resides. Click on the right arrow (>) next to a broker name to see more information about the broker, including its hostname and disk usage. Click on the Rebalance tab to show any rebalances taking place on the cluster. Note Consider rebalancing if the distribution is uneven to ensure efficient resource utilization. 7.1. Managing rebalances When you configure KafkaRebalance resources to generate optimization proposals on a cluster, you can check their status from the Rebalance tab. The Rebalance tab presents a chronological list of KafkaRebalance resources from which you can manage the optimization proposals. Note Cruise Control must be enabled to run alongside the Kafka cluster in order to use the Rebalance tab. For more information on setting up and using Cruise Control to generate proposals, see the Streams for Apache Kafka documentation. Procedure From the Streams for Apache Kafka Console, log in to the Kafka cluster, then click Brokers. Check the information on the Rebalance tab. For each rebalance, you can see its status and a timestamp in UTC. Table 7.1. Rebalance status descriptions Status Description New Resource has not been observed by the operator before PendingProposal Optimization proposal not generated ProposalReady Optimization proposal is ready for approval Rebalancing Rebalance in progress Stopped Rebalance stopped NotReady Error occurred with the rebalance Ready Rebalance complete ReconciliationPaused Rebalance is paused Note The status of the KafkaRebalance resource changes to ReconciliationPaused when the strimzi.io/pause-reconciliation annotation is set to true in its configuration. Click on the right arrow (>) next to a rebalance name to see more information about the rebalance, including its rebalance mode and whether auto-approval is enabled. If the rebalance involved brokers being removed or added, they are also listed. Optimization proposals can be generated in one of three modes: full is the default mode and runs a full rebalance. add-brokers is the mode used after adding brokers when scaling up a Kafka cluster. remove-brokers is the mode used before removing brokers when scaling down a Kafka cluster. If auto-approval is enabled for a proposal, a successfully generated proposal goes straight into a cluster rebalance.
Viewing optimization proposals Click on the name of a KafkaRebalance resource to see a generated optimization proposal. An optimization proposal is a summary of proposed changes that would produce a more balanced Kafka cluster, with partition workloads distributed more evenly among the brokers. For more information on the properties shown on the proposal and what they mean, see the Streams for Apache Kafka documentation. Managing rebalances Select the options icon (three vertical dots) and click on an option to manage a rebalance. Click Approve to approve a proposal. The rebalance outlined in the proposal is performed on the Kafka cluster. Click Refresh to generate a fresh optimization proposal. If there has been a gap between generating a proposal and approving it, refresh the proposal so that the current state of the cluster is taken into account in the rebalance. Click Stop to stop a rebalance. Rebalances can take a long time and may impact the performance of your cluster. Stopping a rebalance can help avoid performance issues and allow you to revert changes if needed. Note The options available depend on the status of the KafkaRebalance resource. For example, it's not possible to approve an optimization proposal if it's not ready.
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_the_streams_for_apache_kafka_console/con-brokers-page-str
Chapter 68. Kubernetes Job
Chapter 68. Kubernetes Job Since Camel 2.23 Both producer and consumer are supported The Kubernetes Job component is one of the Kubernetes components. It provides a producer to execute Kubernetes Job operations and a consumer to consume events related to Job objects. 68.1. Dependencies When using kubernetes-job with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 68.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 68.2.1. Configuring Component Options The component level is the highest level, which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connections, and so forth. Some components only have a few options, and others may have many. Because components typically have preconfigured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code. 68.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type-safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders, which allow you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, giving you more flexibility and reuse. The following two sections list all the options, first for the component and then for the endpoint. 68.3. Component Options The Kubernetes Job component supports 4 options, which are listed below. Name Description Default Type kubernetesClient (common) Autowired To use an existing kubernetes client. KubernetesClient bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.
false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 68.4. Endpoint Options The Kubernetes Job endpoint is configured using URI syntax: kubernetes-job:masterUrl with the following path and query parameters: 68.4.1. Path Parameters (1 parameter) Name Description Default Type masterUrl (common) Required Kubernetes Master url. String 68.4.2. Query Parameters (33 parameters) Name Description Default Type apiVersion (common) The Kubernetes API Version to use. String dnsDomain (common) The dns domain, used for ServiceCall EIP. String kubernetesClient (common) Default KubernetesClient to use if provided. KubernetesClient namespace (common) The namespace. String portName (common) The port name, used for ServiceCall EIP. String portProtocol (common) The port protocol, used for ServiceCall EIP. tcp String crdGroup (consumer) The Consumer CRD Resource Group we would like to watch. String crdName (consumer) The Consumer CRD Resource name we would like to watch. String crdPlural (consumer) The Consumer CRD Resource Plural we would like to watch. String crdScope (consumer) The Consumer CRD Resource Scope we would like to watch. String crdVersion (consumer) The Consumer CRD Resource Version we would like to watch. String labelKey (consumer) The Consumer Label key when watching at some resources. String labelValue (consumer) The Consumer Label value when watching at some resources. String poolSize (consumer) The Consumer pool size. 1 int resourceName (consumer) The Consumer Resource Name we would like to watch. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern operation (producer) Producer operation to do on Kubernetes. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 68.5. Message Headers The Kubernetes Job component supports 5 message headers, which are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesJobName (producer) Constant: KUBERNETES_JOB_NAME The Job name. String CamelKubernetesJobSpec (producer) Constant: KUBERNETES_JOB_SPEC The spec for a Job. JobSpec CamelKubernetesJobLabels (producer) Constant: KUBERNETES_JOB_LABELS The Job labels. Map 68.6. Supported producer operation listJob listJobByLabels getJob createJob updateJob deleteJob A hedged deleteJob sketch is included after the Spring Boot auto-configuration options below. 68.7. Kubernetes Job Producer Examples listJob: this operation lists the jobs on a Kubernetes cluster. from("direct:list"). toF("kubernetes-job:///?kubernetesClient=#kubernetesClient&operation=listJob"). to("mock:result"); This operation returns a list of Jobs from your cluster. listJobByLabels: this operation lists the jobs by labels on a Kubernetes cluster. from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_LABELS, labels); } }); toF("kubernetes-job:///?kubernetesClient=#kubernetesClient&operation=listJobByLabels"). to("mock:result"); This operation returns a list of Jobs from your cluster that match the label selector (here, key1=value1 and key2=value2). createJob: this operation creates a job on a Kubernetes cluster. 
Example (see Create Job example for more information) import java.util.ArrayList; import java.util.Date; import java.util.HashMap; import java.util.List; import java.util.Map; import javax.inject.Inject; import org.apache.camel.Endpoint; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.cdi.Uri; import org.apache.camel.component.kubernetes.KubernetesConstants; import org.apache.camel.component.kubernetes.KubernetesOperations; import io.fabric8.kubernetes.api.model.Container; import io.fabric8.kubernetes.api.model.ObjectMeta; import io.fabric8.kubernetes.api.model.PodSpec; import io.fabric8.kubernetes.api.model.PodTemplateSpec; import io.fabric8.kubernetes.api.model.batch.JobSpec; public class KubernetesCreateJob extends RouteBuilder { @Inject @Uri("timer:foo?delay=1000&repeatCount=1") private Endpoint inputEndpoint; @Inject @Uri("log:output") private Endpoint resultEndpoint; @Override public void configure() { // you can configure the route rule with Java DSL here from(inputEndpoint) .routeId("kubernetes-jobcreate-client") .process(exchange -> { exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_NAME, "camel-job"); //DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*') exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, "default"); Map<String, String> joblabels = new HashMap<String, String>(); joblabels.put("jobLabelKey1", "value1"); joblabels.put("jobLabelKey2", "value2"); joblabels.put("app", "jobFromCamelApp"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_LABELS, joblabels); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_SPEC, generateJobSpec()); }) .toF("kubernetes-job:///{{kubernetes-master-url}}?oauthToken={{kubernetes-oauth-token:}}&operation=" + KubernetesOperations.CREATE_JOB_OPERATION) .log("Job created:") .process(exchange -> { System.out.println(exchange.getIn().getBody()); }) .to(resultEndpoint); } private JobSpec generateJobSpec() { JobSpec js = new JobSpec(); PodTemplateSpec pts = new PodTemplateSpec(); PodSpec ps = new PodSpec(); ps.setRestartPolicy("Never"); ps.setContainers(generateContainers()); pts.setSpec(ps); ObjectMeta metadata = new ObjectMeta(); Map<String, String> annotations = new HashMap<String, String>(); annotations.put("jobMetadataAnnotation1", "random value"); metadata.setAnnotations(annotations); Map<String, String> podlabels = new HashMap<String, String>(); podlabels.put("podLabelKey1", "value1"); podlabels.put("podLabelKey2", "value2"); podlabels.put("app", "podFromCamelApp"); metadata.setLabels(podlabels); pts.setMetadata(metadata); js.setTemplate(pts); return js; } private List<Container> generateContainers() { Container container = new Container(); container.setName("pi"); container.setImage("perl"); List<String> command = new ArrayList<String>(); command.add("echo"); command.add("Job created from Apache Camel code at " + (new Date())); container.setCommand(command); List<Container> containers = new ArrayList<Container>(); containers.add(container); return containers; } } 68.8. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. 
Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
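deleteJob: the producer examples in Section 68.7 cover listJob, listJobByLabels, and createJob only. The following is a minimal, hedged sketch of a deleteJob route, assuming the same #kubernetesClient registry binding used in the earlier examples and assuming that deleteJob reads the Job name and namespace from the headers documented in Section 68.5; the route name, namespace, and Job name are placeholders, not values taken from this documentation.
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.kubernetes.KubernetesConstants;

public class KubernetesDeleteJobRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Sketch only: set the target namespace and Job name in headers,
        // then invoke the deleteJob producer operation.
        from("direct:deleteJob")
            .process(exchange -> {
                exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, "default"); // placeholder namespace
                exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_NAME, "camel-job");      // placeholder Job name
            })
            .toF("kubernetes-job:///?kubernetesClient=#kubernetesClient&operation=deleteJob")
            .log("deleteJob returned: ${body}");
    }
}
Sending any message to direct:deleteJob, for example with a ProducerTemplate, then requests deletion of the named Job against the cluster configured on the client.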
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "kubernetes-job:masterUrl", "from(\"direct:list\"). toF(\"kubernetes-job:///?kubernetesClient=#kubernetesClient&operation=listJob\"). to(\"mock:result\");", "from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_LABELS, labels); } }); toF(\"kubernetes-job:///?kubernetesClient=#kubernetesClient&operation=listJobByLabels\"). to(\"mock:result\");", "import java.util.ArrayList; import java.util.Date; import java.util.HashMap; import java.util.List; import java.util.Map; import javax.inject.Inject; import org.apache.camel.Endpoint; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.cdi.Uri; import org.apache.camel.component.kubernetes.KubernetesConstants; import org.apache.camel.component.kubernetes.KubernetesOperations; import io.fabric8.kubernetes.api.model.Container; import io.fabric8.kubernetes.api.model.ObjectMeta; import io.fabric8.kubernetes.api.model.PodSpec; import io.fabric8.kubernetes.api.model.PodTemplateSpec; import io.fabric8.kubernetes.api.model.batch.JobSpec; public class KubernetesCreateJob extends RouteBuilder { @Inject @Uri(\"timer:foo?delay=1000&repeatCount=1\") private Endpoint inputEndpoint; @Inject @Uri(\"log:output\") private Endpoint resultEndpoint; @Override public void configure() { // you can configure the route rule with Java DSL here from(inputEndpoint) .routeId(\"kubernetes-jobcreate-client\") .process(exchange -> { exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_NAME, \"camel-job\"); //DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 
'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*') exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, \"default\"); Map<String, String> joblabels = new HashMap<String, String>(); joblabels.put(\"jobLabelKey1\", \"value1\"); joblabels.put(\"jobLabelKey2\", \"value2\"); joblabels.put(\"app\", \"jobFromCamelApp\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_LABELS, joblabels); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_SPEC, generateJobSpec()); }) .toF(\"kubernetes-job:///{{kubernetes-master-url}}?oauthToken={{kubernetes-oauth-token:}}&operation=\" + KubernetesOperations.CREATE_JOB_OPERATION) .log(\"Job created:\") .process(exchange -> { System.out.println(exchange.getIn().getBody()); }) .to(resultEndpoint); } private JobSpec generateJobSpec() { JobSpec js = new JobSpec(); PodTemplateSpec pts = new PodTemplateSpec(); PodSpec ps = new PodSpec(); ps.setRestartPolicy(\"Never\"); ps.setContainers(generateContainers()); pts.setSpec(ps); ObjectMeta metadata = new ObjectMeta(); Map<String, String> annotations = new HashMap<String, String>(); annotations.put(\"jobMetadataAnnotation1\", \"random value\"); metadata.setAnnotations(annotations); Map<String, String> podlabels = new HashMap<String, String>(); podlabels.put(\"podLabelKey1\", \"value1\"); podlabels.put(\"podLabelKey2\", \"value2\"); podlabels.put(\"app\", \"podFromCamelApp\"); metadata.setLabels(podlabels); pts.setMetadata(metadata); js.setTemplate(pts); return js; } private List<Container> generateContainers() { Container container = new Container(); container.setName(\"pi\"); container.setImage(\"perl\"); List<String> command = new ArrayList<String>(); command.add(\"echo\"); command.add(\"Job created from Apache Camel code at \" + (new Date())); container.setCommand(command); List<Container> containers = new ArrayList<Container>(); containers.add(container); return containers; } }" ]
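The endpoint URIs in the examples above refer to a registry bean named kubernetesClient (referenced as #kubernetesClient). The snippet below is a hedged sketch of how such a bean could be registered in a Spring Boot application so that Camel can look it up; it assumes a fabric8 kubernetes-client version that provides KubernetesClientBuilder (6.x or later) and that connection settings come from the usual kubeconfig or in-cluster defaults. The class and bean names are illustrative.
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class KubernetesClientConfig {

    // Registered under the name "kubernetesClient", so Camel endpoints can
    // reference it as kubernetesClient=#kubernetesClient.
    @Bean(name = "kubernetesClient")
    public KubernetesClient kubernetesClient() {
        // Builds a client from kubeconfig or in-cluster configuration defaults.
        return new KubernetesClientBuilder().build();
    }
}
Alternatively, the component can be pointed at an existing client through the camel.component.kubernetes-job.kubernetes-client option listed in Section 68.8, or through the component-level kubernetesClient option in Section 68.3.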
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-job-component-starter
20.5. More Than a Secure Shell
20.5. More Than a Secure Shell A secure command line interface is just the beginning of the many ways SSH can be used. Given the proper amount of bandwidth, X11 sessions can be directed over an SSH channel. Or, by using TCP/IP forwarding, previously insecure port connections between systems can be mapped to specific SSH channels. 20.5.1. X11 Forwarding Opening an X11 session over an established SSH connection is as easy as running an X program on a local machine. When an X program is run from the secure shell prompt, the SSH client and server create a new secure channel, and the X program data is sent over that channel to the client machine transparently. X11 forwarding can be very useful. For example, X11 forwarding can be used to create a secure, interactive session with up2date . To do this, connect to the server using ssh and type: After supplying the root password for the server, you will be allowed to safely update the remote system.
[ "up2date &" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-ssh-beyondshell
Chapter 23. OpenShiftAPIServer [operator.openshift.io/v1]
Chapter 23. OpenShiftAPIServer [operator.openshift.io/v1] Description OpenShiftAPIServer provides information to configure an operator to manage openshift-apiserver. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 23.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the OpenShift API Server. status object status defines the observed status of the OpenShift API Server. 23.1.1. .spec Description spec is the specification of the desired behavior of the OpenShift API Server. Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 23.1.2. .status Description status defines the observed status of the OpenShift API Server. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. latestAvailableRevision integer latestAvailableRevision is the latest revision used as suffix of revisioned secrets like encryption-config. A new revision causes a new deployment of pods. 
observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 23.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 23.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 23.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 23.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 23.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/openshiftapiservers DELETE : delete collection of OpenShiftAPIServer GET : list objects of kind OpenShiftAPIServer POST : create an OpenShiftAPIServer /apis/operator.openshift.io/v1/openshiftapiservers/{name} DELETE : delete an OpenShiftAPIServer GET : read the specified OpenShiftAPIServer PATCH : partially update the specified OpenShiftAPIServer PUT : replace the specified OpenShiftAPIServer /apis/operator.openshift.io/v1/openshiftapiservers/{name}/status GET : read status of the specified OpenShiftAPIServer PATCH : partially update status of the specified OpenShiftAPIServer PUT : replace status of the specified OpenShiftAPIServer 23.2.1. /apis/operator.openshift.io/v1/openshiftapiservers HTTP method DELETE Description delete collection of OpenShiftAPIServer Table 23.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OpenShiftAPIServer Table 23.2. HTTP responses HTTP code Response body 200 - OK OpenShiftAPIServerList schema 401 - Unauthorized Empty HTTP method POST Description create an OpenShiftAPIServer Table 23.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.4. Body parameters Parameter Type Description body OpenShiftAPIServer schema Table 23.5. HTTP responses HTTP code Response body 200 - OK OpenShiftAPIServer schema 201 - Created OpenShiftAPIServer schema 202 - Accepted OpenShiftAPIServer schema 401 - Unauthorized Empty 23.2.2. /apis/operator.openshift.io/v1/openshiftapiservers/{name} Table 23.6. Global path parameters Parameter Type Description name string name of the OpenShiftAPIServer HTTP method DELETE Description delete an OpenShiftAPIServer Table 23.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 23.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OpenShiftAPIServer Table 23.9. HTTP responses HTTP code Response body 200 - OK OpenShiftAPIServer schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OpenShiftAPIServer Table 23.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.11. HTTP responses HTTP code Response body 200 - OK OpenShiftAPIServer schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OpenShiftAPIServer Table 23.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.13. Body parameters Parameter Type Description body OpenShiftAPIServer schema Table 23.14. HTTP responses HTTP code Reponse body 200 - OK OpenShiftAPIServer schema 201 - Created OpenShiftAPIServer schema 401 - Unauthorized Empty 23.2.3. /apis/operator.openshift.io/v1/openshiftapiservers/{name}/status Table 23.15. Global path parameters Parameter Type Description name string name of the OpenShiftAPIServer HTTP method GET Description read status of the specified OpenShiftAPIServer Table 23.16. HTTP responses HTTP code Reponse body 200 - OK OpenShiftAPIServer schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OpenShiftAPIServer Table 23.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.18. HTTP responses HTTP code Reponse body 200 - OK OpenShiftAPIServer schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OpenShiftAPIServer Table 23.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.20. Body parameters Parameter Type Description body OpenShiftAPIServer schema Table 23.21. HTTP responses HTTP code Reponse body 200 - OK OpenShiftAPIServer schema 201 - Created OpenShiftAPIServer schema 401 - Unauthorized Empty
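For readers who drive these endpoints through the CLI rather than raw REST calls, the following is a minimal sketch using oc. It assumes the usual cluster-scoped instance named cluster and uses the standard operator spec field logLevel purely as an illustration; adjust both to your environment.
# List objects of kind OpenShiftAPIServer (GET /apis/operator.openshift.io/v1/openshiftapiservers)
oc get openshiftapiservers
# Read the instance named "cluster" (GET .../openshiftapiservers/{name})
oc get openshiftapiserver cluster -o yaml
# Partially update the spec (PATCH .../openshiftapiservers/{name}); logLevel is illustrative only
oc patch openshiftapiserver cluster --type=merge -p '{"spec":{"logLevel":"Debug"}}'
# The same patch as a server-side dry run, mirroring the dryRun=All query parameter described above
oc patch openshiftapiserver cluster --type=merge -p '{"spec":{"logLevel":"Debug"}}' --dry-run=server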
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/operator_apis/openshiftapiserver-operator-openshift-io-v1
5.289. rusers
5.289. rusers 5.289.1. RHBA-2012:0404 - rusers bug fix update Updated rusers packages that fix one bug are now available for Red Hat Enterprise Linux 6. The rusers program allows users to find out who is logged into various machines on the local network. The rusers command produces output similar to the output of the who utility, but for the specified list of hosts or for all machines on the local network. Bug Fix BZ# 697862 Previously, no dependency on the rpcbind package was specified in the rstatd and rusersd SysV init scripts. In addition, when the rstatd and rusersd services were started, no check was performed to see whether networking was enabled, and an incorrect exit code was returned. This update adds an rpcbind dependency to the rusersd and rstatd init scripts. The SysV init scripts have also been adjusted to return correct exit codes. Checks are now performed when starting the rstatd and rusersd services to verify that networking is available and that binding to rpcbind was successful. All users of rusers are advised to upgrade to these updated packages, which fix this bug.
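As a brief illustration of the behavior described above, the sketch below starts the required services and queries the network. The host name is a placeholder, and the -a and -l options (include hosts with no users, and who-style long output) should be verified against the rusers(1) man page on your system.
# Ensure rpcbind is running and start the rusersd service (SysV init on Red Hat Enterprise Linux 6)
service rpcbind status
service rusersd start
# Query all machines on the local network; -a also lists hosts with nobody logged in
rusers -a
# Query a specific host with who-style long output; replace the host name with a real one
rusers -l host1.example.com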
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/rusers
7.2. Installing the audit Packages
7.2. Installing the audit Packages In order to use the Audit system, you must have the audit packages installed on your system. The audit packages ( audit and audit-libs ) are installed by default on Red Hat Enterprise Linux 6. If you do not have these packages installed, execute the following command as the root user to install them:
[ "~]# yum install audit" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sec-installing_the_audit_packages
7.2. Creating a Template
7.2. Creating a Template Create a template from an existing virtual machine to use as a blueprint for creating additional virtual machines. Note You cannot create a sealed virtual machine template based on a RHEL 8.0 virtual machine in Red Hat Virtualization 4.3, because of the following limitations: libguestfs tools on Red Hat Enterprise Linux 7 do not support modifying Red Hat Enterprise Linux 8 disk images because of additional XFS functionality added in Red Hat Enterprise Linux 8. Red Hat Virtualization 4.3 does not support hypervisors based on Red Hat Enterprise Linux 8.0 When you create a template, you specify the format of the disk to be raw or QCOW2: QCOW2 disks are thin provisioned. Raw disks on file storage are thin provisioned. Raw disks on block storage are preallocated. Creating a Template Click Compute Virtual Machines and select the source virtual machine. Ensure the virtual machine is powered down and has a status of Down . Click More Actions ( ), then click Make Template . For more details on all fields in the New Template window, see Section A.5, "Explanation of Settings in the New Template Window" . Enter a Name , Description , and Comment for the template. Select the cluster with which to associate the template from the Cluster drop-down list. By default, this is the same as that of the source virtual machine. Optionally, select a CPU profile for the template from the CPU Profile drop-down list. Optionally, select the Create as a Template Sub-Version check box, select a Root Template , and enter a Sub-Version Name to create the new template as a sub-template of an existing template. In the Disks Allocation section, enter an alias for the disk in the Alias text field. Select the disk format in the Format drop-down, the storage domain on which to store the disk from the Target drop-down, and the disk profile in the Disk Profile drop-down. By default, these are the same as those of the source virtual machine. Select the Allow all users to access this Template check box to make the template public. Select the Copy VM permissions check box to copy the permissions of the source virtual machine to the template. Select the Seal Template check box (Linux only) to seal the template. Note Sealing, which uses the virt-sysprep command, removes system-specific details from a virtual machine before creating a template based on that virtual machine. This prevents the original virtual machine's details from appearing in subsequent virtual machines that are created using the same template. It also ensures the functionality of other features, such as predictable vNIC order. See Appendix B, virt-sysprep Operations for more information. Click OK . The virtual machine displays a status of Image Locked while the template is being created. The process of creating a template may take up to an hour depending on the size of the virtual disk and the capabilities of your storage hardware. When complete, the template is added to the Templates tab. You can now create new virtual machines based on the template. Note When a template is made, the virtual machine is copied so that both the existing virtual machine and its template are usable after template creation.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/creating_a_template_from_an_existing_virtual_machine
function::uid
function::uid Name function::uid - Returns the user ID of a target process. Synopsis Arguments None General Syntax uid: long Description This function returns the user ID of the target process.
[ "function uid:long()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-uid
Chapter 15. Advanced process concepts and tasks
Chapter 15. Advanced process concepts and tasks 15.1. Invoking a Decision Model and Notation (DMN) service in a business process You can use Decision Model and Notation (DMN) to model a decision service graphically in a decision requirements diagram (DRD) in Business Central and then invoke that DMN service as part of a business process in Business Central. Business processes interact with DMN services by identifying the DMN service and mapping business data between DMN inputs and the business process properties. As an illustration, this procedure uses an example TrainStation project that defines train routing logic. This example project contains the following data object and DMN components designed in Business Central for the routing decision logic: Example Train object public class Train { private String departureStation; private String destinationStation; private BigDecimal railNumber; // Getters and setters } Figure 15.1. Example Compute Rail DMN model Figure 15.2. Example Rail DMN decision table Figure 15.3. Example tTrain DMN data type For more information about creating DMN models in Business Central, see Designing a decision service using DMN models . Prerequisites All required data objects and DMN model components are defined in the project. Procedure In Business Central, go to Menu Design Projects and click the project name. Select or create the business process asset in which you want to invoke the DMN service. In the process designer, use the left toolbar to drag and drop BPMN components as usual to define your overall business process logic, connections, events, tasks, or other elements. To incorporate a DMN service in the business process, add a Business Rule task from the left toolbar or from the start-node options and insert the task in the relevant location in the process flow. For this example, the following Accept Train business process incorporates the DMN service in the Route To Rail node: Figure 15.4. Example Accept Train business process with a DMN service Select the business rule task node that you want to use for the DMN service, click Properties in the upper-right corner of the process designer, and under Implementation/Execution , define the following fields: Rule Language : Select DMN . Namespace : Enter the unique namespace from the DMN model file. Example: https://www.drools.org/kie-dmn Decision Name : Enter the name of the DMN decision node that you want to invoke in the selected process node. Example: Rail DMN Model Name : Enter the DMN model name. Example: Compute Rail Important When you explore the root node, ensure that the Namespace and DMN Model Name fields consist of the same value in BPMN as DMN diagram. Under Data Assignments Assignments , click the Edit icon and add the DMN input and output data to define the mapping between the DMN service and the process data. For the Route To Rail DMN service node in this example, you add an input assignment for Train that corresponds to the input node in the DMN model, and add an output assignment for Rail that corresponds to the decision node in the DMN model. The Data Type must match the type that you set for that node in the DMN model, and the Source and Target definition is the relevant variable or field for the specified object. Figure 15.5. Example input and output mapping for the Route To Rail DMN service node Click Save to save the data input and output data. Define the remainder of your business process according to how you want the completed DMN service to be handled. 
For this example, the Properties Implementation/Execution On Exit Action value is set to the following code to store the rail number after the Route To Rail DMN service is complete: Example code for On Exit Action train.setRailNumber(rail); If the rail number is not computed, the process reaches a No Appropriate Rail end error node that is defined with the following condition expression: Figure 15.6. Example condition for No Appropriate Rail end error node If the rail number is computed, the process reaches an Accept Train script task that is defined with the following condition expression: Figure 15.7. Example condition for Accept Train script task node The Accept Train script task also uses the following script in Properties Implementation/Execution Script to print a message about the train route and current rail: com.myspace.trainstation.Train t = (com.myspace.trainstation.Train) kcontext.getVariable("train"); System.out.println("Train from: " + t.getDepartureStation() + ", to: " + t.getDestinationStation() + ", is on rail: " + t.getRailNumber()); After you define your business process with the incorporated DMN service, save your process in the process designer, deploy the project, and run the corresponding process definition to invoke the DMN service. For this example, when you deploy the TrainStation project and run the corresponding process definition, you open the process instance form for the Accept Train process definition and set the departure station and destination station fields to test the execution: Figure 15.8. Example process instance form for the Accept Train process definition After the process is executed, a message appears in the server log with the train route that you specified: Example server log output for the Accept Train process
[ "public class Train { private String departureStation; private String destinationStation; private BigDecimal railNumber; // Getters and setters }", "train.setRailNumber(rail);", "com.myspace.trainstation.Train t = (com.myspace.trainstation.Train) kcontext.getVariable(\"train\"); System.out.println(\"Train from: \" + t.getDepartureStation() + \", to: \" + t.getDestinationStation() + \", is on rail: \" + t.getRailNumber());", "Train from: Zagreb, to: Belgrade, is on rail: 1" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/advanced_process_concepts_and_tasks
Chapter 7. ConsoleQuickStart [console.openshift.io/v1]
Chapter 7. ConsoleQuickStart [console.openshift.io/v1] Description ConsoleQuickStart is an extension for guiding user through various workflows in the OpenShift web console. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleQuickStartSpec is the desired quick start configuration. 7.1.1. .spec Description ConsoleQuickStartSpec is the desired quick start configuration. Type object Required description displayName durationMinutes introduction tasks Property Type Description accessReviewResources array accessReviewResources contains a list of resources that the user's access will be reviewed against in order for the user to complete the Quick Start. The Quick Start will be hidden if any of the access reviews fail. accessReviewResources[] object ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface conclusion string conclusion sums up the Quick Start and suggests the possible steps. (includes markdown) description string description is the description of the Quick Start. (includes markdown) displayName string displayName is the display name of the Quick Start. durationMinutes integer durationMinutes describes approximately how many minutes it will take to complete the Quick Start. icon string icon is a base64 encoded image that will be displayed beside the Quick Start display name. The icon should be an vector image for easy scaling. The size of the icon should be 40x40. introduction string introduction describes the purpose of the Quick Start. (includes markdown) nextQuickStart array (string) nextQuickStart is a list of the following Quick Starts, suggested for the user to try. prerequisites array (string) prerequisites contains all prerequisites that need to be met before taking a Quick Start. (includes markdown) tags array (string) tags is a list of strings that describe the Quick Start. tasks array tasks is the list of steps the user has to perform to complete the Quick Start. tasks[] object ConsoleQuickStartTask is a single step in a Quick Start. 7.1.2. .spec.accessReviewResources Description accessReviewResources contains a list of resources that the user's access will be reviewed against in order for the user to complete the Quick Start. The Quick Start will be hidden if any of the access reviews fail. Type array 7.1.3. .spec.accessReviewResources[] Description ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface Type object Property Type Description group string Group is the API Group of the Resource. "*" means all. 
name string Name is the name of the resource being requested for a "get" or deleted for a "delete". "" (empty) means all. namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces "" (empty) is defaulted for LocalSubjectAccessReviews "" (empty) is empty for cluster-scoped resources "" (empty) means "all" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview resource string Resource is one of the existing resource types. "*" means all. subresource string Subresource is one of the existing resource types. "" means none. verb string Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy. "*" means all. version string Version is the API Version of the Resource. "*" means all. 7.1.4. .spec.tasks Description tasks is the list of steps the user has to perform to complete the Quick Start. Type array 7.1.5. .spec.tasks[] Description ConsoleQuickStartTask is a single step in a Quick Start. Type object Required description title Property Type Description description string description describes the steps needed to complete the task. (includes markdown) review object review contains instructions to validate the task is complete. The user will select 'Yes' or 'No'. using a radio button, which indicates whether the step was completed successfully. summary object summary contains information about the passed step. title string title describes the task and is displayed as a step heading. 7.1.6. .spec.tasks[].review Description review contains instructions to validate the task is complete. The user will select 'Yes' or 'No'. using a radio button, which indicates whether the step was completed successfully. Type object Required failedTaskHelp instructions Property Type Description failedTaskHelp string failedTaskHelp contains suggestions for a failed task review and is shown at the end of task. (includes markdown) instructions string instructions contains steps that user needs to take in order to validate his work after going through a task. (includes markdown) 7.1.7. .spec.tasks[].summary Description summary contains information about the passed step. Type object Required failed success Property Type Description failed string failed briefly describes the unsuccessfully passed task. (includes markdown) success string success describes the succesfully passed task. 7.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consolequickstarts DELETE : delete collection of ConsoleQuickStart GET : list objects of kind ConsoleQuickStart POST : create a ConsoleQuickStart /apis/console.openshift.io/v1/consolequickstarts/{name} DELETE : delete a ConsoleQuickStart GET : read the specified ConsoleQuickStart PATCH : partially update the specified ConsoleQuickStart PUT : replace the specified ConsoleQuickStart 7.2.1. /apis/console.openshift.io/v1/consolequickstarts HTTP method DELETE Description delete collection of ConsoleQuickStart Table 7.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleQuickStart Table 7.2. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStartList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleQuickStart Table 7.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.4. Body parameters Parameter Type Description body ConsoleQuickStart schema Table 7.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStart schema 201 - Created ConsoleQuickStart schema 202 - Accepted ConsoleQuickStart schema 401 - Unauthorized Empty 7.2.2. /apis/console.openshift.io/v1/consolequickstarts/{name} Table 7.6. Global path parameters Parameter Type Description name string name of the ConsoleQuickStart HTTP method DELETE Description delete a ConsoleQuickStart Table 7.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleQuickStart Table 7.9. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStart schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleQuickStart Table 7.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.11. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStart schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleQuickStart Table 7.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.13. Body parameters Parameter Type Description body ConsoleQuickStart schema Table 7.14. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStart schema 201 - Created ConsoleQuickStart schema 401 - Unauthorized Empty
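For comparison with the raw endpoints above, the following sketch creates a minimal object with oc. Only the required spec fields (description, displayName, durationMinutes, introduction, tasks) and the required task fields (title, description) are set; the object name and all field values are placeholders.
oc apply -f - <<'EOF'
apiVersion: console.openshift.io/v1
kind: ConsoleQuickStart
metadata:
  name: example-quick-start
spec:
  displayName: Example quick start
  description: A one-task example that shows the required fields.
  durationMinutes: 5
  introduction: This quick start demonstrates the minimal required structure.
  tasks:
    - title: Review the created resource
      description: Run oc get consolequickstarts and confirm the object exists.
EOF
# Read it back and delete it (GET and DELETE on the /{name} endpoint)
oc get consolequickstart example-quick-start -o yaml
oc delete consolequickstart example-quick-start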
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/console_apis/consolequickstart-console-openshift-io-v1
Chapter 20. Counting events during process execution with perf stat
Chapter 20. Counting events during process execution with perf stat You can use the perf stat command to count hardware and software events during process execution. Prerequisites You have the perf user space tool installed as described in Installing perf . 20.1. The purpose of perf stat The perf stat command executes a specified command, keeps a running count of hardware and software event occurrences during the command's execution, and generates statistics of these counts. If you do not specify any events, then perf stat counts a set of common hardware and software events. 20.2. Counting events with perf stat You can use perf stat to count hardware and software event occurrences during command execution and generate statistics of these counts. By default, perf stat operates in per-thread mode. Prerequisites You have the perf user space tool installed as described in Installing perf . Procedure Count the events. Running the perf stat command without root access counts only events occurring in user space: Example 20.1. Output of perf stat run without root access As you can see in the example, when perf stat runs without root access, the event names are followed by :u , indicating that these events were counted only in user space. To count both user-space and kernel-space events, you must have root access when running perf stat : Example 20.2. Output of perf stat run with root access By default, perf stat operates in per-thread mode. To change to CPU-wide event counting, pass the -a option to perf stat . To count CPU-wide events, you need root access: Additional resources perf-stat(1) man page on your system 20.3. Interpretation of perf stat output perf stat executes a specified command, counts event occurrences during the command's execution, and displays statistics of these counts in three columns: The number of occurrences counted for a given event The name of the event that was counted When related metrics are available, a ratio or percentage is displayed after the hash sign ( # ) in the right-most column. For example, when running in default mode, perf stat counts both cycles and instructions and, therefore, calculates and displays instructions per cycle in the right-most column. You can see similar behavior with regard to branch-misses as a percentage of all branches, since both events are counted by default. 20.4. Attaching perf stat to a running process You can attach perf stat to a running process. This instructs perf stat to count event occurrences only in the specified processes during the execution of a command. Prerequisites You have the perf user space tool installed as described in Installing perf . Procedure Attach perf stat to a running process: The example counts events in the processes with the IDs of ID1 and ID2 for a time period of seconds seconds, as dictated by the sleep command. Additional resources perf-stat(1) man page on your system
[ "perf stat ls", "Desktop Documents Downloads Music Pictures Public Templates Videos Performance counter stats for 'ls': 1.28 msec task-clock:u # 0.165 CPUs utilized 0 context-switches:u # 0.000 M/sec 0 cpu-migrations:u # 0.000 K/sec 104 page-faults:u # 0.081 M/sec 1,054,302 cycles:u # 0.823 GHz 1,136,989 instructions:u # 1.08 insn per cycle 228,531 branches:u # 178.447 M/sec 11,331 branch-misses:u # 4.96% of all branches 0.007754312 seconds time elapsed 0.000000000 seconds user 0.007717000 seconds sys", "perf stat ls", "Desktop Documents Downloads Music Pictures Public Templates Videos Performance counter stats for 'ls': 3.09 msec task-clock # 0.119 CPUs utilized 18 context-switches # 0.006 M/sec 3 cpu-migrations # 0.969 K/sec 108 page-faults # 0.035 M/sec 6,576,004 cycles # 2.125 GHz 5,694,223 instructions # 0.87 insn per cycle 1,092,372 branches # 352.960 M/sec 31,515 branch-misses # 2.89% of all branches 0.026020043 seconds time elapsed 0.000000000 seconds user 0.014061000 seconds sys", "perf stat -a ls", "perf stat -p ID1,ID2 sleep seconds" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/counting-events-during-process-execution-with-perf-stat_monitoring-and-managing-system-status-and-performance
Part II. Using a different host FQDN
Part II. Using a different host FQDN
null
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/replacing_failed_hosts/using_a_different_host_fqdn
Chapter 3. Integrating with an existing Red Hat Ceph Storage cluster
Chapter 3. Integrating with an existing Red Hat Ceph Storage cluster Use the procedures and information in this section to integrate Red Hat OpenStack Platform (RHOSP) with an existing Red Hat Ceph Storage cluster. You can create custom environment files to override and provide values for configuration options within OpenStack components. 3.1. Creating a custom environment file Director supplies parameters to tripleo-ansible to integrate with an external Red Hat Ceph Storage cluster through the environment file: /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml If you deploy the Shared File Systems service (manila) with external CephFS, separate environment files supply additional parameters. For native CephFS, the environment file is /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml . For CephFS through NFS, the environment file is /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml . To configure integration of an existing Ceph Storage cluster with the overcloud, you must supply the details of your Ceph Storage cluster to director by using a custom environment file. Director invokes these environment files during deployment. Procedure Create a custom environment file: /home/stack/templates/ceph-config.yaml Add a parameter_defaults: section to the file: Use parameter_defaults to set all of the parameters that you want to override in /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml . You must set the following parameters at a minimum: CephClientKey : The Ceph client key for the client.openstack user in your Ceph Storage cluster. This is the value of key that you retrieved in Configuring the existing Ceph Storage cluster . For example, AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ== . CephClusterFSID : The file system ID of your Ceph Storage cluster. This is the value of fsid in your Ceph Storage cluster configuration file, which you retrieved in Configuring the existing Ceph Storage cluster . For example, 4b5c8c0a-ff60-454b-a1b4-9747aa737d19 . CephExternalMonHost : A comma-delimited list of the IPs of all MON hosts in your Ceph Storage cluster, for example, 172.16.1.7, 172.16.1.8 . For example: Optional: You can override the Red Hat OpenStack Platform (RHOSP) client username and the following default pool names to match your Ceph Storage cluster: CephClientUserName: <openstack> NovaRbdPoolName: <vms> CinderRbdPoolName: <volumes> GlanceRbdPoolName: <images> CinderBackupRbdPoolName: <backups> Optional: If you are deploying the Shared File Systems service with CephFS, you can override the following default data and metadata pool names: Note Ensure that these names match the names of the pools you created. Set the client key that you created for the Shared File Systems service. You can override the default Ceph client username for that key: Note The default client username ManilaCephFSCephFSAuthId is manila , unless you override it. CephManilaClientKey is always required. After you create the custom environment file, you must include it when you deploy the overcloud. Additional resources Deploying the overcloud 3.2. Ceph containers for Red Hat OpenStack Platform with Red Hat Ceph Storage You must have a Ceph Storage container to configure Red Hat Openstack Platform (RHOSP) to use Red Hat Ceph Storage with NFS Ganesha. 
You do not require a Ceph Storage container if the external Ceph Storage cluster only provides Block (through RBD), Object (through RGW), or File (through native CephFS) storage. RHOSP 17.0 requires Red Hat Ceph Storage 5.x (Ceph package 16.x) or later to be compatible with Red Hat Enterprise Linux 9. The Ceph Storage 5.x containers are hosted at registry.redhat.io , a registry that requires authentication. For more information, see Container image preparation parameters . 3.3. Deploying the overcloud Deploy the overcloud with the environment file that you created. Procedure The creation of the overcloud requires additional arguments for the openstack overcloud deploy command: This example command uses the following options: --templates - Creates the overcloud from the default heat template collection, /usr/share/openstack-tripleo-heat-templates/ . -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml - Sets the director to integrate an existing Ceph Storage cluster to the overcloud. -e /home/stack/templates/ceph-config.yaml - Adds a custom environment file to override the defaults set by -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml . --ntp-server pool.ntp.org - Sets the NTP server. 3.3.1. Adding environment files for the Shared File Systems service with CephFS If you deploy an overcloud that uses the Shared File Systems service (manila) with CephFS, you must add additional environment files. Procedure Create and add additional environment files: If you deploy an overcloud that uses the native CephFS back-end driver, add /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml . If you deploy an overcloud that uses CephFS through NFS, add /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml . Red Hat recommends that you deploy the Ceph-through-NFS driver with an isolated StorageNFS network where shares are exported. You must deploy the isolated network to overcloud Controller nodes. To enable this deployment, director includes the following file and role: An example custom network configuration file that includes the StorageNFS network (/usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml). Review and customize this file as necessary. A ControllerStorageNFS role. Modify the openstack overcloud deploy command depending on the CephFS back end that you use. For native CephFS: For CephFS through NFS: Note The custom ceph-config.yaml environment file overrides parameters in the external-ceph.yaml file and either the manila-cephfsnative-config.yaml file or the manila-cephfsganesha-config.yaml file. Therefore, include the custom ceph-config.yaml environment file in the deployment command after external-ceph.yaml and either manila-cephfsnative-config.yaml or manila-cephfsganesha-config.yaml . Example environment file Replace <cluster_ID> , <IP_address> , and <client_key> with values that are suitable for your environment. Additional resources For more information about generating a custom roles file, see Deploying the Shared File Systems service with CephFS through NFS . 3.3.2. Adding an additional environment file for external Ceph Object Gateway (RGW) for Object storage If you deploy an overcloud that uses an already existing RGW service for Object storage, you must add an additional environment file. 
Procedure Add the following parameter_defaults to a custom environment file, for example, swift-external-params.yaml , and adjust the values to suit your deployment: Note The example code snippet contains parameter values that might differ from values that you use in your environment: The default port where the remote RGW instance listens is 8080 . The port might be different depending on how the external RGW is configured. The swift user created in the overcloud uses the password defined by the SwiftPassword parameter. You must configure the external RGW instance to use the same password to authenticate with the Identity service by using the rgw_keystone_admin_password . Add the following code to the Ceph config file to configure RGW to use the Identity service. Replace the variable values to suit your environment: Note Director creates the following roles and users in the Identity service by default: rgw_keystone_accepted_admin_roles: ResellerAdmin, swiftoperator rgw_keystone_admin_domain: default rgw_keystone_admin_project: service rgw_keystone_admin_user: swift Deploy the overcloud with the additional environment files with any other environment files that are relevant to your deployment:
[ "parameter_defaults:", "parameter_defaults: CephClientKey: <AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==> CephClusterFSID: <4b5c8c0a-ff60-454b-a1b4-9747aa737d19> CephExternalMonHost: <172.16.1.7, 172.16.1.8, 172.16.1.9>", "ManilaCephFSDataPoolName: <manila_data> ManilaCephFSMetadataPoolName: <manila_metadata>", "ManilaCephFSCephFSAuthId: <manila> CephManilaClientKey: <AQDQ991cAAAAABAA0aXFrTnjH9aO39P0iVvYyg==>", "openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml -e /home/stack/templates/ceph-config.yaml -e --ntp-server pool.ntp.org", "openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml -e /home/stack/templates/ceph-config.yaml -e --ntp-server pool.ntp.org", "openstack overcloud deploy --templates -n /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml -r /home/stack/custom_roles.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml -e /home/stack/templates/ceph-config.yaml -e --ntp-server pool.ntp.org", "parameter_defaults: CinderEnableIscsiBackend: false CinderEnableRbdBackend: true CinderEnableNfsBackend: false NovaEnableRbdBackend: true GlanceBackend: rbd CinderRbdPoolName: \"volumes\" NovaRbdPoolName: \"vms\" GlanceRbdPoolName: \"images\" CinderBackupRbdPoolName: \"backups\" CephClusterFSID: <cluster_ID> CephExternalMonHost: <IP_address>,<IP_address>,<IP_address> CephClientKey: \"<client_key>\" CephClientUserName: \"openstack\" ManilaCephFSDataPoolName: manila_data ManilaCephFSMetadataPoolName: manila_metadata ManilaCephFSCephFSAuthId: 'manila' CephManilaClientKey: '<client_key>' ExtraConfig:", "parameter_defaults: ExternalSwiftPublicUrl: 'http://<Public RGW endpoint or loadbalancer>:8080/swift/v1/AUTH_%(project_id)s' ExternalSwiftInternalUrl: 'http://<Internal RGW endpoint>:8080/swift/v1/AUTH_%(project_id)s' ExternalSwiftAdminUrl: 'http://<Admin RGW endpoint>:8080/swift/v1/AUTH_%(project_id)s' ExternalSwiftUserTenant: 'service' SwiftPassword: 'choose_a_random_password'", "rgw_keystone_api_version = 3 rgw_keystone_url = http://<public Keystone endpoint>:5000/ rgw_keystone_accepted_roles = member, Member, admin rgw_keystone_accepted_admin_roles = ResellerAdmin, swiftoperator rgw_keystone_admin_domain = default rgw_keystone_admin_project = service rgw_keystone_admin_user = swift rgw_keystone_admin_password = <password_as_defined_in_the_environment_parameters> rgw_keystone_implicit_tenants = true rgw_keystone_revocation_interval = 0 rgw_s3_auth_use_keystone = true rgw_swift_versioning_enabled = true rgw_swift_account_in_url = true rgw_max_attr_name_len = 128 rgw_max_attrs_num_in_req = 90 rgw_max_attr_size = 256 rgw_keystone_verify_ssl = false", "openstack overcloud deploy --templates -e <your_environment_files> -e /usr/share/openstack-tripleo-heat-templates/environments/swift-external.yaml -e swift-external-params.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/integrating_an_overcloud_with_an_existing_red_hat_ceph_storage_cluster/assembly-integrate-with-an-existing-ceph-storage-cluster_existing-ceph
Chapter 19. Troubleshooting issues in provider mode
Chapter 19. Troubleshooting issues in provider mode 19.1. Force deletion of storage in provider clusters When a client cluster is deleted without performing the offboarding process to remove all the resources from the corresponding provider cluster, you need to perform force deletion of the corresponding storage consumer from the provider cluster. This helps to release the storage space that was claimed by the client. Caution It is recommended to use this method only in unavoidable situations such as accidental deletion of storage client clusters. Prerequisites Access to the OpenShift Data Foundation storage cluster in provider mode. Procedure Click Storage Storage Clients from the OpenShift console. Click the delete icon at the far right of the listed storage client cluster. The delete icon is enabled only after 5 minutes of the last heartbeat of the cluster. Click Confirm .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/troubleshooting_openshift_data_foundation/troubleshooting_issues_in_provider_mode
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_and_managing_openshift_data_foundation_using_google_cloud/making-open-source-more-inclusive
Chapter 5. Configuring user preferences for email notifications
Chapter 5. Configuring user preferences for email notifications Each user in the Red Hat Hybrid Cloud Console must opt in to receive email notifications about events. You can select the services from which to receive notifications, as well as the frequency. Important If you select Instant notification for any service, you might receive a very large number of emails. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console. You have configured relevant events in the console. A Notifications administrator or Organization Administrator has configured behavior groups to receive event notifications. Procedure In the Hybrid Cloud Console, navigate to Settings > Notifications > Notification Preferences . The My Notifications page appears. On the My Notifications page, the available services are grouped by category, for example OpenShift or Red Hat Enterprise Linux. Select the service you want to configure your notifications for, for example, Advisor or Inventory. A list of the available event notifications for the selected service opens. At the top of the list, click Select all to enable all notifications for the service, or select one of the following options for each event listed: Note Not all options are available for all services. Daily digest : Receive a daily summary of triggered application events that occur in a 24-hour time frame. Instant notification : Receive an email immediately for each triggered application event. Important If you select Instant notification for any service, you might receive a very large number of emails. Weekly report : Receive an email that contains the Advisor Weekly Report. Update your information and then click Save . Email notifications are delivered in the format and frequency that you selected. If notifications are configured for integrated third-party applications, the notifications are sent to those applications as well. Note If you decide to stop receiving notifications, select Deselect all or clear the check boxes for the events you do not want to be notified about, and then click Save . You will no longer receive any email notifications unless you return to this screen and enable them again. 5.1. Customizing the daily digest email notification time You can choose to receive a summary of triggered application events occurring in your Red Hat Hybrid Cloud Console services in a daily digest email, instead of being notified as events occur. By default, the daily digest is sent at 00:00 Coordinated Universal Time (UTC). Organization Administrators and Notifications administrators can customize the time the daily digest is sent. The daily digest provides a snapshot of events occurring over a 24-hour time frame, starting from the time you specify in the notifications settings. Prerequisites You are logged in to the Hybrid Cloud Console as an Organization Administrator or as a user with Notifications administrator permissions. Procedure In the Hybrid Cloud Console, navigate to Settings > Notifications > Notification Preferences . The My Notifications page appears. On the My Notifications page, click Edit time settings . Select Custom time and then specify the time and time zone to send your account's daily digest email. Click Save . The daily digest email is sent each day at the time you selected. Note After you save a new time, the Hybrid Cloud Console converts the new time to the UTC time zone. 5.2.
Updating your email address for notifications The notifications service sends email notifications to the email address listed in your Red Hat account. You can update your email address using the steps in this procedure. Prerequisites You are logged in to the Hybrid Cloud Console. Procedure Click your user avatar in the upper right of the Red Hat Hybrid Cloud Console window. A drop-down list appears. Click My profile . Under Personal information , click the Change link to your email address. This opens the Red Hat account management application. In the Email address field, enter your new email address and then click Save . Red Hat sends a verification to your new email address. Important You must verify your new email address within one day to save the change. Open the verification email from Red Hat in your email account and click Link to e-mail address verification . This confirms your email address and returns you to the Red Hat account management application. Your new email address is saved in your Red Hat account. If the change does not show in My profile immediately, log out of all Red Hat applications and log back in to view your updated account. Additional resources For more information about updating your Red Hat account details, see "Updating your Red Hat account information" in Getting started with the Red Hat Hybrid Cloud Console .
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/configuring_notifications_on_the_red_hat_hybrid_cloud_console/proc-notif-config-user-preferences_notifications
22.16.6. Adding a Manycast Client Address
22.16.6. Adding a Manycast Client Address To add a manycast client address, that is to say, to configure a multicast address to be used for NTP server discovery, make use of the manycastclient command in the ntp.conf file. The manycastclient command takes the following form: manycastclient address where address is an IP multicast address from which packets are to be received. The client will send a request to the address and select the best servers from the responses and ignore other servers. NTP communication then uses unicast associations, as if the discovered NTP servers were listed in ntp.conf . This command configures a system to act as an NTP client. Systems can be both client and server at the same time.
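A brief sketch of what this looks like in practice. The multicast address 239.0.0.42 is an example administratively scoped address, not a value taken from this document, and the NTP servers must be configured with the corresponding manycastserver directive using the same address.
# /etc/ntp.conf on the client
manycastclient 239.0.0.42
# Restart ntpd so that the change takes effect (SysV init on Red Hat Enterprise Linux 6)
service ntpd restart
# Verify which discovered servers were selected
ntpq -p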
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2_adding_a_manycast_client_address
Chapter 3. The Ceph client components
Chapter 3. The Ceph client components Ceph clients differ materially in how they present data storage interfaces. A Ceph block device presents block storage that mounts just like a physical storage drive. A Ceph gateway presents an object storage service with S3-compliant and Swift-compliant RESTful interfaces with its own user management. However, all Ceph clients use the Reliable Autonomic Distributed Object Store (RADOS) protocol to interact with the Red Hat Ceph Storage cluster. They all have the same basic needs: The Ceph configuration file, and the Ceph monitor address. The pool name. The user name and the path to the secret key. Ceph clients tend to follow some similar patterns, such as object-watch-notify and striping. The following sections describe a little bit more about RADOS, librados and common patterns used in Ceph clients. Prerequisites A basic understanding of distributed storage systems. 3.1. Ceph client native protocol Modern applications need a simple object storage interface with asynchronous communication capability. The Ceph Storage Cluster provides a simple object storage interface with asynchronous communication capability. The interface provides direct, parallel access to objects throughout the cluster. Pool Operations Snapshots Read/Write Objects Create or Remove Entire Object or Byte Range Append or Truncate Create/Set/Get/Remove XATTRs Create/Set/Get/Remove Key/Value Pairs Compound operations and dual-ack semantics 3.2. Ceph client object watch and notify A Ceph client can register a persistent interest with an object and keep a session to the primary OSD open. The client can send a notification message and payload to all watchers and receive notification when the watchers receive the notification. This enables a client to use any object as a synchronization/communication channel. 3.3. Ceph client Mandatory Exclusive Locks Mandatory Exclusive Locks is a feature that locks an RBD to a single client, if multiple mounts are in place. This helps address the write conflict situation when multiple mounted clients try to write to the same object. This feature is built on object-watch-notify explained in the section. So, when writing, if one client first establishes an exclusive lock on an object, another mounted client will first check to see if a peer has placed a lock on the object before writing. With this feature enabled, only one client can modify an RBD device at a time, especially when changing internal RBD structures during operations like snapshot create/delete . It also provides some protection for failed clients. For instance, if a virtual machine seems to be unresponsive and you start a copy of it with the same disk elsewhere, the first one will be blacklisted in Ceph and unable to corrupt the new one. Mandatory Exclusive Locks are not enabled by default. You have to explicitly enable it with --image-feature parameter when creating an image. Example Here, the numeral 5 is a summation of 1 and 4 where 1 enables layering support and 4 enables exclusive locking support. So, the above command will create a 100 GB rbd image, enable layering and exclusive lock. Mandatory Exclusive Locks is also a prerequisite for object map . Without enabling exclusive locking support, object map support cannot be enabled. Mandatory Exclusive Locks also does some ground work for mirroring. 3.4. Ceph client object map Object map is a feature that tracks the presence of backing RADOS objects when a client writes to an rbd image. 
When a write occurs, that write is translated to an offset within a backing RADOS object. When the object map feature is enabled, the presence of these RADOS objects is tracked. So, we can know if the objects actually exist. Object map is kept in-memory on the librbd client so it can avoid querying the OSDs for objects that it knows don't exist. In other words, object map is an index of the objects that actually exist. Object map is beneficial for certain operations, viz: Resize Export Copy Flatten Delete Read A shrink resize operation is like a partial delete where the trailing objects are deleted. An export operation knows which objects are to be requested from RADOS. A copy operation knows which objects exist and need to be copied. It does not have to iterate over potentially hundreds and thousands of possible objects. A flatten operation performs a copy-up for all parent objects to the clone so that the clone can be detached from the parent i.e, the reference from the child clone to the parent snapshot can be removed. So, instead of all potential objects, copy-up is done only for the objects that exist. A delete operation deletes only the objects that exist in the image. A read operation skips the read for objects it knows doesn't exist. So, for operations like resize, shrinking only, exporting, copying, flattening, and deleting, these operations would need to issue an operation for all potentially affected RADOS objects, whether they exist or not. With object map enabled, if the object doesn't exist, the operation need not be issued. For example, if we have a 1 TB sparse RBD image, it can have hundreds and thousands of backing RADOS objects. A delete operation without object map enabled would need to issue a remove object operation for each potential object in the image. But if object map is enabled, it only needs to issue remove object operations for the objects that exist. Object map is valuable against clones that don't have actual objects but get objects from parents. When there is a cloned image, the clone initially has no objects and all reads are redirected to the parent. So, object map can improve reads as without the object map, first it needs to issue a read operation to the OSD for the clone, when that fails, it issues another read to the parent - with object map enabled. It skips the read for objects it knows doesn't exist. Object map is not enabled by default. You have to explicitly enable it with --image-features parameter when creating an image. Also, Mandatory Exclusive Locks is a prerequisite for object map . Without enabling exclusive locking support, object map support cannot be enabled. To enable object map support when creating a image, execute: Here, the numeral 13 is a summation of 1 , 4 and 8 where 1 enables layering support, 4 enables exclusive locking support and 8 enables object map support. So, the above command will create a 100 GB rbd image, enable layering, exclusive lock and object map. 3.5. Ceph client data stripping Storage devices have throughput limitations, which impact performance and scalability. So storage systems often support striping- storing sequential pieces of information across multiple storage devices- to increase throughput and performance. The most common form of data striping comes from RAID. The RAID type most similar to Ceph's striping is RAID 0, or a 'striped volume.' Ceph's striping offers the throughput of RAID 0 striping, the reliability of n-way RAID mirroring and faster recovery. 
Ceph provides three types of clients: Ceph Block Device, Ceph Filesystem, and Ceph Object Storage. A Ceph Client converts its data from the representation format it provides to its users, such as a block device image, RESTful objects, or CephFS filesystem directories, into objects for storage in the Ceph Storage Cluster. Tip The objects Ceph stores in the Ceph Storage Cluster are not striped. Ceph Object Storage, Ceph Block Device, and the Ceph Filesystem stripe their data over multiple Ceph Storage Cluster objects. Ceph Clients that write directly to the Ceph storage cluster using librados must perform the striping and parallel I/O for themselves to obtain these benefits. The simplest Ceph striping format involves a stripe count of 1 object. Ceph Clients write stripe units to a Ceph Storage Cluster object until the object is at its maximum capacity, and then create another object for additional stripes of data. The simplest form of striping may be sufficient for small block device images or S3 and Swift objects. However, this simple form doesn't take maximum advantage of Ceph's ability to distribute data across placement groups, and consequently doesn't improve performance very much. The following diagram depicts the simplest form of striping: If you anticipate large image sizes, or large S3 or Swift objects (for example, video), you may see considerable read/write performance improvements by striping client data over multiple objects within an object set. A significant write performance improvement occurs when the client writes the stripe units to their corresponding objects in parallel. Since objects get mapped to different placement groups and further mapped to different OSDs, each write occurs in parallel at the maximum write speed. A write to a single disk would be limited by the head movement (for example, 6 ms per seek) and the bandwidth of that one device (for example, 100 MB/s). By spreading that write over multiple objects, which map to different placement groups and OSDs, Ceph can reduce the number of seeks per drive and combine the throughput of multiple drives to achieve much faster write or read speeds. Note Striping is independent of object replicas. Since CRUSH replicates objects across OSDs, stripes get replicated automatically. In the following diagram, client data gets striped across an object set ( object set 1 in the following diagram) consisting of 4 objects, where the first stripe unit is stripe unit 0 in object 0 , and the fourth stripe unit is stripe unit 3 in object 3 . After writing the fourth stripe, the client determines if the object set is full. If the object set is not full, the client begins writing a stripe to the first object again, see object 0 in the following diagram. If the object set is full, the client creates a new object set, see object set 2 in the following diagram, and begins writing to the first stripe, with a stripe unit of 16, in the first object in the new object set, see object 4 in the diagram below. Three important variables determine how Ceph stripes data: Object Size: Objects in the Ceph Storage Cluster have a maximum configurable size, such as 2 MB, or 4 MB. The object size should be large enough to accommodate many stripe units, and should be a multiple of the stripe unit. Important Red Hat recommends a safe maximum value of 16 MB. Stripe Width: Stripes have a configurable unit size, for example 64 KB. The Ceph Client divides the data it will write to objects into equally sized stripe units, except for the last stripe unit. 
A stripe width should be a fraction of the Object Size so that an object may contain many stripe units. Stripe Count: The Ceph Client writes a sequence of stripe units over a series of objects determined by the stripe count. The series of objects is called an object set. After the Ceph Client writes to the last object in the object set, it returns to the first object in the object set. Important Test the performance of your striping configuration before putting your cluster into production. You CANNOT change these striping parameters after you stripe the data and write it to objects. Once the Ceph Client has striped data to stripe units and mapped the stripe units to objects, Ceph's CRUSH algorithm maps the objects to placement groups, and the placement groups to Ceph OSD Daemons before the objects are stored as files on a storage disk. Note Since a client writes to a single pool, all data striped into objects get mapped to placement groups in the same pool. So they use the same CRUSH map and the same access controls. 3.6. Ceph on-wire encryption You can enable encryption for all Ceph traffic over the network with the messenger version 2 protocol. The secure mode setting for messenger v2 encrypts communication between Ceph daemons and Ceph clients, giving you end-to-end encryption. The second version of Ceph's on-wire protocol, msgr2 , includes several new features: A secure mode encrypting all data moving through the network. Encapsulation improvement of authentication payloads. Improvements to feature advertisement and negotiation. The Ceph daemons bind to multiple ports allowing both the legacy, v1-compatible, and the new, v2-compatible, Ceph clients to connect to the same storage cluster. Ceph clients or other Ceph daemons connecting to the Ceph Monitor daemon will try to use the v2 protocol first, if possible, but if not, then the legacy v1 protocol will be used. By default, both messenger protocols, v1 and v2 , are enabled. The new v2 port is 3300, and the legacy v1 port is 6789, by default. The messenger v2 protocol has two configuration options that control whether the v1 or the v2 protocol is used: ms_bind_msgr1 - This option controls whether a daemon binds to a port speaking the v1 protocol; it is true by default. ms_bind_msgr2 - This option controls whether a daemon binds to a port speaking the v2 protocol; it is true by default. Similarly, two options control based on IPv4 and IPv6 addresses used: ms_bind_ipv4 - This option controls whether a daemon binds to an IPv4 address; it is true by default. ms_bind_ipv6 - This option controls whether a daemon binds to an IPv6 address; it is true by default. The msgr2 protocol supports two connection modes: crc Provides strong initial authentication when a connection is established with cephx . Provides a crc32c integrity check to protect against bit flips. Does not provide protection against a malicious man-in-the-middle attack. Does not prevent an eavesdropper from seeing all post-authentication traffic. secure Provides strong initial authentication when a connection is established with cephx . Provides full encryption of all post-authentication traffic. Provides a cryptographic integrity check. The default mode is crc . Ensure that you consider cluster CPU requirements when you plan the Red Hat Ceph Storage cluster, to include encryption overhead. Important Using secure mode is currently supported by Ceph kernel clients, such as CephFS and krbd on Red Hat Enterprise Linux. 
Using secure mode is supported by Ceph clients using librbd , such as OpenStack Nova, Glance, and Cinder. Address Changes For both versions of the messenger protocol to coexist in the same storage cluster, the address formatting has changed: Old address format was, IP_ADDR : PORT / CLIENT_ID , for example, 1.2.3.4:5678/91011 . New address format is, PROTOCOL_VERSION : IP_ADDR : PORT / CLIENT_ID , for example, v2:1.2.3.4:5678/91011 , where PROTOCOL_VERSION can be either v1 or v2 . Because the Ceph daemons now bind to multiple ports, the daemons display multiple addresses instead of a single address. Here is an example from a dump of the monitor map: Also, the mon_host configuration option and specifying addresses on the command line, using -m , supports the new address format. Connection Phases There are four phases for making an encrypted connection: Banner On connection, both the client and the server send a banner. Currently, the Ceph banner is ceph 0 0n . Authentication Exchange All data, sent or received, is contained in a frame for the duration of the connection. The server decides if authentication has completed, and what the connection mode will be. The frame format is fixed, and can be in three different forms depending on the authentication flags being used. Message Flow Handshake Exchange The peers identify each other and establish a session. The client sends the first message, and the server will reply with the same message. The server can close connections if the client talks to the wrong daemon. For new sessions, the client and server proceed to exchanging messages. Client cookies are used to identify a session, and can reconnect to an existing session. Message Exchange The client and server start exchanging messages, until the connection is closed. Additional Resources See the Red Hat Ceph Storage Data Security and Hardening Guide for details on enabling the msgr2 protocol.
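To make the striping variables described in Section 3.5 concrete, the following is a minimal sketch that creates an image with explicit striping parameters and then verifies them. The pool name mypool, the image name stripedimage, and the chosen values are illustrative assumptions only, not tuning recommendations; confirm the exact option names with rbd help create on your release.

# Create a 100 GB image striped over an object set of 4 objects,
# with a 4 MB object size and a 64 KB (65536 byte) stripe unit.
rbd create mypool/stripedimage --size 102400 --object-size 4M --stripe-unit 65536 --stripe-count 4

# Confirm the striping parameters that were applied to the image.
rbd info mypool/stripedimage

As noted above, the object size should be a multiple of the stripe unit, and the stripe count determines how many objects make up one object set.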
[ "rbd create --size 102400 mypool/myimage --image-feature 5", "rbd -p mypool create myimage --size 102400 --image-features 13", "epoch 1 fsid 50fcf227-be32-4bcb-8b41-34ca8370bd17 last_changed 2021-12-12 11:10:46.700821 created 2021-12-12 11:10:46.700821 min_mon_release 14 (nautilus) 0: [v2:10.0.0.10:3300/0,v1:10.0.0.10:6789/0] mon.a 1: [v2:10.0.0.11:3300/0,v1:10.0.0.11:6789/0] mon.b 2: [v2:10.0.0.12:3300/0,v1:10.0.0.12:6789/0] mon.c" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/architecture_guide/the-ceph-client-components
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code and documentation. We are beginning with these four terms: master, slave, blacklist, and whitelist. Due to the enormity of this endeavor, these changes will be gradually implemented over upcoming releases. For more details on making our language more inclusive, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/upgrading_sap_environments_from_rhel_8_to_rhel_9/conscious-language-message_how-to-in-place-upgrade-sap-environments-from-rhel8-to-rhel9
4.7. SELinux Contexts - Labeling Files
4.7. SELinux Contexts - Labeling Files On systems running SELinux, all processes and files are labeled in a way that represents security-relevant information. This information is called the SELinux context. For files, this is viewed using the ls -Z command: In this example, SELinux provides a user ( unconfined_u ), a role ( object_r ), a type ( user_home_t ), and a level ( s0 ). This information is used to make access control decisions. On DAC systems, access is controlled based on Linux user and group IDs. SELinux policy rules are checked after DAC rules. SELinux policy rules are not used if DAC rules deny access first. Note By default, newly-created files and directories inherit the SELinux type of their parent directories. For example, when creating a new file in the /etc directory that is labeled with the etc_t type, the new file inherits the same type: SELinux provides multiple commands for managing the file system labeling, such as chcon , semanage fcontext , restorecon , and matchpathcon . 4.7.1. Temporary Changes: chcon The chcon command changes the SELinux context for files. However, changes made with the chcon command are not persistent across file-system relabels, or the execution of the restorecon command. SELinux policy controls whether users are able to modify the SELinux context for any given file. When using chcon , users provide all or part of the SELinux context to change. An incorrect file type is a common cause of SELinux denying access. Quick Reference Run the chcon -t type file-name command to change the file type, where type is an SELinux type, such as httpd_sys_content_t , and file-name is a file or directory name: Run the chcon -R -t type directory-name command to change the type of the directory and its contents, where type is an SELinux type, such as httpd_sys_content_t , and directory-name is a directory name: Procedure 4.6. Changing a File's or Directory's Type The following procedure demonstrates changing the type, and no other attributes of the SELinux context. The example in this section works the same for directories, for example, if file1 was a directory. Change into your home directory. Create a new file and view its SELinux context: In this example, the SELinux context for file1 includes the SELinux unconfined_u user, object_r role, user_home_t type, and the s0 level. For a description of each part of the SELinux context, see Chapter 2, SELinux Contexts . Enter the following command to change the type to samba_share_t . The -t option only changes the type. Then view the change: Use the following command to restore the SELinux context for the file1 file. Use the -v option to view what changes: In this example, the type, samba_share_t , is restored to the correct, user_home_t type. When using targeted policy (the default SELinux policy in Red Hat Enterprise Linux), the restorecon command reads the files in the /etc/selinux/targeted/contexts/files/ directory, to see which SELinux context files should have. Procedure 4.7. Changing a Directory and its Contents Types The following example demonstrates creating a new directory, and changing the directory's file type along with its contents to a type used by the Apache HTTP Server. The configuration in this example is used if you want Apache HTTP Server to use a different document root (instead of /var/www/html/ ): As the root user, create a new web/ directory and then 3 empty files ( file1 , file2 , and file3 ) within this directory. 
The web/ directory and files in it are labeled with the default_t type: As root, enter the following command to change the type of the web/ directory (and its contents) to httpd_sys_content_t : To restore the default SELinux contexts, use the restorecon utility as root: See the chcon (1) manual page for further information about chcon . Note Type Enforcement is the main permission control used in SELinux targeted policy. For the most part, SELinux users and roles can be ignored. 4.7.2. Persistent Changes: semanage fcontext The semanage fcontext command is used to change the SELinux context of files. To show contexts to newly created files and directories, enter the following command as root: Changes made by semanage fcontext are used by the following utilities. The setfiles utility is used when a file system is relabeled and the restorecon utility restores the default SELinux contexts. This means that changes made by semanage fcontext are persistent, even if the file system is relabeled. SELinux policy controls whether users are able to modify the SELinux context for any given file. Quick Reference To make SELinux context changes that survive a file system relabel: Enter the following command, remembering to use the full path to the file or directory: Use the restorecon utility to apply the context changes: Use of regular expressions with semanage fcontext For the semanage fcontext command to work correctly, you can use either a fully qualified path or Perl-compatible regular expressions ( PCRE ) . The only PCRE flag in use is PCRE2_DOTALL , which causes the . wildcard to match anything, including a new line. Strings representing paths are processed as bytes, meaning that non-ASCII characters are not matched by a single wildcard. Note that file-context definitions specified using semanage fcontext are evaluated in reverse order to how they were defined: the latest entry is evaluated first regardless of the stem length. Local file context modifications stored in file_contexts.local have a higher priority than those specified in policy modules. This means that whenever a match for a given file path is found in file_contexts.local , no other file-context definitions are considered. Important File-context definitions specified using the semanage fcontext command effectively override all other file-context definitions. All regular expressions should therefore be as specific as possible to avoid unintentionally impacting other parts of the file system. For more information on a type of regular expression used in file-context definitions and flags in effect, see the semanage-fcontext(8) man page. Procedure 4.8. Changing a File's or Directory 's Type The following example demonstrates changing a file's type, and no other attributes of the SELinux context. This example works the same for directories, for instance if file1 was a directory. As the root user, create a new file in the /etc directory. By default, newly-created files in /etc are labeled with the etc_t type: To list information about a directory, use the following command: As root, enter the following command to change the file1 type to samba_share_t . The -a option adds a new record, and the -t option defines a type ( samba_share_t ). Note that running this command does not directly change the type; file1 is still labeled with the etc_t type: As root, use the restorecon utility to change the type. Because semanage added an entry to file_contexts.local for /etc/file1 , restorecon changes the type to samba_share_t : Procedure 4.9. 
Changing a Directory and its Contents Types The following example demonstrates creating a new directory, and changing the directory's file type along with its contents to a type used by Apache HTTP Server. The configuration in this example is used if you want Apache HTTP Server to use a different document root instead of /var/www/html/ : As the root user, create a new web/ directory and then 3 empty files ( file1 , file2 , and file3 ) within this directory. The web/ directory and files in it are labeled with the default_t type: As root, enter the following command to change the type of the web/ directory and the files in it, to httpd_sys_content_t . The -a option adds a new record, and the -t option defines a type ( httpd_sys_content_t ). The "/web(/.*)?" regular expression causes semanage to apply changes to web/ , as well as the files in it. Note that running this command does not directly change the type; web/ and files in it are still labeled with the default_t type: The semanage fcontext -a -t httpd_sys_content_t "/web(/.*)?" command adds the following entry to /etc/selinux/targeted/contexts/files/file_contexts.local : As root, use the restorecon utility to change the type of web/ , as well as all files in it. The -R is for recursive, which means all files and directories under web/ are labeled with the httpd_sys_content_t type. Since semanage added an entry to file.contexts.local for /web(/.*)? , restorecon changes the types to httpd_sys_content_t : Note that by default, newly-created files and directories inherit the SELinux type of their parent directories. Procedure 4.10. Deleting an added Context The following example demonstrates adding and removing an SELinux context. If the context is part of a regular expression, for example, /web(/.*)? , use quotation marks around the regular expression: To remove the context, as root, enter the following command, where file-name | directory-name is the first part in file_contexts.local : The following is an example of a context in file_contexts.local : With the first part being test . To prevent the test/ directory from being labeled with the httpd_sys_content_t after running restorecon , or after a file system relabel, enter the following command as root to delete the context from file_contexts.local : As root, use the restorecon utility to restore the default SELinux context. For further information about semanage , see the semanage (8) and semanage-fcontext (8) manual pages. Important When changing the SELinux context with semanage fcontext -a , use the full path to the file or directory to avoid files being mislabeled after a file system relabel, or after the restorecon command is run. 4.7.3. How File Context is Determined Determining file context is based on file-context definitions, which are specified in the system security policy (the .fc files). Based on the system policy, semanage generates file_contexts.homedirs and file_contexts files. System administrators can customize file-context definitions using the semanage fcontext command. Such customizations are stored in the file_contexts.local file. When a labeling utility, such as matchpathcon or restorecon , is determining the proper label for a given path, it searches for local changes first ( file_contexts.local ). If the utility does not find a matching pattern, it searches the file_contexts.homedirs file and finally the file_contexts file. 
However, whenever a match for a given file path is found, the search ends and the utility does not look for any additional file-context definitions. This means that home directory-related file contexts have higher priority than the rest, and local customizations override the system policy. File-context definitions specified by system policy (the contents of the file_contexts.homedirs and file_contexts files) are sorted by the length of the stem (the prefix of the path before any wildcard) before evaluation. This means that the most specific path is chosen. However, file-context definitions specified using semanage fcontext are evaluated in reverse order to how they were defined: the latest entry is evaluated first regardless of the stem length. For more information on: changing the context of a file by using chcon , see Section 4.7.1, "Temporary Changes: chcon" . changing and adding a file-context definition by using semanage fcontext , see Section 4.7.2, "Persistent Changes: semanage fcontext" . changing and adding a file-context definition through a system-policy operation, see Section 4.10, "Maintaining SELinux Labels" or Section 4.12, "Prioritizing and Disabling SELinux Policy Modules" .
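The following short sketch shows the lookup order described above from the command line, using the /web example from the earlier procedures. It assumes the matchpathcon and semanage utilities are installed (typically provided by the libselinux-utils and policycoreutils-python packages).

# Show the context that the compiled file-context definitions assign to a path.
matchpathcon /web/file1

# List only the local customizations stored in file_contexts.local.
semanage fcontext -C -l

# Search all file-context definitions, system policy and local, for the /web prefix.
semanage fcontext -l | grep '^/web'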
[ "~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1", "~]USD ls -dZ - /etc drwxr-xr-x. root root system_u:object_r: etc_t :s0 /etc", "~]# touch /etc/file1", "~]# ls -lZ /etc/file1 -rw-r--r--. root root unconfined_u:object_r: etc_t :s0 /etc/file1", "~]USD chcon -t httpd_sys_content_t file-name", "~]USD chcon -R -t httpd_sys_content_t directory-name", "~]USD touch file1", "~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1", "~]USD chcon -t samba_share_t file1", "~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:samba_share_t:s0 file1", "~]USD restorecon -v file1 restorecon reset file1 context unconfined_u:object_r:samba_share_t:s0->system_u:object_r:user_home_t:s0", "~]# mkdir /web", "~]# touch /web/file{1,2,3}", "~]# ls -dZ /web drwxr-xr-x root root unconfined_u:object_r:default_t:s0 /web", "~]# ls -lZ /web -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file3", "~]# chcon -R -t httpd_sys_content_t /web/", "~]# ls -dZ /web/ drwxr-xr-x root root unconfined_u:object_r:httpd_sys_content_t:s0 /web/", "~]# ls -lZ /web/ -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file3", "~]# restorecon -R -v /web/ restorecon reset /web context unconfined_u:object_r:httpd_sys_content_t:s0->system_u:object_r:default_t:s0 restorecon reset /web/file2 context unconfined_u:object_r:httpd_sys_content_t:s0->system_u:object_r:default_t:s0 restorecon reset /web/file3 context unconfined_u:object_r:httpd_sys_content_t:s0->system_u:object_r:default_t:s0 restorecon reset /web/file1 context unconfined_u:object_r:httpd_sys_content_t:s0->system_u:object_r:default_t:s0", "~]# semanage fcontext -C -l", "~]# semanage fcontext -a options file-name | directory-name", "~]# restorecon -v file-name | directory-name", "~]# touch /etc/file1", "~]USD ls -Z /etc/file1 -rw-r--r-- root root unconfined_u:object_r:etc_t:s0 /etc/file1", "~]USD ls -dZ directory_name", "~]# semanage fcontext -a -t samba_share_t /etc/file1", "~]# ls -Z /etc/file1 -rw-r--r-- root root unconfined_u:object_r:etc_t:s0 /etc/file1", "~]USD semanage fcontext -C -l /etc/file1 unconfined_u:object_r:samba_share_t:s0", "~]# restorecon -v /etc/file1 restorecon reset /etc/file1 context unconfined_u:object_r:etc_t:s0->system_u:object_r:samba_share_t:s0", "~]# mkdir /web", "~]# touch /web/file{1,2,3}", "~]# ls -dZ /web drwxr-xr-x root root unconfined_u:object_r:default_t:s0 /web", "~]# ls -lZ /web -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file3", "~]# semanage fcontext -a -t httpd_sys_content_t \"/web(/.*)?\"", "~]USD ls -dZ /web drwxr-xr-x root root unconfined_u:object_r:default_t:s0 /web", "~]USD ls -lZ /web -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file3", "/web(/.*)? 
system_u:object_r:httpd_sys_content_t:s0", "~]# restorecon -R -v /web restorecon reset /web context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /web/file2 context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /web/file3 context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /web/file1 context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0", "~]# semanage fcontext -d \"/web(/.*)?\"", "~]# semanage fcontext -d file-name | directory-name", "/test system_u:object_r:httpd_sys_content_t:s0", "~]# semanage fcontext -d /test" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-Security-Enhanced_Linux-Working_with_SELinux-SELinux_Contexts_Labeling_Files
4.214. pam
4.214. pam 4.214.1. RHEA-2011:1732 - pam enhancement update Updated pam packages that add one enhancement are now available for Red Hat Enterprise Linux. Pluggable Authentication Modules (PAM) provide a system for administrators to set up authentication policies without the need to recompile programs to handle authentication. Enhancement BZ# 727286 With this update, the libraries are recompiled with the partial read only relocation (RELRO) flag to enhance the security of applications that use the libraries. All pam users are advised to upgrade to these updated packages, which add this enhancement. 4.214.2. RHEA-2012:0482 - pam enhancement update Updated pam packages that add one enhancement are now available for Red Hat Enterprise Linux 6. Pluggable Authentication Modules (PAM) provide a system to set up authentication policies without the need to recompile programs to handle authentication. Enhancement BZ# 809370 The pam_cracklib module is a PAM module for password-quality checking used by various applications. With this update, the pam_cracklib module has been improved with additional password-quality checks. The pam_cracklib module now allows you to check whether a new password contains words from the GECOS field of entries in the "/etc/passwd" file. The GECOS field stores additional information about the user, such as the user's full name or a phone number, which an attacker could use in an attempt to crack the password. The pam_cracklib module now also allows you to specify the maximum allowed number of consecutive characters of the same class (lowercase, uppercase, number, and special characters) in a password. All users of pam are advised to upgrade to these updated packages, which add this enhancement.
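The following is a minimal sketch of how these checks might be enabled. The file path and option values are examples only; the option names gecoscheck and maxclassrepeat correspond to these checks in the pam_cracklib(8) manual page, so confirm them against the version installed on your system before editing any PAM configuration.

# Review the current pam_cracklib settings.
grep pam_cracklib /etc/pam.d/system-auth

# Illustrative /etc/pam.d/system-auth entry that rejects passwords containing GECOS
# words and allows at most two consecutive characters of the same class:
#   password    requisite     pam_cracklib.so try_first_pass retry=3 gecoscheck maxclassrepeat=2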
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/pam
Chapter 8. KafkaListenerAuthenticationScramSha512 schema reference
Chapter 8. KafkaListenerAuthenticationScramSha512 schema reference Used in: GenericKafkaListener The type property is a discriminator that distinguishes use of the KafkaListenerAuthenticationScramSha512 type from KafkaListenerAuthenticationTls , KafkaListenerAuthenticationOAuth , and KafkaListenerAuthenticationCustom . It must have the value scram-sha-512 for the type KafkaListenerAuthenticationScramSha512 . Property Property type Description type string Must be scram-sha-512 .
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaListenerAuthenticationScramSha512-reference
probe::nfsd.proc.rename
probe::nfsd.proc.rename Name probe::nfsd.proc.rename - NFS Server renaming a file for client Synopsis nfsd.proc.rename Values uid requester's user id tfh file handler of new path tname new file name filename old file name client_ip the ip address of client flen length of old file name gid requester's group id fh file handler of old path tlen length of new file name
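The following one-line sketch shows how this probe might be used from the command line. It assumes the systemtap package and matching kernel debuginfo are installed, and that filename and tname are string values while uid is numeric, as the value descriptions above suggest.

# Print every rename handled by the NFS server until interrupted.
stap -e 'probe nfsd.proc.rename { printf("uid %d renamed %s to %s\n", uid, filename, tname) }'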
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfsd-proc-rename
Preface
Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). Eclipse Temurin is available in four LTS versions: OpenJDK 8u, OpenJDK 11u, OpenJDK 17u, and OpenJDK 21u. Binary files for Eclipse Temurin are available for macOS, Microsoft Windows, and multiple Linux x86 Operating Systems including Red Hat Enterprise Linux and Ubuntu.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.24/pr01
5.5. Logging Sample Configurations
5.5. Logging Sample Configurations 5.5.1. Logging Sample Configuration Location All of the sample configurations presented in this section should be placed inside the server's configuration file, typically either standalone.xml or clustered.xml . Report a bug 5.5.2. Sample XML Configuration for the Root Logger The following procedure demonstrates a sample configuration for the root logger. Procedure 5.1. Configure the Root Logger Set the level Property The level property sets the maximum level of log message that the root logger records. List handlers handlers is a list of log handlers that are used by the root logger. Report a bug 5.5.3. Sample XML Configuration for a Log Category The following procedure demonstrates a sample configuration for a log category. Procedure 5.2. Configure a Log Category Use the category property to specify the log category from which log messages will be captured. The use-parent-handlers is set to "true" by default. When set to "true" , this category will use the log handlers of the root logger in addition to any other assigned handlers. Use the level property to set the maximum level of log message that the log category records. The handlers element contains a list of log handlers. Report a bug 5.5.4. Sample XML Configuration for a Console Log Handler The following procedure demonstrates a sample configuration for a console log handler. Procedure 5.3. Configure the Console Log Handler Add the Log Handler Identifier Information The name property sets the unique identifier for this log handler. When autoflush is set to "true" the log messages will be sent to the handler's target immediately upon request. Set the level Property The level property sets the maximum level of log messages recorded. Set the encoding Output Use encoding to set the character encoding scheme to be used for the output. Define the target Value The target property defines the system output stream where the output of the log handler goes. This can be System.err for the system error stream, or System.out for the standard out stream. Define the filter-spec Property The filter-spec property is an expression value that defines a filter. The example provided defines a filter that does not match a pattern: not(match("JBAS.*")) . Specify the formatter Use formatter to list the log formatter used by the log handler. Report a bug 5.5.5. Sample XML Configuration for a File Log Handler The following procedure demonstrates a sample configuration for a file log handler. Procedure 5.4. Configure the File Log Handler Add the File Log Handler Identifier Information The name property sets the unique identifier for this log handler. When autoflush is set to "true" the log messages will be sent to the handler's target immediately upon request. Set the level Property The level property sets the maximum level of log message that the root logger records. Set the encoding Output Use encoding to set the character encoding scheme to be used for the output. Set the file Object The file object represents the file where the output of this log handler is written to. It has two configuration properties: relative-to and path . The relative-to property is the directory where the log file is written to. JBoss Enterprise Application Platform 6 file path variables can be specified here. The jboss.server.log.dir variable points to the log/ directory of the server. The path property is the name of the file where the log messages will be written. 
It is a relative path name that is appended to the value of the relative-to property to determine the complete path. Specify the formatter Use formatter to list the log formatter used by the log handler. Set the append Property When the append property is set to "true" , all messages written by this handler will be appended to an existing file. If set to "false" a new file will be created each time the application server launches. Changes to append require a server reboot to take effect. Report a bug 5.5.6. Sample XML Configuration for a Periodic Log Handler The following procedure demonstrates a sample configuration for a periodic log handler. Procedure 5.5. Configure the Periodic Log Handler Add the Periodic Log Handler Identifier Information The name property sets the unique identifier for this log handler. When autoflush is set to "true" the log messages will be sent to the handler's target immediately upon request. Set the level Property The level property sets the maximum level of log message that the root logger records. Set the encoding Output Use encoding to set the character encoding scheme to be used for the output. Specify the formatter Use formatter to list the log formatter used by the log handler. Set the file Object The file object represents the file where the output of this log handler is written to. It has two configuration properties: relative-to and path . The relative-to property is the directory where the log file is written to. JBoss Enterprise Application Platform 6 file path variables can be specified here. The jboss.server.log.dir variable points to the log/ directory of the server. The path property is the name of the file where the log messages will be written. It is a relative path name that is appended to the value of the relative-to property to determine the complete path. Set the suffix Value The suffix is appended to the filename of the rotated logs and is used to determine the frequency of rotation. The format of the suffix is a dot (.) followed by a date string, which is parsable by the java.text.SimpleDateFormat class. The log is rotated on the basis of the smallest time unit defined by the suffix . For example, yyyy-MM-dd will result in daily log rotation. See http://docs.oracle.com/javase/6/docs/api/index.html?java/text/SimpleDateFormat.html Set the append Property When the append property is set to "true" , all messages written by this handler will be appended to an existing file. If set to "false" a new file will be created each time the application server launches. Changes to append require a server reboot to take effect. Report a bug 5.5.7. Sample XML Configuration for a Size Log Handler The following procedure demonstrates a sample configuration for a size log handler. Procedure 5.6. Configure the Size Log Handler Add the Size Log Handler Identifier Information The name property sets the unique identifier for this log handler. When autoflush is set to "true" the log messages will be sent to the handler's target immediately upon request. Set the level Property The level property sets the maximum level of log message that the root logger records. Set the encoding Output Use encoding to set the character encoding scheme to be used for the output. Set the file Object The file object represents the file where the output of this log handler is written to. It has two configuration properties: relative-to and path . The relative-to property is the directory where the log file is written to. 
JBoss Enterprise Application Platform 6 file path variables can be specified here. The jboss.server.log.dir variable points to the log/ directory of the server. The path property is the name of the file where the log messages will be written. It is a relative path name that is appended to the value of the relative-to property to determine the complete path. Specify the rotate-size Value The maximum size that the log file can reach before it is rotated. A single character appended to the number indicates the size units: b for bytes, k for kilobytes, m for megabytes, g for gigabytes. For example: 50m for 50 megabytes. Set the max-backup-index Number The maximum number of rotated logs that are kept. When this number is reached, the oldest log is reused. Specify the formatter Use formatter to list the log formatter used by the log handler. Set the append Property When the append property is set to "true" , all messages written by this handler will be appended to an existing file. If set to "false" , a new file will be created each time the application server launches. Changes to append require a server reboot to take effect. Report a bug 5.5.8. Sample XML Configuration for an Async Log Handler The following procedure demonstrates a sample configuration for an async log handler. Procedure 5.7. Configure the Async Log Handler The name property sets the unique identifier for this log handler. The level property sets the maximum level of log message that the root logger records. The queue-length defines the maximum number of log messages that will be held by this handler while waiting for sub-handlers to respond. The overflow-action defines how this handler responds when its queue length is exceeded. This can be set to BLOCK or DISCARD . BLOCK makes the logging application wait until there is available space in the queue. This is the same behavior as a non-async log handler. DISCARD allows the logging application to continue but the log message is deleted. The subhandlers list is the list of log handlers to which this async handler passes its log messages. Report a bug
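Besides editing standalone.xml or clustered.xml directly, the same attributes can usually be changed through the management CLI. The following is a sketch only; it assumes a running standalone server and reuses the handler names from the samples above, so adjust the resource addresses and values for your own configuration.

# Connect to the running server.
EAP_HOME/bin/jboss-cli.sh --connect

# At the CLI prompt, change the root logger level and a console handler level.
/subsystem=logging/root-logger=ROOT:write-attribute(name=level,value=DEBUG)
/subsystem=logging/console-handler=CONSOLE:write-attribute(name=level,value=TRACE)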
[ "<subsystem xmlns=\"urn:jboss:domain:logging:1.4\"> <root-logger> <level name=\"INFO\"/>", "<subsystem xmlns=\"urn:jboss:domain:logging:1.4\"> <root-logger> <level name=\"INFO\"/> <handlers> <handler name=\"CONSOLE\"/> <handler name=\"FILE\"/> </handlers> </root-logger> </subsystem>", "<subsystem xmlns=\"urn:jboss:domain:logging:1.4\"> <logger category=\"com.company.accounts.rec\" use-parent-handlers=\"true\"> <level name=\"WARN\"/> <handlers> <handler name=\"accounts-rec\"/> </handlers> </logger> </subsystem>", "<subsystem xmlns=\"urn:jboss:domain:logging:1.4\"> <console-handler name=\"CONSOLE\" autoflush=\"true\"> <level name=\"INFO\"/> <encoding value=\"UTF-8\"/> <target value=\"System.out\"/> <filter-spec value=\"not(match(&quot;JBAS.*&quot;))\"/> <formatter> <pattern-formatter pattern=\"%K{level}%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n\"/> </formatter> </console-handler> </subsystem>", "<file-handler name=\"accounts-rec-trail\" autoflush=\"true\"> <level name=\"INFO\"/> <encoding value=\"UTF-8\"/> <file relative-to=\"jboss.server.log.dir\" path=\"accounts-rec-trail.log\"/> <formatter> <pattern-formatter pattern=\"%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n\"/> </formatter> <append value=\"true\"/> </file-handler>", "<periodic-rotating-file-handler name=\"FILE\" autoflush=\"true\"> <level name=\"INFO\"/> <encoding value=\"UTF-8\"/> <formatter> <pattern-formatter pattern=\"%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n\"/> </formatter> <file relative-to=\"jboss.server.log.dir\" path=\"server.log\"/> <suffix value=\".yyyy-MM-dd\"/> <append value=\"true\"/> </periodic-rotating-file-handler>", "<size-rotating-file-handler name=\"accounts_debug\" autoflush=\"false\"> <level name=\"DEBUG\"/> <encoding value=\"UTF-8\"/> <file relative-to=\"jboss.server.log.dir\" path=\"accounts-debug.log\"/> <rotate-size value=\"500k\"/> <max-backup-index value=\"5\"/> <formatter> <pattern-formatter pattern=\"%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n\"/> </formatter> <append value=\"true\"/> </size-rotating-file-handler>", "<async-handler name=\"Async_NFS_handlers\"> <level name=\"INFO\"/> <queue-length value=\"512\"/> <overflow-action value=\"block\"/> <subhandlers> <handler name=\"FILE\"/> <handler name=\"accounts-record\"/> </subhandlers> </async-handler>" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-logging_sample_configurations
26.2. Authentication
26.2. Authentication The Authentication tab allows for the configuration of network authentication methods. To enable an option, click the empty checkbox beside it. To disable an option, click the checkbox beside it to clear the checkbox. Figure 26.2. Authentication The following explains what each option configures: Enable Kerberos Support - Select this option to enable Kerberos authentication. Click the Configure Kerberos button to configure: Realm - Configure the realm for the Kerberos server. The realm is the network that uses Kerberos, composed of one or more KDCs and a potentially large number of clients. KDC - Define the Key Distribution Center (KDC), which is the server that issues Kerberos tickets. Admin Servers - Specify the administration server(s) running kadmind . The krb5-libs and krb5-workstation packages must be installed for this option to work. Refer to the Reference Guide for more information on Kerberos. Enable LDAP Support - Select this option to have standard PAM-enabled applications use LDAP for authentication. Click the Configure LDAP button to specify the following: Use TLS to encrypt connections - Use Transport Layer Security to encrypt passwords sent to the LDAP server. LDAP Search Base DN - Retrieve user information by its Distinguished Name (DN). LDAP Server - Specify the IP address of the LDAP server. The openldap-clients package must be installed for this option to work. Refer to the Reference Guide for more information about LDAP. Use Shadow Passwords - Select this option to store passwords in shadow password format in the /etc/shadow file instead of /etc/passwd . Shadow passwords are enabled by default during installation and are highly recommended to increase the security of the system. The shadow-utils package must be installed for this option to work. For more information about shadow passwords, refer to the Users and Groups chapter in the Reference Guide . Enable SMB Support - This option configures PAM to use an SMB server to authenticate users. Click the Configure SMB button to specify: Workgroup - Specify the SMB workgroup to use. Domain Controllers - Specify the SMB domain controllers to use. Winbind - Select this option to configure the system to connect to a Windows Active Directory or a Windows domain controller. User information can be accessed, as well as server authentication options can be configured. Use MD5 Passwords - Select this option to enable MD5 passwords, which allows passwords to be up to 256 characters instead of eight characters or less. It is selected by default during installation and is highly recommended for increased security.
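The same settings can be applied from the command line with the authconfig utility that backs this tool. The following is a hedged sketch; all realm, server, and base DN values are placeholders, and you should confirm the option names with authconfig --help on your release.

# Verify that the packages required by the options above are installed.
rpm -q krb5-libs krb5-workstation openldap-clients shadow-utils

# Example command combining the Kerberos, LDAP, shadow, and MD5 options (placeholder values).
authconfig --enablekrb5 --krb5realm=EXAMPLE.COM --krb5kdc=kdc.example.com:88 \
    --krb5adminserver=kdc.example.com:749 --enableldap --enableldaptls \
    --ldapserver=ldap.example.com --ldapbasedn="dc=example,dc=com" \
    --enableshadow --enablemd5 --update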
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Authentication_Configuration-Authentication
Preface
Preface Use the Troubleshooting Ansible Automation Platform guide to troubleshoot your Ansible Automation Platform installation.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/troubleshooting_ansible_automation_platform/pr01
Block Device Guide
Block Device Guide Red Hat Ceph Storage 6 Managing, creating, configuring, and using Red Hat Ceph Storage Block Devices Red Hat Ceph Storage Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/block_device_guide/index
15.3. Importing an Intermediate Certificate Chain
15.3. Importing an Intermediate Certificate Chain Before beginning, change directories into the NSS DB: cd /path/to/nssdb Ensure that your web service is offline (stopped, disabled, and so on) while performing these steps, and ensure there is no concurrent access to the NSS DB by other processes (such as a browser). Failure to do so may corrupt the NSS DB or result in improper usage of these certificates. If you have not imported and trusted the root certificate, see Section 15.2, "Importing a Root Certificate" . When given a series of intermediate certificates between your root and end server or client certificates, you need to import and validate the signed certificate chain in order from closest to furthest from the root CA certificate. We assume the Intermediate CAs are in files named ca_sub_<num>.crt (for example ca_sub_1.crt , ca_sub_2.crt , and so on). Substitute names and paths for your certificates as appropriate to your deployment. Note In the unlikely scenario that you are instead given a single file named fullchain.crt , fullchain.pem , or similar and it contains multiple certificates, split it into the above format by copying each block (between and including the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- markers) to its own file. The first ones should be named ca_sub_<num>.crt and the last will be your server certificate, named service.crt . Server certificates are discussed in later sections. First, we will import and validate any intermediate CAs in order of closest to furthest from the root CA certificate. If you don't have any, you can skip to the next section. For more information about the certutil and PKICertImport options used below, see Section 15.1, "About certutil and PKICertImport " . For every intermediate certificate in the chain: Execute PKICertImport -d . -n "CA Sub $num" -t "CT,C,C" -a -i ca_sub_$num.crt -u L This command validates and imports the Intermediate CA certificate into your NSS DB. The validation succeeds when no error message is printed and the return code is 0. To check the return code, execute echo $? immediately after executing the command above. In most cases, a visual error message is printed. If the validation does not succeed, contact the issuer and ensure that all intermediate and root certificates are present on your system.
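The following sketch shows one way to script the steps above for a chain with two intermediate CAs. The csplit invocation and the number of intermediates are assumptions for illustration; rename the split files to match the ca_sub_<num>.crt and service.crt convention described above before importing.

# Split a combined fullchain.crt into one file per certificate (cert-00, cert-01, ...).
csplit -z -f cert- fullchain.crt '/-----BEGIN CERTIFICATE-----/' '{*}'

# Import and validate each intermediate CA in order, stopping on the first failure.
for num in 1 2; do
    PKICertImport -d . -n "CA Sub $num" -t "CT,C,C" -a -i ca_sub_$num.crt -u L || break
done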
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/importing_intermediate_certificate_chain
Chapter 4. Upgrading Red Hat build of Keycloak adapters
Chapter 4. Upgrading Red Hat build of Keycloak adapters After you upgrade the Red Hat build of Keycloak server, you can upgrade the adapters. Earlier versions of the adapter might work with later versions of the Red Hat build of Keycloak server, but earlier versions of the Red Hat build of Keycloak server might not work with later versions of the adapter. 4.1. Compatibility with older adapters Newer versions of the Red Hat build of Keycloak server potentially work with older versions of the adapters. However, some fixes of the Red Hat build of Keycloak server may break compatibility with older versions of the adapters. For example, a new implementation of the OpenID Connect specification may not match older client adapter versions. For this situation, you can use Compatibility modes. For OpenID Connect clients, the Admin Console includes OpenID Connect Compatibility Modes on the page with client details. With this option, you can disable some new aspects of the Red Hat build of Keycloak server to preserve compatibility with older client adapters. For more details, see the tool tips of individual switches. 4.2. Upgrading the EAP adapter To upgrade the JBoss EAP adapter, complete the following steps: Procedure Download the new adapter archive. Remove the adapter modules by deleting the EAP_HOME/modules/system/add-ons/keycloak/ directory. Unzip the downloaded archive into EAP_HOME . 4.3. Upgrading the JavaScript adapter To upgrade a JavaScript adapter that has been copied to your web application, perform the following procedure. Procedure Download the new adapter archive. Overwrite the keycloak.js file in your application with the keycloak.js file from the downloaded archive. 4.4. Upgrading the Node.js adapter To upgrade a Node.js adapter that has been copied to your web application, perform the following procedure. Procedure Download the new adapter archive. Remove the existing Node.js adapter directory Unzip the updated file into its place Change the dependency for keycloak-connect in the package.json of your application
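The following is a sketch of the EAP and Node.js steps as shell commands. The archive file name is a placeholder for the adapter archive you downloaded, and EAP_HOME stands for your JBoss EAP installation directory.

# EAP adapter: remove the old adapter modules and unzip the new archive into EAP_HOME.
rm -rf "$EAP_HOME/modules/system/add-ons/keycloak/"
unzip -o keycloak-adapter.zip -d "$EAP_HOME"

# Node.js adapter: update the keycloak-connect dependency recorded in package.json.
npm install keycloak-connect@latest --save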
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/upgrading_guide/upgrading_red_hat_build_of_keycloak_adapters
8.13. Software Selection
8.13. Software Selection To specify which packages will be installed, select Software Selection at the Installation Summary screen. The package groups are organized into Base Environments . These environments are pre-defined sets of packages with a specific purpose; for example, the Virtualization Host environment contains a set of software packages needed for running virtual machines on the system. Only one software environment can be selected at installation time. For each environment, there are additional packages available in the form of Add-ons . Add-ons are presented in the right part of the screen and the list of them is refreshed when a new environment is selected. You can select multiple add-ons for your installation environment. A horizontal line separates the list of add-ons into two areas: Add-ons listed above the horizontal line are specific to the environment you selected. If you select any add-ons in this part of the list and then select a different environment, your selection will be lost. Add-ons listed below the horizontal line are available for all environments. Selecting a different environment will not impact the selections made in this part of the list. Figure 8.15. Example of a Software Selection for a Server Installation The availability of base environments and add-ons depends on the variant of the installation ISO image which you are using as the installation source. For example, the server variant provides environments designed for servers, while the workstation variant has several choices for deployment as a developer workstation, and so on. The installation program does not show which packages are contained in the available environments. To see which packages are contained in a specific environment or add-on, see the repodata/*-comps- variant . architecture .xml file on the Red Hat Enterprise Linux Installation DVD which you are using as the installation source. This file contains a structure describing available environments (marked by the <environment> tag) and add-ons (the <group> tag). Important The pre-defined environments and add-ons allow you to customize your system, but in a manual installation, there is no way to select individual packages to install. If you are not sure what package should be installed, Red Hat recommends you to select the Minimal Install environment. Minimal install only installs a basic version of Red Hat Enterprise Linux with only a minimal amount of additional software. This will substantially reduce the chance of the system being affected by a vulnerability. After the system finishes installing and you log in for the first time, you can use the Yum package manager to install any additional software you need. For more details on Minimal install , see the Installing the Minimum Amount of Packages Required section of the Red Hat Enterprise Linux 7 Security Guide. Alternatively, automating the installation with a Kickstart file allows for a much higher degree of control over installed packages. You can specify environments, groups and individual packages in the %packages section of the Kickstart file. See Section 27.3.2, "Package Selection" for instructions on selecting packages to install in a Kickstart file, and Chapter 27, Kickstart Installations for general information about automating the installation with Kickstart. Once you have selected an environment and add-ons to be installed, click Done to return to the Installation Summary screen. 8.13.1. 
Core Network Services All Red Hat Enterprise Linux installations include the following network services: centralized logging through the rsyslog service email through SMTP (Simple Mail Transfer Protocol) network file sharing through NFS (Network File System) remote access through SSH (Secure SHell) resource advertising through mDNS (multicast DNS) Some automated processes on your Red Hat Enterprise Linux system use the email service to send reports and messages to the system administrator. By default, the email, logging, and printing services do not accept connections from other systems. You can configure your Red Hat Enterprise Linux system after installation to offer email, file sharing, logging, printing, and remote desktop access services. The SSH service is enabled by default. You can also use NFS to access files on other systems without enabling the NFS sharing service.
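For example, after the installation you can use yum to review and install the same environments and add-on groups from the command line. The group name shown is only an example, and availability depends on the repositories enabled on your system.

# List the environment groups and add-on groups known to the enabled repositories.
yum group list

# Install an add-on group that was not selected during installation (example name).
yum group install "Development Tools"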
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-package-selection-x86
Chapter 2. Architecture
Chapter 2. Architecture 2.1. OLM v1 components overview Operator Lifecycle Manager (OLM) v1 comprises the following component projects: Operator Controller Operator Controller is the central component of OLM v1 that extends Kubernetes with an API through which users can install and manage the lifecycle of Operators and extensions. It consumes information from catalogd. Catalogd Catalogd is a Kubernetes extension that unpacks file-based catalog (FBC) content packaged and shipped in container images for consumption by on-cluster clients. As a component of the OLM v1 microservices architecture, catalogd hosts metadata for Kubernetes extensions packaged by the authors of the extensions, and as a result helps users discover installable content. 2.2. Operator Controller Operator Controller is the central component of Operator Lifecycle Manager (OLM) v1 and consumes the other OLM v1 component, catalogd. It extends Kubernetes with an API through which users can install Operators and extensions. 2.2.1. ClusterExtension API Operator Controller provides a new ClusterExtension API object that is a single resource representing an instance of an installed extension, which includes Operators via the registry+v1 bundle format. This clusterextension.olm.operatorframework.io API streamlines management of installed extensions by consolidating user-facing APIs into a single object. Important In OLM v1, ClusterExtension objects are cluster-scoped. This differs from OLM (Classic) where Operators could be either namespace-scoped or cluster-scoped, depending on the configuration of their related Subscription and OperatorGroup objects. For more information about the earlier behavior, see Multitenancy and Operator colocation . Example ClusterExtension object apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: name: <extension_name> spec: namespace: <namespace_name> serviceAccount: name: <service_account_name> source: sourceType: Catalog catalog: packageName: <package_name> channels: - <channel> version: "<version>" Additional resources Operator Lifecycle Manager (OLM) Multitenancy and Operator colocation 2.2.1.1. Example custom resources (CRs) that specify a target version In Operator Lifecycle Manager (OLM) v1, cluster administrators can declaratively set the target version of an Operator or extension in the custom resource (CR). You can define a target version by specifying any of the following fields: Channel Version number Version range If you specify a channel in the CR, OLM v1 installs the latest version of the Operator or extension that can be resolved within the specified channel. When updates are published to the specified channel, OLM v1 automatically updates to the latest release that can be resolved from the channel. Example CR with a specified channel apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: name: <clusterextension_name> spec: namespace: <installed_namespace> serviceAccount: name: <service_account_installer_name> source: sourceType: Catalog catalog: packageName: <package_name> channels: - latest 1 1 Optional: Installs the latest release that can be resolved from the specified channel. Updates to the channel are automatically installed. Specify the value of the channels parameter as an array. If you specify the Operator or extension's target version in the CR, OLM v1 installs the specified version. When the target version is specified in the CR, OLM v1 does not change the target version when updates are published to the catalog. 
If you want to update the version of the Operator that is installed on the cluster, you must manually edit the Operator's CR. Specifying an Operator's target version pins the Operator's version to the specified release. Example CR with the target version specified apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: name: <clusterextension_name> spec: namespace: <installed_namespace> serviceAccount: name: <service_account_installer_name> source: sourceType: Catalog catalog: packageName: <package_name> version: "1.11.1" 1 1 Optional: Specifies the target version. If you want to update the version of the Operator or extension that is installed, you must manually update this field in the CR to the desired target version. If you want to define a range of acceptable versions for an Operator or extension, you can specify a version range by using a comparison string. When you specify a version range, OLM v1 installs the latest version of an Operator or extension that can be resolved by the Operator Controller. Example CR with a version range specified apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: name: <clusterextension_name> spec: namespace: <installed_namespace> serviceAccount: name: <service_account_installer_name> source: sourceType: Catalog catalog: packageName: <package_name> version: ">1.11.1" 1 1 Optional: Specifies that the desired version range is greater than version 1.11.1 . For more information, see "Support for version ranges". After you create or update a CR, apply the configuration file by running the following command: Command syntax USD oc apply -f <extension_name>.yaml 2.2.2. Object ownership for cluster extensions In Operator Lifecycle Manager (OLM) v1, a Kubernetes object can only be owned by a single ClusterExtension object at a time. This ensures that objects within an OpenShift Container Platform cluster are managed consistently and prevents conflicts between multiple cluster extensions attempting to control the same object. 2.2.2.1. Single ownership The core ownership principle enforced by OLM v1 is that each object can only have one cluster extension as its owner. This prevents overlapping or conflicting management by multiple cluster extensions, ensuring that each object is uniquely associated with only one bundle. Implications of single ownership Bundles that provide a CustomResourceDefinition (CRD) object can only be installed once. Bundles provide CRDs, which are part of a ClusterExtension object. This means you can install a bundle only once in a cluster. Attempting to install another bundle that provides the same CRD results in failure, as each custom resource can have only one cluster extension as its owner. Cluster extensions cannot share objects. The single-owner policy of OLM v1 means that cluster extensions cannot share ownership of any objects. If one cluster extension manages a specific object, such as a Deployment , CustomResourceDefinition , or Service object, another cluster extension cannot claim ownership of the same object. Any attempt to do so is blocked by OLM v1. 2.2.2.2. 
Error messages When a conflict occurs due to multiple cluster extensions attempting to manage the same object, Operator Controller returns an error message indicating the ownership conflict, such as the following: Example error message CustomResourceDefinition 'logfilemetricexporters.logging.kubernetes.io' already exists in namespace 'kubernetes-logging' and cannot be managed by operator-controller This error message signals that the object is already being managed by another cluster extension and cannot be reassigned or shared. 2.2.2.3. Considerations As a cluster or extension administrator, review the following considerations: Uniqueness of bundles Ensure that Operator bundles providing the same CRDs are not installed more than once. This can prevent potential installation failures due to ownership conflicts. Avoid object sharing If you need different cluster extensions to interact with similar resources, ensure they are managing separate objects. Cluster extensions cannot jointly manage the same object due to the single-owner enforcement. 2.3. Catalogd Operator Lifecycle Manager (OLM) v1 uses the catalogd component and its resources to manage Operator and extension catalogs. 2.3.1. About catalogs in OLM v1 You can discover installable content by querying a catalog for Kubernetes extensions, such as Operators and controllers, by using the catalogd component. Catalogd is a Kubernetes extension that unpacks catalog content for on-cluster clients and is part of the Operator Lifecycle Manager (OLM) v1 suite of microservices. Currently, catalogd unpacks catalog content that is packaged and distributed as container images. Additional resources File-based catalogs Adding a catalog to a cluster Red Hat-provided catalogs
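To see what OLM v1 is managing on a cluster, you can query the Operator Controller and catalogd APIs directly with the CLI. The following commands are a minimal sketch rather than part of this reference; they assume that your OLM v1 release serves the ClusterExtension and ClusterCatalog resources under the olm.operatorframework.io API group, so verify the resource names on your cluster first.

$ oc get clusterextensions     # cluster extensions installed by Operator Controller
$ oc get clustercatalogs       # catalogs that catalogd has unpacked and is serving
$ oc describe clusterextension <extension_name>    # inspect the resolved version and status conditions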
[ "apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: name: <extension_name> spec: namespace: <namespace_name> serviceAccount: name: <service_account_name> source: sourceType: Catalog catalog: packageName: <package_name> channels: - <channel> version: \"<version>\"", "apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: name: <clusterextension_name> spec: namespace: <installed_namespace> serviceAccount: name: <service_account_installer_name> source: sourceType: Catalog catalog: packageName: <package_name> channels: - latest 1", "apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: name: <clusterextension_name> spec: namespace: <installed_namespace> serviceAccount: name: <service_account_installer_name> source: sourceType: Catalog catalog: packageName: <package_name> version: \"1.11.1\" 1", "apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: name: <clusterextension_name> spec: namespace: <installed_namespace> serviceAccount: name: <service_account_installer_name> source: sourceType: Catalog catalog: packageName: <package_name> version: \">1.11.1\" 1", "oc apply -f <extension_name>.yaml", "CustomResourceDefinition 'logfilemetricexporters.logging.kubernetes.io' already exists in namespace 'kubernetes-logging' and cannot be managed by operator-controller" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/extensions/architecture
Chapter 3. Using libcgroup Tools
Chapter 3. Using libcgroup Tools The libcgroup package, which was the main tool for cgroup management in previous versions of Red Hat Enterprise Linux, is now deprecated. To avoid conflicts, do not use libcgroup tools for default resource controllers (listed in Available Controllers in Red Hat Enterprise Linux 7 ) that are now an exclusive domain of systemd . This leaves a limited space for applying libcgroup tools; use them only when you need to manage controllers not currently supported by systemd , such as net_prio . The following sections describe how to use libcgroup tools in relevant scenarios without conflicting with the default system of hierarchy. Note In order to use libcgroup tools, first ensure the libcgroup and libcgroup-tools packages are installed on your system. To install them, run as root : Note The net_prio controller is not compiled in the kernel like the rest of the controllers; rather, it is a module that has to be loaded before attempting to mount it. To load this module, type as root : 3.1. Mounting a Hierarchy To use a kernel resource controller that is not mounted automatically, you have to create a hierarchy that will contain this controller. Add or detach the hierarchy by editing the mount section of the /etc/cgconfig.conf configuration file. This method makes the controller attachment persistent, which means your settings will be preserved after system reboot. As an alternative, use the mount command to create a transient mount only for the current session. Using the cgconfig Service The cgconfig service installed with the libcgroup-tools package provides a way to mount hierarchies for additional resource controllers. By default, this service is not started automatically. When you start cgconfig , it applies the settings from the /etc/cgconfig.conf configuration file. The configuration is therefore recreated from session to session and becomes persistent. Note that if you stop cgconfig , it unmounts all the hierarchies that it mounted. The default /etc/cgconfig.conf file installed with the libcgroup package does not contain any configuration settings, only information that systemd mounts the main resource controllers automatically. Entries of three types can be created in /etc/cgconfig.conf - mount , group , and template . Mount entries are used to create and mount hierarchies as virtual file systems, and attach controllers to those hierarchies. In Red Hat Enterprise Linux 7, default hierarchies are mounted automatically to the /sys/fs/cgroup/ directory; cgconfig is therefore used solely to attach non-default controllers. Mount entries are defined using the following syntax: Replace controller_name with a name of the kernel resource controller you wish to mount to the hierarchy. See Example 3.1, "Creating a mount entry" for an example. Example 3.1. Creating a mount entry To attach the net_prio controller to the default cgroup tree, add the following text to the /etc/cgconfig.conf configuration file: Then restart the cgconfig service to apply the setting: Group entries in /etc/cgconfig.conf can be used to set the parameters of resource controllers. See Section 3.5, "Setting Cgroup Parameters" for more information about group entries. Template entries in /etc/cgconfig.conf can be used to create a group definition applied to all processes. Using the mount Command Use the mount command to temporarily mount a hierarchy. To do so, first create a mount point in the /sys/fs/cgroup/ directory where systemd mounts the main resource controllers. 
Type as root : Replace name with a name of the new mount destination, usually the name of the controller is used. Next, execute the mount command to mount the hierarchy and simultaneously attach one or more subsystems. Type as root : Replace controller_name with a name of the controller to specify both the device to be mounted as well as the destination folder. The -t cgroup parameter specifies the type of mount. Example 3.2. Using the mount command to attach controllers To mount a hierarchy for the net_prio controller with use of the mount command, first create the mount point: Then mount net_prio to the destination you created in the previous step: You can verify whether you attached the hierarchy correctly by listing all available hierarchies along with their current mount points using the lssubsys command (see the section called "Listing Controllers" ):
[ "~]# yum install libcgroup ~]# yum install libcgroup-tools", "~]# modprobe netprio_cgroup", "mount { controller_name = /sys/fs/cgroup/ controller_name ; ... }", "mount { net_prio = /sys/fs/cgroup/net_prio; }", "~]# systemctl restart cgconfig.service", "~]# mkdir /sys/fs/cgroup/ name", "~]# mount -t cgroup -o controller_name none /sys/fs/cgroup/ controller_name", "~]# mkdir /sys/fs/cgroup/net_prio", "~]# mount -t cgroup -o net_prio none /sys/fs/cgroup/net_prio", "~]# lssubsys -am cpuset /sys/fs/cgroup/cpuset cpu,cpuacct /sys/fs/cgroup/cpu,cpuacct memory /sys/fs/cgroup/memory devices /sys/fs/cgroup/devices freezer /sys/fs/cgroup/freezer net_cls /sys/fs/cgroup/net_cls blkio /sys/fs/cgroup/blkio perf_event /sys/fs/cgroup/perf_event hugetlb /sys/fs/cgroup/hugetlb net_prio /sys/fs/cgroup/net_prio" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/resource_management_guide/chap-Using_libcgroup_Tools
Chapter 11. Interoperability
Chapter 11. Interoperability This chapter discusses how to use AMQ JMS in combination with other AMQ components. For an overview of the compatibility of AMQ components, see the product introduction . 11.1. Interoperating with other AMQP clients AMQP messages are composed using the AMQP type system . Having this common format is one of the reasons AMQP clients in different languages are able to interoperate with each other. This section serves to document behaviour around the AMQP payloads sent and received by the client in relation to the various JMS Message types used, to aid in using the client along with other AMQP clients. 11.1.1. Sending messages This section serves to document the different payloads sent by the client when using the various JMS Message types, so as to aid in using other clients to receive them. 11.1.1.1. Message type JMS message type Description of transmitted AMQP message TextMessage A TextMessage will be sent using an amqp-value body section containing a utf8 encoded string of the body text, or null if no body text is set. The message annotation with symbol key of "x-opt-jms-msg-type" will be set to a byte value of 5. BytesMessage A BytesMessage will be sent using a data body section containing the raw bytes from the BytesMessage body, with the properties section content-type field set to the symbol value "application/octet-stream" . The message annotation with symbol key of "x-opt-jms-msg-type" will be set to a byte value of 3. MapMessage A MapMessage body will be sent using an amqp-value body section containing a single map value. Any byte[] values in the MapMessage body will be encoded as binary entries in the map. The message annotation with symbol key of "x-opt-jms-msg-type" will be set to a byte value of 2. StreamMessage A StreamMessage will be sent using an amqp-sequence body section containing the entries in the StreamMessage body. Any byte[] entries in the StreamMessage body will be encoded as binary entries in the sequence. The message annotation with symbol key of "x-opt-jms-msg-type" will be set to a byte value of 4. ObjectMessage An ObjectMessage will be sent using a data body section, containing the bytes from serializing the ObjectMessage body using an ObjectOutputStream, with the properties section content-type field set to the symbol value "application/x-java-serialized-object" . The message annotation with symbol key of "x-opt-jms-msg-type" will be set to a byte value of 1. Message A plain JMS Message has no body, and will be sent as an amqp-value body section containing a null . The message annotation with symbol key of "x-opt-jms-msg-type" will be set to a byte value of 0. 11.1.1.2. Message properties JMS messages support setting application properties of various Java types. This section serves to show the mapping of these property types to AMQP typed values in the application-properties section of the sent message. Both JMS and AMQP use string keys for property names. JMS property type AMQP application property type boolean boolean byte byte short short int int long long float float double double String string or null 11.1.2. Receiving messages This section serves to document how the different payloads received by the client will be mapped to the various JMS Message types, so as to aid in using other clients to send messages for receipt by the JMS client. 11.1.2.1. 
Message type If the "x-opt-jms-msg-type" message-annotation is present on the received AMQP message, its value is used to determine the JMS message type used to represent it, according to the mapping detailed in the following table. This reflects the reverse process of the mappings discussed for messages sent by the JMS client . AMQP "x-opt-jms-msg-type" message-annotation value (type) JMS message type 0 (byte) Message 1 (byte) ObjectMessage 2 (byte) MapMessage 3 (byte) BytesMessage 4 (byte) StreamMessage 5 (byte) TextMessage If the "x-opt-jms-msg-type" message-annotation is not present, the table below details how the message will be mapped to a JMS Message type. Note that the StreamMessage and MapMessage types are only assigned to annotated messages. Description of Received AMQP Message without "x-opt-jms-msg-type" annotation JMS Message Type An amqp-value body section containing a string or null . A data body section, with the properties section content-type field set to a symbol value representing a common textual media type such as "text/plain" , "application/xml" , or "application/json" . TextMessage An amqp-value body section containing a binary . A data body section, with the properties section content-type field either not set, set to symbol value "application/octet-stream" , or set to any value not understood to be associated with another message type. BytesMessage A data body section, with the properties section content-type field set to symbol value "application/x-java-serialized-object" . An amqp-value body section containing a value not covered above. An amqp-sequence body section. This will be represented as a List inside the ObjectMessage. ObjectMessage 11.1.2.2. Message properties This section serves to show the mapping of values in the application-properties section of the received AMQP message to Java types used in the JMS Message. AMQP application property Type JMS property type boolean boolean byte byte short short int int long long float float double double string String null String 11.2. Connecting to AMQ Broker AMQ Broker is designed to interoperate with AMQP 1.0 clients. Check the following to ensure the broker is configured for AMQP messaging: Port 5672 in the network firewall is open. The AMQ Broker AMQP acceptor is enabled. See Default acceptor settings . The necessary addresses are configured on the broker. See Addresses, Queues, and Topics . The broker is configured to permit access from your client, and the client is configured to send the required credentials. See Broker Security . 11.3. Connecting to AMQ Interconnect AMQ Interconnect works with any AMQP 1.0 client. Check the following to ensure the components are configured correctly: Port 5672 in the network firewall is open. The router is configured to permit access from your client, and the client is configured to send the required credentials. See Securing network connections .
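Before checking broker or router configuration in detail, it can help to confirm basic network reachability of the AMQP port. The following commands are generic examples rather than part of the AMQ documentation; they assume a Linux client, and a broker or router host that uses firewalld.

$ nc -zv <host> 5672                             # confirm that the AMQP port is reachable from the client
# firewall-cmd --add-port=5672/tcp --permanent   # on the broker or router host, open the port if it is blocked
# firewall-cmd --reload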
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_jms_client/interoperability
Chapter 4. Encrypting and validating OpenStack services
Chapter 4. Encrypting and validating OpenStack services You can use barbican to encrypt and validate several Red Hat OpenStack Platform services, such as Block Storage (cinder) encryption keys, Block Storage volume images, Object Storage (swift) objects, and Image Service (glance) images. Important Nova formats encrypted volumes during their first use if they are unencrypted. The resulting block device is then presented to the Compute node. Guidelines for containerized services Do not update any configuration file you might find on the physical node's host operating system, for example, /etc/cinder/cinder.conf . The containerized service does not reference this file. Do not update the configuration file running within the container. Changes are lost once you restart the container. Instead, if you must change containerized services, update the configuration file in /var/lib/config-data/puppet-generated/ , which is used to generate the container. For example: keystone: /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf cinder: /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf nova: /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf Changes are applied after you restart the container. 4.1. Encrypting Object Storage (swift) at-rest objects By default, objects uploaded to Object Storage (swift) are stored unencrypted. Because of this, it is possible to access objects directly from the file system. This can present a security risk if disks are not properly erased before they are discarded. When you have barbican enabled, the Object Storage service (swift) can transparently encrypt and decrypt your stored (at-rest) objects. At-rest encryption is distinct from in-transit encryption in that it refers to the objects being encrypted while being stored on disk. Swift performs these encryption tasks transparently, with the objects being automatically encrypted when uploaded to swift, then automatically decrypted when served to a user. This encryption and decryption is done using the same (symmetric) key, which is stored in barbican. Note You cannot disable encryption after you have enabled encryption and added data to the swift cluster, because the data is now stored in an encrypted state. Consequently, the data will not be readable if encryption is disabled, until you re-enable encryption with the same key. Prerequisites OpenStack Key Manager is installed and enabled Procedure Include the SwiftEncryptionEnabled: True parameter in your environment file, then re-run openstack overcloud deploy using /home/stack/overcloud_deploy.sh . Confirm that swift is configured to use at-rest encryption: The result should include an entry for encryption . 4.2. Encrypting Block Storage (cinder) volumes You can use barbican to manage your Block Storage (cinder) encryption keys. This configuration uses LUKS to encrypt the disks attached to your instances, including boot disks. Key management is transparent to the user; when you create a new volume using luks as the encryption type, cinder generates a symmetric key secret for the volume and stores it in barbican. When booting the instance (or attaching an encrypted volume), nova retrieves the key from barbican and stores the secret locally as a Libvirt secret on the Compute node. Procedure On nodes running the cinder-volume and nova-compute services, confirm that nova and cinder are both configured to use barbican for key management: Create a volume template that uses encryption. 
When you create new volumes they can be modeled off the settings you define here: Create a new volume and specify that it uses the LuksEncryptor-Template-256 settings: The resulting secret is automatically uploaded to the barbican back end. Note Ensure that the user creating the encrypted volume has the creator barbican role on the project. For more information, see the Grant user access to the creator role section. Obtain the barbican secret UUID. This value is displayed in the encryption_key_id field. Note You must use the --os-volume-api-version 3.64 parameter with the Cinder CLI to display the encryption_key_id value. There is no equivalent OpenStack CLI command. Use barbican to confirm that the disk encryption key is present. In this example, the timestamp matches the LUKS volume creation time: Attach the new volume to an existing instance. For example: The volume is then presented to the guest operating system and can be mounted using the built-in tools. 4.2.1. Migrating Block Storage volumes to OpenStack Key Manager If you previously used ConfKeyManager to manage disk encryption keys, you can migrate the volumes to OpenStack Key Manager by scanning the databases for encryption_key_id entries within scope for migration to barbican. Each entry gets a new barbican key ID and the existing ConfKeyManager secret is retained. Note Previously, you could reassign ownership for volumes encrypted using ConfKeyManager . This is not possible for volumes that have their keys managed by barbican. Activating barbican will not break your existing keymgr volumes. Prerequisites Before you migrate, review the following differences between Barbican-managed encrypted volumes and volumes that use ConfKeyManager : You cannot transfer ownership of encrypted volumes, because it is not currently possible to transfer ownership of the barbican secret. Barbican is more restrictive about who is allowed to read and delete secrets, which can affect some cinder volume operations. For example, a user cannot attach, detach, or delete a different user's volumes. Procedure Deploy the barbican service. Add the creator role to the cinder service. For example: Restart the cinder-volume and cinder-backup services. The cinder-volume and cinder-backup services automatically begin the migration process. You can check the log files to view status information about the migration: cinder-volume - migrates keys stored in cinder's Volumes and Snapshots tables. cinder-backup - migrates keys in the Backups table. Monitor the logs for the message indicating migration has finished and check that no more volumes are using the ConfKeyManager all-zeros encryption key ID. Remove the fixed_key option from cinder.conf and nova.conf . You must determine which nodes have this setting configured. Remove the creator role from the cinder service. Verification After you start the process, one of these entries appears in the log files. This indicates whether the migration started correctly, or it identifies the issue it encountered: Not migrating encryption keys because the ConfKeyManager is still in use. Not migrating encryption keys because the ConfKeyManager's fixed_key is not in use. Not migrating encryption keys because migration to the 'XXX' key_manager backend is not supported. - This message is unlikely to appear; it is a safety check to handle the code ever encountering another Key Manager back end other than barbican. This is because the code only supports one migration scenario: From ConfKeyManager to barbican. 
Not migrating encryption keys because there are no volumes associated with this host. - This can occur when cinder-volume is running on multiple hosts, and a particular host has no volumes associated with it. This arises because every host is responsible for handling its own volumes. Starting migration of ConfKeyManager keys. Migrating volume <UUID> encryption key to Barbican - During migration, all of the host's volumes are examined, and if a volume is still using the ConfKeyManager's key ID (identified by the fact that it's all zeros ( 00000000-0000-0000-0000-000000000000 )), then this message appears. For cinder-backup , this message uses slightly different capitalization: Migrating Volume [...] or Migrating Backup [...] After each host examines all of its volumes, the host displays a summary status message: You may also see the following entries: There are still %d volume(s) using the ConfKeyManager's all-zeros encryption key ID. There are still %d backup(s) using the ConfKeyManager's all-zeros encryption key ID. Both of these messages can appear in the cinder-volume and cinder-backup logs. Whereas each service only handles the migration of its own entries, each service is aware of the other's status. As a result, cinder-volume knows if cinder-backup still has backups to migrate, and cinder-backup knows if the cinder-volume service has volumes to migrate. Although each host migrates only its own volumes, the summary message is based on a global assessment of whether any volume still requires migration. This allows you to confirm that migration for all volumes is complete. Cleanup After migrating your key IDs into barbican, the fixed key remains in the configuration files. This can present a security concern to some users, because the fixed_key value is not encrypted in the .conf files. To address this, you can manually remove the fixed_key values from your nova and cinder configurations. However, first complete testing and review the output of the log file before you proceed, because disks that are still dependent on this value are not accessible. Important The encryption_key_id was only recently added to the Backup table, as part of the Queens release. As a result, pre-existing backups of encrypted volumes are likely to exist. The all-zeros encryption_key_id is stored on the backup itself, but it does not appear in the Backup database. As such, it is impossible for the migration process to know for certain whether a backup of an encrypted volume exists that still relies on the all-zeros ConfKeyMgr key ID. Review the existing fixed_key values. The values must match for both services. Important Make a backup of the existing fixed_key values. This allows you to restore the value if something goes wrong, or if you need to restore a backup that uses the old encryption key. Delete the fixed_key values: Troubleshooting The barbican secret can only be created when the requestor has the creator role. This means that the cinder service itself requires the creator role; otherwise, a log sequence similar to this will occur: Starting migration of ConfKeyManager keys. Migrating volume <UUID> encryption key to Barbican Error migrating encryption key: Forbidden: Secret creation attempt not allowed - please review your user/project privileges There are still %d volume(s) using the ConfKeyManager's all-zeros encryption key ID. The key message is the third one: Secret creation attempt not allowed. 
To fix the problem, update the cinder account's privileges: Run openstack role add --project service --user cinder creator Restart the cinder-volume and cinder-backup services. As a result, the attempt at migration should succeed. 4.3. Validating Block Storage (cinder) volume images The Block Storage Service (cinder) automatically validates the signature of any downloaded, signed image during volume from image creation. The signature is validated before the image is written to the volume. To improve performance, you can use the Block Storage Image-Volume cache to store validated images for creating new volumes. Note Cinder image signature validation is not supported with Red Hat Ceph Storage or RBD volumes. Procedure Log in to a Controller node. Choose one of the following options: View cinder's image validation activities in the Volume log, /var/log/containers/cinder/cinder-volume.log . For example, you can expect the following entry when the instance is booted: Use the openstack volume list and cinder volume show commands: Use the openstack volume list command to locate the volume ID. Run the cinder volume show command on a compute node: Locate the volume_image_metadata section with the line signature verified : True . Note Snapshots are saved as Image service (glance) images. If you configure the Compute service (nova) to check for signed images, then you must manually download the image from glance, sign the image, and then re-upload the image. This is true whether the snapshot is from an instance created with signed images, or an instance booted from a volume created from a signed image. Note A volume can be uploaded as an Image service (glance) image. If the original volume was bootable, the image can be used to create a bootable volume in the Block Storage service (cinder). If you have configured the Block Storage service to check for signed images then you must manually download the image from glance, compute the image signature and update all appropriate image signature properties before using the image. For more information, see Section 4.5, "Validating snapshots" . Additional resources Configuring the Block Storage service (cinder) 4.3.1. Automatic deletion of volume image encryption key The Block Storage service (cinder) creates an encryption key in the Key Management service (barbican) when it uploads an encrypted volume to the Image service (glance). This creates a 1:1 relationship between an encryption key and a stored image. Encryption key deletion prevents unlimited resource consumption of the Key Management service. The Block Storage, Key Management, and Image services automatically manage the key for an encrypted volume, including the deletion of the key. The Block Storage service automatically adds two properties to a volume image: cinder_encryption_key_id - The identifier of the encryption key that the Key Management service stores for a specific image. cinder_encryption_key_deletion_policy - The policy that tells the Image service to tell the Key Management service whether to delete the key associated with this image. Important The values of these properties are automatically assigned. To avoid unintentional data loss, do not adjust these values . When you create a volume image, the Block Storage service sets the cinder_encryption_key_deletion_policy property to on_image_deletion . When you delete a volume image, the Image service deletes the corresponding encryption key if the cinder_encryption_key_deletion_policy equals on_image_deletion . 
Important Red Hat does not recommend manual manipulation of the cinder_encryption_key_id or cinder_encryption_key_deletion_policy properties. If you use the encryption key that is identified by the value of cinder_encryption_key_id for any other purpose, you risk data loss. 4.4. Signing Image Service (glance) images When you configure the Image Service (glance) to verify that an uploaded image has not been tampered with, you must sign images before you can start an instance using those images. Use the openssl command to sign an image with a key that is stored in barbican, then upload the image to glance with the accompanying signing information. As a result, the image's signature is verified before each use, with the instance build process failing if the signature does not match. Prerequisites OpenStack Key Manager is installed and enabled Procedure In your environment file, enable image verification with the VerifyGlanceSignatures: True setting. You must re-run the openstack overcloud deploy command for this setting to take effect. To verify that glance image validation is enabled, run the following command on an overcloud Compute node: Note If you use Ceph as the back end for the Image and Compute services, a CoW clone is created. Therefore, Image signing verification cannot be performed. Confirm that glance is configured to use barbican: Generate a certificate: Add the certificate to the barbican secret store: Note Record the resulting UUID for use in a later step. In this example, the certificate's UUID is 5df14c2b-f221-4a02-948e-48a61edd3f5b . Use private_key.pem to sign the image and generate the .signature file. For example: Convert the resulting .signature file into base64 format: Load the base64 value into a variable to use it in the subsequent command: Upload the signed image to glance. For img_signature_certificate_uuid , you must specify the UUID of the signing key you previously uploaded to barbican: You can view glance's image validation activities in the Compute log: /var/log/containers/nova/nova-compute.log . For example, you can expect the following entry when the instance is booted: 4.5. Validating snapshots Snapshots are saved as Image service (glance) images. If you configure the Compute service (nova) to check for signed images, then snapshots must be signed, even if they were created from an instance with a signed image. Procedure Download the snapshot from glance. Generate a signature to validate the snapshot. This is the same process you use when you generate a signature to validate any image. For more information, see Validating Image Service (glance) images . Update the image properties: Optional: Remove the downloaded glance image from the filesystem:
[ "crudini --get /var/lib/config-data/puppet-generated/swift/etc/swift/proxy-server.conf pipeline-main pipeline pipeline = catch_errors healthcheck proxy-logging cache ratelimit bulk tempurl formpost authtoken keystone staticweb copy container_quotas account_quotas slo dlo versioned_writes kms_keymaster encryption proxy-logging proxy-server", "crudini --get /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf key_manager backend castellan.key_manager.barbican_key_manager.BarbicanKeyManager crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf key_manager backend castellan.key_manager.barbican_key_manager.BarbicanKeyManager", "openstack volume type create --encryption-provider nova.volume.encryptors.luks.LuksEncryptor --encryption-cipher aes-xts-plain64 --encryption-key-size 256 --encryption-control-location front-end LuksEncryptor-Template-256 +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | description | None | | encryption | cipher='aes-xts-plain64', control_location='front-end', encryption_id='9df604d0-8584-4ce8-b450-e13e6316c4d3', key_size='256', provider='nova.volume.encryptors.luks.LuksEncryptor' | | id | 78898a82-8f4c-44b2-a460-40a5da9e4d59 | | is_public | True | | name | LuksEncryptor-Template-256 | +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+", "openstack volume create --size 1 --type LuksEncryptor-Template-256 'Encrypted-Test-Volume' +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | attachments | [] | | availability_zone | nova | | bootable | false | | consistencygroup_id | None | | created_at | 2018-01-22T00:19:06.000000 | | description | None | | encrypted | True | | id | a361fd0b-882a-46cc-a669-c633630b5c93 | | migration_status | None | | multiattach | False | | name | Encrypted-Test-Volume | | properties | | | replication_status | None | | size | 1 | | snapshot_id | None | | source_volid | None | | status | creating | | type | LuksEncryptor-Template-256 | | updated_at | None | | user_id | 0e73cb3111614365a144e7f8f1a972af | +---------------------+--------------------------------------+", "cinder --os-volume-api-version 3.64 volume show Encrypted-Test-Volume +------------------------------+-------------------------------------+ |Property |Value | +------------------------------+-------------------------------------+ |attached_servers |[] | |attachment_ids |[] | |availability_zone |nova | |bootable |false | |cluster_name |None | |consistencygroup_id |None | |created_at |2022-07-28T17:35:26.000000 | |description |None | |encrypted |True | |encryption_key_id |0944b8a8-de09-4413-b2ed-38f6c4591dd4 | |group_id |None | |id |a0b51b97-0392-460a-abfa-093022a120f3 | |metadata | | |migration_status |None | |multiattach |False | |name |vol | |os-vol-host-attr:host |hostgroup@tripleo_iscsi#tripleo_iscsi| |os-vol-mig-status-attr:migstat|None | |os-vol-mig-status-attr:name_id|None | |os-vol-tenant-attr:tenant_id |a2071ece39b3440aa82395ff7707996f | 
|provider_id |None | |replication_status |None | |service_uuid |471f0805-072e-4256-b447-c7dd10ceb807 | |shared_targets |False | |size |1 | |snapshot_id |None | |source_volid |None | |status |available | |updated_at |2022-07-28T17:35:26.000000 | |user_id |ba311b5c2b8e438c951d1137333669d4 | |volume_type |LUKS | |volume_type_id |cc188ace-f73d-4af5-bf5a-d70ccc5a401c | +------------------------------+-------------------------------------+", "openstack secret list +------------------------------------------------------------------------------------+------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+ | Secret href | Name | Created | Status | Content types | Algorithm | Bit length | Secret type | Mode | Expiration | +------------------------------------------------------------------------------------+------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+ | https://192.168.123.169:9311/v1/secrets/0944b8a8-de09-4413-b2ed-38f6c4591dd4 | None | 2018-01-22T02:23:15+00:00 | ACTIVE | {u'default': u'application/octet-stream'} | aes | 256 | symmetric | None | None | +------------------------------------------------------------------------------------+------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+", "openstack server add volume testInstance Encrypted-Test-Volume", "#openstack role create creator #openstack role add --user cinder creator --project service", "`No volumes are using the ConfKeyManager's encryption_key_id.` `No backups are known to be using the ConfKeyManager's encryption_key_id.`", "crudini --get /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf keymgr fixed_key crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf keymgr fixed_key", "crudini --del /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf keymgr fixed_key crudini --del /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf keymgr fixed_key", "2018-05-24 12:48:35.256 1 INFO cinder.image.image_utils [req-7c271904-4975-4771-9d26-cbea6c0ade31 b464b2fd2a2140e9a88bbdacf67bdd8c a3db2f2beaee454182c95b646fa7331f - default default] Image signature verification succeeded for image d3396fa0-2ea2-4832-8a77-d36fa3f2ab27", "cinder volume show <VOLUME_ID>", "cinder show d0db26bb-449d-4111-a59a-6fbb080bb483 +--------------------------------+-------------------------------------------------+ | Property | Value | +--------------------------------+-------------------------------------------------+ | attached_servers | [] | | attachment_ids | [] | | availability_zone | nova | | bootable | true | | consistencygroup_id | None | | created_at | 2018-10-12T19:04:41.000000 | | description | None | | encrypted | True | | id | d0db26bb-449d-4111-a59a-6fbb080bb483 | | metadata | | | migration_status | None | | multiattach | False | | name | None | | os-vol-host-attr:host | centstack.localdomain@nfs#nfs | | os-vol-mig-status-attr:migstat | None | | os-vol-mig-status-attr:name_id | None | | os-vol-tenant-attr:tenant_id | 1a081dd2505547f5a8bb1a230f2295f4 | | replication_status | None | | size | 1 | | snapshot_id | None | | source_volid | None | | status | available | | updated_at | 2018-10-12T19:05:13.000000 | | user_id | ad9fe430b3a6416f908c79e4de3bfa98 | | volume_image_metadata | checksum : 
f8ab98ff5e73ebab884d80c9dc9c7290 | | | container_format : bare | | | disk_format : qcow2 | | | image_id : 154d4d4b-12bf-41dc-b7c4-35e5a6a3482a | | | image_name : cirros-0.3.5-x86_64-disk | | | min_disk : 0 | | | min_ram : 0 | | | signature_verified : False | | | size : 13267968 | | volume_type | nfs | +--------------------------------+-------------------------------------------------+", "sudo crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf glance verify_glance_signatures", "sudo crudini --get /var/lib/config-data/puppet-generated/glance_api/etc/glance/glance-api.conf key_manager backend castellan.key_manager.barbican_key_manager.BarbicanKeyManager", "openssl genrsa -out private_key.pem 1024 openssl rsa -pubout -in private_key.pem -out public_key.pem openssl req -new -key private_key.pem -out cert_request.csr openssl x509 -req -days 14 -in cert_request.csr -signkey private_key.pem -out x509_signing_cert.crt", "source ~/overcloudrc openstack secret store --name signing-cert --algorithm RSA --secret-type certificate --payload-content-type \"application/octet-stream\" --payload-content-encoding base64 --payload \"USD(base64 x509_signing_cert.crt)\" -c 'Secret href' -f value https://192.168.123.170:9311/v1/secrets/5df14c2b-f221-4a02-948e-48a61edd3f5b", "openssl dgst -sha256 -sign private_key.pem -sigopt rsa_padding_mode:pss -out cirros-0.4.0.signature cirros-0.4.0-x86_64-disk.img", "base64 -w 0 cirros-0.4.0.signature > cirros-0.4.0.signature.b64", "cirros_signature_b64=USD(cat cirros-0.4.0.signature.b64)", "openstack image create --container-format bare --disk-format qcow2 --property img_signature=\"USDcirros_signature_b64\" --property img_signature_certificate_uuid=\"5df14c2b-f221-4a02-948e-48a61edd3f5b\" --property img_signature_hash_method=\"SHA-256\" --property img_signature_key_type=\"RSA-PSS\" cirros_0_4_0_signed --file cirros-0.4.0-x86_64-disk.img +--------------------------------+----------------------------------------------------------------------------------+ | Property | Value | +--------------------------------+----------------------------------------------------------------------------------+ | checksum | None | | container_format | bare | | created_at | 2018-01-23T05:37:31Z | | disk_format | qcow2 | | id | d3396fa0-2ea2-4832-8a77-d36fa3f2ab27 | | img_signature | lcI7nGgoKxnCyOcsJ4abbEZEpzXByFPIgiPeiT+Otjz0yvW00KNN3fI0AA6tn9EXrp7fb2xBDE4UaO3v | | | IFquV/s3mU4LcCiGdBAl3pGsMlmZZIQFVNcUPOaayS1kQYKY7kxYmU9iq/AZYyPw37KQI52smC/zoO54 | | | zZ+JpnfwIsM= | | img_signature_certificate_uuid | ba3641c2-6a3d-445a-8543-851a68110eab | | img_signature_hash_method | SHA-256 | | img_signature_key_type | RSA-PSS | | min_disk | 0 | | min_ram | 0 | | name | cirros_0_4_0_signed | | owner | 9f812310df904e6ea01e1bacb84c9f1a | | protected | False | | size | None | | status | queued | | tags | [] | | updated_at | 2018-01-23T05:37:31Z | | virtual_size | None | | visibility | shared | +--------------------------------+----------------------------------------------------------------------------------+", "2018-05-24 12:48:35.256 1 INFO nova.image.glance [req-7c271904-4975-4771-9d26-cbea6c0ade31 b464b2fd2a2140e9a88bbdacf67bdd8c a3db2f2beaee454182c95b646fa7331f - default default] Image signature verification succeeded for image d3396fa0-2ea2-4832-8a77-d36fa3f2ab27", "openstack image save --file <local-file-name> <image-name>", "openstack image set --property img_signature=\"USDcirros_signature_b64\" --property 
img_signature_certificate_uuid=\"5df14c2b-f221-4a02-948e-48a61edd3f5b\" --property img_signature_hash_method=\"SHA-256\" --property img_signature_key_type=\"RSA-PSS\" <image_id_of_the_snapshot>", "rm <local-file-name>" ]
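The snapshot validation steps in Section 4.5 reuse the signing commands shown for Image Service images. The following sketch strings those documented commands together for a snapshot; the file names and image ID are placeholders, and it assumes the same private_key.pem and barbican certificate UUID that were created earlier in this chapter.

$ openstack image save --file snapshot.img <snapshot_image_id>
$ openssl dgst -sha256 -sign private_key.pem -sigopt rsa_padding_mode:pss -out snapshot.signature snapshot.img
$ snapshot_signature_b64=$(base64 -w 0 snapshot.signature)
$ openstack image set --property img_signature="$snapshot_signature_b64" --property img_signature_certificate_uuid="<certificate_uuid>" --property img_signature_hash_method="SHA-256" --property img_signature_key_type="RSA-PSS" <snapshot_image_id>
$ rm snapshot.img snapshot.signature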
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/manage_secrets_with_openstack_key_manager/assembly-encrypting-validating-openstack-services_rhosp
probe::sunrpc.svc.create
probe::sunrpc.svc.create Name probe::sunrpc.svc.create - Create an RPC service Synopsis sunrpc.svc.create Values bufsize the buffer size pg_nvers the number of supported versions progname the name of the program prog the number of the program
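A minimal SystemTap one-liner shows how these values can be printed when the probe fires. This is an illustrative sketch only; it assumes SystemTap and the matching kernel debuginfo packages are installed on the host.

# stap -e 'probe sunrpc.svc.create { printf("service %s (prog %d): %d versions, bufsize %d\n", progname, prog, pg_nvers, bufsize) }'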
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-sunrpc-svc-create
Chapter 7. Operator SDK
Chapter 7. Operator SDK 7.1. Installing the Operator SDK CLI The Operator SDK provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. You can install the Operator SDK CLI on your workstation so that you are prepared to start authoring your own Operators. Important The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Dedicated. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Dedicated releases. The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Dedicated 4 to maintain their projects and create Operator releases targeting newer versions of OpenShift Dedicated. The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs. The base image for Ansible-based Operator projects The base image for Helm-based Operator projects For information about the unsupported, community-maintained version of the Operator SDK, see Operator SDK (Operator Framework) . Operator authors with cluster administrator access to a Kubernetes-based cluster, such as OpenShift Dedicated, can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work. 7.1.1. Installing the Operator SDK CLI on Linux You can install the Operator SDK CLI tool on Linux. Prerequisites Go v1.19+ docker v17.03+, podman v1.9.3+, or buildah v1.7+ Procedure Navigate to the OpenShift mirror site . From the latest 4 directory, download the latest version of the tarball for Linux. Unpack the archive: USD tar xvf operator-sdk-v1.38.0-ocp-linux-x86_64.tar.gz Make the file executable: USD chmod +x operator-sdk Move the extracted operator-sdk binary to a directory that is on your PATH . Tip To check your PATH : USD echo USDPATH USD sudo mv ./operator-sdk /usr/local/bin/operator-sdk Verification After you install the Operator SDK CLI, verify that it is available: USD operator-sdk version Example output operator-sdk version: "v1.38.0-ocp", ... 7.1.2. Installing the Operator SDK CLI on macOS You can install the Operator SDK CLI tool on macOS. Prerequisites Go v1.19+ docker v17.03+, podman v1.9.3+, or buildah v1.7+ Procedure For the amd64 architecture, navigate to the OpenShift mirror site for the amd64 architecture . From the latest 4 directory, download the latest version of the tarball for macOS. 
Unpack the Operator SDK archive for the amd64 architecture by running the following command: USD tar xvf operator-sdk-v1.38.0-ocp-darwin-x86_64.tar.gz Make the file executable by running the following command: USD chmod +x operator-sdk Move the extracted operator-sdk binary to a directory that is on your PATH by running the following command: Tip Check your PATH by running the following command: USD echo USDPATH USD sudo mv ./operator-sdk /usr/local/bin/operator-sdk Verification After you install the Operator SDK CLI, verify that it is available by running the following command: USD operator-sdk version Example output operator-sdk version: "v1.38.0-ocp", ... 7.2. Operator SDK CLI reference The Operator SDK command-line interface (CLI) is a development kit designed to make writing Operators easier. Important The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Dedicated. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Dedicated releases. The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Dedicated 4 to maintain their projects and create Operator releases targeting newer versions of OpenShift Dedicated. The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs. The base image for Ansible-based Operator projects The base image for Helm-based Operator projects For information about the unsupported, community-maintained version of the Operator SDK, see Operator SDK (Operator Framework) . Operator SDK CLI syntax USD operator-sdk <command> [<subcommand>] [<argument>] [<flags>] 7.2.1. bundle The operator-sdk bundle command manages Operator bundle metadata. 7.2.1.1. validate The bundle validate subcommand validates an Operator bundle. Table 7.1. bundle validate flags Flag Description -h , --help Help output for the bundle validate subcommand. --index-builder (string) Tool to pull and unpack bundle images. Only used when validating a bundle image. Available options are docker , which is the default, podman , or none . --list-optional List all optional validators available. When set, no validators are run. --select-optional (string) Label selector to select optional validators to run. When run with the --list-optional flag, lists available optional validators. 7.2.2. cleanup The operator-sdk cleanup command destroys and removes resources that were created for an Operator that was deployed with the run command. Table 7.2. cleanup flags Flag Description -h , --help Help output for the cleanup subcommand. --kubeconfig (string) Path to the kubeconfig file to use for CLI requests. -n , --namespace (string) If present, namespace in which to run the CLI request. --timeout <duration> Time to wait for the command to complete before failing. The default value is 2m0s . 7.2.3. completion The operator-sdk completion command generates shell completions to make issuing CLI commands quicker and easier. Table 7.3. completion subcommands Subcommand Description bash Generate bash completions. 
zsh Generate zsh completions. Table 7.4. completion flags Flag Description -h, --help Usage help output. For example: USD operator-sdk completion bash Example output # bash completion for operator-sdk -*- shell-script -*- ... # ex: ts=4 sw=4 et filetype=sh 7.2.4. create The operator-sdk create command is used to create, or scaffold , a Kubernetes API. 7.2.4.1. api The create api subcommand scaffolds a Kubernetes API. The subcommand must be run in a project that was initialized with the init command. Table 7.5. create api flags Flag Description -h , --help Help output for the create api subcommand. 7.2.5. generate The operator-sdk generate command invokes a specific generator to generate code or manifests. 7.2.5.1. bundle The generate bundle subcommand generates a set of bundle manifests, metadata, and a bundle.Dockerfile file for your Operator project. Note Typically, you run the generate kustomize manifests subcommand first to generate the input Kustomize bases that are used by the generate bundle subcommand. However, you can use the make bundle command in an initialized project to automate running these commands in sequence. Table 7.6. generate bundle flags Flag Description --channels (string) Comma-separated list of channels to which the bundle belongs. The default value is alpha . --crds-dir (string) Root directory for CustomResourceDefinition manifests. --default-channel (string) The default channel for the bundle. --deploy-dir (string) Root directory for Operator manifests, such as deployments and RBAC. This directory is different from the directory passed to the --input-dir flag. -h , --help Help for generate bundle --input-dir (string) Directory from which to read an existing bundle. This directory is the parent of your bundle manifests directory and is different from the --deploy-dir directory. --kustomize-dir (string) Directory containing Kustomize bases and a kustomization.yaml file for bundle manifests. The default path is config/manifests . --manifests Generate bundle manifests. --metadata Generate bundle metadata and Dockerfile. --output-dir (string) Directory to write the bundle to. --overwrite Overwrite the bundle metadata and Dockerfile if they exist. The default value is true . --package (string) Package name for the bundle. -q , --quiet Run in quiet mode. --stdout Write bundle manifest to standard out. --version (string) Semantic version of the Operator in the generated bundle. Set only when creating a new bundle or upgrading the Operator. 7.2.5.2. kustomize The generate kustomize subcommand contains subcommands that generate Kustomize data for the Operator. 7.2.5.2.1. manifests The generate kustomize manifests subcommand generates or regenerates Kustomize bases and a kustomization.yaml file in the config/manifests directory, which are used to build bundle manifests by other Operator SDK commands. This command interactively asks for UI metadata, an important component of manifest bases, by default unless a base already exists or you set the --interactive=false flag. Table 7.7. generate kustomize manifests flags Flag Description --apis-dir (string) Root directory for API type definitions. -h , --help Help for generate kustomize manifests . --input-dir (string) Directory containing existing Kustomize files. --interactive When set to false , if no Kustomize base exists, an interactive command prompt is presented to accept custom metadata. --output-dir (string) Directory where to write Kustomize files. --package (string) Package name. -q , --quiet Run in quiet mode. 7.2.6. 
init The operator-sdk init command initializes an Operator project and generates, or scaffolds , a default project directory layout for the given plugin. This command writes the following files: Boilerplate license file PROJECT file with the domain and repository Makefile to build the project go.mod file with project dependencies kustomization.yaml file for customizing manifests Patch file for customizing images for manager manifests Patch file for enabling Prometheus metrics main.go file to run Table 7.8. init flags Flag Description --help, -h Help output for the init command. --plugins (string) Name and optionally version of the plugin to initialize the project with. Available plugins are ansible.sdk.operatorframework.io/v1 , go.kubebuilder.io/v2 , go.kubebuilder.io/v3 , and helm.sdk.operatorframework.io/v1 . --project-version Project version. Available values are 2 and 3-alpha , which is the default. 7.2.7. run The operator-sdk run command provides options that can launch the Operator in various environments. 7.2.7.1. bundle The run bundle subcommand deploys an Operator in the bundle format with Operator Lifecycle Manager (OLM). Table 7.9. run bundle flags Flag Description --index-image (string) Index image in which to inject a bundle. The default image is quay.io/operator-framework/upstream-opm-builder:latest . --install-mode <install_mode_value> Install mode supported by the cluster service version (CSV) of the Operator, for example AllNamespaces or SingleNamespace . --timeout <duration> Install timeout. The default value is 2m0s . --kubeconfig (string) Path to the kubeconfig file to use for CLI requests. -n , --namespace (string) If present, namespace in which to run the CLI request. --security-context-config <security_context> Specifies the security context to use for the catalog pod. Allowed values include restricted and legacy . The default value is legacy . [1] -h , --help Help output for the run bundle subcommand. The restricted security context is not compatible with the default namespace. To configure your Operator's pod security admission in your production environment, see "Complying with pod security admission". For more information about pod security admission, see "Understanding and managing pod security admission". 7.2.7.2. bundle-upgrade The run bundle-upgrade subcommand upgrades an Operator that was previously installed in the bundle format with Operator Lifecycle Manager (OLM). Table 7.10. run bundle-upgrade flags Flag Description --timeout <duration> Upgrade timeout. The default value is 2m0s . --kubeconfig (string) Path to the kubeconfig file to use for CLI requests. -n , --namespace (string) If present, namespace in which to run the CLI request. --security-context-config <security_context> Specifies the security context to use for the catalog pod. Allowed values include restricted and legacy . The default value is legacy . [1] -h , --help Help output for the run bundle subcommand. The restricted security context is not compatible with the default namespace. To configure your Operator's pod security admission in your production environment, see "Complying with pod security admission". For more information about pod security admission, see "Understanding and managing pod security admission". 7.2.8. scorecard The operator-sdk scorecard command runs the scorecard tool to validate an Operator bundle and provide suggestions for improvements. The command takes one argument, either a bundle image or directory containing manifests and metadata. 
If the argument holds an image tag, the image must be present remotely. Table 7.11. scorecard flags Flag Description -c , --config (string) Path to scorecard configuration file. The default path is bundle/tests/scorecard/config.yaml . -h , --help Help output for the scorecard command. --kubeconfig (string) Path to kubeconfig file. -L , --list List which tests are available to run. -n , --namespace (string) Namespace in which to run the test images. -o , --output (string) Output format for results. Available values are text , which is the default, and json . --pod-security <security_context> Option to run scorecard with the specified security context. Allowed values include restricted and legacy . The default value is legacy . [1] -l , --selector (string) Label selector to determine which tests are run. -s , --service-account (string) Service account to use for tests. The default value is default . -x , --skip-cleanup Disable resource cleanup after tests are run. -w , --wait-time <duration> Seconds to wait for tests to complete, for example 35s . The default value is 30s . The restricted security context is not compatible with the default namespace. To configure your Operator's pod security admission in your production environment, see "Complying with pod security admission". For more information about pod security admission, see "Understanding and managing pod security admission".
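To connect these subcommands, the following hedged sketch deploys a bundle image with OLM and then runs the scorecard against the same image; the image reference and namespace shown are placeholders rather than defaults:
operator-sdk run bundle quay.io/example/example-operator-bundle:v0.0.1 --install-mode AllNamespaces --namespace example-operator --security-context-config restricted
operator-sdk scorecard quay.io/example/example-operator-bundle:v0.0.1 --namespace example-operator --output json --wait-time 60s
Because the restricted security context is not compatible with the default namespace, the sketch targets a dedicated namespace.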
[ "tar xvf operator-sdk-v1.38.0-ocp-linux-x86_64.tar.gz", "chmod +x operator-sdk", "echo USDPATH", "sudo mv ./operator-sdk /usr/local/bin/operator-sdk", "operator-sdk version", "operator-sdk version: \"v1.38.0-ocp\",", "tar xvf operator-sdk-v1.38.0-ocp-darwin-x86_64.tar.gz", "chmod +x operator-sdk", "echo USDPATH", "sudo mv ./operator-sdk /usr/local/bin/operator-sdk", "operator-sdk version", "operator-sdk version: \"v1.38.0-ocp\",", "operator-sdk <command> [<subcommand>] [<argument>] [<flags>]", "operator-sdk completion bash", "bash completion for operator-sdk -*- shell-script -*- ex: ts=4 sw=4 et filetype=sh" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/cli_tools/operator-sdk
7.3. Linking Attributes to Manage Attribute Values
7.3. Linking Attributes to Manage Attribute Values A class of service dynamically supplies attribute values for entries which all have attributes with the same value , like building addresses, postal codes, or main office numbers. These are shared attribute values, which are updated in a single template entry. Frequently, though, there are relationships between entries where there needs to be a way to express linkage between them, but the values (and possibly even the attributes) that express that relationship are different. Red Hat Directory Server provides a way to link specified attributes together, so that when one attribute in one entry is altered, a corresponding attribute on a related entry is automatically updated. (The link and managed attributes both have DN values. The value of the link attribute contains the DN of the entry for the plug-in to update; the managed attribute in the second entry has a DN value which points back to the original link entry.) 7.3.1. About Linking Attributes The Linked Attributes Plug-in allows multiple instances of the plug-in. Each instance configures one attribute which is manually maintained by the administrator ( linkType ) and one attribute which is automatically maintained by the plug-in ( managedType ). Figure 7.5. Basic Linked Attribute Configuration Note To preserve data consistency, only the plug-in process should maintain the managed attribute. Consider creating an ACI that will restrict all write access to any managed attribute. See Section 18.7.2, "Adding an ACI" for information on setting ACIs. A Linked Attribute Plug-in instance can be restricted to a single subtree within the directory. This can allow more flexible customization of attribute combinations and affected entries. If no scope is set, then the plug-in operates in the entire directory. Figure 7.6. Restricting the Linked Attribute Plug-in to a Specific Subtree When configuring the Linked Attribute Plug-in instance, certain configurations are required: Both the managed attribute and linked attribute must require the Distinguished Name syntax in their attribute definitions. The linked attributes are essentially managed cross-references, and the way that the plug-in handles these cross-references is by pulling the DN of the entry from the attribute value. For information on planning custom schema elements, see Chapter 12, Managing the Directory Schema . Each Linked Attribute Plug-in instance must be local and any managed attributes must be blocked from replication using fractional replication. Any changes that are made on one supplier will automatically trigger the plug-in to manage the values on the corresponding directory entries, so the data stay consistent across servers. However, the managed attributes must be maintained by the plug-in instance for the data to be consistent between the linked entries. This means that managed attribute values should be maintained solely by the plug-in processes, not the replication process, even in a multi-supplier replication environment. For information on using fractional replication, see Section 15.1.7, "Replicating a Subset of Attributes with Fractional Replication" . 7.3.2. Looking at the Linking Attributes Plug-in Syntax The default Linked Attributes Plug-in entry is a container entry for each plug-in instance, similar to the password syntax plug-ins or the DNA Plug-in. Each entry beneath this container entry defines a different link-managed attribute pair.
To create a new linking attribute pair, then, create a new plug-in instance beneath the container entry. A basic linking attribute plug-in instance requires defining two things: The attribute that is managed manually by administrators, in the linkType attribute The attribute that is created dynamically by the plug-in, in the managedType attribute Optionally, a scope that restricts the plug-in to a specific part of the directory tree, in the linkScope attribute Example 7.5. Example Linked Attributes Plug-in Instance Entry For a list of attributes available for an instance of the Linked Attributes plug-in, see the corresponding section in the Red Hat Directory Server Configuration, Command, and File Reference . 7.3.3. Configuring Attribute Links If it is not already enabled, enable the Linked Attributes plug-in. For details, see Section 1.10.2, "Enabling and Disabling Plug-ins" . Create the plug-in instance. Both the --managed-type and --link-type parameters are required. The following example shows the plug-in instance created by using dsconf : Restart the instance: 7.3.4. Cleaning up Attribute Links The managed-linked attributes can get out of sync. For instance, a linked attribute could be imported or replicated over to a server, but the corresponding managed attribute was not because the link attribute was not properly configured. The managed-linked attribute pairs can be fixed by running the dsconf plugin linked-attr fixup command or by launching a fix-up task. The fixup task removes any managed attributes (attributes managed by the plug-in) that do not have a corresponding link attribute (attributes managed by the administrator) on the referenced entry. Conversely, the task adds any missing managed attributes if the link attribute exists in an entry. 7.3.4.1. Regenerating Linked Attributes The dsconf plugin linked-attr fixup command launches a special task to regenerate all of the managed-link attribute pairs on directory entries. One or the other may be lost in certain situations. If the link attribute exists in an entry, the task traces the cross-referenced DN in the available attribute and creates the corresponding configured managed attribute on the referenced entry. If a managed attribute exists with no corresponding link attribute, then the managed attribute value is removed. To repair all configured link attribute pairs for the entire scope of the plug-in, run the command as the Directory Manager: It is also possible to limit the fixup task to a single link-managed attribute pair by passing a base DN to the command. For example: 7.3.4.2. Regenerating Linked Attributes Using ldapmodify Repairing linked attributes is one of the tasks which can be managed through a special task configuration entry. Task entries occur under the cn=tasks configuration entry in the dse.ldif file, so it is also possible to initiate a task by adding the entry using ldapmodify . When the task is complete, the entry is removed from the directory. This task is the same one created automatically by the dsconf plugin linked-attr fixup command when it is run. To initiate a linked attributes fixup task, add an entry under the cn=fixup linked attributes,cn=tasks,cn=config entry. The only required attribute is the cn for the specific task, though it also allows the ttl attribute to set a timeout period. Using ldapmodify : Once the task is completed, the entry is deleted from the dse.ldif configuration, so it is possible to reuse the same task entry continually.
The cn=fixup linked attributes task configuration is described in more detail in the Configuration, Command, and File Reference .
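To make the Manager Link example (Example 7.5) concrete, the following hedged sketch adds a directReport value to one hypothetical entry and lets the plug-in maintain the reverse link; the user entries are illustrative only:
ldapmodify -D "cn=Directory Manager" -W -p 389 -h server.example.com -x
dn: uid=jsmith,ou=people,dc=example,dc=com
changetype: modify
add: directReport
directReport: uid=bjones,ou=people,dc=example,dc=com
Because both entries fall under the ou=people,dc=example,dc=com link scope, the plug-in automatically writes manager: uid=jsmith,ou=people,dc=example,dc=com on the uid=bjones entry, and removes it again if the directReport value is later deleted.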
[ "dn: cn=Manager Link,cn=Linked Attributes,cn=plugins,cn=config objectClass: top objectClass: extensibleObject cn: Manager Link linkType: directReport managedType: manager linkScope: ou=people,dc=example,dc=com", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin linked-attr config \"Manager Link\" add --link-type=directReport --managed-type=manager", "dsctl instance_name restart", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin linked-attr fixup", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin linked-attr fixup \"cn=Manager Link,cn=Linked Attributes,cn=plugins,cn=config\"", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: cn=example,cn=fixup linked attributes,cn=tasks,cn=config changetype: add cn:example ttl: 5" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/linking-attributes
Chapter 12. Network Time Protocol
Chapter 12. Network Time Protocol You need to ensure that systems within your Red Hat OpenStack Platform cluster have accurate and consistent timestamps between systems. Red Hat OpenStack Platform on Red Hat Enterprise Linux 9 supports Chrony for time management. For more information, see Using the Chrony suite to configure NTP . 12.1. Why consistent time is important Consistent time throughout your organization is important for both operational and security needs: Identifying a security event Consistent timekeeping helps you correlate timestamps for events on affected systems so that you can understand the sequence of events. Authentication and security systems Security systems can be sensitive to time skew, for example: A kerberos-based authentication system might refuse to authenticate clients that are affected by seconds of clock skew. Transport layer security (TLS) certificates depend on a valid source of time. A client to server TLS connection fails if the difference between client and server system times exceeds the Valid From date range. Red Hat OpenStack Platform services Some core OpenStack services are especially dependent on accurate timekeeping, including High Availability (HA) and Ceph. 12.2. NTP design Network time protocol (NTP) is organized in a hierarchical design. Each layer is called a stratum. At the top of the hierarchy are stratum 0 devices such as atomic clocks. In the NTP hierarchy, stratum 0 devices provide reference for publicly available stratum 1 and stratum 2 NTP time servers. Do not connect your data center clients directly to publicly available NTP stratum 1 or 2 servers. The number of direct connections would put unnecessary strain on the public NTP resources. Instead, allocate a dedicated time server in your data center, and connect the clients to that dedicated server. Configure instances to receive time from your dedicated time servers, not the host on which they reside. Note Service containers running within the Red Hat OpenStack Platform environment still receive time from the host on which they reside.
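As an illustration of this design, a minimal client-side /etc/chrony.conf might reference only the dedicated time servers in your data center; the host names below are placeholders for your own servers:
server clock0.example.com iburst
server clock1.example.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
The iburst option speeds up initial synchronization, and makestep allows the clock to be stepped rather than slewed if the offset is large during the first few updates.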
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/hardening_red_hat_openstack_platform/assembly_network-time-protocol_security_and_hardening
3.2. Modeling Your Source Metadata
3.2. Modeling Your Source Metadata When you model the Source Metadata within your enterprise information systems, you capture some detailed information, including: Identification of datatype Storage formats Constraints Source-specific locations and names The Source Metadata captures this detailed technical metadata to provide a map of the data, the location of the data, and how you access it. This collection of Source Metadata comprises a direct mapping of the information sources within your enterprise. If you use the JBoss Data Virtualization Server for information integration, this technical metadata plays an integral part in query resolution. For example, our ZIPCode column and its parent table StreetAddress map directly to fields within our hypothetical address book database. To extend our example, we might have a second source of information, a comma separated text file provided by a marketing research vendor. This text file can supply additional demographic information based upon address or ZIP code. This text file would represent another Enterprise Information System (EIS), and the meta objects in its Source Model would describe each comma separated value.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/modeling_your_source_metadata
Chapter 3. Avro
Chapter 3. Avro This component provides a dataformat for avro, which allows serialization and deserialization of messages using Apache Avro's binary dataformat. Since Camel 3.2, the RPC functionality has been moved into the separate camel-avro-rpc component. You can easily generate classes from a schema, using Maven, Ant, and so on. More details can be found at the Apache Avro documentation . 3.1. Dependencies When using camel-avro with Red Hat build of Camel Spring Boot, add the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-avro-starter</artifactId> </dependency> 3.2. Avro Dataformat Options The Avro dataformat supports 1 option, which is listed below. Name Default Java Type Description instanceClassName String Class name to use for marshalling and unmarshalling. 3.3. Avro Data Format usage Using the avro data format is as easy as specifying the class that you want to marshal or unmarshal in your route. AvroDataFormat format = new AvroDataFormat(Value.SCHEMAUSD); from("direct:in").marshal(format).to("direct:marshal"); from("direct:back").unmarshal(format).to("direct:unmarshal"); Where Value is an Avro Maven Plugin Generated class. Or, in XML: <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:in"/> <marshal> <avro instanceClass="org.apache.camel.dataformat.avro.Message"/> </marshal> <to uri="log:out"/> </route> </camelContext> An alternative is to specify the dataformat inside the context and reference it from your route. <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> <dataFormats> <avro id="avro" instanceClass="org.apache.camel.dataformat.avro.Message"/> </dataFormats> <route> <from uri="direct:in"/> <marshal><custom ref="avro"/></marshal> <to uri="log:out"/> </route> </camelContext> In the same manner, you can unmarshal using the avro data format. 3.4. Spring Boot Auto-Configuration When using avro with Spring Boot make sure to add the Maven dependency to have support for auto configuration. The component supports 2 options, which are listed below. Name Description Default Type camel.dataformat.avro.enabled Whether to enable auto configuration of the avro data format. This is enabled by default. Boolean camel.dataformat.avro.instance-class-name Class name to use for marshalling and unmarshalling. String
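If you rely on Spring Boot auto-configuration rather than declaring the data format in the route, the two options above map onto application.properties entries; the following sketch reuses the generated Message class from the XML examples:
camel.dataformat.avro.enabled=true
camel.dataformat.avro.instance-class-name=org.apache.camel.dataformat.avro.Message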
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-avro-starter</artifactId> </dependency>", "AvroDataFormat format = new AvroDataFormat(Value.SCHEMAUSD); from(\"direct:in\").marshal(format).to(\"direct:marshal\"); from(\"direct:back\").unmarshal(format).to(\"direct:unmarshal\");", "<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:in\"/> <marshal> <avro instanceClass=\"org.apache.camel.dataformat.avro.Message\"/> </marshal> <to uri=\"log:out\"/> </route> </camelContext>", "<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <dataFormats> <avro id=\"avro\" instanceClass=\"org.apache.camel.dataformat.avro.Message\"/> </dataFormats> <route> <from uri=\"direct:in\"/> <marshal><custom ref=\"avro\"/></marshal> <to uri=\"log:out\"/> </route> </camelContext>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-avro-dataformat-starter
4.4. Tuning Tasks with Tuna
4.4. Tuning Tasks with Tuna To change policy and priority information on threads, use the --priority parameter: The pid_or_cmd_list argument is a list of comma-separated PIDs or command-name patterns. Set the policy to RR for round-robin, FIFO for first in, first out, or OTHER for the default policy. For an overview of the scheduling policies, see Section 6.3.6, "Tuning Scheduling Policy" . Set the rt_priority in the range 1-99. 1 is the lowest priority, and 99 is the highest priority. For example: To verify the changes you set, use the --show_threads parameter both before and after the modifying --priority parameter: This allows you to compare the state of the selected threads before and after your changes.
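For instance, to move two hypothetical PIDs to the FIFO policy at priority 50 and confirm the change in a single invocation, a command along these lines can be used:
tuna --threads=7861,7862 --show_threads --priority=FIFO:50 --show_threads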
[ "tuna --threads= pid_or_cmd_list --priority=[ policy : ] rt_priority", "tuna --threads=7861 --priority=RR:40", "tuna --threads=sshd --show_threads --priority=RR:40 --show_threads thread ctxt_switches pid SCHED_ rtpri affinity voluntary nonvoluntary cmd 1034 OTHER 0 0,1,2,3 12 17 sshd thread ctxt_switches pid SCHED_ rtpri affinity voluntary nonvoluntary cmd 1034 RR 40 0,1,2,3 12 17 sshd" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sec-tuna-tuning-tasks-with-tuna
Chapter 10. HelmChartRepository [helm.openshift.io/v1beta1]
Chapter 10. HelmChartRepository [helm.openshift.io/v1beta1] Description HelmChartRepository holds cluster-wide configuration for proxied Helm chart repository Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object Observed status of the repository within the cluster.. 10.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description connectionConfig object Required configuration for connecting to the chart repo description string Optional human readable repository description, it can be used by UI for displaying purposes disabled boolean If set to true, disable the repo usage in the cluster/namespace name string Optional associated human readable repository name, it can be used by UI for displaying purposes 10.1.2. .spec.connectionConfig Description Required configuration for connecting to the chart repo Type object Property Type Description ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca-bundle.crt" is used to locate the data. If empty, the default system roots are used. The namespace for this config map is openshift-config. tlsClientConfig object tlsClientConfig is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate and private key to present when connecting to the server. The key "tls.crt" is used to locate the client certificate. The key "tls.key" is used to locate the private key. The namespace for this secret is openshift-config. url string Chart repository URL 10.1.3. .spec.connectionConfig.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca-bundle.crt" is used to locate the data. If empty, the default system roots are used. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 10.1.4. .spec.connectionConfig.tlsClientConfig Description tlsClientConfig is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate and private key to present when connecting to the server. The key "tls.crt" is used to locate the client certificate. The key "tls.key" is used to locate the private key. The namespace for this secret is openshift-config. 
Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 10.1.5. .status Description Observed status of the repository within the cluster.. Type object Property Type Description conditions array conditions is a list of conditions and their statuses conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } 10.1.6. .status.conditions Description conditions is a list of conditions and their statuses Type array 10.1.7. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 10.2. 
API endpoints The following API endpoints are available: /apis/helm.openshift.io/v1beta1/helmchartrepositories DELETE : delete collection of HelmChartRepository GET : list objects of kind HelmChartRepository POST : create a HelmChartRepository /apis/helm.openshift.io/v1beta1/helmchartrepositories/{name} DELETE : delete a HelmChartRepository GET : read the specified HelmChartRepository PATCH : partially update the specified HelmChartRepository PUT : replace the specified HelmChartRepository /apis/helm.openshift.io/v1beta1/helmchartrepositories/{name}/status GET : read status of the specified HelmChartRepository PATCH : partially update status of the specified HelmChartRepository PUT : replace status of the specified HelmChartRepository 10.2.1. /apis/helm.openshift.io/v1beta1/helmchartrepositories HTTP method DELETE Description delete collection of HelmChartRepository Table 10.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind HelmChartRepository Table 10.2. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepositoryList schema 401 - Unauthorized Empty HTTP method POST Description create a HelmChartRepository Table 10.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.4. Body parameters Parameter Type Description body HelmChartRepository schema Table 10.5. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 201 - Created HelmChartRepository schema 202 - Accepted HelmChartRepository schema 401 - Unauthorized Empty 10.2.2. /apis/helm.openshift.io/v1beta1/helmchartrepositories/{name} Table 10.6. Global path parameters Parameter Type Description name string name of the HelmChartRepository HTTP method DELETE Description delete a HelmChartRepository Table 10.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 10.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified HelmChartRepository Table 10.9. 
HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified HelmChartRepository Table 10.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.11. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified HelmChartRepository Table 10.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.13. Body parameters Parameter Type Description body HelmChartRepository schema Table 10.14. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 201 - Created HelmChartRepository schema 401 - Unauthorized Empty 10.2.3. /apis/helm.openshift.io/v1beta1/helmchartrepositories/{name}/status Table 10.15. Global path parameters Parameter Type Description name string name of the HelmChartRepository HTTP method GET Description read status of the specified HelmChartRepository Table 10.16. 
HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified HelmChartRepository Table 10.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.18. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified HelmChartRepository Table 10.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.20. Body parameters Parameter Type Description body HelmChartRepository schema Table 10.21. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 201 - Created HelmChartRepository schema 401 - Unauthorized Empty
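Putting the schema together, a repository object can be created declaratively; in the following hedged sketch the repository name, display name, and chart URL are placeholders:
cat <<EOF | oc apply -f -
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: example-charts
spec:
  name: Example Charts
  connectionConfig:
    url: https://charts.example.com
EOF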
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/config_apis/helmchartrepository-helm-openshift-io-v1beta1
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.4/providing-direct-documentation-feedback_openjdk
13.2. Event Notification with Monitoring Resources
13.2. Event Notification with Monitoring Resources The ocf:pacemaker:ClusterMon resource can monitor the cluster status and trigger alerts on each cluster event. This resource runs the crm_mon command in the background at regular intervals. By default, the crm_mon command listens for resource events only; to enable listening for fencing events, you can provide the --watch-fencing option to the command when you configure the ClusterMon resource. The crm_mon command does not monitor for membership issues but will print a message when fencing is started and when monitoring is started for that node, which would imply that a member just joined the cluster. The ClusterMon resource can execute an external program to determine what to do with cluster notifications by means of the extra_options parameter. Table 13.3, "Environment Variables Passed to the External Monitor Program" lists the environment variables that are passed to that program, which describe the type of cluster event that occurred. Table 13.3. Environment Variables Passed to the External Monitor Program Environment Variable Description CRM_notify_recipient The static external-recipient from the resource definition CRM_notify_node The node on which the status change happened CRM_notify_rsc The name of the resource that changed the status CRM_notify_task The operation that caused the status change CRM_notify_desc The textual output of the relevant error code of the operation (if any) that caused the status change CRM_notify_rc The return code of the operation CRM_target_rc The expected return code of the operation CRM_notify_status The numerical representation of the status of the operation The following example configures a ClusterMon resource that executes the external program crm_logger.sh which will log the event notifications specified in the program. The following procedure creates the crm_logger.sh program that this resource will use. On one node of the cluster, create the program that will log the event notifications. Set the ownership and permissions for the program. Use the scp command to copy the crm_logger.sh program to the other nodes of the cluster, putting the program in the same location on those nodes and setting the same ownership and permissions for the program. The following example configures the ClusterMon resource, named ClusterMon-External , that runs the program /usr/local/bin/crm_logger.sh . The ClusterMon resource outputs the cluster status to an html file, which is /var/www/html/cluster_mon.html in this example. The pidfile detects whether ClusterMon is already running; in this example that file is /var/run/crm_mon-external.pid . This resource is created as a clone so that it will run on every node in the cluster. The --watch-fencing option is specified to enable monitoring of fencing events in addition to resource events, including the start/stop/monitor, start/monitor, and stop of the fencing resource. Note The crm_mon command that this resource executes and which could be run manually is as follows: The following example shows the format of the output of the monitoring notifications that this example yields.
[ "cat <<-END >/usr/local/bin/crm_logger.sh #!/bin/sh logger -t \"ClusterMon-External\" \"USD{CRM_notify_node} USD{CRM_notify_rsc} USD{CRM_notify_task} USD{CRM_notify_desc} USD{CRM_notify_rc} USD{CRM_notify_target_rc} USD{CRM_notify_status} USD{CRM_notify_recipient}\"; exit; END", "chmod 700 /usr/local/bin/crm_logger.sh chown root.root /usr/local/bin/crm_logger.sh", "pcs resource create ClusterMon-External ClusterMon user=root update=10 extra_options=\"-E /usr/local/bin/crm_logger.sh --watch-fencing\" htmlfile=/var/www/html/cluster_mon.html pidfile=/var/run/crm_mon-external.pid clone", "/usr/sbin/crm_mon -p /var/run/crm_mon-manual.pid -d -i 5 -h /var/www/html/crm_mon-manual.html -E \"/usr/local/bin/crm_logger.sh\" --watch-fencing", "Aug 7 11:31:32 rh6node1pcmk ClusterMon-External: rh6node2pcmk.examplerh.com ClusterIP st_notify_fence Operation st_notify_fence requested by rh6node1pcmk.examplerh.com for peer rh6node2pcmk.examplerh.com: OK (ref=b206b618-e532-42a5-92eb-44d363ac848e) 0 0 0 #177 Aug 7 11:31:32 rh6node1pcmk ClusterMon-External: rh6node1pcmk.examplerh.com ClusterIP start OK 0 0 0 Aug 7 11:31:32 rh6node1pcmk ClusterMon-External: rh6node1pcmk.examplerh.com ClusterIP monitor OK 0 0 0 Aug 7 11:33:59 rh6node1pcmk ClusterMon-External: rh6node1pcmk.examplerh.com fence_xvms monitor OK 0 0 0 Aug 7 11:33:59 rh6node1pcmk ClusterMon-External: rh6node1pcmk.examplerh.com ClusterIP monitor OK 0 0 0 Aug 7 11:33:59 rh6node1pcmk ClusterMon-External: rh6node1pcmk.examplerh.com ClusterMon-External start OK 0 0 0 Aug 7 11:33:59 rh6node1pcmk ClusterMon-External: rh6node1pcmk.examplerh.com fence_xvms start OK 0 0 0 Aug 7 11:33:59 rh6node1pcmk ClusterMon-External: rh6node1pcmk.examplerh.com ClusterIP start OK 0 0 0 Aug 7 11:33:59 rh6node1pcmk ClusterMon-External: rh6node1pcmk.examplerh.com ClusterMon-External monitor OK 0 0 0 Aug 7 11:34:00 rh6node1pcmk crmd[2887]: notice: te_rsc_command: Initiating action 8: monitor ClusterMon-External:1_monitor_0 on rh6node2pcmk.examplerh.com Aug 7 11:34:00 rh6node1pcmk crmd[2887]: notice: te_rsc_command: Initiating action 16: start ClusterMon-External:1_start_0 on rh6node2pcmk.examplerh.com Aug 7 11:34:00 rh6node1pcmk ClusterMon-External: rh6node1pcmk.examplerh.com ClusterIP stop OK 0 0 0 Aug 7 11:34:00 rh6node1pcmk crmd[2887]: notice: te_rsc_command: Initiating action 15: monitor ClusterMon-External_monitor_10000 on rh6node2pcmk.examplerh.com Aug 7 11:34:00 rh6node1pcmk ClusterMon-External: rh6node2pcmk.examplerh.com ClusterMon-External start OK 0 0 0 Aug 7 11:34:00 rh6node1pcmk ClusterMon-External: rh6node2pcmk.examplerh.com ClusterMon-External monitor OK 0 0 0 Aug 7 11:34:00 rh6node1pcmk ClusterMon-External: rh6node2pcmk.examplerh.com ClusterIP start OK 0 0 0 Aug 7 11:34:00 rh6node1pcmk ClusterMon-External: rh6node2pcmk.examplerh.com ClusterIP monitor OK 0 0 0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-eventnotification-HAAR
Metadata APIs
Metadata APIs OpenShift Container Platform 4.13 Reference guide for metadata APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/metadata_apis/index
Chapter 5. Understanding SystemTap Errors
Chapter 5. Understanding SystemTap Errors This chapter explains errors that are commonly encountered while using SystemTap. Many diagnostic errors include strings that suggest reading a man page, such as: In such instances, see the respective man pages for warning:: example1 and error:: example2 which provide explanatory information specific to that diagnostic. 5.1. Parse and Semantic Errors These types of errors occur while SystemTap attempts to parse and translate the script into C, prior to being converted into a kernel module. For example, type errors result from operations that assign invalid values to variables or arrays. parse error: expected abc , saw xyz The script contains a grammatical/typographical error. SystemTap detected a type of construct that is incorrect, given the context of the probe. The following invalid SystemTap script is missing its probe handlers: It results in the following error message showing that the parser was expecting something other than the probe keyword in column 1 of line 2: parse error: embedded code in unprivileged script The script contains unsafe embedded C code (blocks of code surrounded by %{ %} ). SystemTap allows you to embed C code in a script, which is useful if there are no tapsets to suit your purposes. However, embedded C constructs are not safe; as such, SystemTap warns you with this error if such constructs appear in the script. If you are sure of the safety of any similar constructs in the script and are a member of the stapdev group (or have root privileges), run the script in guru mode by using the -g option: stap -g script semantic error: type mismatch for identifier ' ident ' ... string vs. long The ident function in the script used the wrong type ( %s or %d ). This error presents itself in Example 5.1, "error-variable.stp" . Because the execname() function returns a string, the format specifier should be %s , not %d . Example 5.1. error-variable.stp semantic error: unresolved type for identifier ' ident ' The identifier (a variable, for example) was used, but no type (integer or string) could be determined. This occurs, for instance, if you use a variable in a printf statement while the script never assigns a value to the variable. semantic error: Expecting symbol or array index expression SystemTap could not assign a value to a variable or to a location in an array. The destination for the assignment is not a valid destination. The following example code would generate this error: while searching for arity N function, semantic error: unresolved function call A function call or array index expression in the script used an invalid number of arguments/parameters. In SystemTap, arity can refer to either the number of indices for an array or the number of parameters to a function. semantic error: array locals not supported, missing global declaration? The script used an array operation without declaring the array as a global variable (global variables can be declared after their use in SystemTap scripts). Similar messages appear if an array is used, but with inconsistent arities. semantic error: variable ' var ' modified during 'foreach' iteration The var array is being modified (being assigned to or deleted from) within an active foreach loop. This error also displays if an operation within the script performs a function call within the foreach loop. semantic error: probe point mismatch at position N , while resolving probe point pnt SystemTap did not understand what the event or SystemTap function pnt refers to.
This usually means that SystemTap could not find a match for pnt in the tapset library. The N refers to the line and column of the error. semantic error: no match for probe point, while resolving probe point pnt The pnt events and handler function could not be resolved for a variety of reasons. This error occurs when the script contains the kernel.function(" name ") event, and name does not exist. In some cases, the error could also mean the script contains an invalid kernel file name or source-line number. semantic error: unresolved target-symbol expression A handler in the script references a target variable, but the value of the variable could not be resolved. This error could also mean that a handler is referencing a target variable that is not valid in the context when it was referenced. This may be a result of compiler optimization of the generated code. semantic error: libdwfl failure There was a problem processing the debugging information. In most cases, this error results from the installation of a kernel-debuginfo package. The installed kernel-debuginfo package itself may have some consistency or correctness problems. semantic error: cannot find package debuginfo SystemTap could not find a suitable kernel-debuginfo at all.
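For comparison, the corrected form of Example 5.1 simply uses %s for the string returned by execname() :
probe syscall.open { printf ("%s(%d) open\n", execname(), pid()) }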
[ "[man warning:: example1 ] [man error:: example2 ]", "probe vfs.read probe vfs.write", "parse error: expected one of '. , ( ? ! { = +=' saw: keyword at perror.stp:2:1 1 parse error(s).", "probe syscall.open { printf (\"%d(%d) open\\n\", execname(), pid()) }", "probe begin { printf(\"x\") = 1 }" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_beginners_guide/errors
Chapter 10. Namespace [v1]
Chapter 10. Namespace [v1] Description Namespace provides a scope for Names. Use of multiple namespaces is optional. Type object 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object NamespaceSpec describes the attributes on a Namespace. status object NamespaceStatus is information about the current status of a Namespace. 10.1.1. .spec Description NamespaceSpec describes the attributes on a Namespace. Type object Property Type Description finalizers array (string) Finalizers is an opaque list of values that must be empty to permanently remove object from storage. More info: https://kubernetes.io/docs/tasks/administer-cluster/namespaces/ 10.1.2. .status Description NamespaceStatus is information about the current status of a Namespace. Type object Property Type Description conditions array Represents the latest available observations of a namespace's current state. conditions[] object NamespaceCondition contains details about state of namespace. phase string Phase is the current lifecycle phase of the namespace. More info: https://kubernetes.io/docs/tasks/administer-cluster/namespaces/ Possible enum values: - "Active" means the namespace is available for use in the system - "Terminating" means the namespace is undergoing graceful termination 10.1.3. .status.conditions Description Represents the latest available observations of a namespace's current state. Type array 10.1.4. .status.conditions[] Description NamespaceCondition contains details about state of namespace. Type object Required type status Property Type Description lastTransitionTime Time message string reason string status string Status of the condition, one of True, False, Unknown. type string Type of namespace controller condition. 10.2. API endpoints The following API endpoints are available: /api/v1/namespaces GET : list or watch objects of kind Namespace POST : create a Namespace /api/v1/watch/namespaces GET : watch individual changes to a list of Namespace. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{name} DELETE : delete a Namespace GET : read the specified Namespace PATCH : partially update the specified Namespace PUT : replace the specified Namespace /api/v1/watch/namespaces/{name} GET : watch changes to an object of kind Namespace. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/namespaces/{name}/status GET : read status of the specified Namespace PATCH : partially update status of the specified Namespace PUT : replace status of the specified Namespace /api/v1/namespaces/{name}/finalize PUT : replace finalize of the specified Namespace 10.2.1. 
/api/v1/namespaces HTTP method GET Description list or watch objects of kind Namespace Table 10.1. HTTP responses HTTP code Reponse body 200 - OK NamespaceList schema 401 - Unauthorized Empty HTTP method POST Description create a Namespace Table 10.2. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.3. Body parameters Parameter Type Description body Namespace schema Table 10.4. HTTP responses HTTP code Reponse body 200 - OK Namespace schema 201 - Created Namespace schema 202 - Accepted Namespace schema 401 - Unauthorized Empty 10.2.2. /api/v1/watch/namespaces HTTP method GET Description watch individual changes to a list of Namespace. deprecated: use the 'watch' parameter with a list operation instead. Table 10.5. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 10.2.3. /api/v1/namespaces/{name} Table 10.6. Global path parameters Parameter Type Description name string name of the Namespace HTTP method DELETE Description delete a Namespace Table 10.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 10.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Namespace Table 10.9. HTTP responses HTTP code Reponse body 200 - OK Namespace schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Namespace Table 10.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.11. HTTP responses HTTP code Reponse body 200 - OK Namespace schema 201 - Created Namespace schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Namespace Table 10.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.13. Body parameters Parameter Type Description body Namespace schema Table 10.14. HTTP responses HTTP code Reponse body 200 - OK Namespace schema 201 - Created Namespace schema 401 - Unauthorized Empty 10.2.4. /api/v1/watch/namespaces/{name} Table 10.15. Global path parameters Parameter Type Description name string name of the Namespace HTTP method GET Description watch changes to an object of kind Namespace. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 10.16. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 10.2.5. /api/v1/namespaces/{name}/status Table 10.17. Global path parameters Parameter Type Description name string name of the Namespace HTTP method GET Description read status of the specified Namespace Table 10.18. HTTP responses HTTP code Reponse body 200 - OK Namespace schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Namespace Table 10.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.20. HTTP responses HTTP code Response body 200 - OK Namespace schema 201 - Created Namespace schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Namespace Table 10.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.22. Body parameters Parameter Type Description body Namespace schema Table 10.23. HTTP responses HTTP code Response body 200 - OK Namespace schema 201 - Created Namespace schema 401 - Unauthorized Empty 10.2.6. /api/v1/namespaces/{name}/finalize Table 10.24. Global path parameters Parameter Type Description name string name of the Namespace Table 10.25. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields.
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method PUT Description replace finalize of the specified Namespace Table 10.26. Body parameters Parameter Type Description body Namespace schema Table 10.27. HTTP responses HTTP code Response body 200 - OK Namespace schema 201 - Created Namespace schema 401 - Unauthorized Empty
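The endpoints above can be exercised directly with curl against the cluster API server. The following commands are an illustrative sketch rather than part of this reference: they assume a reachable API server URL and a bearer token with permission to manage namespaces, and the namespace name demo-namespace is a placeholder.
# List namespaces (GET /api/v1/namespaces)
curl -k -H "Authorization: Bearer $TOKEN" "$APISERVER/api/v1/namespaces"
# Create a namespace with a server-side dry run and strict field validation
curl -k -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  "$APISERVER/api/v1/namespaces?dryRun=All&fieldValidation=Strict" \
  -d '{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"demo-namespace"}}'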
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/metadata_apis/namespace-v1
function::ctime
function::ctime Name function::ctime - Convert seconds since epoch into human-readable date/time string Synopsis Arguments epochsecs Number of seconds since epoch (as returned by gettimeofday_s ) Description Takes an argument of seconds since the epoch as returned by gettimeofday_s . Returns a string of the form " Wed Jun 30 21:49:08 1993 " . The string will always be exactly 24 characters. If the time would be unreasonably far in the past (before what can be represented with a 32 bit offset in seconds from the epoch), an error will occur (which can be avoided with try/catch). If the time would be unreasonably far in the future, an error will also occur. Note that the epoch (zero) corresponds to " Thu Jan 1 00:00:00 1970 " . The earliest full date given by ctime, corresponding to epochsecs -2147483648 is " Fri Dec 13 20:45:52 1901 " . The latest full date given by ctime, corresponding to epochsecs 2147483647 is " Tue Jan 19 03:14:07 2038 " . The abbreviations for the days of the week are 'Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', and 'Sat'. The abbreviations for the months are 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', and 'Dec'. Note that the real C library ctime function puts a newline ('\n') character at the end of the string, which this function does not. Also note that since the kernel has no concept of timezones, the returned time is always in GMT.
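As a quick illustration that is not part of the reference entry, the function can be exercised with a one-line SystemTap script; this sketch assumes SystemTap is installed and uses the oneshot probe point.
# Print the current time as a human-readable string, then exit
stap -e 'probe oneshot { printf("%s\n", ctime(gettimeofday_s())) }'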
[ "ctime:string(epochsecs:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ctime
Chapter 3. LocalSubjectAccessReview [authorization.openshift.io/v1]
Chapter 3. LocalSubjectAccessReview [authorization.openshift.io/v1] Description LocalSubjectAccessReview is an object for requesting information about whether a user or group can perform an action in a particular namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required namespace verb resourceAPIGroup resourceAPIVersion resource resourceName path isNonResourceURL user groups scopes 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources content RawExtension Content is the actual content of the request for create and update groups array (string) Groups is optional. Groups is the list of groups to which the User belongs. isNonResourceURL boolean IsNonResourceURL is true if this is a request for a non-resource URL (outside of the resource hierarchy) kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces path string Path is the path of a non resource URL resource string Resource is one of the existing resource types resourceAPIGroup string Group is the API group of the resource Serialized as resourceAPIGroup to avoid confusion with the 'groups' field when inlined resourceAPIVersion string Version is the API version of the resource Serialized as resourceAPIVersion to avoid confusion with TypeMeta.apiVersion and ObjectMeta.resourceVersion when inlined resourceName string ResourceName is the name of the resource being requested for a "get" or deleted for a "delete" scopes array (string) Scopes to use for the evaluation. Empty means "use the unscoped (full) permissions of the user/groups". Nil for a self-SAR, means "use the scopes on this request". Nil for a regular SAR, means the same as empty. user string User is optional. If both User and Groups are empty, the current authenticated user is used. verb string Verb is one of: get, list, watch, create, update, delete 3.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/namespaces/{namespace}/localsubjectaccessreviews POST : create a LocalSubjectAccessReview 3.2.1. /apis/authorization.openshift.io/v1/namespaces/{namespace}/localsubjectaccessreviews Table 3.1. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 3.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a LocalSubjectAccessReview Table 3.3. Body parameters Parameter Type Description body LocalSubjectAccessReview schema Table 3.4. HTTP responses HTTP code Response body 200 - OK LocalSubjectAccessReview schema 201 - Created LocalSubjectAccessReview schema 202 - Accepted LocalSubjectAccessReview schema 401 - Unauthorized Empty
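For illustration only, a review can be submitted with curl; this sketch is not taken from the chapter and assumes a reachable API server, a bearer token, and placeholder values for the namespace ( web-team ), user ( jane ), and resource.
# Ask whether user "jane" can perform "get" on pods in the "web-team" namespace
curl -k -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  "$APISERVER/apis/authorization.openshift.io/v1/namespaces/web-team/localsubjectaccessreviews" \
  -d '{"kind":"LocalSubjectAccessReview","apiVersion":"authorization.openshift.io/v1","namespace":"web-team","verb":"get","resource":"pods","resourceAPIGroup":"","resourceAPIVersion":"v1","resourceName":"","path":"","isNonResourceURL":false,"user":"jane","groups":[],"scopes":[]}'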
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/authorization_apis/localsubjectaccessreview-authorization-openshift-io-v1
Chapter 5. Using the Management CLI with a Managed Domain
Chapter 5. Using the Management CLI with a Managed Domain You can use the management CLI to configure and manage both standalone servers and managed domains. The JBoss EAP documentation usually shows examples of management CLI commands for a standalone server configuration. If you are running a managed domain instead, you often need to adjust the command. The following sections describe how to change standalone server management CLI commands for a managed domain configuration. Specify the Profile for Subsystem Configuration The management CLI commands for standalone server subsystem configuration begin with /subsystem= SUBSYSTEM_NAME . For managed domain subsystem configuration, you must specify which profile's subsystem to configure by starting the command with /profile= PROFILE_NAME /subsystem= SUBSYSTEM_NAME . Example: Read the Logging Subsystem Configuration (Standalone Server) This example shows how to read the configuration of the logging subsystem for a standalone server. Example: Read the Logging Subsystem Configuration (Managed Domain) This example shows how to read the configuration of the logging subsystem for the default profile in a managed domain. Specify the Host for Core Management and Runtime Commands Some core management and runtime commands for a managed domain require you to specify the host that the command applies to by starting the command with /host= HOST_NAME . Example: Enable Audit Logging (Standalone Server) This example shows how to enable audit logging for a standalone server. Example: Enable Audit Logging (Managed Domain) This example shows how to enable audit logging for the master host in a managed domain. Note Some commands require the host as an argument, for example, reload --host= HOST_NAME . If you do not specify a host for these commands, an error message notifies you that the --host argument is required. Specify the Server for Core Management and Runtime Commands Some core management and runtime commands for a managed domain require you to specify the host and server that the command applies to by starting the command with /host= HOST_NAME /server= SERVER_NAME . Example: Display Runtime Metrics for a Deployment (Standalone Server) This example shows how to display runtime metrics for a standalone server deployment. Example: Display Runtime Metrics for a Deployment (Managed Domain) This example shows how to display runtime metrics for a managed domain deployment that is deployed to the server-one server on the master host.
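The commands in this chapter are normally entered in an interactive management CLI session. As a hedged sketch that is not part of the chapter, the same domain-mode command can also be passed non-interactively; the EAP_HOME path and the controller address are placeholders.
# Connect to the domain controller and read the logging subsystem of the default profile
EAP_HOME/bin/jboss-cli.sh --connect --controller=localhost:9990 \
  --command="/profile=default/subsystem=logging:read-resource"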
[ "/subsystem=logging:read-resource", "/profile=default/subsystem=logging:read-resource", "/core-service=management/access=audit/logger=audit-log:write-attribute(name=enabled,value=true)", "/host=master/core-service=management/access=audit/logger=audit-log:write-attribute(name=enabled,value=true)", "/deployment=test-application.war/subsystem=undertow:read-attribute(name=active-sessions)", "/host=master/server=server-one/deployment=test-application.war/subsystem=undertow:read-attribute(name=active-sessions)" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/management_cli_guide/using_cli_domain
Chapter 9. Deploying compliance policies
Chapter 9. Deploying compliance policies To deploy a compliance policy, you must install the SCAP client, update the cron schedule file, and upload the SCAP content selected in the policy onto a host. 9.1. Inclusion of remote SCAP resources SCAP data streams can reference remote resources, such as OVAL files, that the SCAP client fetches over the internet when it runs on hosts. If a data stream requires a remote resource, you can see a warning from the OpenSCAP Scanner tool on your Satellite Server, such as: By default, the SCAP client is configured to ignore the remote resources and skip the XCCDF rules that rely on the resources. The skipped rules then result in the notchecked status. For hosts with internet access, you can enable the download of remote resources on hosts in Satellite. For information about applying remote SCAP resources to hosts that cannot access the internet, see Section 9.2, "Applying remote SCAP resources in a disconnected environment" . Using the Ansible deployment method Override the following Ansible variable: Name: foreman_scap_client_fetch_remote_resources Type: boolean Value: true For more information, see Overriding Ansible Variables in Satellite in Managing configurations using Ansible integration . Using the Puppet deployment method Configure the following Puppet Smart Class Parameter: Name: fetch_remote_resources Type: boolean Value: true For more information, see Configuring Puppet Smart Class Parameters in Managing configurations using Puppet integration . 9.2. Applying remote SCAP resources in a disconnected environment SCAP data streams can contain remote resources, such as OVAL files, that the SCAP client can fetch over the internet when it runs on hosts. If your hosts do not have internet access, you must download remote SCAP resources and distribute them from Satellite Server to your hosts as local files by downloading the files on hosts from a custom file type repository . Prerequisites You have registered your host to Satellite with remote execution enabled. Fetching remote resources must be disabled, which is the default. For more information, see Section 9.1, "Inclusion of remote SCAP resources" . Procedure On your Satellite Server, examine the data stream you use in your compliance policy to find out which missing resource you must download: Examine the name of the local file that is referenced by the data stream: On an online machine, download the missing resource: Important Ensure that the name of the downloaded file matches the name the data stream references. Add the file as new custom file type content into your Satellite Server. For more information, see Managing custom file type content in Managing content . Note the URL on which your repository is published, such as http:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_Product_Label / My_Repo_Label / . Schedule a remote job to upload the file to the home directory of root on your host. For example, use the Run Command - Script Default job template and enter the following command: For more information about running remote jobs, see Executing a Remote Job in Managing hosts . Continue with deploying your compliance policy. 9.3. Deploying a policy in a host group using Ansible After you deploy a compliance policy in a host group using Ansible, the Ansible role installs the SCAP client and configures OpenSCAP scans on the hosts according to the selected compliance policy. The SCAP content in the compliance policy might require remote resources. 
For more information, see Section 9.1, "Inclusion of remote SCAP resources" . Prerequisites You have enabled OpenSCAP on your Capsule. For more information, see Enabling OpenSCAP on Capsule Servers in Installing Capsule Server . Repositories for the operating system version of the host are synchronized on Satellite Server and enabled on the host. Red Hat Enterprise Linux 9 BaseOS and Appstream RPMs repositories Red Hat Enterprise Linux 8 BaseOS and Appstream RPMs repositories Red Hat Enterprise Linux 7 Server and Extras RPMs repositories Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server, available in the content view and the lifecycle environment of the host, and enabled for the host. For more information, see Changing the repository sets status for a host in Satellite in Managing content . This repository is required for installing the SCAP client. You have created a compliance policy with the Ansible deployment option and assigned the host group. Procedure In the Satellite web UI, navigate to Configure > Host Groups . Click the host group that you want to configure for OpenSCAP reporting. From the OpenSCAP Capsule list, select the Capsule with OpenSCAP enabled that you want to use. On the Ansible Roles tab, assign the theforeman.foreman_scap_client Ansible role. Optional: On the Parameters tab, configure any Ansible variables of the role. Click Submit to save your changes. In the row of the required host group, navigate to the Actions column and select Run all Ansible roles . 9.4. Deploying a policy on a host using Ansible After you deploy a compliance policy on a host using Ansible, the Ansible role installs the SCAP client and configures OpenSCAP scans on the host according to the selected compliance policy. The SCAP content in the compliance policy might require remote resources. For more information, see Section 9.1, "Inclusion of remote SCAP resources" . Prerequisites You have enabled OpenSCAP on your Capsule. For more information, see Enabling OpenSCAP on Capsule Servers in Installing Capsule Server . Repositories for the operating system version of the host are synchronized on Satellite Server and enabled on the host. Red Hat Enterprise Linux 9 BaseOS and Appstream RPMs repositories Red Hat Enterprise Linux 8 BaseOS and Appstream RPMs repositories Red Hat Enterprise Linux 7 Server and Extras RPMs repositories Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server, available in the content view and the lifecycle environment of the host, and enabled for the host. For more information, see Changing the repository sets status for a host in Satellite in Managing content . This repository is required for installing the SCAP client. You have created a compliance policy with the Ansible deployment option. Procedure In the Satellite web UI, navigate to Hosts > All Hosts , and select Edit on the host you want to configure for OpenSCAP reporting. From the OpenSCAP Capsule list, select the Capsule with OpenSCAP enabled that you want to use. On the Ansible Roles tab, add the theforeman.foreman_scap_client Ansible role. Optional: On the Parameters tab, configure any Ansible variables of the role. Click Submit to save your changes. Click the Hosts breadcrumbs link to navigate back to the host index page. Select the host or hosts to which you want to add the policy. Click Select Action . Select Assign Compliance Policy from the list. 
In the Assign Compliance Policy window, select Remember hosts selection for the bulk action . Select the required policy from the list of available policies and click Submit . Click Select Action . Select Run all Ansible roles from the list. 9.5. Deploying a policy in a host group using Puppet After you deploy a compliance policy in a host group using Puppet, the Puppet agent installs the SCAP client and configures OpenSCAP scans on the hosts on the Puppet run according to the selected compliance policy. The SCAP content in your compliance policy might require remote resources. For more information, see Section 9.1, "Inclusion of remote SCAP resources" . Prerequisites You have enabled OpenSCAP on your Capsule. For more information, see Enabling OpenSCAP on Capsule Servers in Installing Capsule Server . Repositories for the operating system version of the host are synchronized on Satellite Server and enabled on the host. Red Hat Enterprise Linux 9 BaseOS and Appstream RPMs repositories Red Hat Enterprise Linux 8 BaseOS and Appstream RPMs repositories Red Hat Enterprise Linux 7 Server and Extras RPMs repositories Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server, available in the content view and the lifecycle environment of the host, and enabled for the host. For more information, see Changing the repository sets status for a host in Satellite in Managing content . This repository is required for installing the SCAP client. You have created a compliance policy with the Puppet deployment option and assigned the host group. Procedure In the Satellite web UI, navigate to Configure > Host Groups . Click the host group that you want to configure for OpenSCAP reporting. In the Environment list, select the Puppet environment that contains the foreman_scap_client* Puppet classes. In the OpenSCAP Capsule list, select the Capsule with OpenSCAP enabled that you want to use. On the Puppet ENC tab, add the foreman_scap_client Puppet class. Optional: Configure any Puppet Class Parameters . Click Submit to save your changes. 9.6. Deploying a policy on a host using Puppet After you deploy a compliance policy on a host using Puppet, the Puppet agent installs the SCAP client and configures OpenSCAP scans on the host on the Puppet run according to the selected compliance policy. The SCAP content in your compliance policy might require remote resources. For more information, see Section 9.1, "Inclusion of remote SCAP resources" . Prerequisites You have enabled OpenSCAP on your Capsule. For more information, see Enabling OpenSCAP on Capsule Servers in Installing Capsule Server . Repositories for the operating system version of the host are synchronized on Satellite Server and enabled on the host. Red Hat Enterprise Linux 9 BaseOS and Appstream RPMs repositories Red Hat Enterprise Linux 8 BaseOS and Appstream RPMs repositories Red Hat Enterprise Linux 7 Server and Extras RPMs repositories Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server, available in the content view and the lifecycle environment of the host, and enabled for the host. For more information, see Changing the repository sets status for a host in Satellite in Managing content . This repository is required for installing the SCAP client. You have created a compliance policy with the Puppet deployment option. 
Procedure In the Satellite web UI, navigate to Hosts > All Hosts , and select Edit on the host you want to configure for OpenSCAP reporting. From the Environment list, select the Puppet environment that contains the foreman_scap_client and foreman_scap_client::params Puppet classes. From the OpenSCAP Capsule list, select the Capsule with OpenSCAP enabled that you want to use. On the Puppet ENC tab, add the foreman_scap_client Puppet class. Optional: Configure any Puppet Class Parameters . Click the Hosts breadcrumbs link to navigate back to the host index page. Select the host or hosts to which you want to add the policy. Click Select Action . Select Assign Compliance Policy from the list. In the Assign Compliance Policy window, select Remember hosts selection for the bulk action . Select the required policy from the list of available policies and click Submit .
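Whichever deployment method you use, you can verify the result directly on a managed host. The following commands are a hedged sketch rather than part of the official procedure: they assume the client configuration is in the default /etc/foreman_scap_client/config.yaml location and that the policy ID (1 here) matches the ID shown in the cron entry generated on the host.
# Inspect the deployed client configuration and the scheduled scans
cat /etc/foreman_scap_client/config.yaml
grep -r foreman_scap_client /etc/cron.d/ 2>/dev/null
# Trigger an immediate scan for policy ID 1 and upload the report through the Capsule
foreman_scap_client 1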
[ "oscap info /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml | grep \"WARNING\" WARNING: Datastream component 'scap_org.open-scap_cref_security-data-oval-com.redhat.rhsa-RHEL8.xml.bz2' points out to the remote 'https://access.redhat.com/security/data/oval/com.redhat.rhsa-RHEL8.xml.bz2'. Use '--fetch-remote-resources' option to download it. WARNING: Skipping 'https://access.redhat.com/security/data/oval/com.redhat.rhsa-RHEL8.xml.bz2' file which is referenced from datastream", "oscap info /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml | grep \"WARNING\" WARNING: Datastream component 'scap_org.open-scap_cref_security-data-oval-com.redhat.rhsa-RHEL8.xml.bz2' points out to the remote 'https://access.redhat.com/security/data/oval/com.redhat.rhsa-RHEL8.xml.bz2'. Use '--fetch-remote-resources' option to download it. WARNING: Skipping 'https://access.redhat.com/security/data/oval/com.redhat.rhsa-RHEL8.xml.bz2' file which is referenced from datastream", "oscap info /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml Referenced check files: ssg-rhel8-oval.xml system: http://oval.mitre.org/XMLSchema/oval-definitions-5 ssg-rhel8-ocil.xml system: http://scap.nist.gov/schema/ocil/2 security-data-oval-com.redhat.rhsa-RHEL8.xml.bz2 system: http://oval.mitre.org/XMLSchema/oval-definitions-5", "curl -o security-data-oval-com.redhat.rhsa-RHEL8.xml.bz2 https://www.redhat.com/security/data/oval/com.redhat.rhsa-RHEL8.xml.bz2", "curl -o /root/ security-data-oval-com.redhat.rhsa-RHEL8.xml.bz2 http:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_Product_Label / My_Repo_Label / security-data-oval-com.redhat.rhsa-RHEL8.xml.bz2" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_security_compliance/deploying-compliance-policies_security-compliance
8.124. nfs-utils
8.124. nfs-utils 8.124.1. RHBA-2013:1714 - nfs-utils bug fix and enhancement update Updated nfs-utils packages that fix several bugs and add various enhancements are now available. The nfs-utils packages provide a daemon for the kernel Network File System (NFS) server and related tools such as mount.nfs, umount.nfs, and showmount. Bug Fixes BZ# 889272 When the "Background", "Foreground" or "timeo" options were set in multiple sections of the nfsmount.conf configuration file, each of those options was incorrectly present in the resulting parsed values. This update changes this behavior so that the first instance of each of these options overrides any subsequent ones. In addition, configuration file options could have been incorrectly passed to the mount syscall from sections that were not relevant to the mount operation being performed. The parser has been made more strict so that each option can appear at most four times: once for the system section, once for the server-specific section, once for the mount-specific section, and once for the command line mount options. BZ# 890146 Prior to this update, running the "nfsstat -s -o rpc" command produced output with incorrect labels in a table header. With this update, the underlying source code has been adapted to make sure that all columns now have the correct name. BZ# 892235 Starting the nfs service resulted in the following output: Although the sequence of events of having to first stop and then start the RPC idmapd service was previously necessary, the current init scripts do not require this behavior. This has been corrected so that starting the nfs service now simply results in a single "Starting RPC idmapd" status display. BZ# 950324 When running sm-notify, specifying the "-v <ip_address>" or "-v <hostname>" option did not work correctly after the nfs-utils packages were updated to version 1.2.2, which was the first version that included support for IPv6. This update corrects the address handling logic so that specifying a hostname, IPv4 address, or IPv6 address with the '-v' option works as expected. BZ# 952560 The nfs(5) manual page contained incorrect information about the "retrans=n" option, which specifies the number of times an NFS client will retry a request before it attempts a further recovery action. This information has been corrected and now specifies the number of attempts by protocol type. The man page correction for the "retrans=n" option is: The number of times the NFS client retries a request before it attempts further recovery action. If the retrans option is not specified, the NFS client tries each request three times with mounts using UDP and two times with mounts using TCP. Users of nfs-utils are advised to upgrade to these updated packages, which fix these bugs and add various enhancements.
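For reference, the commands touched by these fixes can be run as shown below; this is an illustrative sketch rather than text from the advisory, and the IP address is a placeholder.
# Display server-side RPC statistics (the table header corrected by BZ#890146)
nfsstat -s -o rpc
# Send reboot notifications from a specific source address or hostname (BZ#950324)
sm-notify -v 192.0.2.10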
[ "Stopping RPC idmapd: [ OK ] Starting RPC idmapd: [ OK ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/nfs-utils
Appendix A. Tests
Appendix A. Tests The Red Hat Enterprise Linux software certification includes several tests and subtests described in the following sections. A certification might exit with one of the following statuses: Pass : All the subtests have passed and no further action is required. Fail : A critical subtest or check has not succeeded and requires a change before a certification can be achieved. Review : Additional detailed review is required by Red Hat to determine the status. Warn : One or more subtests did not follow best practices and require further action. However, the certification will succeed. Red Hat recommends that you review the output of all tests, perform appropriate actions, and re-run the test as appropriate. The Red Hat Certification application plans the tests sequentially and writes a single log file each time you run the tests. Submit the log file to Red Hat for new certifications and recertifications. For more information about the certification tool and how to run the tests, see the Red Hat Software Certification Workflow Guide . Additional resources Red Hat Certification test suite download link A.1. Self check test The self check test verifies that all the software packages required in the certification process are installed and that they have not been altered. This ensures that the test environment is ready for the certification process and that all the installed certification software packages are supportable. Success criteria The test environment includes all the packages required in the certification process and the packages have not been modified. A.2. RPM test The RPM test checks whether RPM-packaged products undergoing certification adhere to Red Hat's best practices for RPM packaging. This test is mandatory for products packaged as RPMs only. The test includes the following subtests: A.2.1. RPM provenance subtest The RPM provenance subtest checks whether the origin of the RPM-packaged product undergoing certification and its dependencies can be tracked in accordance with Red Hat's best practices for RPM packaging. Success criteria Non-Red Hat packages are identified as belonging to the product undergoing certification, or its dependencies. Files are tracked within the packages. Additional resources Packaging and Distributing Software (RHEL 8) Packaging and Distributing Software (RHEL 9) A.2.2. RPM version handling subtest The RPM version handling subtest checks whether the RPM-packaged product undergoing certification and its dependencies are versioned in accordance with Red Hat's best practices for RPM packaging. Success criteria Packages and changes to packages are versioned. A.2.3. RPM dependency tracking subtest The RPM dependency tracking subtest checks whether the RPM-packaged product undergoing certification and its dependencies are tracked in accordance with Red Hat's best practices for RPM dependency tracking. Success criteria All dependencies are tracked. A.3. Supportability test The supportability test ensures that Red Hat can support Red Hat Enterprise Linux (RHEL) with the product undergoing certification as installed and running. The software/supportable tests include the following subtests: A.3.1. Log versions subtest The log versions subtest checks whether it can find the RHEL version and the kernel version that are installed on the host under test. Success criteria The test successfully detects both the RHEL version and the kernel version. A.3.2. Kernel subtest The kernel subtest checks the kernel module running on the test environment. 
The version of the kernel can be either the original General Availability (GA) version or any subsequent kernel update released for the RHEL major and minor releases. The kernel subtest also ensures that the kernel is not tainted when running in the environment. Success criteria The running kernel is a Red Hat kernel. The running kernel is released by Red Hat for use with the RHEL version. The running kernel is not tainted. The running kernel has not been modified. Additional resources Red Hat Enterprise Linux Life Cycle Red Hat Enterprise Linux Release Dates Why is the kernel "tainted" and how are the taint values deciphered? A.3.3. Kernel modules subtest The kernel modules subtest verifies that loaded kernel modules are released by Red Hat, either as part of the kernel's package or added through a Red Hat Driver Update. The kernel module subtest also ensures that kernel modules do not identify as Technology Preview. Success criteria The kernel modules are released by Red Hat and supported. Additional resources What does a "Technology Preview" feature mean? A.3.4. Third-party kernel modules subtest The third-party kernel subtest checks whether non-Red Hat kernel packages are running. The use of partner kernel modules has the potential to introduce risks to the Red Hat kernel that may not be fully ascertained during certification. As a result, when partner kernel modules are required, the certification process aims to ensure that the stack remains supportable, and the partner's responsibilities are clearly delineated. Red Hat reserves the right to deny a certification whenever partner kernel modules are required. Partner kernel modules are subject to additional verification, including (but not limited to) the following: Success criteria Partners must: Agree that you understand and will act according to the policies defined in Red Hat's production scope of coverage . Agree that you understand and will act according to the policies defined in Red Hat's third party support policy . Provide Red Hat the documentation of kernel modules written for joint customers. Provide Red Hat the contact information of your application support team and kernel engineering support team Declare that you own and support the module. Declare that module will not interfere with the RHEL kernel or userland functionality. Declare that module is not a hardware driver. Partner kernel modules must: Show the module name, size, and dependencies in the output of the lsmod command. Show the module name, filename, license, and description in the output of the modinfo command, aligned with the partner documentation. Show that the partner signs and supports the module in the output of the modinfo command. Be precompiled ko or ko.xz kmods . Be loaded after the final pivot_root . Be delivered and packaged in an RPM or other format that is signed by the partner. It must also provide a mechanism to validate both the in-memory and on-disk kernel module. If delivered and packaged as an RPM, partner kernel modules must: Meet the standard RHEL RPM certification requirements. Show that the package's vendor is responsible for its support in the output of the rpm -qi command. Show the supported Red Hat kernel range for the kernel modules in the output of the rpm -q --requires command. A.3.5. Hardware Health subtest The hardware health subtest checks the system's health by testing if the hardware is supported, meets the requirements, and has any known hardware vulnerabilities. 
The subtest does the following: Checks that the RHEL kernel does not identify hardware as unsupported. When the kernel identifies unsupported hardware, it displays a message similar to "unsupported hardware" in the system logs and triggers an unsupported kernel taint. This subtest mitigates the risk of running Red Hat products on unsupported configurations and environments. In hypervisor, partitioning, cloud instances, and other virtual machine situations, the kernel may trigger an unsupported hardware message or taint based on the hardware data presented to RHEL by the virtual machine. Checks that the host under test meets the minimum hardware requirements: RHEL 8 and RHEL 9: Minimum system RAM must be 1.5GB by CPU logical core count. Checks if the kernel has reported any known hardware vulnerabilities. Confirms that no CPUs are offline in the system. Confirms if simultaneous multithreading is available, enabled, and active in the system. Failing any of these tests will result in a warning from the test suite. Check the warnings to ensure the product is working as intended. Success criteria The kernel does not have the UNSUPPORTEDHARDWARE taint bit set. The kernel does not report an unsupported hardware system message. The kernel does not report any vulnerabilities. The kernel does not report the logic core-to-installed memory ratio as out of range. The kernel does not report CPUs in an offline state. Additional resources Minimum required memory Hardware support available in RHEL 7 but removed from RHEL 8 Hardware support available in RHEL 8 but removed from RHEL 9 A.3.6. Hypervisor/Partitioning subtest The hypervisor/partitioning subtest verifies that the architecture of the host under test is supported by RHEL. Success criteria The pass scenarios on bare-metal systems are: x86_64, ppc64le, s390x, and aarch64. The pass scenarios on hypervisor or partitioning environments are: RHEL KVM, VMware, RHEV, QEMU, and HyperV. A.3.7. Filesystem layout subtest The filesystem layout subtest verifies that the size of the root filesystem and the size and type of the boot filesystem follow the guidelines for each RHEL release. This ensures that the image has a reasonable amount of space required to operate effectively, run applications, and install updates. Success criteria RHEL 8 and RHEL 9: The root file system is 10GB or larger. The boot file system is 1GB or larger, and on an xfs or ext formatted partition. A.3.8. Installed RPMs subtest The installed RPMs subtest verifies that RPM packages installed on the system are released by Red Hat and not modified. Modified packages may introduce risks and impact the supportability of the customer's environment. You might install non-Red Hat packages if necessary, but you must add them to your product's documentation, and they must not modify or conflict with any Red Hat packages. Red Hat will review the output of this test if you install non-Red Hat packages. Success criteria The installed Red Hat RPMs are not modified. The installed non-Red Hat RPMs are necessary and documented. The installed non-Red Hat RPMs do not conflict with Red Hat RPMs or software. For example, you may develop custom packages to manage CPU affinity of interrupt requests (IRQs) for network interfaces. However, such packages might conflict with Red Hat's tuned package, which already provides similar functionality for performance tuning. Additional resources Production Support Scope of Coverage A.3.9. 
Software repositories subtest The software repositories subtest verifies that relevant Red Hat repositories are configured, and that GPG keys are imported on the host under test. Red Hat provides software packages and content in the Red Hat official software repositories. These repositories are signed with GPG keys to ensure authenticity of the distributed files. Software provided in these repositories is fully supported and reliable for customer production environments. You might configure Non-Red Hat repositories if they are necessary, but they must be properly documented and approved. Success criteria You enabled the BaseOS and AppStream RHEL repositories. You imported the GPG keys for the RHEL repositories. The relevant Red Hat repositories are: Red Hat Update Infrastructure, Red Hat Satellite, and Red Hat Content Delivery Network. You documented the non-Red Hat repositories required by the product undergoing certification, or by the certified Red Hat public cloud where you are running the tests. Note To verify Red Hat repositories, you must configure your base URL with either one of these keywords: satellite , redhat.com , or rhui . Additional resources Production Support Scope of Coverage A.3.10. Trusted containers subtest The trusted containers subtest verifies that the RHEL container tool set is installed, and that any containers installed on the host under test are either provided by Red Hat or are part of the product undergoing certification. Success criteria The RHEL container tool set is installed and operational. Any containers present in the environment are supplied as part of a RHEL subscription or have been verified as part of the product certification. The default RHEL container registry, registry.redhat.io , is enabled. Additional resources Building, running, and managing containers (RHEL 8) Building, running, and managing containers (RHEL 9) A.3.11. Insights subtest The insights subtest verifies that the insights-client package is installed and operational. Red Hat Insights lets customers predict and prevent problems before they occur through ongoing, in-depth analysis of their infrastructure. Red Hat recommends customers to use Red Hat Insights in their own environments. Success criteria The insights-client package is installed and operational. Additional resources Red Hat Insights A.3.12. RPM freshness subtest The RPM freshness subtest checks whether all important and critical security updates released against Red Hat packages are installed, and displays a review status for those packages that need updating. Red Hat will review the results of this test if important or critical updates are not installed. Red Hat encourages partners to update their test environments whenever a security update is released. Success criteria All important and critical security updates released for Red Hat packages are installed. Additional resources Red Hat security ratings A.3.13. SELinux enforcing subtest The Security-Enhanced Linux (SELinux) enforcing subtest confirms that SELinux is enabled and running in enforcing mode on the host under test. Success criteria SELinux is configured and running in enforcing mode on the host under test. Additional resources Using SELinux (RHEL 8) Using SELinux (RHEL 9) A.3.14. Software modules subtest The software modules subtest validates modules available on RHEL systems. The RHEL modularity feature is a collection of packages available on the system. Success criteria The subtest fails if non-Red Hat software modules are installed. A.4. 
Fingerprinting test The fingerprinting test captures the digital fingerprint of the product undergoing certification. By using the output of the ps and systemd commands, the test detects services and processes related to the product undergoing certification and any non-Red Hat applications installed on the test system. Then, the test prompts you about the services and processes it has found. Red Hat will use the test results to investigate customer-reported problems and redirect them to the appropriate teams. Success criteria The product undergoing certification is installed and running on the host under test. A.5. Container test The container test verifies that the container undergoing certification can be started and then stopped by using Podman and Systemd. This test is mandatory for containerized products only. The test includes the following subtests: A.5.1. Podman subtest The podman subtest checks whether the container can be started and then stopped by using Podman. The subtest performs the following actions: Displays a list of the containers running on the test system. Prompts you to identify the container undergoing certification. Starts and then stops the container by using the podman command. Success criteria Containers must start and stop successfully by using the podman command. A.5.2. Systemd subtest The systemd subtest checks whether the container can be controlled with Systemd and automatically restarted after a container failure. The subtest performs the following actions: Prompts you to confirm whether a Systemd unit file for the container exists. If the file exists, enter its location. The test will use this file to start and stop the container. If the file does not exist, the test can generate one in /etc/systemd/system . Ensure that the container is running before letting the test create the file. Stops the container if it is running. Checks that the container can be controlled by systemd . Verifies that the container is set to restart on failure. Stops the container using the podman kill command to simulate failure. Verifies that the container automatically restarts. Success criteria Containers must start successfully during all the tests. Additional resources Generating a Systemd unit file using Podman (RHEL 8) Generating a Systemd unit file using Podman (RHEL 9) A.6. Sosreport test The sosreport test ensures that the sosreport tool works as expected on the test environment and captures a basic system report test. The sosreport tool collects configuration and diagnostic information that Red Hat can use to assist customers in troubleshooting issues. Success criteria A basic sosreport can be collected on the host under test. Additional resources What is an sosreport and how to create one in Red Hat Enterprise Linux?
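Many of these subtests can be spot-checked manually before running the certification suite. The commands below are a hedged sketch and are not part of the official test definitions; the container name my-app is a placeholder and the sysfs and proc paths are the standard RHEL locations.
# Supportability: kernel taint value, offline CPUs, SMT state, reported vulnerabilities
cat /proc/sys/kernel/tainted
lscpu | grep -i 'off-line'
cat /sys/devices/system/cpu/smt/active
grep -r . /sys/devices/system/cpu/vulnerabilities/ 2>/dev/null
# Supportability: enabled repositories, SELinux mode, Insights client status
subscription-manager repos --list-enabled
getenforce
insights-client --status
# Container and sosreport tests: control the container with podman, collect a basic report
podman stop my-app && podman start my-app
sosreport --batch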
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_enterprise_linux_software_certification_policy_guide/assembly_appendix_test-environment
Chapter 8. Configuring authentication
Chapter 8. Configuring authentication This chapter covers several authentication topics. These topics include: Enforcing strict password and One Time Password (OTP) policies. Managing different credential types. Logging in with Kerberos. Disabling and enabling built-in credential types. 8.1. Password policies When Red Hat build of Keycloak creates a realm, it does not associate password policies with the realm. You can set a simple password with no restrictions on its length, security, or complexity. Simple passwords are unacceptable in production environments. Red Hat build of Keycloak has a set of password policies available through the Admin Console. Procedure Click Authentication in the menu. Click the Policies tab. Select the policy to add in the Add policy drop-down box. Enter a value that applies to the policy chosen. Click Save . Password policy After saving the policy, Red Hat build of Keycloak enforces the policy for new users. Note The new policy will not be effective for existing users. Therefore, make sure that you set the password policy from the beginning of the realm creation or add "Update password" to existing users or use "Expire password" to make sure that users update their passwords in "N" days, which will actually adjust to new password policies. 8.1.1. Password policy types 8.1.1.1. HashAlgorithm Passwords are not stored in cleartext. Before storage or validation, Red Hat build of Keycloak hashes passwords using standard hashing algorithms. PBKDF2 is the only built-in and default algorithm available. See the Server Developer Guide on how to add your own hashing algorithm. Note If you change the hashing algorithm, password hashes in storage will not change until the user logs in. 8.1.1.2. Hashing iterations Specifies the number of times Red Hat build of Keycloak hashes passwords before storage or verification. The default value is 27,500. Red Hat build of Keycloak hashes passwords to ensure that hostile actors with access to the password database cannot read passwords through reverse engineering. Note A high hashing iteration value can impact performance as it requires higher CPU power. 8.1.1.3. Digits The number of numerical digits required in the password string. 8.1.1.4. Lowercase characters The number of lower case letters required in the password string. 8.1.1.5. Uppercase characters The number of upper case letters required in the password string. 8.1.1.6. Special characters The number of special characters required in the password string. 8.1.1.7. Not username The password cannot be the same as the username. 8.1.1.8. Not email The password cannot be the same as the email address of the user. 8.1.1.9. Regular expression Password must match one or more defined regular expression patterns. 8.1.1.10. Expire password The number of days the password is valid. When the number of days has expired, the user must change their password. 8.1.1.11. Not recently used Password cannot be already used by the user. Red Hat build of Keycloak stores a history of used passwords. The number of old passwords stored is configurable in Red Hat build of Keycloak. 8.1.1.12. Password blacklist Password must not be in a blacklist file. Blacklist files are UTF-8 plain-text files with Unix line endings. Every line represents a blacklisted password. Red Hat build of Keycloak compares passwords in a case-insensitive manner. All passwords in the blacklist must be lowercase. The value of the blacklist file must be the name of the blacklist file, for example, 100k_passwords.txt . 
Blacklist files resolve against ${kc.home.dir}/data/password-blacklists/ by default. Customize this path using: The keycloak.password.blacklists.path system property. The blacklistsPath property of the passwordBlacklist policy SPI configuration. To configure the blacklist folder using the CLI, use --spi-password-policy-password-blacklist-blacklists-path=/path/to/blacklistsFolder . A note about False Positives The current implementation uses a BloomFilter for fast and memory-efficient containment checks, such as whether a given password is contained in a blacklist, with the possibility for false positives. By default, a false positive probability of 0.01% is used. To change the false positive probability by CLI configuration, use --spi-password-policy-password-blacklist-false-positive-probability=0.00001 8.2. One Time Password (OTP) policies Red Hat build of Keycloak has several policies for setting up a FreeOTP or Google Authenticator One-Time Password generator. Procedure Click Authentication in the menu. Click the Policy tab. Click the OTP Policy tab. Otp Policy Red Hat build of Keycloak generates a QR code on the OTP set-up page, based on information configured in the OTP Policy tab. FreeOTP and Google Authenticator scan the QR code when configuring OTP. 8.2.1. Time-based or counter-based one time passwords The algorithms available in Red Hat build of Keycloak for your OTP generators are time-based and counter-based. With Time-Based One Time Passwords (TOTP), the token generator will hash the current time and a shared secret. The server validates the OTP by comparing the hashes within a window of time to the submitted value. TOTPs are valid for a short window of time. With Counter-Based One Time Passwords (HOTP), Red Hat build of Keycloak uses a shared counter rather than the current time. The Red Hat build of Keycloak server increments the counter with each successful OTP login. Valid OTPs change after a successful login. TOTP is more secure than HOTP because the matchable OTP is valid for a short window of time, while the OTP for HOTP is valid for an indeterminate amount of time. HOTP is more user-friendly than TOTP because no time limit exists to enter the OTP. HOTP requires a database update every time the server increments the counter. This update is a performance drain on the authentication server during heavy load. To increase efficiency, TOTP does not remember passwords used, so there is no need to perform database updates. The drawback is that it is possible to re-use TOTPs in the valid time interval. 8.2.2. TOTP configuration options 8.2.2.1. OTP hash algorithm The default algorithm is SHA1. The other, more secure options are SHA256 and SHA512. 8.2.2.2. Number of digits The length of the OTP. Short OTPs are user-friendly, easier to type, and easier to remember. Longer OTPs are more secure than shorter OTPs. 8.2.2.3. Look around window The number of intervals the server attempts to match the hash. This option is present in Red Hat build of Keycloak if the clock of the TOTP generator or authentication server becomes out-of-sync. The default value of 1 is adequate. For example, if the time interval for a token is 30 seconds, the default value of 1 means it will accept valid tokens in the 90-second window (time interval 30 seconds + look ahead 30 seconds + look behind 30 seconds). Every increment of this value increases the valid window by 60 seconds (look ahead 30 seconds + look behind 30 seconds). 8.2.2.4.
OTP token period The time interval in seconds the server matches a hash. Each time the interval passes, the token generator generates a TOTP. 8.2.2.5. Reusable code Determine whether OTP tokens can be reused in the authentication process or user needs to wait for the token. Users cannot reuse those tokens by default, and the administrator needs to explicitly specify that those tokens can be reused. 8.2.3. HOTP configuration options 8.2.3.1. OTP hash algorithm The default algorithm is SHA1. The other, more secure options are SHA256 and SHA512. 8.2.3.2. Number of digits The length of the OTP. Short OTPs are user-friendly, easier to type, and easier to remember. Longer OTPs are more secure than shorter OTPs. 8.2.3.3. Look around window The number of and following intervals the server attempts to match the hash. This option is present in Red Hat build of Keycloak if the clock of the TOTP generator or authentication server become out-of-sync. The default value of 1 is adequate. This option is present in Red Hat build of Keycloak to cover when the user's counter gets ahead of the server. 8.2.3.4. Initial counter The value of the initial counter. 8.3. Authentication flows An authentication flow is a container of authentications, screens, and actions, during log in, registration, and other Red Hat build of Keycloak workflows. 8.3.1. Built-in flows Red Hat build of Keycloak has several built-in flows. You cannot modify these flows, but you can alter the flow's requirements to suit your needs. Procedure Click Authentication in the menu. Click on the Browser item in the list to see the details. Browser flow 8.3.1.1. Auth type The name of the authentication or the action to execute. If an authentication is indented, it is in a sub-flow. It may or may not be executed, depending on the behavior of its parent. Cookie The first time a user logs in successfully, Red Hat build of Keycloak sets a session cookie. If the cookie is already set, this authentication type is successful. Since the cookie provider returned success and each execution at this level of the flow is alternative , Red Hat build of Keycloak does not perform any other execution. This results in a successful login. Kerberos This authenticator is disabled by default and is skipped during the Browser Flow. Identity Provider Redirector This action is configured through the Actions > Config link. It redirects to another IdP for identity brokering . Forms Since this sub-flow is marked as alternative , it will not be executed if the Cookie authentication type passed. This sub-flow contains an additional authentication type that needs to be executed. Red Hat build of Keycloak loads the executions for this sub-flow and processes them. The first execution is the Username Password Form , an authentication type that renders the username and password page. It is marked as required , so the user must enter a valid username and password. The second execution is the Browser - Conditional OTP sub-flow. This sub-flow is conditional and executes depending on the result of the Condition - User Configured execution. If the result is true, Red Hat build of Keycloak loads the executions for this sub-flow and processes them. The execution is the Condition - User Configured authentication. This authentication checks if Red Hat build of Keycloak has configured other executions in the flow for the user. The Browser - Conditional OTP sub-flow executes only when the user has a configured OTP credential. The final execution is the OTP Form . 
Red Hat build of Keycloak marks this execution as required , but it runs only when the user has an OTP credential set up because of the setup in the conditional sub-flow. If not, the user does not see an OTP form. 8.3.1.2. Requirement A set of radio buttons that control whether and how an action executes. 8.3.1.2.1. Required All Required elements in the flow must execute successfully, in sequence. The flow terminates if a required element fails. 8.3.1.2.2. Alternative Only a single element must successfully execute for the flow to evaluate as successful. Because the Required flow elements are sufficient to mark a flow as successful, any Alternative flow element within a flow containing Required flow elements will not execute. 8.3.1.2.3. Disabled The element does not count toward marking a flow as successful. 8.3.1.2.4. Conditional This requirement type is only set on sub-flows. A Conditional sub-flow contains executions. These executions must evaluate to logical statements. If all executions evaluate as true , the Conditional sub-flow acts as Required . If any executions evaluate as false , the Conditional sub-flow acts as Disabled . If you do not set an execution, the Conditional sub-flow acts as Disabled . If a flow contains executions and the flow is not set to Conditional , Red Hat build of Keycloak does not evaluate the executions, and the executions are considered functionally Disabled . 8.3.2. Creating flows Important functionality and security considerations apply when you design a flow. To create a flow, perform the following: Procedure Click Authentication in the menu. Click Create flow . Note You can copy and then modify an existing flow. Click the "Action list" (the three dots at the end of the row), click Duplicate , and enter a name for the new flow. When creating a new flow, you must create a top-level flow first with the following options: Name The name of the flow. Description The description you can set for the flow. Top-Level Flow Type The type of flow. The type client is used only for the authentication of clients (applications). For all other cases, choose basic . Create a top-level flow When Red Hat build of Keycloak has created the flow, Red Hat build of Keycloak displays the Add step and Add sub-flow buttons. An empty new flow Three factors determine the behavior of flows and sub-flows: The structure of the flow and sub-flows. The executions within the flows. The requirements set within the sub-flows and the executions. Executions have a wide variety of actions, from sending a reset email to validating an OTP. Add executions with the Add step button. Adding an authentication execution Two types of executions exist: automatic executions and interactive executions . Automatic executions are similar to the Cookie execution and will automatically perform their action in the flow. Interactive executions halt the flow to get input. Executions that execute successfully set their status to success . For a flow to complete, it needs at least one execution with a status of success . You can add sub-flows to top-level flows with the Add sub-flow button. The Add sub-flow button displays the Create Execution Flow page. This page is similar to the Create Top Level Form page. The difference is that the Flow Type can be basic (default) or form . The form type constructs a sub-flow that generates a form for the user, similar to the built-in Registration flow. A sub-flow's success depends on how its executions evaluate, including its contained sub-flows. 
See the execution requirements section for an in-depth explanation of how sub-flows work. Note After adding an execution, check that the requirement has the correct value. All elements in a flow have a Delete option next to the element. Some executions have a ⚙\ufe0f menu item (the gear icon) to configure the execution. It is also possible to add executions and sub-flows to sub-flows with the Add step and Add sub-flow links. Since the order of execution is important, you can move executions and sub-flows up and down by dragging their names. Warning Make sure to properly test your configuration when you configure the authentication flow to confirm that no security holes exist in your setup. We recommend that you test various corner cases. For example, consider testing the authentication behavior for a user when you remove various credentials from the user's account before authentication. As an example, when 2nd-factor authenticators, such as OTP Form or WebAuthn Authenticator, are configured in the flow as REQUIRED and the user does not have a credential of that particular type, the user will be able to set up the particular credential during the authentication itself. This situation means that the user does not authenticate with this credential, because they set it up during that same authentication. So for browser authentication, make sure to configure your authentication flow with some 1st-factor credentials such as Password or WebAuthn Passwordless Authenticator. 8.3.3. Creating a password-less browser login flow To illustrate the creation of flows, this section describes creating an advanced browser login flow. The purpose of this flow is to allow a user a choice between logging in without a password by using WebAuthn , or with two-factor authentication using a password and OTP. Procedure Click Authentication in the menu. Click the Flows tab. Click Create flow . Enter Browser Password-less as a name. Click Create . Click Add execution . Select Cookie from the list. Click Add . Select Alternative for the Cookie authentication type to set its requirement to alternative. Click Add step . Select Kerberos from the list. Click Add . Click Add step . Select Identity Provider Redirector from the list. Click Add . Select Alternative for the Identity Provider Redirector authentication type to set its requirement to alternative. Click Add sub-flow . Enter Forms as a name. Click Add . Select Alternative for the Forms authentication type to set its requirement to alternative. The common part with the browser flow Click + menu of the Forms execution. Select Add step . Select Username Form from the list. Click Add . At this stage, the form requires a username but no password. We must enable password authentication to avoid security risks. Click + menu of the Forms sub-flow. Click Add sub-flow . Enter Authentication as a name. Click Add . Select Required for the Authentication authentication type to set its requirement to required. Click + menu of the Authentication sub-flow. Click Add step . Select WebAuthn Passwordless Authenticator from the list. Click Add . Select Alternative for the WebAuthn Passwordless Authenticator authentication type to set its requirement to alternative. Click + menu of the Authentication sub-flow. Click Add sub-flow . Enter Password with OTP as a name. Click Add . Select Alternative for the Password with OTP authentication type to set its requirement to alternative. Click + menu of the Password with OTP sub-flow. Click Add step . Select Password Form from the list. Click Add . 
Select Required for the Password Form authentication type to set its requirement to required. Click + menu of the Password with OTP sub-flow. Click Add step . Select OTP Form from the list. Click Add . Click Required for the OTP Form authentication type to set its requirement to required. Finally, change the bindings. Click the Action menu at the top of the screen. Select Bind flow from the menu. Click the Browser Flow drop-down list. Click Save . A password-less browser login After entering the username, the flow works as follows: If users have WebAuthn passwordless credentials recorded, they can use these credentials to log in directly. This is the password-less login. The user can also select Password with OTP because the WebAuthn Passwordless execution and the Password with OTP flow are set to Alternative . If they are set to Required , the user has to enter WebAuthn, password, and OTP. If the user selects the Try another way link with WebAuthn passwordless authentication, the user can choose between Password and Security Key (WebAuthn passwordless). When selecting the password, the user will need to continue and log in with the assigned OTP. If the user has no WebAuthn credentials, the user must enter the password and then the OTP. If the user has no OTP credential, they will be asked to record one. Note Since the WebAuthn Passwordless execution is set to Alternative rather than Required , this flow will never ask the user to register a WebAuthn credential. For a user to have a Webauthn credential, an administrator must add a required action to the user. Do this by: Enabling the Webauthn Register Passwordless required action in the realm (see the WebAuthn documentation). Setting the required action using the Credential Reset part of a user's Credentials management menu. Creating an advanced flow such as this can have side effects. For example, if you enable the ability to reset the password for users, this would be accessible from the password form. In the default Reset Credentials flow, users must enter their username. Since the user has already entered a username earlier in the Browser Password-less flow, this action is unnecessary for Red Hat build of Keycloak and suboptimal for user experience. To correct this problem, you can: Duplicate the Reset Credentials flow. Set its name to Reset Credentials for password-less , for example. Click Delete (trash icon) of the Choose user step. In the Action menu, select Bind flow and select Reset credentials flow from the dropdown and click Save 8.3.4. Creating a browser login flow with step-up mechanism This section describes how to create advanced browser login flow using the step-up mechanism. The purpose of step-up authentication is to allow access to clients or resources based on a specific authentication level of a user. Procedure Click Authentication in the menu. Click the Flows tab. Click Create flow . Enter Browser Incl Step up Mechanism as a name. Click Save . Click Add execution . Select Cookie from the list. Click Add . Select Alternative for the Cookie authentication type to set its requirement to alternative. Click Add sub-flow . Enter Auth Flow as a name. Click Add . Click Alternative for the Auth Flow authentication type to set its requirement to alternative. Now you configure the flow for the first authentication level. Click + menu of the Auth Flow . Click Add sub-flow . Enter 1st Condition Flow as a name. Click Add . Click Conditional for the 1st Condition Flow authentication type to set its requirement to conditional. 
Click + menu of the 1st Condition Flow . Click Add condition . Select Conditional - Level Of Authentication from the list. Click Add . Click Required for the Conditional - Level Of Authentication authentication type to set its requirement to required. Click ⚙\ufe0f (gear icon). Enter Level 1 as an alias. Enter 1 for the Level of Authentication (LoA). Set Max Age to 36000 . This value is in seconds and it is equivalent to 10 hours, which is the default SSO Session Max timeout set in the realm. As a result, when a user authenticates with this level, subsequent SSO logins can re-use this level and the user does not need to authenticate with this level until the end of the user session, which is 10 hours by default. Click Save Configure the condition for the first authentication level Click + menu of the 1st Condition Flow . Click Add step . Select Username Password Form from the list. Click Add . Now you configure the flow for the second authentication level. Click + menu of the Auth Flow . Click Add sub-flow . Enter 2nd Condition Flow as an alias. Click Add . Click Conditional for the 2nd Condition Flow authentication type to set its requirement to conditional. Click + menu of the 2nd Condition Flow . Click Add condition . Select Conditional - Level Of Authentication from the item list. Click Add . Click Required for the Conditional - Level Of Authentication authentication type to set its requirement to required. Click ⚙\ufe0f (gear icon). Enter Level 2 as an alias. Enter 2 for the Level of Authentication (LoA). Set Max Age to 0 . As a result, when a user authenticates, this level is valid just for the current authentication, but not any subsequent SSO authentications. So the user will always need to authenticate again with this level when this level is requested. Click Save Configure the condition for the second authentication level Click + menu of the 2nd Condition Flow . Click Add step . Select OTP Form from the list. Click Add . Click Required for the OTP Form authentication type to set its requirement to required. Finally, change the bindings. Click the Action menu at the top of the screen. Select Bind flow from the list. Select Browser Flow in the dropdown. Click Save . Browser login with step-up mechanism Request a certain authentication level To use the step-up mechanism, you specify a requested level of authentication (LoA) in your authentication request. The claims parameter is used for this purpose: The claims parameter is specified in a JSON representation: The Red Hat build of Keycloak javascript adapter has support for easy construct of this JSON and sending it in the login request. See Javascript adapter documentation for more details. You can also use simpler parameter acr_values instead of claims parameter to request particular levels as non-essential. This is mentioned in the OIDC specification. You can also configure the default level for the particular client, which is used when the parameter acr_values or the parameter claims with the acr claim is not present. For further details, see Client ACR configuration ). Note To request the acr_values as text (such as gold ) instead of a numeric value, you configure the mapping between the ACR and the LoA. It is possible to configure it at the realm level (recommended) or at the client level. For configuration see ACR to LoA Mapping . For more details see the official OIDC specification . 
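To make the request format concrete, the following is a minimal sketch of building such a login request. The realm name, client ID, and redirect URI are placeholder values, and the JSON shape follows the standard OIDC claims-request syntax; adjust the requested level to your configuration (for example, use a mapped text value such as gold if you configured ACR to LoA mapping).

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class StepUpLoginUrlExample {

    public static void main(String[] args) {
        // OIDC claims parameter requesting the acr claim as essential with level "2"
        // (matches the Level 2 condition configured in the flow above).
        String claims = "{\"id_token\":{\"acr\":{\"essential\":true,\"values\":[\"2\"]}}}";

        // Placeholder realm, client, and redirect URI, used only for illustration.
        String authorizationUrl =
                "https://keycloak.example.com/realms/your_realm/protocol/openid-connect/auth"
                + "?client_id=your_client"
                + "&response_type=code"
                + "&scope=openid"
                + "&redirect_uri=" + URLEncoder.encode("https://app.example.com/callback", StandardCharsets.UTF_8)
                + "&claims=" + URLEncoder.encode(claims, StandardCharsets.UTF_8);

        // The simpler, non-essential alternative is to append acr_values instead of claims:
        //   ...&acr_values=2
        System.out.println(authorizationUrl);
    }
}

Redirecting the browser to a URL built this way asks Red Hat build of Keycloak for the given authentication level; whether the user is actually prompted again depends on the flow logic and Max Age settings described next.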
Flow logic The logic for the configured authentication flow is as follows: If a client requests a high authentication level, meaning Level of Authentication 2 (LoA 2), a user has to perform full 2-factor authentication: Username/Password + OTP. However, if a user already has a session in Red Hat build of Keycloak that was established with a username and password (LoA 1), the user is only asked for the second authentication factor (OTP). The option Max Age in the condition determines how long (in seconds) the subsequent authentication level is valid. This setting helps to decide whether the user will be asked to present the authentication factor again during a subsequent authentication. If the particular level X is requested by the claims or acr_values parameter and the user has already authenticated with level X, but that level has expired (for example, Max Age is configured as 300 and the user authenticated 310 seconds ago), then the user is asked to re-authenticate with that level. However, if the level has not yet expired, the user is automatically considered authenticated with that level. Using Max Age with the value 0 means that the particular level is valid just for this single authentication. Hence every re-authentication that requests that level requires the user to authenticate again with that level. This is useful for operations that require higher security in the application (for example, sending a payment) and always require authentication with the specific level. Warning Note that parameters such as claims or acr_values might be changed by the user in the URL when the login request is sent from the client to the Red Hat build of Keycloak via the user's browser. This situation can be mitigated if the client uses PAR (Pushed Authorization Request), a request object, or other mechanisms that prevent the user from rewriting the parameters in the URL. Hence after the authentication, clients are encouraged to check the ID Token to double-check that acr in the token corresponds to the expected level. If no explicit level is requested by parameters, Red Hat build of Keycloak requires authentication with the first LoA condition found in the authentication flow, such as the Username/Password in the preceding example. When a user was already authenticated with that level and that level has expired, the user is not required to re-authenticate, but acr in the token will have the value 0. This result is considered authentication based solely on a long-lived browser cookie, as mentioned in section 2 of the OIDC Core 1.0 specification. Note A conflict situation may arise when an admin specifies several flows, sets different LoA levels for each, and assigns the flows to different clients. However, the rule is always the same: if a user has a certain level, the user needs only that level to connect to a client. It's up to the admin to make sure that the LoA is coherent. Example scenario Max Age is configured as 300 seconds for the level 1 condition. A login request is sent without requesting any acr. Level 1 will be used and the user needs to authenticate with username and password. The token will have acr=1 . Another login request is sent after 100 seconds. The user is automatically authenticated due to the SSO and the token will return acr=1 . Another login request is sent after another 201 seconds (301 seconds since the authentication in point 2). The user is automatically authenticated due to the SSO, but the token will return acr=0 because level 1 is considered expired. 
Another login request is sent, but now it will explicitly request ACR of level 1 in the claims parameter. The user will be asked to re-authenticate with username/password, and then acr=1 will be returned in the token. ACR claim in the token The ACR claim is added to the token by the acr loa level protocol mapper defined in the acr client scope. This client scope is a realm default client scope and hence will be added to all newly created clients in the realm. If you do not want the acr claim inside tokens, or you need some custom logic for adding it, you can remove the client scope from your client. Note When the login request contains the claims parameter requesting acr as an essential claim, Red Hat build of Keycloak will always return one of the specified levels. If it is not able to return one of the specified levels (for example, if the requested level is unknown or higher than the conditions configured in the authentication flow), then Red Hat build of Keycloak will throw an error. 8.3.5. Registration or Reset credentials requested by client Usually, when the user is redirected to Red Hat build of Keycloak from the client application, the browser flow is triggered. This flow may allow the user to register if realm registration is enabled and the user clicks Register on the login screen. Also, if Forget password is enabled for the realm, the user can click Forget password on the login screen, which triggers the Reset credentials flow where users can reset credentials after email address confirmation. Sometimes it can be useful for the client application to directly redirect the user to the Registration screen or to the Reset credentials flow. The resulting action matches the action taken when the user clicks Register or Forget password on the normal login screen. Automatic redirect to the registration or reset-credentials screen can be done as follows: When the client wants the user to be redirected directly to the registration, the OIDC client should replace the very last snippet of the OIDC login URL path ( /auth ) with /registrations . So the full URL might be similar to the following: https://keycloak.example.com/realms/your_realm/protocol/openid-connect/registrations . When the client wants a user to be redirected directly to the Reset credentials flow, the OIDC client should replace the very last snippet of the OIDC login URL path ( /auth ) with /forgot-credentials . Warning The preceding steps are the only supported method for a client to directly request a registration or reset-credentials flow. For security purposes, it is neither supported nor recommended for client applications to bypass OIDC/SAML flows and directly redirect to other Red Hat build of Keycloak endpoints (such as endpoints under /realms/realm_name/login-actions or /realms/realm_name/broker ). 8.4. User session limits Limits on the number of sessions that a user can have can be configured. Sessions can be limited per realm or per client. To add session limits to a flow, perform the following steps. Click Add step for the flow. Select User session count limiter from the item list. Click Add . Click Required for the User Session Count Limiter authentication type to set its requirement to required. Click ⚙\ufe0f (gear icon) for the User Session Count Limiter . Enter an alias for this config. Enter the required maximum number of sessions that a user can have in this realm. For example, if 2 is the value, 2 SSO sessions is the maximum that each user can have in this realm. 
If 0 is the value, this check is disabled. Enter the required maximum number of sessions a user can have for the client. For example, if 2 is the value, then 2 SSO sessions is the maximum in this realm for each client. So when a user is trying to authenticate to client foo , but that user has already authenticated in 2 SSO sessions to client foo , either the authentication is denied or an existing session is terminated, based on the configured behavior. If a value of 0 is used, this check is disabled. If both session limits and client session limits are enabled, it makes sense for the client session limit to always be lower than the realm session limit. The limit per client can never exceed the limit of all SSO sessions of this user. Select the behavior that is required when the user tries to create a session after the limit is reached. Available behaviors are: Deny new session - when a new session is requested and the session limit is reached, no new sessions can be created. Terminate oldest session - when a new session is requested and the session limit has been reached, the oldest session will be removed and the new session created. Optionally, add a custom error message to be displayed when the limit is reached. Note that the user session limits should be added to your bound Browser flow , Direct grant flow , Reset credentials flow, and also to any Post broker login flow . The authenticator should be added at the point when the user is already known during authentication (usually at the end of the authentication flow) and should typically be REQUIRED. Note that it is not possible to have ALTERNATIVE and REQUIRED executions at the same level. For most flows, such as the Direct grant flow , Reset credentials flow, or Post broker login flow , it is recommended to add the authenticator as REQUIRED at the end of the authentication flow. Here is an example for the Reset credentials flow: add the authenticator as a REQUIRED execution at the end of the flow. For the Browser flow, consider not adding the Session Limits authenticator at the top level of the flow. This recommendation is due to the Cookie authenticator, which automatically re-authenticates users based on the SSO cookie. The Cookie authenticator is at the top level, and it is better not to check session limits during SSO re-authentication because a user session already exists. So instead, consider adding a separate ALTERNATIVE subflow, such as the authenticate-user-with-session-limit example below, at the same level as Cookie . Then you can add a REQUIRED subflow, named real-authentication-subflow in the following example, as a nested subflow of authenticate-user-with-session-limit , and add a User Session Limit authenticator at the same level as well. Inside the real-authentication-subflow , you can add real authenticators in a similar fashion to the default browser flow. The following example flow allows users to authenticate with an identity provider or with a password and OTP; a sketch of this structure appears after the notes below. Regarding the Post Broker login flow , you can add the User Session Limits as the only authenticator in the authentication flow as long as you have no other authenticators that you trigger after authentication with your identity provider. However, make sure that this flow is configured as the Post Broker Flow at your identity providers. This requirement exists so that authentication with identity providers also participates in the session limits. Note Currently, the administrator is responsible for maintaining consistency between the different configurations. So make sure that all your flows use the same configuration of User Session Limits . 
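To summarize the Browser flow recommendation above, a plain-text sketch of such a flow might look as follows. The top-level flow name is a placeholder, the subflow names are the illustrative ones used above, and the exact authenticators inside real-authentication-subflow depend on your deployment:

browser-with-session-limits (top-level flow)
    Cookie                                      [ALTERNATIVE]
    authenticate-user-with-session-limit        [ALTERNATIVE]
        real-authentication-subflow             [REQUIRED]
            Identity Provider Redirector        [ALTERNATIVE]
            Password and OTP forms (subflow)    [ALTERNATIVE]
        User Session Count Limiter              [REQUIRED]

With this structure, SSO re-authentication is satisfied by the Cookie execution and skips the limiter, while a fresh authentication passes through real-authentication-subflow and then the User Session Count Limiter once the user is known.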
Note User session limit feature is not available for CIBA. 8.5. Kerberos Red Hat build of Keycloak supports login with a Kerberos ticket through the Simple and Protected GSSAPI Negotiation Mechanism (SPNEGO) protocol. SPNEGO authenticates transparently through the web browser after the user authenticates the session. For non-web cases, or when a ticket is not available during login, Red Hat build of Keycloak supports login with Kerberos username and password. A typical use case for web authentication is the following: The user logs into the desktop. The user accesses a web application secured by Red Hat build of Keycloak using a browser. The application redirects to Red Hat build of Keycloak login. Red Hat build of Keycloak renders the HTML login screen with status 401 and HTTP header WWW-Authenticate: Negotiate If the browser has a Kerberos ticket from desktop login, the browser transfers the desktop sign-on information to Red Hat build of Keycloak in header Authorization: Negotiate 'spnego-token' . Otherwise, it displays the standard login screen, and the user enters the login credentials. Red Hat build of Keycloak validates the token from the browser and authenticates the user. If using LDAPFederationProvider with Kerberos authentication support, Red Hat build of Keycloak provisions user data from LDAP. If using KerberosFederationProvider, Red Hat build of Keycloak lets the user update the profile and pre-fill login data. Red Hat build of Keycloak returns to the application. Red Hat build of Keycloak and the application communicate through OpenID Connect or SAML messages. Red Hat build of Keycloak acts as a broker to Kerberos/SPNEGO login. Therefore Red Hat build of Keycloak authenticating through Kerberos is hidden from the application. Warning The Negotiate www-authenticate scheme allows NTLM as a fallback to Kerberos and on some web browsers in Windows NTLM is supported by default. If a www-authenticate challenge comes from a server outside a browsers permitted list, users may encounter an NTLM dialog prompt. A user would need to click the cancel button on the dialog to continue as Keycloak does not support this mechanism. This situation can happen if Intranet web browsers are not strictly configured or if Keycloak serves users in both the Intranet and Internet. A custom authenticator can be used to restrict Negotiate challenges to a whitelist of hosts. Perform the following steps to set up Kerberos authentication: The setup and configuration of the Kerberos server (KDC). The setup and configuration of the Red Hat build of Keycloak server. The setup and configuration of the client machines. 8.5.1. Setup of Kerberos server The steps to set up a Kerberos server depends on the operating system (OS) and the Kerberos vendor. Consult Windows Active Directory, MIT Kerberos, and your OS documentation for instructions on setting up and configuring a Kerberos server. During setup, perform these steps: Add some user principals to your Kerberos database. You can also integrate your Kerberos with LDAP, so user accounts provision from the LDAP server. Add service principal for "HTTP" service. For example, if the Red Hat build of Keycloak server runs on www.mydomain.org , add the service principal HTTP/www.mydomain.org@<kerberos realm> . On MIT Kerberos, you run a "kadmin" session. 
On a machine with MIT Kerberos, you can use the command: Then, add HTTP principal and export its key to a keytab file with commands such as: Ensure the keytab file /tmp/http.keytab is accessible on the host where Red Hat build of Keycloak is running. 8.5.2. Setup and configuration of Red Hat build of Keycloak server Install a Kerberos client on your machine. Procedure Install a Kerberos client. If your machine runs Fedora, Ubuntu, or RHEL, install the freeipa-client package, containing a Kerberos client and other utilities. Configure the Kerberos client (on Linux, the configuration settings are in the /etc/krb5.conf file ). Add your Kerberos realm to the configuration and configure the HTTP domains your server runs on. For example, for the MYDOMAIN.ORG realm, you can configure the domain_realm section like this: Export the keytab file with the HTTP principal and ensure the file is accessible to the process running the Red Hat build of Keycloak server. For production, ensure that the file is readable by this process only. For the MIT Kerberos example above, we exported keytab to the /tmp/http.keytab file. If your Key Distribution Centre (KDC) and Red Hat build of Keycloak run on the same host, the file is already available. 8.5.2.1. Enabling SPNEGO processing By default, Red Hat build of Keycloak disables SPNEGO protocol support. To enable it, go to the browser flow and enable Kerberos . Browser flow Set the Kerberos requirement from disabled to alternative (Kerberos is optional) or required (browser must have Kerberos enabled). If you have not configured the browser to work with SPNEGO or Kerberos, Red Hat build of Keycloak falls back to the regular login screen. 8.5.2.2. Configure Kerberos user storage federation providers You must now use User Storage Federation to configure how Red Hat build of Keycloak interprets Kerberos tickets. Two different federation providers exist with Kerberos authentication support. To authenticate with Kerberos backed by an LDAP server, configure the LDAP Federation Provider . Procedure Go to the configuration page for your LDAP provider. Ldap kerberos integration Toggle Allow Kerberos authentication to ON Allow Kerberos authentication makes Red Hat build of Keycloak use the Kerberos principal access user information so information can import into the Red Hat build of Keycloak environment. If an LDAP server is not backing up your Kerberos solution, use the Kerberos User Storage Federation Provider. Procedure Click User Federation in the menu. Select Kerberos from the Add provider select box. Kerberos user storage provider The Kerberos provider parses the Kerberos ticket for simple principal information and imports the information into the local Red Hat build of Keycloak database. User profile information, such as first name, last name, and email, are not provisioned. 8.5.3. Setup and configuration of client machines Client machines must have a Kerberos client and set up the krb5.conf as described above . The client machines must also enable SPNEGO login support in their browser. See configuring Firefox for Kerberos if you are using the Firefox browser. The .mydomain.org URI must be in the network.negotiate-auth.trusted-uris configuration option. In Windows domains, clients do not need to adjust their configuration. Internet Explorer and Edge can already participate in SPNEGO authentication. 8.5.4. Credential delegation Kerberos supports the credential delegation. 
Applications may need access to the Kerberos ticket so they can re-use it to interact with other services secured by Kerberos. Because the Red Hat build of Keycloak server processed the SPNEGO protocol, you must propagate the GSS credential to your application within the OpenID Connect token claim or a SAML assertion attribute. Red Hat build of Keycloak transmits this to your application from the Red Hat build of Keycloak server. To insert this claim into the token or assertion, each application must enable the built-in protocol mapper gss delegation credential . This mapper is available in the Mappers tab of the application's client page. See Protocol Mappers chapter for more details. Applications must deserialize the claim it receives from Red Hat build of Keycloak before using it to make GSS calls against other services. When you deserialize the credential from the access token to the GSSCredential object, create the GSSContext with this credential passed to the GSSManager.createContext method. For example: // Obtain accessToken in your application. KeycloakPrincipal keycloakPrincipal = (KeycloakPrincipal) servletReq.getUserPrincipal(); AccessToken accessToken = keycloakPrincipal.getKeycloakSecurityContext().getToken(); // Retrieve Kerberos credential from accessToken and deserialize it String serializedGssCredential = (String) accessToken.getOtherClaims(). get(org.keycloak.common.constants.KerberosConstants.GSS_DELEGATION_CREDENTIAL); GSSCredential deserializedGssCredential = org.keycloak.common.util.KerberosSerializationUtils. deserializeCredential(serializedGssCredential); // Create GSSContext to call other Kerberos-secured services GSSContext context = gssManager.createContext(serviceName, krb5Oid, deserializedGssCredential, GSSContext.DEFAULT_LIFETIME); Note Configure forwardable Kerberos tickets in krb5.conf file and add support for delegated credentials to your browser. Warning Credential delegation has security implications, so use it only if necessary and only with HTTPS. See this article for more details and an example. 8.5.5. Cross-realm trust In the Kerberos protocol, the realm is a set of Kerberos principals. The definition of these principals exists in the Kerberos database, which is typically an LDAP server. The Kerberos protocol allows cross-realm trust. For example, if 2 Kerberos realms, A and B, exist, then cross-realm trust will allow the users from realm A to access realm B's resources. Realm B trusts realm A. Kerberos cross-realm trust The Red Hat build of Keycloak server supports cross-realm trust. To implement this, perform the following: Configure the Kerberos servers for the cross-realm trust. Implementing this step depends on the Kerberos server implementations. This step is necessary to add the Kerberos principal krbtgt/B@A to the Kerberos databases of realm A and B. This principal must have the same keys on both Kerberos realms. The principals must have the same password, key version numbers, and ciphers in both realms. Consult the Kerberos server documentation for more details. Note The cross-realm trust is unidirectional by default. You must add the principal krbtgt/A@B to both Kerberos databases for bidirectional trust between realm A and realm B. However, trust is transitive by default. If realm B trusts realm A and realm C trusts realm B, then realm C trusts realm A without the principal, krbtgt/C@A , available. Additional configuration (for example, capaths ) may be necessary on the Kerberos client-side so clients can find the trust path. 
Consult the Kerberos documentation for more details. Configure Red Hat build of Keycloak server When using an LDAP storage provider with Kerberos support, configure the server principal for realm B, as in this example: HTTP/mydomain.com@B . The LDAP server must find the users from realm A if users from realm A are to successfully authenticate to Red Hat build of Keycloak, because Red Hat build of Keycloak must perform the SPNEGO flow and then find the users. Finding users is based on the LDAP storage provider option Kerberos principal attribute . When this is configured, for instance, with a value such as userPrincipalName , then after SPNEGO authentication of the user john@A , Red Hat build of Keycloak tries to look up an LDAP user whose userPrincipalName attribute is equivalent to john@A . If Kerberos principal attribute is left empty, Red Hat build of Keycloak looks up the LDAP user based on the prefix of the Kerberos principal, with the realm omitted. For example, the Kerberos principal user john@A must be available in the LDAP under the username john , so typically under an LDAP DN such as uid=john,ou=People,dc=example,dc=com . If you want users from realms A and B to authenticate, ensure that LDAP can find users from both realms A and B. When using a Kerberos user storage provider (typically, Kerberos without LDAP integration), configure the server principal as HTTP/mydomain.com@B , and users from Kerberos realms A and B must be able to authenticate. Users from multiple Kerberos realms are allowed to authenticate because every user has the attribute KERBEROS_PRINCIPAL referring to the Kerberos principal used for authentication, and this attribute is used for further lookups of this user. To avoid conflicts when there is a user john in both Kerberos realms A and B , the username of the Red Hat build of Keycloak user might contain the Kerberos realm in lowercase. For instance, the username would be john@a . Only when the realm matches the Kerberos realm configured on the provider might the realm suffix be omitted from the generated username. For instance, the username would be john for the Kerberos principal john@A as long as the Kerberos realm configured on the Kerberos provider is A . 8.5.6. Troubleshooting If you have issues, enable additional logging to debug the problem: Enable the Debug flag in the Admin Console for Kerberos or LDAP federation providers Enable TRACE logging for the category org.keycloak to receive more information in server logs Add the system properties -Dsun.security.krb5.debug=true and -Dsun.security.spnego.debug=true 8.6. X.509 client certificate user authentication Red Hat build of Keycloak supports logging in with an X.509 client certificate if you have configured the server to use mutual SSL authentication. A typical workflow: A client sends an authentication request over an SSL/TLS channel. During the SSL/TLS handshake, the server and the client exchange their x.509/v3 certificates. The container (JBoss EAP) validates the certificate PKIX path and the certificate expiration date. The x.509 client certificate authenticator validates the client certificate by using the following methods: Checks the certificate revocation status by using CRL or CRL Distribution Points. Checks the certificate revocation status by using OCSP (Online Certificate Status Protocol). Validates whether the key in the certificate matches the expected key. Validates whether the extended key in the certificate matches the expected extended key. If any of these checks fail, the x.509 authentication fails. 
Otherwise, the authenticator extracts the certificate identity and maps it to an existing user. When the certificate maps to an existing user, the behavior diverges depending on the authentication flow: In the Browser Flow, the server prompts users to confirm their identity or sign in with a username and password. In the Direct Grant Flow, the server signs in the user. Important Note that it is the responsibility of the web container to validate certificate PKIX path. X.509 authenticator on the Red Hat build of Keycloak side provides just the additional support for check the certificate expiration, certificate revocation status and key usage. If you are using Red Hat build of Keycloak deployed behind reverse proxy, make sure that your reverse proxy is configured to validate PKIX path. If you do not use reverse proxy and users directly access the JBoss EAP, you should be fine as JBoss EAP makes sure that PKIX path is validated as long as it is configured as described below. 8.6.1. Features Supported Certificate Identity Sources: Match SubjectDN by using regular expressions X500 Subject's email attribute X500 Subject's email from Subject Alternative Name Extension (RFC822Name General Name) X500 Subject's other name from Subject Alternative Name Extension. This other name is the User Principal Name (UPN), typically. X500 Subject's Common Name attribute Match IssuerDN by using regular expressions Certificate Serial Number Certificate Serial Number and IssuerDN SHA-256 Certificate thumbprint Full certificate in PEM format 8.6.1.1. Regular expressions Red Hat build of Keycloak extracts the certificate identity from Subject DN or Issuer DN by using a regular expression as a filter. For example, this regular expression matches the email attribute: The regular expression filtering applies if the Identity Source is set to either Match SubjectDN using regular expression or Match IssuerDN using regular expression . 8.6.1.1.1. Mapping certificate identity to an existing user The certificate identity mapping can map the extracted user identity to an existing user's username, email, or a custom attribute whose value matches the certificate identity. For example, setting Identity source to Subject's email or User mapping method to Username or email makes the X.509 client certificate authenticator use the email attribute in the certificate's Subject DN as the search criteria when searching for an existing user by username or by email. Important If you disable Login with email at realm settings, the same rules apply to certificate authentication. Users are unable to log in by using the email attribute. Using Certificate Serial Number and IssuerDN as an identity source requires two custom attributes for the serial number and the IssuerDN. SHA-256 Certificate thumbprint is the lowercase hexadecimal representation of SHA-256 certificate thumbprint. Using Full certificate in PEM format as an identity source is limited to the custom attributes mapped to external federation sources, such as LDAP. Red Hat build of Keycloak cannot store certificates in its database due to length limitations, so in the case of LDAP, you must enable Always Read Value From LDAP . 8.6.1.1.2. Extended certificate validation Revocation status checking using CRL. Revocation status checking using CRL/Distribution Point. Revocation status checking using OCSP/Responder URI. Certificate KeyUsage validation. Certificate ExtendedKeyUsage validation. 8.6.2. 
Adding X.509 client certificate authentication to browser flows Click Authentication in the menu. Click the Browser flow. From the Action list, select Duplicate . Enter a name for the copy. Click Duplicate . Click Add step . Click "X509/Validate Username Form". Click Add . X509 execution Click and drag the "X509/Validate Username Form" over the "Browser Forms" execution. Set the requirement to "ALTERNATIVE". X509 browser flow Click the Action menu. Click the Bind flow . Click the Browser flow from the drop-down list. Click Save . X509 browser flow bindings 8.6.3. Configuring X.509 client certificate authentication X509 configuration User Identity Source Defines the method for extracting the user identity from a client certificate. Canonical DN representation enabled Defines whether to use canonical format to determine a distinguished name. The official Java API documentation describes the format. This option affects the two User Identity Sources Match SubjectDN using regular expression and Match IssuerDN using regular expression only. Enable this option when you set up a new Red Hat build of Keycloak instance. Disable this option to retain backward compatibility with existing Red Hat build of Keycloak instances. Enable Serial Number hexadecimal representation Represent the serial number as hexadecimal. The serial number with the sign bit set to 1 must be left padded with 00 octet. For example, a serial number with decimal value 161 , or a1 in hexadecimal representation is encoded as 00a1 , according to RFC5280. See RFC5280, appendix-B for more details. A regular expression A regular expression to use as a filter for extracting the certificate identity. The expression must contain a single group. User Mapping Method Defines the method to match the certificate identity with an existing user. Username or email searches for existing users by username or email. Custom Attribute Mapper searches for existing users with a custom attribute that matches the certificate identity. The name of the custom attribute is configurable. A name of user attribute A custom attribute whose value matches against the certificate identity. Use multiple custom attributes when attribute mapping is related to multiple values, For example, 'Certificate Serial Number and IssuerDN'. CRL Checking Enabled Check the revocation status of the certificate by using the Certificate Revocation List. The location of the list is defined in the CRL file path attribute. Enable CRL Distribution Point to check certificate revocation status Use CDP to check the certificate revocation status. Most PKI authorities include CDP in their certificates. CRL file path The path to a file containing a CRL list. The value must be a path to a valid file if the CRL Checking Enabled option is enabled. OCSP Checking Enabled Checks the certificate revocation status by using Online Certificate Status Protocol. OCSP Fail-Open Behavior By default the OCSP check must return a positive response in order to continue with a successful authentication. Sometimes however this check can be inconclusive: for example, the OCSP server could be unreachable, overloaded, or the client certificate may not contain an OCSP responder URI. When this setting is turned ON, authentication will be denied only if an explicit negative response is received by the OCSP responder and the certificate is definitely revoked. If a valid OCSP response is not available the authentication attempt will be accepted. 
OCSP Responder URI Override the value of the OCSP responder URI in the certificate. Validate Key Usage Verifies the certificate's KeyUsage extension bits are set. For example, "digitalSignature,KeyEncipherment" verifies if bits 0 and 2 in the KeyUsage extension are set. Leave this parameter empty to disable the Key Usage validation. See RFC5280, Section-4.2.1.3 for more information. Red Hat build of Keycloak raises an error when a key usage mismatch occurs. Validate Extended Key Usage Verifies one or more purposes defined in the Extended Key Usage extension. See RFC5280, Section-4.2.1.12 for more information. Leave this parameter empty to disable the Extended Key Usage validation. Red Hat build of Keycloak raises an error when flagged as critical by the issuing CA and a key usage extension mismatch occurs. Validate Certificate Policy Verifies one or more policy OIDs as defined in the Certificate Policy extension. See RFC5280, Section-4.2.1.4 . Leave the parameter empty to disable the Certificate Policy validation. Multiple policies should be separated using a comma. Certificate Policy Validation Mode When more than one policy is specified in the Validate Certificate Policy setting, it decides whether the matching should check for all requested policies to be present, or one match is enough for a successful authentication. Default value is All , meaning that all requested policies should be present in the client certificate. Bypass identity confirmation If enabled, X.509 client certificate authentication does not prompt the user to confirm the certificate identity. Red Hat build of Keycloak signs in the user upon successful authentication. Revalidate client certificate If set, the client certificate trust chain will be always verified at the application level using the certificates present in the configured trust store. This can be useful if the underlying web server does not enforce client certificate chain validation, for example because it is behind a non-validating load balancer or reverse proxy, or when the number of allowed CAs is too large for the mutual SSL negotiation (most browsers cap the maximum SSL negotiation packet size at 32767 bytes, which corresponds to about 200 advertised CAs). By default this option is off. 8.6.4. Adding X.509 Client Certificate Authentication to a Direct Grant Flow Click Authentication in the menu. Select Duplicate from the "Action list" to make a copy of the built-in "Direct grant" flow. Enter a name for the copy. Click Duplicate . Click the created flow. Click the trash can icon 🗑\ufe0f of the "Username Validation" and click Delete . Click the trash can icon 🗑\ufe0f of the "Password" and click Delete . Click Add step . Click "X509/Validate Username". Click Add . X509 direct grant execution Set up the x509 authentication configuration by following the steps described in the x509 Browser Flow section. Click the Bindings tab. Click the Direct Grant Flow drop-down list. Click the newly created "x509 Direct Grant" flow. Click Save . X509 direct grant flow bindings 8.7. W3C Web Authentication (WebAuthn) Red Hat build of Keycloak provides support for W3C Web Authentication (WebAuthn) . Red Hat build of Keycloak works as a WebAuthn's Relying Party (RP) . Note WebAuthn's operations success depends on the user's WebAuthn supporting authenticator, browser, and platform. Make sure your authenticator, browser, and platform support the WebAuthn specification. 8.7.1. Setup The setup procedure of WebAuthn support for 2FA is the following: 8.7.1.1. 
Enable WebAuthn authenticator registration Click Authentication in the menu. Click the Required Actions tab. Toggle the Webauthn Register switch to ON . Toggle the Default Action switch to ON if you want all new users to be required to register their WebAuthn credentials. 8.7.2. Adding WebAuthn authentication to a browser flow Click Authentication in the menu. Click the Browser flow. Select Duplicate from the "Action list" to make a copy of the built-in Browser flow. Enter "WebAuthn Browser" as the name of the copy. Click Duplicate . Click the name to go to the details Click the trash can icon 🗑\ufe0f of the "WebAuthn Browser Browser - Conditional OTP" and click Delete . If you require WebAuthn for all users: Click + menu of the WebAuthn Browser Forms . Click Add step . Click WebAuthn Authenticator . Click Add . Select Required for the WebAuthn Authenticator authentication type to set its requirement to required. Click the Action menu at the top of the screen. Select Bind flow from the drop-down list. Select Browser from the drop-down list. Click Save . Note If a user does not have WebAuthn credentials, the user must register WebAuthn credentials. Users can log in with WebAuthn if they have a WebAuthn credential registered only. So instead of adding the WebAuthn Authenticator execution, you can: Procedure Click + menu of the WebAuthn Browser Forms row. Click Add sub-flow . Enter "Conditional 2FA" for the name field. Select Conditional for the Conditional 2FA to set its requirement to conditional. On the Conditional 2FA row, click the plus sign + and select Add condition . Click Add condition . Select Condition - User Configured . Click Add . Select Required for the Condition - User Configured to set its requirement to required. Drag and drop WebAuthn Authenticator into the Conditional 2FA flow Select Alternative for the WebAuthn Authenticator to set its requirement to alternative. The user can choose between using WebAuthn and OTP for the second factor: Procedure On the Conditional 2FA row, click the plus sign + and select Add step . Select OTP Form from the list. Click Add . Select Alternative for the OTP Form to set its requirement to alternative. 8.7.3. Authenticate with WebAuthn authenticator After registering a WebAuthn authenticator, the user carries out the following operations: Open the login form. The user must authenticate with a username and password. The user's browser asks the user to authenticate by using their WebAuthn authenticator. 8.7.4. Managing WebAuthn as an administrator 8.7.4.1. Managing credentials Red Hat build of Keycloak manages WebAuthn credentials similarly to other credentials from User credential management : Red Hat build of Keycloak assigns users a required action to create a WebAuthn credential from the Reset Actions list and select Webauthn Register . Administrators can delete a WebAuthn credential by clicking Delete . Administrators can view the credential's data, such as the AAGUID, by selecting Show data... . Administrators can set a label for the credential by setting a value in the User Label field and saving the data. 8.7.4.2. Managing policy Administrators can configure WebAuthn related operations as WebAuthn Policy per realm. Procedure Click Authentication in the menu. Click the Policy tab. Click the WebAuthn Policy tab. Configure the items within the policy (see description below). Click Save . 
The configurable items and their description are as follows: Configuration Description Relying Party Entity Name The readable server name as a WebAuthn Relying Party. This item is mandatory and applies to the registration of the WebAuthn authenticator. The default setting is "keycloak". For more details, see WebAuthn Specification . Signature Algorithms The algorithms telling the WebAuthn authenticator which signature algorithms to use for the Public Key Credential . Red Hat build of Keycloak uses the Public Key Credential to sign and verify Authentication Assertions . If no algorithms exist, the default ES256 is adapted. ES256 is an optional configuration item applying to the registration of WebAuthn authenticators. For more details, see WebAuthn Specification . Relying Party ID The ID of a WebAuthn Relying Party that determines the scope of Public Key Credentials . The ID must be the origin's effective domain. This ID is an optional configuration item applied to the registration of WebAuthn authenticators. If this entry is blank, Red Hat build of Keycloak adapts the host part of Red Hat build of Keycloak's base URL. For more details, see WebAuthn Specification . Attestation Conveyance Preference The WebAuthn API implementation on the browser ( WebAuthn Client ) is the preferential method to generate Attestation statements. This preference is an optional configuration item applying to the registration of the WebAuthn authenticator. If no option exists, its behavior is the same as selecting "none". For more details, see WebAuthn Specification . Authenticator Attachment The acceptable attachment pattern of a WebAuthn authenticator for the WebAuthn Client. This pattern is an optional configuration item applying to the registration of the WebAuthn authenticator. For more details, see WebAuthn Specification . Require Resident Key The option requiring that the WebAuthn authenticator generates the Public Key Credential as Client-side-resident Public Key Credential Source . This option applies to the registration of the WebAuthn authenticator. If left blank, its behavior is the same as selecting "No". For more details, see WebAuthn Specification . User Verification Requirement The option requiring that the WebAuthn authenticator confirms the verification of a user. This is an optional configuration item applying to the registration of a WebAuthn authenticator and the authentication of a user by a WebAuthn authenticator. If no option exists, its behavior is the same as selecting "preferred". For more details, see WebAuthn Specification for registering a WebAuthn authenticator and WebAuthn Specification for authenticating the user by a WebAuthn authenticator . Timeout The timeout value, in seconds, for registering a WebAuthn authenticator and authenticating the user by using a WebAuthn authenticator. If set to zero, its behavior depends on the WebAuthn authenticator's implementation. The default value is 0. For more details, see WebAuthn Specification for registering a WebAuthn authenticator and WebAuthn Specification for authenticating the user by a WebAuthn authenticator . Avoid Same Authenticator Registration If enabled, Red Hat build of Keycloak cannot re-register an already registered WebAuthn authenticator. Acceptable AAGUIDs The white list of AAGUIDs which a WebAuthn authenticator must register against. 8.7.5. 
Attestation statement verification When registering a WebAuthn authenticator, Red Hat build of Keycloak verifies the trustworthiness of the attestation statement generated by the WebAuthn authenticator. Red Hat build of Keycloak requires the trust anchor's certificates imported into the truststore . To omit this validation, disable this truststore or set the WebAuthn policy's configuration item "Attestation Conveyance Preference" to "none". 8.7.6. Managing WebAuthn credentials as a user 8.7.6.1. Register WebAuthn authenticator The appropriate method to register a WebAuthn authenticator depends on whether the user has already registered an account on Red Hat build of Keycloak. 8.7.6.2. New user If the WebAuthn Register required action is Default Action in a realm, new users must set up the WebAuthn security key after their first login. Procedure Open the login form. Click Register . Fill in the items on the form. Click Register . After successfully registering, the browser asks the user to enter the text of their WebAuthn authenticator's label. 8.7.6.3. Existing user If WebAuthn Authenticator is set up as required as shown in the first example, then when existing users try to log in, they are required to register their WebAuthn authenticator automatically: Procedure Open the login form. Enter the items on the form. Click Save . Click Login . After successful registration, the user's browser asks the user to enter the text of their WebAuthn authenticator's label. 8.7.7. Passwordless WebAuthn together with Two-Factor Red Hat build of Keycloak uses WebAuthn for two-factor authentication, but you can use WebAuthn as the first-factor authentication. In this case, users with passwordless WebAuthn credentials can authenticate to Red Hat build of Keycloak without a password. Red Hat build of Keycloak can use WebAuthn as both the passwordless and two-factor authentication mechanism in the context of a realm and a single authentication flow. An administrator typically requires that Security Keys registered by users for the WebAuthn passwordless authentication meet different requirements. For example, the security keys may require users to authenticate to the security key using a PIN, or the security key attests with a stronger certificate authority. Because of this, Red Hat build of Keycloak permits administrators to configure a separate WebAuthn Passwordless Policy . There is a required Webauthn Register Passwordless action of type and separate authenticator of type WebAuthn Passwordless Authenticator . 8.7.7.1. Setup Set up WebAuthn passwordless support as follows: (if not already present) Register a new required action for WebAuthn passwordless support. Use the steps described in Enable WebAuthn Authenticator Registration . Register the Webauthn Register Passwordless action. Configure the policy. You can use the steps and configuration options described in Managing Policy . Perform the configuration in the Admin Console in the tab WebAuthn Passwordless Policy . Typically the requirements for the security key will be stronger than for the two-factor policy. For example, you can set the User Verification Requirement to Required when you configure the passwordless policy. Configure the authentication flow. Use the WebAuthn Browser flow described in Adding WebAuthn Authentication to a Browser Flow . Configure the flow as follows: The WebAuthn Browser Forms subflow contains Username Form as the first authenticator. 
Delete the default Username Password Form authenticator and add the Username Form authenticator. This action requires the user to provide a username as the first step. There will be a required subflow, which can be named Passwordless Or Two-factor , for example. This subflow indicates that the user can authenticate with a Passwordless WebAuthn credential or with two-factor authentication. The flow contains WebAuthn Passwordless Authenticator as the first alternative. The second alternative will be a subflow named Password And Two-factor Webauthn , for example. This subflow contains a Password Form and a WebAuthn Authenticator . The final configuration of the flow looks similar to this: PasswordLess flow You can now add WebAuthn Register Passwordless as the required action to a user, already known to Red Hat build of Keycloak, to test this. During the first authentication, the user must use the password and second-factor WebAuthn credential. The user does not need to provide the password and second-factor WebAuthn credential if they use the WebAuthn Passwordless credential. 8.7.8. LoginLess WebAuthn Red Hat build of Keycloak uses WebAuthn for two-factor authentication, but you can use WebAuthn as the first-factor authentication. In this case, users with passwordless WebAuthn credentials can authenticate to Red Hat build of Keycloak without submitting a login or a password. Red Hat build of Keycloak can use WebAuthn as both the loginless/passwordless and two-factor authentication mechanism in the context of a realm. An administrator typically requires that Security Keys registered by users for the WebAuthn loginless authentication meet different requirements. Loginless authentication requires users to authenticate to the security key (for example, by using a PIN code or a fingerprint) and that the cryptographic keys associated with the loginless credential are stored physically on the security key. Not all security keys meet those requirements. Check with your security key vendor whether your device supports 'user verification' and 'resident key'. See Supported Security Keys . Red Hat build of Keycloak permits administrators to configure the WebAuthn Passwordless Policy in a way that allows loginless authentication. Note that loginless authentication can only be configured with the WebAuthn Passwordless Policy and with WebAuthn Passwordless credentials. WebAuthn loginless authentication and WebAuthn passwordless authentication can be configured on the same realm, but they will share the same WebAuthn Passwordless Policy . 8.7.8.1. Setup Procedure Set up WebAuthn Loginless support as follows: (if not already present) Register a new required action for WebAuthn passwordless support. Use the steps described in Enable WebAuthn Authenticator Registration . Register the Webauthn Register Passwordless action. Configure the WebAuthn Passwordless Policy . Perform the configuration in the Admin Console, Authentication section, in the tab Policies WebAuthn Passwordless Policy . You have to set User Verification Requirement to required and Require Resident Key to Yes when you configure the policy for the loginless scenario. Note that because there is no dedicated loginless policy, it is not possible to mix authentication scenarios with user verification=no/resident key=no and loginless scenarios (user verification=yes/resident key=yes). Storage capacity is usually very limited on security keys, meaning that you won't be able to store many resident keys on your security key. Configure the authentication flow.
Create a new authentication flow, add the "WebAuthn Passwordless" execution, and set the Requirement setting of the execution to Required . The final configuration of the flow looks similar to this: LoginLess flow You can now add the required action WebAuthn Register Passwordless to a user, already known to Red Hat build of Keycloak, to test this. The user with the required action configured will have to authenticate (with a username/password, for example) and will then be prompted to register a security key to be used for loginless authentication. 8.7.8.2. Vendor specific remarks 8.7.8.2.1. Compatibility check list Loginless authentication with Red Hat build of Keycloak requires the security key to provide the following features: FIDO2 compliance: not to be confused with FIDO/U2F User verification: the ability for the security key to authenticate the user (prevents someone who finds your security key from authenticating loginless and passwordless) Resident key: the ability for the security key to store the login and the cryptographic keys associated with the client application 8.7.8.2.2. Windows Hello To use Windows Hello based credentials to authenticate against Red Hat build of Keycloak, configure the Signature Algorithms setting of the WebAuthn Passwordless Policy to include the RS256 value. Note that some browsers do not allow access to the platform security key (like Windows Hello) inside private windows. 8.7.8.2.3. Supported security keys The following security keys have been successfully tested for loginless authentication with Red Hat build of Keycloak: Windows Hello (Windows 10 21H1/21H2) Yubico Yubikey 5 NFC Feitian ePass FIDO-NFC 8.8. Recovery Codes (RecoveryCodes) You can configure Recovery codes for two-factor authentication by adding 'Recovery Authentication Code Form' as a two-factor authenticator to your authentication flow. For an example of configuring this authenticator, see WebAuthn . Note RecoveryCodes is Technology Preview and is not fully supported. This feature is disabled by default. To enable it, start the server with --features=preview or --features=recovery-codes . 8.9. Conditions in conditional flows As was mentioned in Execution requirements , Condition executions can only be contained in a Conditional sub-flow. If all Condition executions evaluate as true, then the Conditional sub-flow acts as Required . You can process the execution in the Conditional sub-flow. If some of the Condition executions evaluate as false, then the whole sub-flow is considered as Disabled . 8.9.1. Available conditions Condition - User Role This execution has the ability to determine if the user has a role defined by the User role field. If the user has the required role, the execution is considered as true and other executions are evaluated. The administrator has to define the following fields: Alias Describes a name of the execution, which will be shown in the authentication flow. User role Role the user should have to execute this flow. To specify an application role, the syntax is appname.approle (for example myapp.myrole ). Condition - User Configured This checks if the other executions in the flow are configured for the user. The Execution requirements section includes an example of the OTP form. Condition - User Attribute This checks if the user has set up the required attribute: optionally, the check can also evaluate the group attributes. There is a possibility to negate the output, which means the user should not have the attribute.
The User Attributes section shows how to add a custom attribute. You can provide these fields: Alias Describes a name of the execution, which will be shown in the authentication flow. Attribute name Name of the attribute to check. Expected attribute value Expected value in the attribute. Include group attributes If On, the condition checks if any of the joined groups has an attribute matching the configured name and value; note that this option can affect performance. Negate output You can negate the output. In other words, the attribute should not be present. 8.9.2. Explicitly deny/allow access in conditional flows You can allow or deny access to resources in a conditional flow. The two authenticators Deny Access and Allow Access control access to the resources by conditions. Allow Access The authenticator will always successfully authenticate. This authenticator is not configurable. Deny Access Access will always be denied. You can define an error message, which will be shown to the user. You can provide these fields: Alias Describes a name of the execution, which will be shown in the authentication flow. Error message Error message which will be shown to the user. The error message can be provided as a particular message or as a property in order to use it with localization (for example, " You do not have the role 'admin'. ", or my-property-deny in the messages properties). Leave blank for the default message defined as the property access-denied . Here is an example of how to deny access to all users who do not have the role role1 and show an error message defined by a property deny-role1 . This example includes Condition - User Role and Deny Access executions. Browser flow Condition - user role configuration Configuration of Deny Access is straightforward. You can specify an arbitrary Alias and required message like this: The last step is defining the property with the error message in the login theme messages_en.properties (for English):
[ "https://{DOMAIN}/realms/{REALMNAME}/protocol/openid-connect/auth?client_id={CLIENT-ID}&redirect_uri={REDIRECT-URI}&scope=openid&response_type=code&response_mode=query&nonce=exg16fxdjcu&claims=%7B%22id_token%22%3A%7B%22acr%22%3A%7B%22essential%22%3Atrue%2C%22values%22%3A%5B%22gold%22%5D%7D%7D%7D", "claims= { \"id_token\": { \"acr\": { \"essential\": true, \"values\": [\"gold\"] } } }", "sudo kadmin.local", "addprinc -randkey HTTP/[email protected] ktadd -k /tmp/http.keytab HTTP/[email protected]", "[domain_realm] .mydomain.org = MYDOMAIN.ORG mydomain.org = MYDOMAIN.ORG", "// Obtain accessToken in your application. KeycloakPrincipal keycloakPrincipal = (KeycloakPrincipal) servletReq.getUserPrincipal(); AccessToken accessToken = keycloakPrincipal.getKeycloakSecurityContext().getToken(); // Retrieve Kerberos credential from accessToken and deserialize it String serializedGssCredential = (String) accessToken.getOtherClaims(). get(org.keycloak.common.constants.KerberosConstants.GSS_DELEGATION_CREDENTIAL); GSSCredential deserializedGssCredential = org.keycloak.common.util.KerberosSerializationUtils. deserializeCredential(serializedGssCredential); // Create GSSContext to call other Kerberos-secured services GSSContext context = gssManager.createContext(serviceName, krb5Oid, deserializedGssCredential, GSSContext.DEFAULT_LIFETIME);", "emailAddress=(.*?)(?:,|USD)", "deny-role1 = You do not have required role!" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_administration_guide/configuring-authentication_server_administration_guide
Chapter 1. Red Hat Software Collections 3.3
Chapter 1. Red Hat Software Collections 3.3 This chapter serves as an overview of the Red Hat Software Collections 3.3 content set. It provides a list of components and their descriptions, sums up changes in this version, documents relevant compatibility information, and lists known issues. 1.1. About Red Hat Software Collections For certain applications, more recent versions of some software components are often needed in order to use their latest new features. Red Hat Software Collections is a Red Hat offering that provides a set of dynamic programming languages, database servers, and various related packages that are either more recent than their equivalent versions included in the base Red Hat Enterprise Linux system, or are available for this system for the first time. Red Hat Software Collections 3.3 is available for Red Hat Enterprise Linux 7; selected new components and previously released components are also available for Red Hat Enterprise Linux 6. For a complete list of components that are distributed as part of Red Hat Software Collections and a brief summary of their features, see Section 1.2, "Main Features" . Red Hat Software Collections does not replace the default system tools provided with Red Hat Enterprise Linux 6 or Red Hat Enterprise Linux 7. Instead, a parallel set of tools is installed in the /opt/ directory and can be optionally enabled per application by the user using the supplied scl utility. The default versions of Perl or PostgreSQL, for example, remain those provided by the base Red Hat Enterprise Linux system. All Red Hat Software Collections components are fully supported under Red Hat Enterprise Linux Subscription Level Agreements, are functionally complete, and are intended for production use. Important bug fix and security errata are issued to Red Hat Software Collections subscribers in a similar manner to Red Hat Enterprise Linux for at least two years from the release of each major version. In each major release stream, each version of a selected component remains backward compatible. For detailed information about length of support for individual components, refer to the Red Hat Software Collections Product Life Cycle document. 1.1.1. Red Hat Developer Toolset Red Hat Developer Toolset is a part of Red Hat Software Collections, included as a separate Software Collection. For more information about Red Hat Developer Toolset, refer to the Red Hat Developer Toolset Release Notes and the Red Hat Developer Toolset User Guide . 1.2. Main Features Table 1.1, "Red Hat Software Collections 3.3 Components" lists components that are supported at the time of the Red Hat Software Collections 3.3 release. Table 1.1. Red Hat Software Collections 3.3 Components Component Software Collection Description Red Hat Developer Toolset 8.1 devtoolset-8 Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. It provides current versions of the GNU Compiler Collection , GNU Debugger , and other development, debugging, and performance monitoring tools. For a complete list of components, see the Red Hat Developer Toolset Components table in the Red Hat Developer Toolset User Guide . Perl 5.24.0 rh-perl524 A release of Perl, a high-level programming language that is commonly used for system administration utilities and web programming. The rh-perl524 Software Collection provides additional utilities, scripts, and database connectors for MySQL and PostgreSQL .
It includes the DateTime Perl module and the mod_perl Apache httpd module, which is supported only with the httpd24 Software Collection. Additionally, it provides the cpanm utility for easy installation of CPAN modules. Perl 5.26.3 [a] rh-perl526 A release of Perl, a high-level programming language that is commonly used for system administration utilities and web programming. The rh-perl526 Software Collection provides additional utilities, scripts, and database connectors for MySQL and PostgreSQL . It includes the DateTime Perl module and the mod_perl Apache httpd module, which is supported only with the httpd24 Software Collection. Additionally, it provides the cpanm utility for easy installation of CPAN modules. The rh-perl526 packaging is aligned with upstream; the perl526-perl package installs also core modules, while the interpreter is provided by the perl-interpreter package. PHP 7.0.27 rh-php70 A release of PHP 7.0 with PEAR 1.10, enhanced language features and performance improvements . PHP 7.1.8 [a] rh-php71 A release of PHP 7.1 with PEAR 1.10, APCu 5.1.8, and enhanced language features. PHP 7.2.10 [a] rh-php72 A release of PHP 7.2 with PEAR 1.10.5, APCu 5.1.12, and enhanced language features. Python 2.7.16 python27 A release of Python 2.7 with a number of additional utilities. This Python version provides various features and enhancements, including an ordered dictionary type, faster I/O operations, and improved forward compatibility with Python 3. The python27 Software Collection contains the Python 2.7.16 interpreter , a set of extension libraries useful for programming web applications and mod_wsgi (only supported with the httpd24 Software Collection), MySQL and PostgreSQL database connectors, and numpy and scipy . Python 3.6.3 rh-python36 The rh-python36 Software Collection contains Python 3.6.3, which introduces a number of new features, such as f-strings, syntax for variable annotations, and asynchronous generators and comprehensions . In addition, a set of extension libraries useful for programming web applications is included, with mod_wsgi (supported only together with the httpd24 Software Collection), PostgreSQL database connector, and numpy and scipy . Ruby 2.4.6 rh-ruby24 A release of Ruby 2.4. This version provides multiple performance improvements and enhancements, for example improved hash table, new debugging features, support for Unicode case mappings, and support for OpenSSL 1.1.0 . Ruby 2.4.0 maintains source-level backward compatibility with Ruby 2.3, Ruby 2.2, Ruby 2.0.0, and Ruby 1.9.3. Ruby 2.5.5 [a] rh-ruby25 A release of Ruby 2.5. This version provides multiple performance improvements and new features, for example, simplified usage of blocks with the rescue , else , and ensure keywords, a new yield_self method, support for branch coverage and method coverage measurement, new Hash#slice and Hash#transform_keys methods . Ruby 2.5.0 maintains source-level backward compatibility with Ruby 2.4. Ruby 2.6.2 [a] rh-ruby26 A release of Ruby 2.6. This version provides multiple performance improvements and new features, such as endless ranges, the Binding#source_location method, and the $SAFE process global state . Ruby 2.6.0 maintains source-level backward compatibility with Ruby 2.5. Ruby on Rails 5.0.1 rh-ror50 A release of Ruby on Rails 5.0, the latest version of the web application framework written in the Ruby language. Notable new features include Action Cable, API mode, exclusive use of rails CLI over Rake, and ActiveRecord attributes.
This Software Collection is supported together with the rh-ruby24 Collection. Scala 2.10.6 [a] rh-scala210 A release of Scala, a general purpose programming language for the Java platform, which integrates features of object-oriented and functional languages. MariaDB 10.2.22 rh-mariadb102 A release of MariaDB, an alternative to MySQL for users of Red Hat Enterprise Linux. For all practical purposes, MySQL is binary compatible with MariaDB and can be replaced with it without any data conversions. This version adds MariaDB Backup, Flashback, support for Recursive Common Table Expressions, window functions, and JSON functions . MariaDB 10.3.13 [a] rh-mariadb103 A release of MariaDB, an alternative to MySQL for users of Red Hat Enterprise Linux. For all practical purposes, MySQL is binary compatible with MariaDB and can be replaced with it without any data conversions. This version introduces system-versioned tables, invisible columns, a new instant ADD COLUMN operation for InnoDB , and a JDBC connector for MariaDB and MySQL . MongoDB 3.4.9 rh-mongodb34 A release of MongoDB, a cross-platform document-oriented database system classified as a NoSQL database. This release introduces support for new architectures, adds message compression and support for the decimal128 type, enhances collation features and more. MongoDB 3.6.3 [a] rh-mongodb36 A release of MongoDB, a cross-platform document-oriented database system classified as a NoSQL database. This release introduces change streams, retryable writes, and JSON Schema , as well as other features. MySQL 5.7.24 rh-mysql57 A release of MySQL, which provides a number of new features and enhancements, including improved performance. MySQL 8.0.13 [a] rh-mysql80 A release of the MySQL server, which introduces a number of new security and account management features and enhancements. PostgreSQL 9.6.10 rh-postgresql96 A release of PostgreSQL, which introduces parallel execution of sequential scans, joins, and aggregates, and provides enhancements to synchronous replication, full-text search, federation driver postgres_fdw, as well as performance improvements. PostgreSQL 10.6 [a] rh-postgresql10 A release of PostgreSQL, which includes a significant performance improvement and a number of new features, such as logical replication using the publish and subscribe keywords, or stronger password authentication based on the SCRAM-SHA-256 mechanism . Node.js 8.11.4 [a] rh-nodejs8 A release of Node.js, which provides multiple API enhancements and new features, including V8 engine version 6.0, npm 5.6.0 and npx, enhanced security, experimental N-API support, and performance improvements. Node.js 10.10.0 [a] rh-nodejs10 A release of Node.js, which provides multiple API enhancements and new features, including V8 engine version 6.6, full N-API support , and stability improvements. nginx 1.10.2 rh-nginx110 A release of nginx, a web and proxy server with a focus on high concurrency, performance, and low memory usage. This version introduces a number of new features, including dynamic module support, HTTP/2 support, Perl integration, and numerous performance improvements . nginx 1.12.1 [a] rh-nginx112 A release of nginx, a web and proxy server with a focus on high concurrency, performance, and low memory usage. This version introduces a number of new features, including IP Transparency, improved TCP/UDP load balancing, enhanced caching performance, and numerous performance improvements .
nginx 1.14.1 [a] rh-nginx114 A release of nginx, a web and proxy server with a focus on high concurrency, performance, and low memory usage. This version provides a number of features, such as mirror module, HTTP/2 server push, gRPC proxy module, and numerous performance improvements . Apache httpd 2.4.34 httpd24 A release of the Apache HTTP Server (httpd), including a high performance event-based processing model, enhanced SSL module and FastCGI support . The mod_auth_kerb , mod_auth_mellon , and ModSecurity modules are also included. Varnish Cache 5.2.1 [a] rh-varnish5 A release of Varnish Cache, a high-performance HTTP reverse proxy. This version includes the shard director, experimental HTTP/2 support, and improvements to Varnish configuration through separate VCL files and VCL labels. Varnish Cache 6.0.2 [a] rh-varnish6 A release of Varnish Cache, a high-performance HTTP reverse proxy. This version includes support for Unix Domain Sockets (both for clients and for back-end servers), new level of the VCL language ( vcl 4.1 ), and improved HTTP/2 support . Maven 3.5.0 [a] rh-maven35 A release of Maven, a software project management and comprehension tool. This release introduces support for new architectures and a number of new features, including colorized logging . Git 2.18.1 [a] rh-git218 A release of Git, a distributed revision control system with a decentralized architecture. As opposed to centralized version control systems with a client-server model, Git ensures that each working copy of a Git repository is its exact copy with complete revision history. This version includes the Large File Storage (LFS) extension . Redis 3.2.4 rh-redis32 A release of Redis 3.2, a persistent key-value database . Redis 5.0.3 [a] rh-redis5 A release of Redis 5.0, a persistent key-value database . Redis now provides redis-trib , a cluster management tool . HAProxy 1.8.17 [a] rh-haproxy18 A release of HAProxy 1.8, a reliable, high-performance network load balancer for TCP and HTTP-based applications. Common Java Packages rh-java-common This Software Collection provides common Java libraries and tools used by other collections. The rh-java-common Software Collection is required by the rh-maven35 and rh-scala210 components and it is not supposed to be installed directly by users. JDK Mission Control [a] rh-jmc This Software Collection includes JDK Mission Control (JMC) , a powerful profiler for HotSpot JVMs. JMC provides an advanced set of tools for efficient and detailed analysis of extensive data collected by the JDK Flight Recorder. JMC requires JDK version 8 or later to run. Target Java applications must run with at least OpenJDK version 11 so that JMC can access JDK Flight Recorder features. The rh-jmc Software Collection requires the rh-maven35 Software Collection. [a] This Software Collection is available only for Red Hat Enterprise Linux 7 Previously released Software Collections remain available in the same distribution channels. All Software Collections, including retired components, are listed in the Table 1.2, "All Available Software Collections" . Software Collections that are no longer supported are marked with an asterisk ( * ). See the Red Hat Software Collections Product Life Cycle document for information on the length of support for individual components. For detailed information regarding previously released components, refer to the Release Notes for earlier versions of Red Hat Software Collections. Table 1.2. 
All Available Software Collections Component Software Collection Availability Architectures supported on RHEL7 Components New in Red Hat Software Collections 3.3 MariaDB 10.3.13 rh-mariadb103 RHEL7 x86_64, s390x, aarch64, ppc64le Redis 5.0.3 rh-redis5 RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.6.2 rh-ruby26 RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Updated in Red Hat Software Collections 3.3 Red Hat Developer Toolset 8.1 devtoolset-8 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le HAProxy 1.8.17 rh-haproxy18 RHEL7 x86_64 Varnish Cache 6.0.2 rh-varnish6 RHEL7 x86_64, s390x, aarch64, ppc64le Apache httpd 2.4.34 httpd24 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.2 PHP 7.2.10 rh-php72 RHEL7 x86_64, s390x, aarch64, ppc64le MySQL 8.0.13 rh-mysql80 RHEL7 x86_64, s390x, aarch64, ppc64le Node.js 10.10.0 rh-nodejs10 RHEL7 x86_64, s390x, aarch64, ppc64le nginx 1.14.1 rh-nginx114 RHEL7 x86_64, s390x, aarch64, ppc64le Git 2.18.1 rh-git218 RHEL7 x86_64, s390x, aarch64, ppc64le JDK Mission Control rh-jmc RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.1 Red Hat Developer Toolset 7.1 devtoolset-7 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le Perl 5.26.3 rh-perl526 RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.5.5 rh-ruby25 RHEL7 x86_64, s390x, aarch64, ppc64le MongoDB 3.6.3 rh-mongodb36 RHEL7 x86_64, s390x, aarch64, ppc64le Varnish Cache 5.2.1 rh-varnish5 RHEL7 x86_64, s390x, aarch64, ppc64le PostgreSQL 10.6 rh-postgresql10 RHEL7 x86_64, s390x, aarch64, ppc64le PHP 7.0.27 rh-php70 RHEL6, RHEL7 x86_64 MySQL 5.7.24 rh-mysql57 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.0 PHP 7.1.8 rh-php71 RHEL7 x86_64, s390x, aarch64, ppc64le nginx 1.12.1 rh-nginx112 RHEL7 x86_64, s390x, aarch64, ppc64le Python 3.6.3 rh-python36 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Maven 3.5.0 rh-maven35 RHEL7 x86_64, s390x, aarch64, ppc64le MariaDB 10.2.22 rh-mariadb102 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le PostgreSQL 9.6.10 rh-postgresql96 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le MongoDB 3.4.9 rh-mongodb34 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Node.js 8.11.4 rh-nodejs8 RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.4 Red Hat Developer Toolset 6.1 devtoolset-6 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le Scala 2.10.6 rh-scala210 RHEL7 x86_64 nginx 1.10.2 rh-nginx110 RHEL6, RHEL7 x86_64 Node.js 6.11.3 rh-nodejs6 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.4.6 rh-ruby24 RHEL6, RHEL7 x86_64 Ruby on Rails 5.0.1 rh-ror50 RHEL6, RHEL7 x86_64 Eclipse 4.6.3 rh-eclipse46 * RHEL7 x86_64 Python 2.7.16 python27 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Thermostat 1.6.6 rh-thermostat16 * RHEL6, RHEL7 x86_64 Maven 3.3.9 rh-maven33 * RHEL6, RHEL7 x86_64 Common Java Packages rh-java-common RHEL6, RHEL7 x86_64 Table 1.2. 
All Available Software Collections Components Last Updated in Red Hat Software Collections 2.3 Git 2.9.3 rh-git29 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Redis 3.2.4 rh-redis32 RHEL6, RHEL7 x86_64 Perl 5.24.0 rh-perl524 RHEL6, RHEL7 x86_64 Python 3.5.1 rh-python35 * RHEL6, RHEL7 x86_64 MongoDB 3.2.10 rh-mongodb32 * RHEL6, RHEL7 x86_64 Ruby 2.3.8 rh-ruby23 * RHEL6, RHEL7 x86_64 PHP 5.6.25 rh-php56 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.2 Red Hat Developer Toolset 4.1 devtoolset-4 * RHEL6, RHEL7 x86_64 MariaDB 10.1.29 rh-mariadb101 * RHEL6, RHEL7 x86_64 MongoDB 3.0.11 upgrade collection rh-mongodb30upg * RHEL6, RHEL7 x86_64 Node.js 4.6.2 rh-nodejs4 * RHEL6, RHEL7 x86_64 PostgreSQL 9.5.14 rh-postgresql95 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.2.6 rh-ror42 * RHEL6, RHEL7 x86_64 MongoDB 2.6.9 rh-mongodb26 * RHEL6, RHEL7 x86_64 Thermostat 1.4.4 thermostat1 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.1 Varnish Cache 4.0.3 rh-varnish4 * RHEL6, RHEL7 x86_64 nginx 1.8.1 rh-nginx18 * RHEL6, RHEL7 x86_64 Node.js 0.10 nodejs010 * RHEL6, RHEL7 x86_64 Maven 3.0.5 maven30 * RHEL6, RHEL7 x86_64 V8 3.14.5.10 v8314 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.0 Red Hat Developer Toolset 3.1 devtoolset-3 * RHEL6, RHEL7 x86_64 Perl 5.20.1 rh-perl520 * RHEL6, RHEL7 x86_64 Python 3.4.2 rh-python34 * RHEL6, RHEL7 x86_64 Ruby 2.2.9 rh-ruby22 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.1.5 rh-ror41 * RHEL6, RHEL7 x86_64 MariaDB 10.0.33 rh-mariadb100 * RHEL6, RHEL7 x86_64 MySQL 5.6.40 rh-mysql56 * RHEL6, RHEL7 x86_64 PostgreSQL 9.4.14 rh-postgresql94 * RHEL6, RHEL7 x86_64 Passenger 4.0.50 rh-passenger40 * RHEL6, RHEL7 x86_64 PHP 5.4.40 php54 * RHEL6, RHEL7 x86_64 PHP 5.5.21 php55 * RHEL6, RHEL7 x86_64 nginx 1.6.2 nginx16 * RHEL6, RHEL7 x86_64 DevAssistant 0.9.3 devassist09 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 1 Git 1.9.4 git19 * RHEL6, RHEL7 x86_64 Perl 5.16.3 perl516 * RHEL6, RHEL7 x86_64 Python 3.3.2 python33 * RHEL6, RHEL7 x86_64 Ruby 1.9.3 ruby193 * RHEL6, RHEL7 x86_64 Ruby 2.0.0 ruby200 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.0.2 ror40 * RHEL6, RHEL7 x86_64 MariaDB 5.5.53 mariadb55 * RHEL6, RHEL7 x86_64 MongoDB 2.4.9 mongodb24 * RHEL6, RHEL7 x86_64 MySQL 5.5.52 mysql55 * RHEL6, RHEL7 x86_64 PostgreSQL 9.2.18 postgresql92 * RHEL6, RHEL7 x86_64 Legend: RHEL6 - Red Hat Enterprise Linux 6 RHEL7 - Red Hat Enterprise Linux 7 x86_64 - AMD64 and Intel 64 architectures s390x - IBM Z aarch64 - The 64-bit ARM architecture ppc64 - IBM POWER, big endian ppc64le - IBM POWER, little endian * - Retired component; this Software Collection is no longer supported The tables above list the latest versions available through asynchronous updates. Note that Software Collections released in Red Hat Software Collections 2.0 and later include a rh- prefix in their names. Eclipse is available as a part of the Red Hat Developer Tools offering. 1.3. Changes in Red Hat Software Collections 3.3 1.3.1. Overview Architectures The Red Hat Software Collections offering contains packages for Red Hat Enterprise Linux 7 running on AMD64 and Intel 64 architectures; certain Software Collections are available also for Red Hat Enterprise Linux 6. 
In addition, Red Hat Software Collections 3.3 supports the following architectures on Red Hat Enterprise Linux 7: The 64-bit ARM architecture IBM Z IBM POWER, little endian For a full list of components and their availability, see Table 1.2, "All Available Software Collections" . New Software Collections Red Hat Software Collections 3.3 adds these new Software Collections: rh-mariadb103 - see Section 1.3.3, "Changes in MariaDB" rh-redis5 - see Section 1.3.4, "Changes in Redis" rh-ruby26 - see Section 1.3.5, "Changes in Ruby" All new Software Collections are available only for Red Hat Enterprise Linux 7. Updated Software Collections The following components have been updated in Red Hat Software Collections 3.3: devtoolset-8 - see Section 1.3.2, "Changes in Red Hat Developer Toolset" rh-varnish6 - see Section 1.3.6, "Changes in Varnish Cache" httpd24 - see Section 1.3.7, "Changes in Apache httpd" rh-haproxy18 - see Section 1.3.8, "Changes in HAProxy" Red Hat Software Collections Container Images The following container images are new in Red Hat Software Collections 3.3: rhscl/mariadb-103-rhel7 rhscl/redis-5-rhel7 rhscl/ruby-26-rhel7 The following container images have been updated in Red Hat Software Collections 3.3: rhscl/devtoolset-8-toolchain-rhel7 rhscl/devtoolset-8-perftools-rhel7 rhscl/varnish-6-rhel7 rhscl/httpd-24-rhel7 For detailed information regarding Red Hat Software Collections container images, see Section 3.4, "Red Hat Software Collections Container Images" . 1.3.2. Changes in Red Hat Developer Toolset The following components have been upgraded in Red Hat Developer Toolset 8.1 compared to the previous release of Red Hat Developer Toolset: GCC to version 8.3.1 elfutils to version 0.176 In addition, bug fix updates are available for the following components: binutils GDB SystemTap Valgrind Dyninst For detailed information on changes in 8.1, see the Red Hat Developer Toolset User Guide . 1.3.3. Changes in MariaDB The new rh-mariadb103 Software Collection provides MariaDB 10.3.13 , which introduces a number of new features and bug fixes. New features include: A new rh-mariadb103-mariadb-java-client package, which provides the Java Database Connectivity (JDBC) connector for the MariaDB and MySQL database servers. The connector supports MariaDB and MySQL version 5.5.3 and later, JDBC version 4.2, and it requires Java Runtime Environment (JRE) version 8 or 11. (BZ# 1625989 ) System-versioned tables , which enable you to store history of changes. Invisible columns , which are not listed unless explicitly called. A new instant ADD COLUMN operation for InnoDB , which does not require the whole table to be rebuilt. For compatibility notes and migration instructions, see Section 5.1, "Migrating to MariaDB 10.3" . For detailed changes in MariaDB 10.3 , see the upstream documentation . 1.3.4. Changes in Redis The new rh-redis5 Software Collection includes Redis 5.0.3 . This version provides multiple enhancements and bug fixes over version 3.2 distributed with an earlier Red Hat Software Collections release. Most notably, the redis-trib cluster management tool has been implemented in the Redis command-line interface. For migration and compatibility notes, see Section 5.10, "Migrating to Redis 5" . For detailed changes in Redis , see the upstream release notes for version 4.0 and version 5.0 . 1.3.5. Changes in Ruby The new rh-ruby26 Software Collection provides Ruby 2.6.2 , which introduces a number of performance improvements, bug fixes, and new features.
Notable enhancements include: Constant names are now allowed to begin with a non-ASCII capital letter. Support for an endless range has been added. A new Binding#source_location method has been provided. $SAFE is now a process global state and it can be set back to 0 . The following performance improvements have been implemented: The Proc#call and block.call processes have been optimized. A new garbage collector managed heap, Transient heap ( theap ), has been introduced. Native implementations of coroutines for individual architectures have been introduced. For more information regarding changes in Ruby 2.6 , see the upstream announcement . 1.3.6. Changes in Varnish Cache The rh-varnish6 Software Collection has been updated to version 6.0.2. This version includes numerous bug fixes, various minor enhancements, for example to Varnish Configuration Language (VCL) and log messages, and improvements to stability. In addition, the varnish-modules subpackage has been added, which provides a collection of Varnish modules (VMODs) that extend VCL used for describing HTTP request and response policies with additional capabilities. For more information, see the upstream documentation . For detailed changes in Varnish Cache 6.0.2 , see the upstream change log for version 6.0.2 and 6.0.1 . 1.3.7. Changes in Apache httpd This release introduces the ModSecurity module and an update to the mod_auth_mellon module. Both modules are available only for Red Hat Enterprise Linux 7. The ModSecurity module, distributed in the httpd24-mod_security packages, includes an open source web application firewall (WAF) engine for web applications. ModSecurity operates embedded into the web server and has a robust event-based programming language, which provides protection from a range of attacks against web applications. Red Hat Software Collections 3.3 includes ModSecurity version 2.9.3. The mod_auth_mellon module has been updated to version 0.14.0, which provides various bug fixes, improvements to stability, and enhancements, such as: More detailed error logging New diagnostics logging, which creates a detailed log during request processing Support for selecting which signature algorithm is used when signing messages This update to mod_auth_mellon also introduces the following backward incompatible change: The default signature algorithm used for signing messages has been changed from rsa-sha1 to rsa-sha256 . If your identity provider (IdP) does not support rsa-sha256 , adjust the /opt/rh/httpd24/root/etc/httpd/conf.d/auth_mellon.conf file to include the line MellonSignatureMethod rsa-sha1 . Note that this affects only messages sent from mod_auth_mellon to your IdP. It does not affect authentication responses or other messages sent from your IdP to mod_auth_mellon . 1.3.8. Changes in HAProxy The HAProxy load balancer has been updated to version 1.8.17, which provides multiple bug and security fixes. 1.4. Compatibility Information Red Hat Software Collections 3.3 is available for all supported releases of Red Hat Enterprise Linux 7 on AMD64 and Intel 64 architectures, the 64-bit ARM architecture, IBM Z, and IBM POWER, little endian. Certain components are available also for all supported releases of Red Hat Enterprise Linux 6 on AMD64 and Intel 64 architectures. For a full list of available components, see Table 1.2, "All Available Software Collections" . 1.5.
Known Issues multiple components, BZ# 1716378 Certain files provided by the Software Collections debuginfo packages might conflict with the corresponding debuginfo package files from the base Red Hat Enterprise Linux system or from other versions of Red Hat Software Collections components. For example, the python27-python-debuginfo package files might conflict with the corresponding files from the python-debuginfo package installed on the core system. Similarly, files from the httpd24-mod_auth_mellon-debuginfo package might conflict with similar files provided by the base system mod_auth_mellon-debuginfo package. To work around this problem, uninstall the base system debuginfo package prior to installing the Software Collection debuginfo package. rh-mysql80 , BZ# 1646363 The mysql-connector-java database connector does not work with the MySQL 8.0 server. To work around this problem, use the mariadb-java-client database connector from the rh-mariadb103 Software Collection. rh-mysql80 , BZ# 1646158 The default character set has been changed to utf8mb4 in MySQL 8.0 but this character set is unsupported by the php-mysqlnd database connector. Consequently, php-mysqlnd fails to connect in the default configuration. To work around this problem, specify a known character set as a parameter of the MySQL server configuration. For example, modify the /etc/opt/rh/rh-mysql80/my.cnf.d/mysql-server.cnf file to read: httpd24 component, BZ# 1429006 Since httpd 2.4.27 , the mod_http2 module is no longer supported with the default prefork Multi-Processing Module (MPM). To enable HTTP/2 support, edit the configuration file at /opt/rh/httpd24/root/etc/httpd/conf.modules.d/00-mpm.conf and switch to the event or worker MPM. Note that the HTTP/2 server-push feature does not work on the 64-bit ARM architecture, IBM Z, and IBM POWER, little endian. httpd24 component, BZ# 1327548 The mod_ssl module does not support the ALPN protocol on Red Hat Enterprise Linux 6, or on Red Hat Enterprise Linux 7.3 and earlier. Consequently, clients that support upgrading TLS connections to HTTP/2 only using ALPN are limited to HTTP/1.1 support. httpd24 component, BZ# 1224763 When using the mod_proxy_fcgi module with FastCGI Process Manager (PHP-FPM), httpd uses port 8000 for the FastCGI protocol by default instead of the correct port 9000 . To work around this problem, specify the correct port explicitly in configuration. httpd24 component, BZ# 1382706 When SELinux is enabled, the LD_LIBRARY_PATH environment variable is not passed through to CGI scripts invoked by httpd . As a consequence, in some cases it is impossible to invoke executables from Software Collections enabled in the /opt/rh/httpd24/service-environment file from CGI scripts run by httpd . To work around this problem, set LD_LIBRARY_PATH as desired from within the CGI script. httpd24 component Compiling external applications against the Apache Portable Runtime (APR) and APR-util libraries from the httpd24 Software Collection is not supported. The LD_LIBRARY_PATH environment variable is not set in httpd24 because it is not required by any application in this Software Collection. rh-python35 , rh-python36 components, BZ# 1499990 The pytz module, which is used by Babel for time zone support, is not included in the rh-python35 , and rh-python36 Software Collections. Consequently, when the user tries to import the dates module from Babel , a traceback is returned. 
To work around this problem, install pytz through the pip package manager from the pypi public repository by using the pip install pytz command. rh-python36 component Certain complex trigonometric functions provided by numpy might return incorrect values on the 64-bit ARM architecture, IBM Z, and IBM POWER, little endian. The AMD64 and Intel 64 architectures are not affected by this problem. python27 component, BZ# 1330489 The python27-python-pymongo package has been updated to version 3.2.1. Note that this version is not fully compatible with the previously shipped version 2.5.2. scl-utils component In Red Hat Enterprise Linux 7.5 and earlier, due to an architecture-specific macro bug in the scl-utils package, the <collection>/root/usr/lib64/ directory does not have the correct package ownership on the 64-bit ARM architecture and on IBM POWER, little endian. As a consequence, this directory is not removed when a Software Collection is uninstalled. To work around this problem, manually delete <collection>/root/usr/lib64/ when removing a Software Collection. rh-ruby24 , rh-ruby23 components Determination of RubyGem installation paths is dependent on the order in which multiple Software Collections are enabled. The required order has been changed since Ruby 2.3.1 shipped in Red Hat Software Collections 2.3 to support dependent Collections. As a consequence, RubyGem paths, which are used for gem installation during an RPM build, are invalid when the Software Collections are supplied in an incorrect order. For example, the build now fails if the RPM spec file contains scl enable rh-ror50 rh-nodejs6 . To work around this problem, enable the rh-ror50 Software Collection last, for example, scl enable rh-nodejs6 rh-ror50 . rh-maven35 , rh-maven33 components When the user has installed both the Red Hat Enterprise Linux system version of maven-local package and the rh-maven35-maven-local package or rh-maven33-maven-local package , XMvn , a tool used for building Java RPM packages, run from the rh-maven35 or rh-maven33 Software Collection tries to read the configuration file from the base system and fails. To work around this problem, uninstall the maven-local package from the base Red Hat Enterprise Linux system. perl component It is impossible to install more than one mod_perl.so library. As a consequence, it is not possible to use the mod_perl module from more than one Perl Software Collection. postgresql component The rh-postgresql9* packages for Red Hat Enterprise Linux 6 do not provide the sepgsql module as this feature requires installation of libselinux version 2.0.99, which is not available in Red Hat Enterprise Linux 6. httpd , mariadb , mongodb , mysql , nodejs , perl , php , python , ruby , and ror components, BZ# 1072319 When uninstalling the httpd24 , rh-mariadb* , rh-mongodb* , rh-mysql* , rh-nodejs* , rh-perl* , rh-php* , python27 , rh-python* , rh-ruby* , or rh-ror* packages, the order of uninstalling can be relevant due to ownership of dependent packages. As a consequence, some directories and files might not be removed properly and might remain on the system. mariadb , mysql components, BZ# 1194611 Since MariaDB 10 and MySQL 5.6 , the rh-mariadb*-mariadb-server and rh-mysql*-mysql-server packages no longer provide the test database by default. Although this database is not created during initialization, the grant tables are prefilled with the same values as when test was created by default. 
As a consequence, upon a later creation of the test or test_* databases, these databases have less restricted access rights than is default for new databases. Additionally, when running benchmarks, the run-all-tests script no longer works out of the box with example parameters. You need to create a test database before running the tests and specify the database name in the --database parameter. If the parameter is not specified, test is taken by default but you need to make sure the test database exist. mariadb , mysql , postgresql , mongodb components Red Hat Software Collections 3.3 contains the MySQL 5.7 , MySQL 8.0 , MariaDB 10.0 , MariaDB 10.1 , MariaDB 10.2 , PostgreSQL 9.5 , PostgreSQL 9.6 , PostgreSQL 10 , MongoDB 3.2 , MongoDB 3.4 , and MongoDB 3.6 databases. The core Red Hat Enterprise Linux 6 provides earlier versions of the MySQL and PostgreSQL databases (client library and daemon). The core Red Hat Enterprise Linux 7 provides earlier versions of the MariaDB and PostgreSQL databases (client library and daemon). Client libraries are also used in database connectors for dynamic languages, libraries, and so on. The client library packaged in the Red Hat Software Collections database packages in the PostgreSQL component is not supposed to be used, as it is included only for purposes of server utilities and the daemon. Users are instead expected to use the system library and the database connectors provided with the core system. A protocol, which is used between the client library and the daemon, is stable across database versions, so, for example, using the PostgreSQL 9.2 client library with the PostgreSQL 9.4 or 9.5 daemon works as expected. The core Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 do not include the client library for MongoDB . In order to use this client library for your application, you should use the client library from Red Hat Software Collections and always use the scl enable ... call every time you run an application linked against this MongoDB client library. mariadb , mysql , mongodb components MariaDB, MySQL, and MongoDB do not make use of the /opt/ provider / collection /root prefix when creating log files. Note that log files are saved in the /var/opt/ provider / collection /log/ directory, not in /opt/ provider / collection /root/var/log/ . Other Notes rh-ruby* , rh-python* , rh-php* components Using Software Collections on a read-only NFS has several limitations. Ruby gems cannot be installed while the rh-ruby* Software Collection is on a read-only NFS. Consequently, for example, when the user tries to install the ab gem using the gem install ab command, an error message is displayed, for example: The same problem occurs when the user tries to update or install gems from an external source by running the bundle update or bundle install commands. When installing Python packages on a read-only NFS using the Python Package Index (PyPI), running the pip command fails with an error message similar to this: Installing packages from PHP Extension and Application Repository (PEAR) on a read-only NFS using the pear command fails with the error message: This is an expected behavior. httpd component Language modules for Apache are supported only with the Red Hat Software Collections version of Apache httpd and not with the Red Hat Enterprise Linux system versions of httpd . For example, the mod_wsgi module from the rh-python35 Collection can be used only with the httpd24 Collection. 
all components Since Red Hat Software Collections 2.0, configuration files, variable data, and runtime data of individual Collections are stored in different directories than in previous versions of Red Hat Software Collections. coreutils , util-linux , screen components Some utilities, for example, su , login , or screen , do not export environment settings in all cases, which can lead to unexpected results. It is therefore recommended to use sudo instead of su and set the env_keep environment variable in the /etc/sudoers file. Alternatively, you can run the commands in the reverse order; for example, run su -l postgres -c "scl enable rh-postgresql94 psql" instead of running scl enable rh-postgresql94 bash followed by su -l postgres -c psql . When using tools like screen or login , you can use the following command to preserve the environment settings: source /opt/rh/<collection_name>/enable python component When the user tries to install more than one scldevel package from the python27 and rh-python* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_python , %scl_prefix_python ). php component When the user tries to install more than one scldevel package from the rh-php* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_php , %scl_prefix_php ). ruby component When the user tries to install more than one scldevel package from the rh-ruby* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_ruby , %scl_prefix_ruby ). perl component When the user tries to install more than one scldevel package from the rh-perl* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_perl , %scl_prefix_perl ). nginx component When the user tries to install more than one scldevel package from the rh-nginx* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_nginx , %scl_prefix_nginx ). 1.6. Deprecated Functionality httpd24 component, BZ# 1434053 Previously, in an SSL/TLS configuration requiring name-based SSL virtual host selection, the mod_ssl module rejected requests with a 400 Bad Request error if the host name provided in the Host: header did not match the host name provided in a Server Name Indication (SNI) header. Such requests are no longer rejected if the configured SSL/TLS security parameters are identical between the selected virtual hosts, in-line with the behavior of upstream mod_ssl .
[ "MellonSignatureMethod rsa-sha1", "[mysqld] character-set-server=utf8", "ERROR: While executing gem ... (Errno::EROFS) Read-only file system @ dir_s_mkdir - /opt/rh/rh-ruby22/root/usr/local/share/gems", "Read-only file system: '/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/ipython-3.1.0.dist-info'", "Cannot install, php_dir for channel \"pear.php.net\" is not writeable by the current user", "su -l postgres -c \"scl enable rh-postgresql94 psql\"", "scl enable rh-postgresql94 bash su -l postgres -c psql" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.3_release_notes/chap-rhscl
Chapter 9. Policy enforcers
Chapter 9. Policy enforcers Policy Enforcement Point (PEP) is a design pattern, and as such, you can implement it in different ways. Red Hat Single Sign-On provides all the necessary means to implement PEPs for different platforms, environments, and programming languages. Red Hat Single Sign-On Authorization Services presents a RESTful API, and leverages OAuth2 authorization capabilities for fine-grained authorization using a centralized authorization server. A PEP is responsible for enforcing access decisions from the Red Hat Single Sign-On server where these decisions are taken by evaluating the policies associated with a protected resource. It acts as a filter or interceptor in your application in order to check whether or not a particular request to a protected resource can be fulfilled based on the permissions granted by these decisions. Permissions are enforced depending on the protocol you are using. When using UMA, the policy enforcer always expects an RPT as a bearer token in order to decide whether or not a request can be served. That means clients should first obtain an RPT from Red Hat Single Sign-On before sending requests to the resource server. However, if you are not using UMA, you can also send regular access tokens to the resource server. In this case, the policy enforcer will try to obtain permissions directly from the server. If you are using any of the Red Hat Single Sign-On OIDC adapters, you can easily enable the policy enforcer by adding the following property to your keycloak.json file: keycloak.json { "policy-enforcer": {} } When you enable the policy enforcer, all requests sent to your application are intercepted and access to protected resources will be granted depending on the permissions granted by Red Hat Single Sign-On to the identity making the request. Policy enforcement is strongly linked to your application's paths and the resources you created for a resource server using the Red Hat Single Sign-On Administration Console. By default, when you create a resource server, Red Hat Single Sign-On creates a default configuration for your resource server so you can enable policy enforcement quickly. 9.1. Configuration To enable policy enforcement for your application, add the following property to your keycloak.json file: keycloak.json { "policy-enforcer": {} } Or, a little more verbosely, if you want to manually define the resources being protected: { "policy-enforcer": { "user-managed-access" : {}, "enforcement-mode" : "ENFORCING", "paths": [ { "path" : "/someUri/*", "methods" : [ { "method": "GET", "scopes" : ["urn:app.com:scopes:view"] }, { "method": "POST", "scopes" : ["urn:app.com:scopes:create"] } ] }, { "name" : "Some Resource", "path" : "/usingPattern/{id}", "methods" : [ { "method": "DELETE", "scopes" : ["urn:app.com:scopes:delete"] } ] }, { "path" : "/exactMatch" }, { "name" : "Admin Resources", "path" : "/usingWildCards/*" } ] } } Here is a description of each configuration option: policy-enforcer Specifies the configuration options that define how policies are actually enforced and optionally the paths you want to protect. If not specified, the policy enforcer queries the server for all resources associated with the resource server being protected. In this case, you need to ensure the resources are properly configured with a URIS property that matches the paths you want to protect. user-managed-access Specifies that the adapter uses the UMA protocol.
If specified, the adapter queries the server for permission tickets and returns them to clients according to the UMA specification. If not specified, the policy enforcer will be able to enforce permissions based on regular access tokens or RPTs. In this case, before denying access to the resource when the token lacks permission, the policy enforcer will try to obtain permissions directly from the server. enforcement-mode Specifies how policies are enforced. ENFORCING (default mode) Requests are denied by default even when there is no policy associated with a given resource. PERMISSIVE Requests are allowed even when there is no policy associated with a given resource. DISABLED Completely disables the evaluation of policies and allows access to any resource. When enforcement-mode is DISABLED applications are still able to obtain all permissions granted by Red Hat Single Sign-On through the Authorization Context on-deny-redirect-to Defines a URL where a client request is redirected when an "access denied" message is obtained from the server. By default, the adapter responds with a 403 HTTP status code. path-cache Defines how the policy enforcer should track associations between paths in your application and resources defined in Red Hat Single Sign-On. The cache is needed to avoid unnecessary requests to a Red Hat Single Sign-On server by caching associations between paths and protected resources. lifespan Defines the time in milliseconds when the entry should be expired. If not provided, default value is 30000 . A value equal to 0 can be set to completely disable the cache. A value equal to -1 can be set to disable the expiry of the cache. max-entries Defines the limit of entries that should be kept in the cache. If not provided, default value is 1000 . paths Specifies the paths to protect. This configuration is optional. If not defined, the policy enforcer will discover all paths by fetching the resources you defined to your application in Red Hat Single Sign-On, where these resources are defined with URIS representing some paths in your application. name The name of a resource on the server that is to be associated with a given path. When used in conjunction with a path , the policy enforcer ignores the resource's URIS property and uses the path you provided instead. path (required) A URI relative to the application's context path. If this option is specified, the policy enforcer queries the server for a resource with a URI with the same value. Currently a very basic logic for path matching is supported. Examples of valid paths are: Wildcards: /* Suffix: /*.html Sub-paths: /path/* Path parameters: /resource/{id} Exact match: /resource Patterns: /{version}/resource, /api/{version}/resource, /api/{version}/resource/* methods The HTTP methods (for example, GET, POST, PATCH) to protect and how they are associated with the scopes for a given resource in the server. method The name of the HTTP method. scopes An array of strings with the scopes associated with the method. When you associate scopes with a specific method, the client trying to access a protected resource (or path) must provide an RPT that grants permission to all scopes specified in the list. For example, if you define a method POST with a scope create , the RPT must contain a permission granting access to the create scope when performing a POST to the path. scopes-enforcement-mode A string referencing the enforcement mode for the scopes associated with a method. Values can be ALL or ANY . 
If ALL , all defined scopes must be granted in order to access the resource using that method. If ANY , at least one scope must be granted in order to gain access to the resource using that method. By default, enforcement mode is set to ALL . enforcement-mode Specifies how policies are enforced. ENFORCING (default mode) Requests are denied by default even when there is no policy associated with a given resource. DISABLED Disables the evaluation of policies for the path and allows access to the resource. claim-information-point Defines a set of one or more claims that must be resolved and pushed to the Red Hat Single Sign-On server in order to make these claims available to policies. See Claim Information Point for more details. lazy-load-paths Specifies whether the adapter should fetch resources associated with paths in your application from the server on demand. If true , the policy enforcer fetches resources on demand, according to the path being requested. This configuration is especially useful when you don't want to fetch all resources from the server during deployment (in case you have provided no paths ) or in case you have defined only a subset of paths and want to fetch others on demand. http-method-as-scope Specifies how scopes should be mapped to HTTP methods. If set to true , the policy enforcer uses the HTTP method from the current request to check whether or not access should be granted. When enabled, make sure your resources in Red Hat Single Sign-On are associated with scopes representing each HTTP method you are protecting. claim-information-point Defines a set of one or more global claims that must be resolved and pushed to the Red Hat Single Sign-On server in order to make these claims available to policies. See Claim Information Point for more details. 9.2. Claim Information Point A Claim Information Point (CIP) is responsible for resolving claims and pushing these claims to the Red Hat Single Sign-On server in order to provide more information about the access context to policies. CIPs can be defined as a configuration option to the policy-enforcer in order to resolve claims from different sources, such as: HTTP Request (parameters, headers, body, and so on) External HTTP Service Static values defined in configuration Any other source by implementing the Claim Information Provider SPI When pushing claims to the Red Hat Single Sign-On server, policies can base decisions not only on who a user is, but also on the context and contents of a given transaction: the who, what, why, when, where, and which. This is contextual-based authorization, which uses runtime information to support fine-grained authorization decisions. 9.2.1. 
Obtaining information from the HTTP request Here are several examples showing how you can extract claims from an HTTP request: keycloak.json "policy-enforcer": { "paths": [ { "path": "/protected/resource", "claim-information-point": { "claims": { "claim-from-request-parameter": "{request.parameter['a']}", "claim-from-header": "{request.header['b']}", "claim-from-cookie": "{request.cookie['c']}", "claim-from-remoteAddr": "{request.remoteAddr}", "claim-from-method": "{request.method}", "claim-from-uri": "{request.uri}", "claim-from-relativePath": "{request.relativePath}", "claim-from-secure": "{request.secure}", "claim-from-json-body-object": "{request.body['/a/b/c']}", "claim-from-json-body-array": "{request.body['/d/1']}", "claim-from-body": "{request.body}", "claim-from-static-value": "static value", "claim-from-multiple-static-value": ["static", "value"], "param-replace-multiple-placeholder": "Test {keycloak.access_token['/custom_claim/0']} and {request.parameter['a']} " } } } ] } 9.2.2. Obtaining information from an external HTTP service Here are several examples showing how you can extract claims from an external HTTP Service: keycloak.json "policy-enforcer": { "paths": [ { "path": "/protected/resource", "claim-information-point": { "http": { "claims": { "claim-a": "/a", "claim-d": "/d", "claim-d0": "/d/0", "claim-d-all": ["/d/0", "/d/1"] }, "url": "http://mycompany/claim-provider", "method": "POST", "headers": { "Content-Type": "application/x-www-form-urlencoded", "header-b": ["header-b-value1", "header-b-value2"], "Authorization": "Bearer {keycloak.access_token}" }, "parameters": { "param-a": ["param-a-value1", "param-a-value2"], "param-subject": "{keycloak.access_token['/sub']}", "param-user-name": "{keycloak.access_token['/preferred_username']}", "param-other-claims": "{keycloak.access_token['/custom_claim']}" } } } } ] } 9.2.3. Static claims keycloak.json "policy-enforcer": { "paths": [ { "path": "/protected/resource", "claim-information-point": { "claims": { "claim-from-static-value": "static value", "claim-from-multiple-static-value": ["static", "value"], } } } ] } 9.2.4. Claim information provider SPI The Claim Information Provider SPI can be used by developers to support different claim information points in case none of the built-ins providers are enough to address their requirements. For example, to implement a new CIP provider you need to implement org.keycloak.adapters.authorization.ClaimInformationPointProviderFactory and ClaimInformationPointProvider and also provide the file META-INF/services/org.keycloak.adapters.authorization.ClaimInformationPointProviderFactory in your application`s classpath. Example of org.keycloak.adapters.authorization.ClaimInformationPointProviderFactory : public class MyClaimInformationPointProviderFactory implements ClaimInformationPointProviderFactory<MyClaimInformationPointProvider> { @Override public String getName() { return "my-claims"; } @Override public void init(PolicyEnforcer policyEnforcer) { } @Override public MyClaimInformationPointProvider create(Map<String, Object> config) { return new MyClaimInformationPointProvider(config); } } Every CIP provider must be associated with a name, as defined above in the MyClaimInformationPointProviderFactory.getName method. The name will be used to map the configuration from the claim-information-point section in the policy-enforcer configuration to the implementation. 
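For illustration, here is a hedged sketch of how the custom provider above could then be referenced by that name from the policy-enforcer configuration; the path and the claim entry are hypothetical placeholders: keycloak.json "policy-enforcer": { "paths": [ { "path": "/protected/resource", "claim-information-point": { "my-claims": { "claim-a": "some-config-value" } } } ] } The entries under my-claims are handed to the provider factory as its configuration map.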
When processing requests, the policy enforcer will call the MyClaimInformationPointProviderFactory.create method in order to obtain an instance of MyClaimInformationPointProvider. When called, any configuration defined for this particular CIP provider (via claim-information-point) is passed as a map. Example of ClaimInformationPointProvider : public class MyClaimInformationPointProvider implements ClaimInformationPointProvider { private final Map<String, Object> config; public MyClaimInformationPointProvider(Map<String, Object> config) { this.config = config; } @Override public Map<String, List<String>> resolve(HttpFacade httpFacade) { Map<String, List<String>> claims = new HashMap<>(); // put whatever claim you want into the map return claims; } } 9.3. Obtaining the authorization context When policy enforcement is enabled, the permissions obtained from the server are available through org.keycloak.AuthorizationContext . This class provides several methods you can use to obtain permissions and ascertain whether a permission was granted for a particular resource or scope. Obtaining the Authorization Context in a Servlet Container HttpServletRequest request = ... // obtain javax.servlet.http.HttpServletRequest KeycloakSecurityContext keycloakSecurityContext = (KeycloakSecurityContext) request .getAttribute(KeycloakSecurityContext.class.getName()); AuthorizationContext authzContext = keycloakSecurityContext.getAuthorizationContext(); Note For more details about how you can obtain a KeycloakSecurityContext consult the adapter configuration. The example above should be sufficient to obtain the context when running an application using any of the servlet containers supported by Red Hat Single Sign-On. The authorization context helps give you more control over the decisions made and returned by the server. For example, you can use it to build a dynamic menu where items are hidden or shown depending on the permissions associated with a resource or scope. if (authzContext.hasResourcePermission("Project Resource")) { // user can access the Project Resource } if (authzContext.hasResourcePermission("Admin Resource")) { // user can access administration resources } if (authzContext.hasScopePermission("urn:project.com:project:create")) { // user can create new projects } The AuthorizationContext represents one of the main capabilities of Red Hat Single Sign-On Authorization Services. From the examples above, you can see that the protected resource is not directly associated with the policies that govern them. Consider some similar code using role-based access control (RBAC): if (User.hasRole('user')) { // user can access the Project Resource } if (User.hasRole('admin')) { // user can access administration resources } if (User.hasRole('project-manager')) { // user can create new projects } Although both examples address the same requirements, they do so in different ways. In RBAC, roles only implicitly define access for their resources. With Red Hat Single Sign-On you gain the capability to create more manageable code that focuses directly on your resources whether you are using RBAC, attribute-based access control (ABAC), or any other BAC variant. Either you have the permission for a given resource or scope, or you don't. Now, suppose your security requirements have changed and in addition to project managers, PMOs can also create new projects. Security requirements change, but with Red Hat Single Sign-On there is no need to change your application code to address the new requirements. 
Once your application is based on the resource and scope identifiers, you need only change the configuration of the permissions or policies associated with a particular resource in the authorization server. In this case, the permissions and policies associated with the Project Resource and/or the scope urn:project.com:project:create would be changed. 9.4. Using the AuthorizationContext to obtain an Authorization Client Instance The AuthorizationContext can also be used to obtain a reference to the Authorization Client API configured for your application: ClientAuthorizationContext clientContext = ClientAuthorizationContext.class.cast(authzContext); AuthzClient authzClient = clientContext.getClient(); In some cases, resource servers protected by the policy enforcer need to access the APIs provided by the authorization server. With an AuthzClient instance in hand, resource servers can interact with the server in order to create resources or check for specific permissions programmatically. 9.5. JavaScript integration The Red Hat Single Sign-On Server comes with a JavaScript library you can use to interact with a resource server protected by a policy enforcer. This library is based on the Red Hat Single Sign-On JavaScript adapter, which can be integrated to allow your client to obtain permissions from a Red Hat Single Sign-On Server. You can obtain this library from a running Red Hat Single Sign-On Server instance by including the following script tag in your web page: <script src="http://.../auth/js/keycloak-authz.js"></script> Once you do that, you can create a KeycloakAuthorization instance as follows: const keycloak = ... // obtain a Keycloak instance from keycloak.js library const authorization = new KeycloakAuthorization(keycloak); The keycloak-authz.js library provides two main features: Obtain permissions from the server using a permission ticket, if you are accessing a UMA-protected resource server. Obtain permissions from the server by sending the resources and scopes the application wants to access. In both cases, the library allows you to easily interact with both the resource server and Red Hat Single Sign-On Authorization Services to obtain tokens with permissions your client can use as bearer tokens to access the protected resources on a resource server. 9.5.1. Handling authorization responses from a UMA-protected resource server If a resource server is protected by a policy enforcer, it responds to client requests based on the permissions carried along with a bearer token. Typically, when you try to access a resource server with a bearer token that is lacking permissions to access a protected resource, the resource server responds with a 401 status code and a WWW-Authenticate header. HTTP/1.1 401 Unauthorized WWW-Authenticate: UMA realm="${realm}", as_uri="https://${host}:${port}/auth/realms/${realm}", ticket="016f84e8-f9b9-11e0-bd6f-0021cc6004de" See UMA Authorization Process for more information. 
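Before looking at the library call itself, the following is a rough, hypothetical sketch (not part of the keycloak-authz.js API) of how a browser client could pull the permission ticket out of a 401 response in the format shown above; note that, in a browser, the resource server must expose the WWW-Authenticate header through CORS for this to work:
// hypothetical helper: returns the permission ticket from a Fetch API Response, or null if none is present
function extractTicket(response) {
    const header = response.headers.get('WWW-Authenticate') || '';
    const match = header.match(/ticket="([^"]+)"/);
    return match ? match[1] : null;
}
The ticket obtained this way is what the authorization request shown next expects.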
What your client needs to do is extract the permission ticket from the WWW-Authenticate header returned by the resource server and use the library to send an authorization request as follows: // prepare a authorization request with the permission ticket const authorizationRequest = {}; authorizationRequest.ticket = ticket; // send the authorization request, if successful retry the request Identity.authorization.authorize(authorizationRequest).then(function (rpt) { // onGrant }, function () { // onDeny }, function () { // onError }); The authorize function is completely asynchronous and supports a few callback functions to receive notifications from the server: onGrant : The first argument of the function. If authorization was successful and the server returned an RPT with the requested permissions, the callback receives the RPT. onDeny : The second argument of the function. Only called if the server has denied the authorization request. onError : The third argument of the function. Only called if the server responds unexpectedly. Most applications should use the onGrant callback to retry a request after a 401 response. Subsequent requests should include the RPT as a bearer token for retries. 9.5.2. Obtaining entitlements The keycloak-authz.js library provides an entitlement function that you can use to obtain an RPT from the server by providing the resources and scopes your client wants to access. Example about how to obtain an RPT with permissions for all resources and scopes the user can access authorization.entitlement('my-resource-server-id').then(function (rpt) { // onGrant callback function. // If authorization was successful you'll receive an RPT // with the necessary permissions to access the resource server }); Example about how to obtain an RPT with permissions for specific resources and scopes authorization.entitlement('my-resource-server', { "permissions": [ { "id" : "Some Resource" } ] }).then(function (rpt) { // onGrant }); When using the entitlement function, you must provide the client_id of the resource server you want to access. The entitlement function is completely asynchronous and supports a few callback functions to receive notifications from the server: onGrant : The first argument of the function. If authorization was successful and the server returned an RPT with the requested permissions, the callback receives the RPT. onDeny : The second argument of the function. Only called if the server has denied the authorization request. onError : The third argument of the function. Only called if the server responds unexpectedly. 9.5.3. Authorization request Both authorize and entitlement functions accept an authorization request object. This object can be set with the following properties: permissions An array of objects representing the resource and scopes. For instance: const authorizationRequest = { "permissions": [ { "id" : "Some Resource", "scopes" : ["view", "edit"] } ] } metadata An object where its properties define how the authorization request should be processed by the server. response_include_resource_name A boolean value indicating to the server if resource names should be included in the RPT's permissions. If false, only the resource identifier is included. response_permissions_limit An integer N that defines a limit for the amount of permissions an RPT can have. 
When used together with the rpt parameter, only the last N requested permissions are kept in the RPT. submit_request A boolean value indicating whether the server should create permission requests to the resources and scopes referenced by a permission ticket. This parameter only takes effect when used together with the ticket parameter as part of a UMA authorization process. 9.5.4. Obtaining the RPT If you have already obtained an RPT using any of the authorization functions provided by the library, you can always obtain the RPT as follows from the authorization object (assuming that it has been initialized by one of the techniques shown earlier): const rpt = authorization.rpt; 9.6. Configuring TLS/HTTPS When the server is using HTTPS, ensure your adapter is configured as follows: keycloak.json { "truststore": "path_to_your_trust_store", "truststore-password": "trust_store_password" } The configuration above enables TLS/HTTPS for the Authorization Client, making it possible to access a Red Hat Single Sign-On Server remotely using the HTTPS scheme. Note It is strongly recommended that you enable TLS/HTTPS when accessing the Red Hat Single Sign-On Server endpoints.
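For reference, a minimal sketch of how such a truststore might be created with the JDK keytool, assuming you have exported the server certificate to a file named tls.crt (the alias and file names here are placeholders):
keytool -importcert -alias sso-server -file tls.crt -keystore path_to_your_trust_store -storepass trust_store_password
The resulting keystore file and password are the values referenced by the truststore and truststore-password properties above.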
[ "{ \"policy-enforcer\": {} }", "{ \"policy-enforcer\": {} }", "{ \"policy-enforcer\": { \"user-managed-access\" : {}, \"enforcement-mode\" : \"ENFORCING\", \"paths\": [ { \"path\" : \"/someUri/*\", \"methods\" : [ { \"method\": \"GET\", \"scopes\" : [\"urn:app.com:scopes:view\"] }, { \"method\": \"POST\", \"scopes\" : [\"urn:app.com:scopes:create\"] } ] }, { \"name\" : \"Some Resource\", \"path\" : \"/usingPattern/{id}\", \"methods\" : [ { \"method\": \"DELETE\", \"scopes\" : [\"urn:app.com:scopes:delete\"] } ] }, { \"path\" : \"/exactMatch\" }, { \"name\" : \"Admin Resources\", \"path\" : \"/usingWildCards/*\" } ] } }", "\"policy-enforcer\": { \"paths\": [ { \"path\": \"/protected/resource\", \"claim-information-point\": { \"claims\": { \"claim-from-request-parameter\": \"{request.parameter['a']}\", \"claim-from-header\": \"{request.header['b']}\", \"claim-from-cookie\": \"{request.cookie['c']}\", \"claim-from-remoteAddr\": \"{request.remoteAddr}\", \"claim-from-method\": \"{request.method}\", \"claim-from-uri\": \"{request.uri}\", \"claim-from-relativePath\": \"{request.relativePath}\", \"claim-from-secure\": \"{request.secure}\", \"claim-from-json-body-object\": \"{request.body['/a/b/c']}\", \"claim-from-json-body-array\": \"{request.body['/d/1']}\", \"claim-from-body\": \"{request.body}\", \"claim-from-static-value\": \"static value\", \"claim-from-multiple-static-value\": [\"static\", \"value\"], \"param-replace-multiple-placeholder\": \"Test {keycloak.access_token['/custom_claim/0']} and {request.parameter['a']} \" } } } ] }", "\"policy-enforcer\": { \"paths\": [ { \"path\": \"/protected/resource\", \"claim-information-point\": { \"http\": { \"claims\": { \"claim-a\": \"/a\", \"claim-d\": \"/d\", \"claim-d0\": \"/d/0\", \"claim-d-all\": [\"/d/0\", \"/d/1\"] }, \"url\": \"http://mycompany/claim-provider\", \"method\": \"POST\", \"headers\": { \"Content-Type\": \"application/x-www-form-urlencoded\", \"header-b\": [\"header-b-value1\", \"header-b-value2\"], \"Authorization\": \"Bearer {keycloak.access_token}\" }, \"parameters\": { \"param-a\": [\"param-a-value1\", \"param-a-value2\"], \"param-subject\": \"{keycloak.access_token['/sub']}\", \"param-user-name\": \"{keycloak.access_token['/preferred_username']}\", \"param-other-claims\": \"{keycloak.access_token['/custom_claim']}\" } } } } ] }", "\"policy-enforcer\": { \"paths\": [ { \"path\": \"/protected/resource\", \"claim-information-point\": { \"claims\": { \"claim-from-static-value\": \"static value\", \"claim-from-multiple-static-value\": [\"static\", \"value\"], } } } ] }", "public class MyClaimInformationPointProviderFactory implements ClaimInformationPointProviderFactory<MyClaimInformationPointProvider> { @Override public String getName() { return \"my-claims\"; } @Override public void init(PolicyEnforcer policyEnforcer) { } @Override public MyClaimInformationPointProvider create(Map<String, Object> config) { return new MyClaimInformationPointProvider(config); } }", "public class MyClaimInformationPointProvider implements ClaimInformationPointProvider { private final Map<String, Object> config; public MyClaimInformationPointProvider(Map<String, Object> config) { this.config = config; } @Override public Map<String, List<String>> resolve(HttpFacade httpFacade) { Map<String, List<String>> claims = new HashMap<>(); // put whatever claim you want into the map return claims; } }", "HttpServletRequest request = ... 
// obtain javax.servlet.http.HttpServletRequest KeycloakSecurityContext keycloakSecurityContext = (KeycloakSecurityContext) request .getAttribute(KeycloakSecurityContext.class.getName()); AuthorizationContext authzContext = keycloakSecurityContext.getAuthorizationContext();", "if (authzContext.hasResourcePermission(\"Project Resource\")) { // user can access the Project Resource } if (authzContext.hasResourcePermission(\"Admin Resource\")) { // user can access administration resources } if (authzContext.hasScopePermission(\"urn:project.com:project:create\")) { // user can create new projects }", "if (User.hasRole('user')) { // user can access the Project Resource } if (User.hasRole('admin')) { // user can access administration resources } if (User.hasRole('project-manager')) { // user can create new projects }", "ClientAuthorizationContext clientContext = ClientAuthorizationContext.class.cast(authzContext); AuthzClient authzClient = clientContext.getClient();", "<script src=\"http://.../auth/js/keycloak-authz.js\"></script>", "const keycloak = ... // obtain a Keycloak instance from keycloak.js library const authorization = new KeycloakAuthorization(keycloak);", "HTTP/1.1 401 Unauthorized WWW-Authenticate: UMA realm=\"USD{realm}\", as_uri=\"https://USD{host}:USD{port}/auth/realms/USD{realm}\", ticket=\"016f84e8-f9b9-11e0-bd6f-0021cc6004de\"", "// prepare a authorization request with the permission ticket const authorizationRequest = {}; authorizationRequest.ticket = ticket; // send the authorization request, if successful retry the request Identity.authorization.authorize(authorizationRequest).then(function (rpt) { // onGrant }, function () { // onDeny }, function () { // onError });", "authorization.entitlement('my-resource-server-id').then(function (rpt) { // onGrant callback function. // If authorization was successful you'll receive an RPT // with the necessary permissions to access the resource server });", "authorization.entitlement('my-resource-server', { \"permissions\": [ { \"id\" : \"Some Resource\" } ] }).then(function (rpt) { // onGrant });", "const authorizationRequest = { \"permissions\": [ { \"id\" : \"Some Resource\", \"scopes\" : [\"view\", \"edit\"] } ] }", "const rpt = authorization.rpt;", "{ \"truststore\": \"path_to_your_trust_store\", \"truststore-password\": \"trust_store_password\" }" ]
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/authorization_services_guide/enforcer_overview
Chapter 2. Deploy OpenShift Data Foundation using local storage devices
Chapter 2. Deploy OpenShift Data Foundation using local storage devices Use this section to deploy OpenShift Data Foundation on IBM Power infrastructure where OpenShift Container Platform is already installed. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway . Perform the following steps to deploy OpenShift Data Foundation: Install the Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Find available storage devices . Create an OpenShift Data Foundation cluster on IBM Power . 2.1. Installing Local Storage Operator Use this procedure to install the Local Storage Operator from the Operator Hub before creating OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword... box to find the Local Storage Operator from the list of operators and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Approval Strategy as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 2.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. For information about the hardware and software requirements, see Planning your deployment . Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.16 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . 
Click Install . Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Storage and verify if Data Foundation is available. 2.3. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.4. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.5. Finding available storage devices Use this procedure to identify the device names for each of the three or more worker nodes that you have labeled with the OpenShift Data Foundation label cluster.ocs.openshift.io/openshift-storage='' before creating PVs for IBM Power. Procedure List and verify the name of the worker nodes with the OpenShift Data Foundation label. Example output: Log in to each worker node that is used for OpenShift Data Foundation resources and find the name of the additional disk that you have attached while deploying Openshift Container Platform. 
Example output: In this example, for worker-0, the available local devices of 500G are sda , sdc , sde , sdg , sdi , sdk , sdm , sdo . Repeat the above step for all the other worker nodes that have the storage devices to be used by OpenShift Data Foundation. See this Knowledge Base article for more details. 2.6. Creating OpenShift Data Foundation cluster on IBM Power Use this procedure to create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. You must have a minimum of three worker nodes with the same storage type and size attached to each node (for example, 200 GB SSD) to use local storage devices on IBM Power. Verify your OpenShift Container Platform worker nodes are labeled for OpenShift Data Foundation: To identify storage devices on each node, refer to Finding available storage devices . Procedure Log into the OpenShift Web Console. In openshift-local-storage namespace Click Operators Installed Operators to view the installed operators. Click the Local Storage installed operator. On the Operator Details page, click the Local Volume link. Click Create Local Volume . Click on YAML view for configuring Local Volume. Define a LocalVolume custom resource for block PVs using the following YAML. The above definition selects sda local device from the worker-0 , worker-1 and worker-2 nodes. The localblock storage class is created and persistent volumes are provisioned from sda . Important Specify appropriate values of nodeSelector as per your environment. The device name should be same on all the worker nodes. You can also specify more than one devicePaths. Click Create . Confirm whether diskmaker-manager pods and Persistent Volumes are created. For Pods Click Workloads Pods from the left pane of the OpenShift Web Console. Select openshift-local-storage from the Project drop-down list. Check if there are diskmaker-manager pods for each of the worker node that you used while creating LocalVolume CR. For Persistent Volumes Click Storage PersistentVolumes from the left pane of the OpenShift Web Console. Check the Persistent Volumes with the name local-pv-* . Number of Persistent Volumes will be equivalent to the product of number of worker nodes and number of storage devices provisioned while creating localVolume CR. Important The flexible scaling feature is enabled only when the storage cluster that you created with three or more nodes are spread across fewer than the minimum requirement of three availability zones. For information about flexible scaling, see knowledgebase article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled . Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, perform the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the required Storage Class that you used while installing LocalVolume. By default, it is set to none . Optional: Select Use Ceph RBD as the default StorageClass . This avoids having to manually annotate a StorageClass. 
Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. 
Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Select Default (OVN) network as Multus is not yet supported on OpenShift Data Foundation on IBM Power. Click . To enable in-transit encryption, select In-transit encryption . Select a Network . Click . In the Data Protection page, if you are configuring Regional-DR solution for Openshift Data Foundation then select the Prepare cluster for disaster recovery(Regional-DR only) checkbox, else click . In the Review and create page:: Review the configurations details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. To verify if flexible scaling is enabled on your storage cluster, perform the following steps: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources ocs-storagecluster . In the YAML tab, search for the keys flexibleScaling in spec section and failureDomain in status section. If flexible scaling is true and failureDomain is set to host, flexible scaling feature is enabled. To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide.
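As a supplement to the verification steps above, the following is a minimal sketch of performing the flexible scaling check from the command line instead of the YAML tab, assuming the default storage cluster name ocs-storagecluster in the openshift-storage namespace:
oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.spec.flexibleScaling}{"\n"}{.status.failureDomain}{"\n"}'
If the output shows true and host, the flexible scaling feature is enabled.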
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault token create -policy=odf -format json", "oc -n openshift-storage create serviceaccount <serviceaccount_name>", "oc -n openshift-storage create serviceaccount odf-vault-auth", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF", "SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)", "OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")", "oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid", "vault auth enable kubernetes", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h", "vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h", "oc get nodes -l cluster.ocs.openshift.io/openshift-storage=", "NAME STATUS ROLES AGE VERSION worker-0 Ready worker 2d11h v1.23.3+e419edf worker-1 Ready worker 2d11h v1.23.3+e419edf worker-2 Ready worker 2d11h v1.23.3+e419edf", "oc debug node/<node name>", "oc debug node/worker-0 Starting pod/worker-0-debug To use host binaries, run `chroot /host` Pod IP: 192.168.0.63 If you don't see a command prompt, try pressing enter. 
sh-4.4# sh-4.4# chroot /host sh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop1 7:1 0 500G 0 loop sda 8:0 0 500G 0 disk sdb 8:16 0 120G 0 disk |-sdb1 8:17 0 4M 0 part |-sdb3 8:19 0 384M 0 part `-sdb4 8:20 0 119.6G 0 part sdc 8:32 0 500G 0 disk sdd 8:48 0 120G 0 disk |-sdd1 8:49 0 4M 0 part |-sdd3 8:51 0 384M 0 part `-sdd4 8:52 0 119.6G 0 part sde 8:64 0 500G 0 disk sdf 8:80 0 120G 0 disk |-sdf1 8:81 0 4M 0 part |-sdf3 8:83 0 384M 0 part `-sdf4 8:84 0 119.6G 0 part sdg 8:96 0 500G 0 disk sdh 8:112 0 120G 0 disk |-sdh1 8:113 0 4M 0 part |-sdh3 8:115 0 384M 0 part `-sdh4 8:116 0 119.6G 0 part sdi 8:128 0 500G 0 disk sdj 8:144 0 120G 0 disk |-sdj1 8:145 0 4M 0 part |-sdj3 8:147 0 384M 0 part `-sdj4 8:148 0 119.6G 0 part sdk 8:160 0 500G 0 disk sdl 8:176 0 120G 0 disk |-sdl1 8:177 0 4M 0 part |-sdl3 8:179 0 384M 0 part `-sdl4 8:180 0 119.6G 0 part /sysroot sdm 8:192 0 500G 0 disk sdn 8:208 0 120G 0 disk |-sdn1 8:209 0 4M 0 part |-sdn3 8:211 0 384M 0 part /boot `-sdn4 8:212 0 119.6G 0 part sdo 8:224 0 500G 0 disk sdp 8:240 0 120G 0 disk |-sdp1 8:241 0 4M 0 part |-sdp3 8:243 0 384M 0 part `-sdp4 8:244 0 119.6G 0 part", "get nodes -l cluster.ocs.openshift.io/openshift-storage -o jsonpath='{range .items[*]}{.metadata.name}{\"\\n\"}'", "apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: localblock namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 - worker-2 storageClassDevices: - devicePaths: - /dev/sda storageClassName: localblock volumeMode: Block", "spec: flexibleScaling: true [...] status: failureDomain: host" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_ibm_power/deploy-using-local-storage-devices-ibm-power
Preface
Preface Providing feedback on Red Hat build of Apache Camel documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create ticket Enter a brief description of the issue in the Summary. Provide a detailed description of the issue or enhancement in the Description. Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/getting_started_with_red_hat_build_of_apache_camel_for_quarkus/pr01
35.3. Adding a Log File
35.3. Adding a Log File To add a log file to the list, select Edit => Preferences , and click the Add button in the Log Files tab. Figure 35.3. Adding a Log File Provide a name, description, and the location of the log file to add. After clicking OK , the file is immediately added to the viewing area, if the file exists.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/log_files-adding_a_log_file
4.2. Mounting a File System
4.2. Mounting a File System Before you can mount a GFS file system, the file system must exist (refer to Section 4.1, "Making a File System" ), the volume where the file system exists must be activated, and the supporting clustering and locking systems must be started (refer to Chapter 3, Getting Started and Configuring and Managing a Red Hat Cluster . After those requirements have been met, you can mount the GFS file system as you would any Linux file system. To manipulate file ACLs, you must mount the file system with the -o acl mount option. If a file system is mounted without the -o acl mount option, users are allowed to view ACLs (with getfacl ), but are not allowed to set them (with setfacl ). Usage Mounting Without ACL Manipulation Mounting With ACL Manipulation -o acl GFS-specific option to allow manipulating file ACLs. BlockDevice Specifies the block device where the GFS file system resides. MountPoint Specifies the directory where the GFS file system should be mounted. Example In this example, the GFS file system on /dev/vg01/lvol0 is mounted on the /gfs1 directory. Complete Usage The -o option argument consists of GFS-specific options (refer to Table 4.2, "GFS-Specific Mount Options" ) or acceptable standard Linux mount -o options, or a combination of both. Multiple option parameters are separated by a comma and no spaces. Note The mount command is a Linux system command. In addition to using GFS-specific options described in this section, you can use other, standard, mount command options (for example, -r ). For information about other Linux mount command options, see the Linux mount man page. Table 4.2, "GFS-Specific Mount Options" describes the available GFS-specific -o option values that can be passed to GFS at mount time. Table 4.2. GFS-Specific Mount Options Option Description acl Allows manipulating file ACLs. If a file system is mounted without the acl mount option, users are allowed to view ACLs (with getfacl ), but are not allowed to set them (with setfacl ). hostdata=HostIDInfo This field provides host (the computer on which the file system is being mounted) identity information to the lock module. The format and behavior of HostIDInfo depends on the lock module used. For lock_gulm , it overrides the uname -n network node name used as the default value by lock_gulm . This field is ignored by the lock_dlm and lock_nolock lock modules. ignore_local_fs Caution: This option should not be used when GFS file systems are shared. Forces GFS to treat the file system as a multihost file system. By default, using lock_nolock automatically turns on the localcaching and localflocks flags. localcaching Caution: This option should not be used when GFS file systems are shared. Tells GFS that it is running as a local file system. GFS can then turn on selected optimization capabilities that are not available when running in cluster mode. The localcaching flag is automatically turned on by lock_nolock . localflocks Caution: This option should not be used when GFS file systems are shared. Tells GFS to let the VFS (virtual file system) layer do all flock and fcntl. The localflocks flag is automatically turned on by lock_nolock . lockproto= LockModuleName Allows the user to specify which locking protocol to use with the file system. If LockModuleName is not specified, the locking protocol name is read from the file-system superblock. locktable= LockTableName Allows the user to specify which locking table to use with the file system. 
oopses_ok This option allows a GFS node to not panic when an oops occurs. (By default, a GFS node panics when an oops occurs, causing the file system used by that node to stall for other GFS nodes.) A GFS node not panicking when an oops occurs minimizes the failure on other GFS nodes using the file system that the failed node is using. There may be circumstances where you do not want to use this option - for example, when you need more detailed troubleshooting information. Use this option with care. Note: This option is turned on automatically if lock_nolock locking is specified; however, you can override it by using the ignore_local_fs option. upgrade Upgrade the on-disk format of the file system so that it can be used by newer versions of GFS.
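As an illustration of combining several of these options in a single mount command, the following hedged example assumes a cluster named alpha, a file system name of gfs1, and the block device and mount point used earlier in this chapter; note that the option parameters are separated by commas with no spaces:
mount -t gfs -o acl,lockproto=lock_dlm,locktable=alpha:gfs1 /dev/vg01/lvol0 /gfs1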
[ "mount -t gfs BlockDevice MountPoint", "mount -t gfs -o acl BlockDevice MountPoint", "mount -t gfs /dev/vg01/lvol0 /gfs1", "mount -t gfs BlockDevice MountPoint -o option" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_file_system/s1-manage-mountfs
SystemTap Beginners Guide
SystemTap Beginners Guide Red Hat Enterprise Linux 6 Introduction to SystemTap Red Hat, Inc. Robert Kratky Red Hat Customer Content Services [email protected] Mirek Jahoda Red Hat Customer Content Services [email protected] Don Domingo Engineering Services and Operations Content Services William Cohen Engineering Services and Operations Performance Tools Edited by Jacquelynn East Red Hat Engineering Content Services
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_beginners_guide/index
Chapter 1. Configuring your application to use Eclipse Vert.x
Chapter 1. Configuring your application to use Eclipse Vert.x When you start configuring your applications to use Eclipse Vert.x, you must reference the Eclipse Vert.x BOM (Bill of Materials) artifact in the pom.xml file at the root directory of your application. The BOM is used to set the correct versions of the artifacts. Prerequisites A Maven-based application Procedure Open the pom.xml file, add the io.vertx:vertx-dependencies artifact to the <dependencyManagement> section. Specify the type as pom and scope as import . <project> ... <dependencyManagement> <dependencies> <dependency> <groupId>io.vertx</groupId> <artifactId>vertx-dependencies</artifactId> <version>USD{vertx.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> ... </project> Include the following properties to track the version of Eclipse Vert.x and the Eclipse Vert.x Maven Plugin you are using. Properties can be used to set values that change in every release. For example, versions of product or plugins. <project> ... <properties> <vertx.version>USD{vertx.version}</vertx.version> <vertx-maven-plugin.version>USD{vertx-maven-plugin.version}</vertx-maven-plugin.version> </properties> ... </project> Specify vertx-maven-plugin as the plugin used to package your application: <project> ... <build> <plugins> ... <plugin> <groupId>io.reactiverse</groupId> <artifactId>vertx-maven-plugin</artifactId> <version>USD{vertx-maven-plugin.version}</version> <executions> <execution> <id>vmp</id> <goals> <goal>initialize</goal> <goal>package</goal> </goals> </execution> </executions> <configuration> <redeploy>true</redeploy> </configuration> </plugin> ... </plugins> </build> ... </project> Include repositories and pluginRepositories to specify the repositories that contain the artifacts and plugins to build your application: <project> ... <repositories> <repository> <id>redhat-ga</id> <name>Red Hat GA Repository</name> <url>https://maven.repository.redhat.com/ga/</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga</id> <name>Red Hat GA Repository</name> <url>https://maven.repository.redhat.com/ga/</url> </pluginRepository> </pluginRepositories> ... </project> Additional resources For more information about packaging your Eclipse Vert.x application, see the Vert.x Maven Plugin documentation.
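With the pom.xml configured as above, a minimal sketch of building and running the packaged application follows; the jar name my-app-1.0.0.jar is a placeholder for whatever artifact your project produces in the target directory:
mvn clean package
java -jar target/my-app-1.0.0.jar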
[ "<project> <dependencyManagement> <dependencies> <dependency> <groupId>io.vertx</groupId> <artifactId>vertx-dependencies</artifactId> <version>USD{vertx.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> </project>", "<project> <properties> <vertx.version>USD{vertx.version}</vertx.version> <vertx-maven-plugin.version>USD{vertx-maven-plugin.version}</vertx-maven-plugin.version> </properties> </project>", "<project> <build> <plugins> <plugin> <groupId>io.reactiverse</groupId> <artifactId>vertx-maven-plugin</artifactId> <version>USD{vertx-maven-plugin.version}</version> <executions> <execution> <id>vmp</id> <goals> <goal>initialize</goal> <goal>package</goal> </goals> </execution> </executions> <configuration> <redeploy>true</redeploy> </configuration> </plugin> </plugins> </build> </project>", "<project> <repositories> <repository> <id>redhat-ga</id> <name>Red Hat GA Repository</name> <url>https://maven.repository.redhat.com/ga/</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga</id> <name>Red Hat GA Repository</name> <url>https://maven.repository.redhat.com/ga/</url> </pluginRepository> </pluginRepositories> </project>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_eclipse_vert.x/4.3/html/eclipse_vert.x_4.3_migration_guide/configuring-your-application-to-use-vertx_vertx
Configuring your Red Hat build of Quarkus applications by using a properties file
Configuring your Red Hat build of Quarkus applications by using a properties file Red Hat build of Quarkus 3.15 Red Hat Customer Content Services
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/configuring_your_red_hat_build_of_quarkus_applications_by_using_a_properties_file/index
Chapter 2. Requirements
Chapter 2. Requirements 2.1. Red Hat Virtualization Manager Requirements 2.1.1. Hardware Requirements The minimum and recommended hardware requirements outlined here are based on a typical small to medium-sized installation. The exact requirements vary between deployments based on sizing and load. Hardware certification for Red Hat Virtualization is covered by the hardware certification for Red Hat Enterprise Linux. For more information, see https://access.redhat.com/solutions/725243 . To confirm whether specific hardware items are certified for use with Red Hat Enterprise Linux, see https://access.redhat.com/ecosystem/#certifiedHardware . Table 2.1. Red Hat Virtualization Manager Hardware Requirements Resource Minimum Recommended CPU A dual core CPU. A quad core CPU or multiple dual core CPUs. Memory 4 GB of available system RAM if Data Warehouse is not installed and if memory is not being consumed by existing processes. 16 GB of system RAM. Hard Disk 25 GB of locally accessible, writable disk space. 50 GB of locally accessible, writable disk space. You can use the RHV Manager History Database Size Calculator to calculate the appropriate disk space for the Manager history database size. Network Interface 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps. 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps. 2.1.2. Browser Requirements The following browser versions and operating systems can be used to access the Administration Portal and the VM Portal. Browser support is divided into tiers: Tier 1: Browser and operating system combinations that are fully tested and fully supported. Red Hat Engineering is committed to fixing issues with browsers on this tier. Tier 2: Browser and operating system combinations that are partially tested, and are likely to work. Limited support is provided for this tier. Red Hat Engineering will attempt to fix issues with browsers on this tier. Tier 3: Browser and operating system combinations that are not tested, but may work. Minimal support is provided for this tier. Red Hat Engineering will attempt to fix only minor issues with browsers on this tier. Table 2.2. Browser Requirements Support Tier Operating System Family Browser Tier 1 Red Hat Enterprise Linux Mozilla Firefox Extended Support Release (ESR) version Any Most recent version of Google Chrome, Mozilla Firefox, or Microsoft Edge Tier 2 Tier 3 Any Earlier versions of Google Chrome or Mozilla Firefox Any Other browsers 2.1.3. Client Requirements Virtual machine consoles can only be accessed using supported Remote Viewer ( virt-viewer ) clients on Red Hat Enterprise Linux and Windows. To install virt-viewer , see Installing Supporting Components on Client Machines in the Virtual Machine Management Guide . Installing virt-viewer requires Administrator privileges. Virtual machine consoles are accessed through the SPICE, VNC, or RDP (Windows only) protocols. The QXL graphical driver can be installed in the guest operating system for improved/enhanced SPICE functionalities. SPICE currently supports a maximum resolution of 2560x1600 pixels. Supported QXL drivers are available on Red Hat Enterprise Linux, Windows XP, and Windows 7. SPICE support is divided into tiers: Tier 1: Operating systems on which Remote Viewer has been fully tested and is supported. Tier 2: Operating systems on which Remote Viewer is partially tested and is likely to work. Limited support is provided for this tier. Red Hat Engineering will attempt to fix issues with remote-viewer on this tier. Table 2.3. 
Client Operating System SPICE Support Support Tier Operating System Tier 1 Red Hat Enterprise Linux 7.2 and later Microsoft Windows 7 Tier 2 Microsoft Windows 8 Microsoft Windows 10 2.1.4. Operating System Requirements The Red Hat Virtualization Manager must be installed on a base installation of Red Hat Enterprise Linux 7 that has been updated to the latest minor release. Do not install any additional packages after the base installation, as they may cause dependency issues when attempting to install the packages required by the Manager. Do not enable additional repositories other than those required for the Manager installation. 2.2. Host Requirements Hardware certification for Red Hat Virtualization is covered by the hardware certification for Red Hat Enterprise Linux. For more information, see https://access.redhat.com/solutions/725243 . To confirm whether specific hardware items are certified for use with Red Hat Enterprise Linux, see https://access.redhat.com/ecosystem/#certifiedHardware . For more information on the requirements and limitations that apply to guests see https://access.redhat.com/articles/rhel-limits and https://access.redhat.com/articles/906543 . 2.2.1. CPU Requirements All CPUs must have support for the Intel 64 or AMD64 CPU extensions, and the AMD-V or Intel VT hardware virtualization extensions enabled. Support for the No eXecute flag (NX) is also required. The following CPU models are supported: AMD Opteron G4 Opteron G5 EPYC Intel Nehalem Westmere Sandybridge Haswell Haswell-noTSX Broadwell Broadwell-noTSX Skylake (client) Skylake (server) IBM POWER8 2.2.1.1. Checking if a Processor Supports the Required Flags You must enable virtualization in the BIOS. Power off and reboot the host after this change to ensure that the change is applied. At the Red Hat Enterprise Linux or Red Hat Virtualization Host boot screen, press any key and select the Boot or Boot with serial console entry from the list. Press Tab to edit the kernel parameters for the selected option. Ensure there is a space after the last kernel parameter listed, and append the parameter rescue . Press Enter to boot into rescue mode. At the prompt, determine that your processor has the required extensions and that they are enabled by running this command: If any output is shown, the processor is hardware virtualization capable. If no output is shown, your processor may still support hardware virtualization; in some circumstances manufacturers disable the virtualization extensions in the BIOS. If you believe this to be the case, consult the system's BIOS and the motherboard manual provided by the manufacturer. 2.2.2. Memory Requirements The minimum required RAM is 2 GB. The maximum supported RAM per VM in Red Hat Virtualization Host is 4 TB. However, the amount of RAM required varies depending on guest operating system requirements, guest application requirements, and guest memory activity and usage. KVM can also overcommit physical RAM for virtualized guests, allowing you to provision guests with RAM requirements greater than what is physically present, on the assumption that the guests are not all working concurrently at peak load. KVM does this by only allocating RAM for guests as required and shifting underutilized guests into swap. 2.2.3. Storage Requirements Hosts require storage to store configuration, logs, kernel dumps, and for use as swap space. Storage can be local or network-based.
Red Hat Virtualization Host (RHVH) can boot with one, some, or all of its default allocations in network storage. Booting from network storage can result in a freeze if there is a network disconnect. Adding a drop-in multipath configuration file can help address losses in network connectivity. If RHVH boots from SAN storage and loses connectivity, the files become read-only until network connectivity is restored. Using network storage might result in a performance downgrade. The minimum storage requirements of RHVH are documented in this section. The storage requirements for Red Hat Enterprise Linux hosts vary based on the amount of disk space used by their existing configuration but are expected to be greater than those of RHVH. The minimum storage requirements for host installation are listed below. However, Red Hat recommends using the default allocations, which use more storage space. / (root) - 6 GB /home - 1 GB /tmp - 1 GB /boot - 1 GB /var - 15 GB /var/crash - 10 GB /var/log - 8 GB /var/log/audit - 2 GB swap - 1 GB (for the recommended swap size, see https://access.redhat.com/solutions/15244 ) Anaconda reserves 20% of the thin pool size within the volume group for future metadata expansion. This is to prevent an out-of-the-box configuration from running out of space under normal usage conditions. Overprovisioning of thin pools during installation is also not supported. Minimum Total - 55 GB If you are also installing the RHV-M Appliance for self-hosted engine installation, /var/tmp must be at least 5 GB. If you plan to use memory overcommitment, add enough swap space to provide virtual memory for all of the virtual machines. See Memory Optimization . 2.2.4. PCI Device Requirements Hosts must have at least one network interface with a minimum bandwidth of 1 Gbps. Red Hat recommends that each host have two network interfaces, with one dedicated to supporting network-intensive activities, such as virtual machine migration. The performance of such operations is limited by the bandwidth available. For information about how to use PCI Express and conventional PCI devices with Intel Q35-based virtual machines, see Using PCI Express and Conventional PCI Devices with the Q35 Virtual Machine . 2.2.5. Device Assignment Requirements If you plan to implement device assignment and PCI passthrough so that a virtual machine can use a specific PCIe device from a host, ensure the following requirements are met: CPU must support IOMMU (for example, VT-d or AMD-Vi). IBM POWER8 supports IOMMU by default. Firmware must support IOMMU. CPU root ports used must support ACS or ACS-equivalent capability. PCIe devices must support ACS or ACS-equivalent capability. Red Hat recommends that all PCIe switches and bridges between the PCIe device and the root port support ACS. For example, if a switch does not support ACS, all devices behind that switch share the same IOMMU group, and can only be assigned to the same virtual machine. For GPU support, Red Hat Enterprise Linux 7 supports PCI device assignment of PCIe-based NVIDIA K-Series Quadro (model 2000 series or higher), GRID, and Tesla as non-VGA graphics devices. Currently up to two GPUs may be attached to a virtual machine in addition to one of the standard, emulated VGA interfaces. The emulated VGA is used for pre-boot and installation and the NVIDIA GPU takes over when the NVIDIA graphics drivers are loaded. Note that the NVIDIA Quadro 2000 is not supported, nor is the Quadro K420 card.
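As a quick sanity check of the CPU and device assignment prerequisites described above, the virtualization flags and IOMMU activation can also be inspected from a running host. This is only a sketch: the grep for svm/vmx and nx mirrors the rescue-mode check in Section 2.2.1.1, while the dmesg check is a common convention whose exact output depends on the hardware vendor and kernel version.
# Verify hardware virtualization (vmx for Intel VT, svm for AMD-V) and the NX flag
grep -E 'svm|vmx' /proc/cpuinfo | grep nx
# Verify that the kernel activated an IOMMU (VT-d typically reports DMAR, AMD reports AMD-Vi)
dmesg | grep -i -e DMAR -e 'AMD-Vi' -e IOMMU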
Check vendor specification and datasheets to confirm that your hardware meets these requirements. The lspci -v command can be used to print information for PCI devices already installed on a system. 2.2.6. vGPU Requirements A host must meet the following requirements in order for virtual machines on that host to use a vGPU: vGPU-compatible GPU GPU-enabled host kernel Installed GPU with correct drivers Predefined mdev_type set to correspond with one of the mdev types supported by the device vGPU-capable drivers installed on each host in the cluster vGPU-supported virtual machine operating system with vGPU drivers installed 2.3. Networking Requirements 2.3.1. General Requirements Red Hat Virtualization requires IPv6 to remain enabled on the computer or virtual machine where you are running the Manager (also called "the Manager machine"). Do not disable IPv6 on the Manager machine, even if your systems do not use it. 2.3.2. Firewall Requirements for DNS, NTP, IPMI Fencing, and Metrics Store The firewall requirements for all of the following topics are special cases that require individual consideration. DNS and NTP Red Hat Virtualization does not create a DNS or NTP server, so the firewall does not need to have open ports for incoming traffic. By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, define exceptions for requests that are sent to DNS and NTP servers. Important The Red Hat Virtualization Manager and all hosts (Red Hat Virtualization Host and Red Hat Enterprise Linux host) must have a fully qualified domain name and full, perfectly-aligned forward and reverse name resolution. Running a DNS service as a virtual machine in the Red Hat Virtualization environment is not supported. All DNS services the Red Hat Virtualization environment uses must be hosted outside of the environment. Red Hat strongly recommends using DNS instead of the /etc/hosts file for name resolution. Using a hosts file typically requires more work and has a greater chance for errors. IPMI and Other Fencing Mechanisms (optional) For IPMI (Intelligent Platform Management Interface) and other fencing mechanisms, the firewall does not need to have open ports for incoming traffic. By default, Red Hat Enterprise Linux allows outbound IPMI traffic to ports on any destination address. If you disable outgoing traffic, make exceptions for requests being sent to your IPMI or fencing servers. Each Red Hat Virtualization Host and Red Hat Enterprise Linux host in the cluster must be able to connect to the fencing devices of all other hosts in the cluster. If the cluster hosts are experiencing an error (network error, storage error... ) and cannot function as hosts, they must be able to connect to other hosts in the data center. The specific port number depends on the type of the fence agent you are using and how it is configured. The firewall requirement tables in the following sections do not represent this option. Metrics Store, Kibana, and ElasticSearch For Metrics Store, Kibana, and ElasticSearch, see Network Configuration for Metrics Store virtual machines . 2.3.3. Red Hat Virtualization Manager Firewall Requirements The Red Hat Virtualization Manager requires that a number of ports be opened to allow network traffic through the system's firewall. The engine-setup script can configure the firewall automatically, but this overwrites any pre-existing firewall configuration if you are using iptables . 
If you want to keep the existing firewall configuration, you must manually insert the firewall rules required by the Manager. The engine-setup command saves a list of the iptables rules required in the /etc/ovirt-engine/iptables.example file. If you are using firewalld , engine-setup does not overwrite the existing configuration. The firewall configuration documented here assumes a default configuration. Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.4. Red Hat Virtualization Manager Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default M1 - ICMP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager Optional. May help in diagnosis. No M2 22 TCP System(s) used for maintenance of the Manager including backend configuration, and software upgrades. Red Hat Virtualization Manager Secure Shell (SSH) access. Optional. Yes M3 2222 TCP Clients accessing virtual machine serial consoles. Red Hat Virtualization Manager Secure Shell (SSH) access to enable connection to virtual machine serial consoles. Yes M4 80, 443 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts REST API clients Red Hat Virtualization Manager Provides HTTP (port 80, not encrypted) and HTTPS (port 443, encrypted) access to the Manager. HTTP redirects connections to HTTPS. Yes M5 6100 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Manager Provides websocket proxy access for a web-based console client, noVNC , when the websocket proxy is running on the Manager. If the websocket proxy is running on a different host, however, this port is not used. No M6 7410 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager If Kdump is enabled on the hosts, open this port for the fence_kdump listener on the Manager. See fence_kdump Advanced Configuration . fence_kdump doesn't provide a way to encrypt the connection. However, you can manually configure this port to block access from hosts that are not eligible. No M7 54323 TCP Administration Portal clients Red Hat Virtualization Manager (ImageIO Proxy server) Required for communication with the ImageIO Proxy ( ovirt-imageio-proxy ). Yes M8 6442 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Open Virtual Network (OVN) southbound database Connect to Open Virtual Network (OVN) database Yes M9 9696 TCP Clients of external network provider for OVN External network provider for OVN OpenStack Networking API Yes, with configuration generated by engine-setup. M10 35357 TCP Clients of external network provider for OVN External network provider for OVN OpenStack Identity API Yes, with configuration generated by engine-setup. M11 53 TCP, UDP Red Hat Virtualization Manager DNS Server DNS lookup requests from ports above 1023 to port 53, and responses. Open by default. No M12 123 UDP Red Hat Virtualization Manager NTP Server NTP requests from ports above 1023 to port 123, and responses. Open by default. No Note A port for the OVN northbound database (6641) is not listed because, in the default configuration, the only client for the OVN northbound database (6641) is ovirt-provider-ovn . Because they both run on the same host, their communication is not visible to the network. 
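If you manage the firewall yourself instead of letting engine-setup configure it, commands along the following lines can open the Manager-side ports listed in Table 2.4. This is a sketch rather than the generated configuration: the port numbers come from the table above, while the firewalld zone and the choice to make the rules permanent are assumptions to adapt to your environment.
# Open the core Manager ports (HTTP/HTTPS, serial console proxy, websocket proxy, ImageIO Proxy)
firewall-cmd --permanent --zone=public --add-port=80/tcp --add-port=443/tcp
firewall-cmd --permanent --zone=public --add-port=2222/tcp --add-port=6100/tcp --add-port=54323/tcp
# Reload to apply the permanent rules
firewall-cmd --reload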
By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the Manager to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly. 2.3.4. Host Firewall Requirements Red Hat Enterprise Linux hosts and Red Hat Virtualization Hosts (RHVH) require a number of ports to be opened to allow network traffic through the system's firewall. The firewall rules are automatically configured by default when adding a new host to the Manager, overwriting any pre-existing firewall configuration. To disable automatic firewall configuration when adding a new host, clear the Automatically configure host firewall check box under Advanced Parameters . To customize the host firewall rules, see https://access.redhat.com/solutions/2772331 . Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.5. Virtualization Host Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default H1 22 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Secure Shell (SSH) access. Optional. Yes H2 2223 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Secure Shell (SSH) access to enable connection to virtual machine serial consoles. Yes H3 161 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager Simple network management protocol (SNMP). Only required if you want Simple Network Management Protocol traps sent from the host to one or more external SNMP managers. Optional. No H4 111 TCP NFS storage server Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts NFS connections. Optional. No H5 5900 - 6923 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Remote guest console access via VNC and SPICE. These ports must be open to facilitate client access to virtual machines. Yes (optional) H6 5989 TCP, UDP Common Information Model Object Manager (CIMOM) Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Used by Common Information Model Object Managers (CIMOM) to monitor virtual machines running on the host. Only required if you want to use a CIMOM to monitor the virtual machines in your virtualization environment. Optional. No H7 9090 TCP Red Hat Virtualization Manager Client machines Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required to access the Cockpit web interface, if installed. Yes H8 16514 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Virtual machine migration using libvirt . Yes H9 49152 - 49215 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Virtual machine migration and fencing using VDSM. These ports must be open to facilitate both automated and manual migration of virtual machines. Yes. Depending on agent for fencing, migration is done through libvirt. H10 54321 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts VDSM communications with the Manager and other virtualization hosts. 
Yes H11 54322 TCP Red Hat Virtualization Manager (ImageIO Proxy server) Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required for communication with the ImageIO daemon ( ovirt-imageio-daemon ). Yes H12 6081 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required, when Open Virtual Network (OVN) is used as a network provider, to allow OVN to create tunnels between hosts. No H13 53 TCP, UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts DNS Server DNS lookup requests from ports above 1023 to port 53, and responses. This port is required and open by default. No Note By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly. 2.3.5. Database Server Firewall Requirements Red Hat Virtualization supports the use of a remote database server for the Manager database ( engine ) and the Data Warehouse database ( ovirt-engine-history ). If you plan to use a remote database server, it must allow connections from the Manager and the Data Warehouse service (which can be separate from the Manager). Similarly, if you plan to access a local or remote Data Warehouse database from an external system, such as Red Hat CloudForms, the database must allow connections from that system. Important Accessing the Manager database from external systems is not supported. Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.6. Database Server Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default D1 5432 TCP, UDP Red Hat Virtualization Manager Data Warehouse service Manager ( engine ) database server Data Warehouse ( ovirt-engine-history ) database server Default port for PostgreSQL database connections. No, but can be enabled . D2 5432 TCP, UDP External systems Data Warehouse ( ovirt-engine-history ) database server Default port for PostgreSQL database connections. Disabled by default. No, but can be enabled .
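As an illustration of the remote database case in Table 2.6, the following sketch opens the PostgreSQL port on a remote Manager or Data Warehouse database server. The zone is an assumption, and PostgreSQL itself must also be configured (for example, in pg_hba.conf) to accept connections from the Manager and the Data Warehouse service.
# On the remote database server, allow incoming PostgreSQL connections (port 5432)
firewall-cmd --permanent --zone=public --add-port=5432/tcp
firewall-cmd --reload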
[ "grep -E 'svm|vmx' /proc/cpuinfo | grep nx" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/RHV_requirements
7.2. Performing a CMC Revocation
7.2. Performing a CMC Revocation Similar to Certificate Management over CMS (CMC) enrollment, CMC revocation enables users to set up a revocation client, and sign the revocation request with either an agent certificate or a user certificate with a matching subjectDN attribute. Then the user can send the signed request to the Certificate Manager. Alternatively, CMC revocation can also be authenticated using the Shared Secret Token mechanism. For details, see Enabling the CMC Shared Secret Feature . Regardless of whether a user or agent signs the request or if a Shared Secret Token is used, the Certificate Manager automatically revokes the certificate when it receives a valid revocation request. Certificate System provides the following utilities for CMC revocation requests: CMCRequest . For details, see Section 7.2.1, "Revoking a Certificate Using CMCRequest " . CMCRevoke . For details, see Section 7.2.2, "Revoking a Certificate Using CMCRevoke " . Important Red Hat recommends using the CMCRequest utility to generate CMC revocation requests, because it provides more options than CMCRevoke . 7.2.1. Revoking a Certificate Using CMCRequest To revoke a certificate using CMCRequest : Create a configuration file for the CMC revocation request, such as /home/user_name/cmc-request.cfg , with the following content: Create the CMC request: If the command succeeds, the CMCRequest utility stores the CMC request in the file specified in the output parameter in the request configuration file. Create a configuration file, such as /home/user_name/cmc-submit.cfg , which you use in a later step to submit the CMC revocation request to the CA. Add the following content to the created file: Important If the CMC revocation request is signed, set the secure and clientmode parameters to true and, additionally, fill the nickname parameter. Depending on who signed the request, the servlet parameter in the configuration file for HttpClient must be set accordingly: If an agent signed the request, set: If a user signed the request, set: Submit the CMC request: For further details about revoking a certificate using CMCRequest , see the CMCRequest (1) man page. 7.2.2. Revoking a Certificate Using CMCRevoke The CMC revocation utility, CMCRevoke , is used to sign a revocation request with an agent's certificate. This utility simply passes the required information - certificate serial number, issuer name, and revocation reason - to identify the certificate to revoke, and then the required information to identify the CA agent performing the revocation (certificate nickname and the database with the certificate). The reason the certificate is being revoked can be any of the following (with the number being the value passed to the CMCRevoke utility): 0 - unspecified 1 - the key was compromised 2 - the CA key was compromised 3 - the employee's affiliation changed 4 - the certificate has been superseded 5 - cessation of operation 6 - the certificate is on hold The available tool arguments are described in detail in the Command-Line Tools Guide . 7.2.2.1. Testing CMCRevoke Create a CMC revocation request for an existing certificate. For example, if the directory containing the agent certificate is ~jsmith/.mozilla/firefox/ , the nickname of the certificate is AgentCert , and the serial number of the certificate is 22 , the command is as shown: Note Surround values that include spaces in quotation marks. Important Do not have a space between the argument and its value. For example, giving a serial number of 26 is -s26 , not -s 26 .
Open the end-entities page. Select the Revocation tab. Select the CMC Revoke link on the menu. Paste the output from the CMCRevoke into the text area. Remove -----BEGIN NEW CERTIFICATE REQUEST----- and -----END NEW CERTIFICATE REQUEST----- from the pasted content. Click Submit . The returned page should confirm that the correct certificate has been revoked.
[ "#numRequests: Total number of PKCS10 requests or CRMF requests. numRequests=1 #output: full path for the CMC request in binary format output= /home/user_name/cmc.revoke.userSigned.req #tokenname: name of token where user signing cert can be found #(default is internal) tokenname= internal #nickname: nickname for user signing certificate which will be used #to sign the CMC full request. nickname= signer_user_certificate #dbdir: directory for cert9.db, key4.db and pkcs11.txt dbdir= /home/user_name/.dogtag/nssdb/ #password: password for cert9.db which stores the user signing #certificate and keys password=myPass #format: request format, either pkcs10 or crmf. format= pkcs10 ## revocation parameters revRequest.enable=true revRequest.serial= 45 revRequest.reason= unspecified revRequest.comment= user test revocation revRequest.issuer= issuer revRequest.sharedSecret= shared_secret", "CMCRequest /home/user_name/cmc-request.cfg", "#host: host name for the http server host= >server.example.com #port: port number port= 8443 #secure: true for secure connection, false for nonsecure connection secure=true #input: full path for the enrollment request, the content must be #in binary format input= /home/user_name/cmc.revoke.userSigned.req #output: full path for the response in binary format output= /home/user_name/cmc.revoke.userSigned.resp #tokenname: name of token where SSL client authentication certificate #can be found (default is internal) #This parameter will be ignored if secure=false tokenname= internal #dbdir: directory for cert9.db, key4.db and pkcs11.txt #This parameter will be ignored if secure=false dbdir= /home/user_name/.dogtag/nssdb/ #clientmode: true for client authentication, false for no client #authentication. This parameter will be ignored if secure=false clientmode=true #password: password for cert9.db #This parameter will be ignored if secure=false and clientauth=false password= password #nickname: nickname for client certificate #This parameter will be ignored if clientmode=false nickname= signer_user_certificate", "servlet=/ca/ee/ca/profileSubmitCMCFull", "servlet=/ca/ee/ca/profileSubmitSelfSignedCMCFull", "HttpClient /home/user_name/cmc-submit.cfg", "CMCRevoke -d /path/to/agent-cert-db -n nickname -i issuerName -s serialName -m reason -c comment", "CMCRevoke -d\"~jsmith/.mozilla/firefox/\" -n\"ManagerAgentCert\" -i\"cn=agentAuthMgr\" -s22 -m0 -c\"test comment\"", "http s ://server.example.com: 8443/ca/ee/ca" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/cmc_revocation
Chapter 1. Introduction to the Ceph Orchestrator
Chapter 1. Introduction to the Ceph Orchestrator As a storage administrator, you can use the Ceph Orchestrator with Cephadm utility that provides the ability to discover devices and create services in a Red Hat Ceph Storage cluster. 1.1. Use of the Ceph Orchestrator Red Hat Ceph Storage Orchestrators are manager modules that primarily act as a bridge between a Red Hat Ceph Storage cluster and deployment tools like Rook and Cephadm for a unified experience. They also integrate with the Ceph command line interface and Ceph Dashboard. The following is a workflow diagram of Ceph Orchestrator: Note NFS-Ganesha gateway is not supported, starting from Red Hat Ceph Storage 5.1 release. Types of Red Hat Ceph Storage Orchestrators There are three main types of Red Hat Ceph Storage Orchestrators: Orchestrator CLI : These are common APIs used in Orchestrators and include a set of commands that can be implemented. These APIs also provide a common command line interface (CLI) to orchestrate ceph-mgr modules with external orchestration services. The following are the nomenclature used with the Ceph Orchestrator: Host : This is the host name of the physical host and not the pod name, DNS name, container name, or host name inside the container. Service type : This is the type of the service, such as nfs, mds, osd, mon, rgw, and mgr. Service : A functional service provided by a Ceph storage cluster such as monitors service, managers service, OSD services, Ceph Object Gateway service, and NFS service. Daemon : A specific instance of a service deployed by one or more hosts such as Ceph Object Gateway services can have different Ceph Object Gateway daemons running in three different hosts. Cephadm Orchestrator - This is a Ceph Orchestrator module that does not rely on an external tool such as Rook or Ansible, but rather manages nodes in a cluster by establishing an SSH connection and issuing explicit management commands. This module is intended for day-one and day-two operations. Using the Cephadm Orchestrator is the recommended way of installing a Ceph storage cluster without leveraging any deployment frameworks like Ansible. The idea is to provide the manager daemon with access to an SSH configuration and key that is able to connect to all nodes in a cluster to perform any management operations, like creating an inventory of storage devices, deploying and replacing OSDs, or starting and stopping Ceph daemons. In addition, the Cephadm Orchestrator will deploy container images managed by systemd in order to allow independent upgrades of co-located services. This orchestrator will also likely highlight a tool that encapsulates all necessary operations to manage the deployment of container image based services on the current host, including a command that bootstraps a minimal cluster running a Ceph Monitor and a Ceph Manager. Rook Orchestrator - Rook is an orchestration tool that uses the Kubernetes Rook operator to manage a Ceph storage cluster running inside a Kubernetes cluster. The rook module provides integration between Ceph's Orchestrator framework and Rook. Rook is an open source cloud-native storage operator for Kubernetes. Rook follows the "operator" model, in which a custom resource definition (CRD) object is defined in Kubernetes to describe a Ceph storage cluster and its desired state, and a rook operator daemon is running in a control loop that compares the current cluster state to desired state and takes steps to make them converge. 
The main object describing Ceph's desired state is the Ceph storage cluster CRD, which includes information about which devices should be consumed by OSDs, how many monitors should be running, and what version of Ceph should be used. Rook defines several other CRDs to describe RBD pools, CephFS file systems, and so on. The Rook Orchestrator module is the glue that runs in the ceph-mgr daemon and implements the Ceph orchestration API by making changes to the Ceph storage cluster in Kubernetes that describe desired cluster state. A Rook cluster's ceph-mgr daemon is running as a Kubernetes pod, and hence, the rook module can connect to the Kubernetes API without any explicit configuration.
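To give a feel for the Orchestrator CLI described above, the following are a few illustrative ceph orch invocations of the kind exposed by the Cephadm Orchestrator. This is a sketch: the exact set of subcommands and their output depend on the Red Hat Ceph Storage release, and the placement value is a placeholder.
# List hosts known to the orchestrator and the daemons running on them
ceph orch host ls
ceph orch ps
# Show the services the orchestrator is managing
ceph orch ls
# Ask the orchestrator to deploy a service (placeholder placement count)
ceph orch apply mon --placement=3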
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/operations_guide/introduction-to-the-ceph-orchestrator
probe::netdev.change_mtu
probe::netdev.change_mtu Name probe::netdev.change_mtu - Called when the netdev MTU is changed Synopsis netdev.change_mtu Values old_mtu The current MTU new_mtu The new MTU dev_name The device that will have the MTU changed
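A minimal illustration of how this probe might be used from the command line; this is a sketch and assumes the SystemTap runtime and matching kernel debug information are installed, and the device in the comment is a placeholder.
# Print a message whenever a device's MTU is changed (for example, with: ip link set <device> mtu 1400)
stap -e 'probe netdev.change_mtu { printf("%s: MTU %d -> %d\n", dev_name, old_mtu, new_mtu) }'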
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-netdev-change-mtu
8.4. Understanding the Network Teaming Daemon and the "Runners"
8.4. Understanding the Network Teaming Daemon and the "Runners" The Team daemon, teamd , uses libteam to control one instance of the team driver. This instance of the team driver adds instances of a hardware device driver to form a " team " of network links. The team driver presents a network interface, team0 for example, to the other parts of the kernel. The interfaces created by instances of the team driver are given names such as team0 , team1 , and so forth in the documentation. This is for ease of understanding and other names can be used. The logic common to all methods of teaming is implemented by teamd ; those functions that are unique to the different load sharing and backup methods, such as round-robin, are implemented by separate units of code referred to as " runners " . Because words such as " module " and " mode " already have specific meanings in relation to the kernel, the word " runner " was chosen to refer to these units of code. The user specifies the runner in the JSON format configuration file and the code is then compiled into an instance of teamd when the instance is created. A runner is not a plug-in because the code for a runner is compiled into an instance of teamd as it is being created. Code could be created as a plug-in for teamd should the need arise. The following runners are available at time of writing. broadcast (data is transmitted over all ports) round-robin (data is transmitted over all ports in turn) active-backup (one port or link is used while others are kept as a backup) loadbalance (with active Tx load balancing and BPF-based Tx port selectors) lacp (implements the 802.3ad Link Aggregation Control Protocol) In addition, the following link-watchers are available: ethtool (Libteam lib uses ethtool to watch for link state changes). This is the default if no other link-watcher is specified in the configuration file. arp_ping (The arp_ping utility is used to monitor the presence of a far-end hardware address using ARP packets.) nsna_ping (Neighbor Advertisements and Neighbor Solicitation from the IPv6 Neighbor Discovery protocol are used to monitor the presence of a neighbor's interface) There are no restrictions in the code to prevent a particular link-watcher from being used with a particular runner, however when using the lacp runner, ethtool is the only recommended link-watcher.
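As a sketch of how a runner and link-watcher are selected in the JSON configuration file, the following assumes two hypothetical ports, em1 and em2, and uses the active-backup runner with the default ethtool link-watcher; adjust the device and port names for your system.
# Minimal teamd configuration selecting the activebackup runner and ethtool link-watcher
cat > /tmp/team0.conf <<'EOF'
{
  "device": "team0",
  "runner": { "name": "activebackup" },
  "link_watch": { "name": "ethtool" },
  "ports": { "em1": {}, "em2": {} }
}
EOF
# Start an instance of teamd with this configuration (run as root; -d daemonizes)
teamd -f /tmp/team0.conf -d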
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-understanding_the_network_teaming_daemon_and_the_runners
Chapter 10. Networking
Chapter 10. Networking Trusted Network Connect Red Hat Enterprise Linux 7.1 introduces the Trusted Network Connect functionality as a Technology Preview. Trusted Network Connect is used with existing network access control (NAC) solutions, such as TLS, 802.1X, or IPsec to integrate endpoint posture assessment; that is, collecting an endpoint's system information (such as operating system configuration settings, installed packages, and others, termed as integrity measurements). Trusted Network Connect is used to verify these measurements against network access policies before allowing the endpoint to access the network. SR-IOV Functionality in the qlcnic Driver Support for Single-Root I/O virtualization (SR-IOV) has been added to the qlcnic driver as a Technology Preview. Support for this functionality will be provided directly by QLogic, and customers are encouraged to provide feedback to QLogic and Red Hat. Other functionality in the qlcnic driver remains fully supported. Berkeley Packet Filter Support for a Berkeley Packet Filter (BPF) based traffic classifier has been added to Red Hat Enterprise Linux 7.1. BPF is used in packet filtering for packet sockets, for sand-boxing in secure computing mode ( seccomp ), and in Netfilter. BPF has a just-in-time implementation for the most important architectures and has a rich syntax for building filters. Improved Clock Stability Previously, test results indicated that disabling the tickless kernel capability could significantly improve the stability of the system clock. The kernel tickless mode can be disabled by adding nohz=off to the kernel boot option parameters. However, recent improvements applied to the kernel in Red Hat Enterprise Linux 7.1 have greatly improved the stability of the system clock and the difference in stability of the clock with and without nohz=off should be much smaller now for most users. This is useful for time synchronization applications using PTP and NTP . libnetfilter_queue Packages The libnetfilter_queue package has been added to Red Hat Enterprise Linux 7.1. libnetfilter_queue is a user space library providing an API to packets that have been queued by the kernel packet filter. It enables receiving queued packets from the kernel nfnetlink_queue subsystem, parsing of the packets, rewriting packet headers, and re-injecting altered packets. Teaming Enhancements The libteam packages have been updated to version 1.15 in Red Hat Enterprise Linux 7.1. It provides a number of bug fixes and enhancements, in particular, teamd can now be automatically re-spawned by systemd , which increases overall reliability. Intel QuickAssist Technology Driver Intel QuickAssist Technology (QAT) driver has been added to Red Hat Enterprise Linux 7.1. The QAT driver enables QuickAssist hardware which adds hardware offload crypto capabilities to a system. LinuxPTP timemaster Support for Failover between PTP and NTP The linuxptp package has been updated to version 1.4 in Red Hat Enterprise Linux 7.1. It provides a number of bug fixes and enhancements, in particular, support for failover between PTP domains and NTP sources using the timemaster application. When there are multiple PTP domains available on the network, or fallback to NTP is needed, the timemaster program can be used to synchronize the system clock to all available time sources. Network initscripts Support for custom VLAN names has been added in Red Hat Enterprise Linux 7.1. Improved support for IPv6 in GRE tunnels has been added; the inner address now persists across reboots. 
TCP Delayed ACK Support for a configurable TCP Delayed ACK has been added to the iproute package in Red Hat Enterprise Linux 7.1. This can be enabled by the ip route quickack command. NetworkManager NetworkManager has been updated to version 1.0 in Red Hat Enterprise Linux 7.1. The support for Wi-Fi, Bluetooth, wireless wide area network (WWAN), ADSL, and team has been split into separate subpackages to allow for smaller installations. To support smaller environments, this update introduces an optional built-in Dynamic Host Configuration Protocol (DHCP) client that uses less memory. A new NetworkManager mode for static networking configurations that starts NetworkManager, configures interfaces and then quits, has been added. NetworkManager provides better cooperation with non-NetworkManager managed devices, specifically by no longer setting the IFF_UP flag on these devices. In addition, NetworkManager is aware of connections created outside of itself and is able to save these to be used within NetworkManager if desired. In Red Hat Enterprise Linux 7.1, NetworkManager assigns a default route for each interface allowed to have one. The metric of each default route is adjusted to select the global default interface, and this metric may be customized to prefer certain interfaces over others. Default routes added by other programs are not modified by NetworkManager. Improvements have been made to NetworkManager's IPv6 configuration, allowing it to respect IPv6 router advertisement MTUs and keeping manually configured static IPv6 addresses even if automatic configuration fails. In addition, WWAN connections now support IPv6 if the modem and provider support it. Various improvements to dispatcher scripts have been made, including support for a pre-up and pre-down script. Bonding option lacp_rate is now supported in Red Hat Enterprise Linux 7.1. NetworkManager has been enhanced to provide easy device renaming when renaming master interfaces with slave interfaces. A priority setting has been added to the auto-connect function of NetworkManager . Now, if more than one eligible candidate is available for auto-connect, NetworkManager selects the connection with the highest priority. If all available connections have equal priority values, NetworkManager uses the default behavior and selects the last active connection. This update also introduces numerous improvements to the nmcli command-line utility, including the ability to provide passwords when connecting to Wi-Fi or 802.1X networks. Network Namespaces and VTI Support for virtual tunnel interfaces ( VTI ) with network namespaces has been added in Red Hat Enterprise Linux 7.1. This enables traffic from a VTI to be passed between different namespaces when packets are encapsulated or de-encapsulated. Alternative Configuration Storage for the MemberOf Plug-In The configuration of the MemberOf plug-in for the Red Hat Directory Server can now be stored in a suffix mapped to a back-end database. This allows the MemberOf plug-in configuration to be replicated, which makes it easier for the user to maintain a consistent MemberOf plug-in configuration in a replicated environment.
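Two small sketches of the configuration hooks mentioned above; the interface name, gateway address, connection name, and priority value are placeholders rather than recommended settings.
# Enable delayed-ACK suppression (quickack) on a specific route
ip route change default via 192.168.1.1 dev em1 quickack 1
# Prefer one connection over others when several are eligible for auto-connect
nmcli connection modify "System em1" connection.autoconnect-priority 10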
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/chap-Red_Hat_Enterprise_Linux-7.1_Release_Notes-Networking
Part II. Managing Confined Services
Part II. Managing Confined Services This part of the book focuses on practical tasks and provides information on how to set up and configure various services. For each service, the most common types and Booleans are listed along with their specifications. Also included are real-world examples of configuring those services and demonstrations of how SELinux complements their operation. When SELinux is in enforcing mode, the default policy used in Red Hat Enterprise Linux is the targeted policy. Processes that are targeted run in a confined domain, and processes that are not targeted run in an unconfined domain. See Chapter 3, Targeted Policy for more information about targeted policy and confined and unconfined processes.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/part_ii-managing_confined_services
Chapter 16. Red Hat Quay auto-pruning overview
Chapter 16. Red Hat Quay auto-pruning overview Red Hat Quay administrators can set up multiple auto-pruning policies on organizations and repositories; administrators can also set up auto-pruning policies at the registry level so that they apply to all organizations, including all newly created organizations. This feature allows for image tags to be automatically deleted within an organization or a repository based on specified criteria, which allows Red Hat Quay organization owners to stay below the storage quota by automatically pruning content. Currently, two policies have been added: Prune images by the number of tags . For this policy, when the actual number of tags exceeds the desired number of tags, the oldest tags are deleted by their creation date until the desired number of tags is achieved. Prune image tags by creation date . For this policy, any tags with a creation date older than the given time span, for example, 10 days, are deleted. After tags are automatically pruned, they go into the Red Hat Quay time machine, or the amount of time, after a tag is deleted, that the tag is accessible before being garbage collected. The expiration time of an image tag is dependent on your organization's settings. For more information, see Red Hat Quay garbage collection . Users can configure multiple policies per namespace or repository; this can be done through the Red Hat Quay v2 UI. Policies can also be set by using the API endpoints through the command-line interface (CLI). 16.1. Prerequisites and limitations for auto-pruning and multiple policies The following prerequisites and limitations apply to the auto-pruning feature: Auto-pruning is not available when using the Red Hat Quay legacy UI. You must use the v2 UI to create, view, or modify auto-pruning policies. Auto-pruning is only supported in databases that support the FOR UPDATE SKIP LOCKED SQL command. Auto-pruning is unavailable on mirrored repositories and read-only repositories. If you are configuring multiple auto-prune policies, rules are processed without particular order, and individual result sets are processed immediately before moving on to the rule. For example, if an image is already subject for garbage collection by one rule, it cannot be excluded from pruning by another rule. If you have both an auto-pruning policy for an organization and a repository, the auto-pruning policies set at the organization level are executed first. 16.2. Regular expressions with auto-pruning Red Hat Quay administrators can leverage regular expressions , or regex , to match a subset of tags for both organization- and repository-level auto-pruning policies. This provides more granular auto-pruning policies to target only certain image tags for removal. Consider the following when using regular expressions with the auto-pruning feature: Regular expressions are optional. If a regular expression is not provided, the auto-pruner defaults to pruning all image tags in the organization or the repository. These are user-supplied and must be protected against ReDOS attacks. Registry-wide policies do not currently support regular expressions . Only organization- and repository-level auto-pruning policies support regular expressions . Regular expressions can be configured to prune images that either do, or do not , match the provided regex pattern. Some of the following procedures provide example auto-pruning policies using regular expressions that you can use as a reference when creating an auto-prune policy. 16.3. 
Managing auto-pruning policies using the Red Hat Quay UI All auto-pruning policies, with the exception of a registry-wide auto pruning policy, are created using the Red Hat Quay v2 UI or by using the API. This can be done after you have configured your Red Hat Quay config.yaml file to enable the auto-pruning feature and the v2 UI. Note This feature is not available when using the Red Hat Quay legacy UI. 16.3.1. Configuring the Red Hat Quay auto-pruning feature Use the following procedure to configure your Red Hat Quay config.yaml file to enable the auto-pruning feature. Prerequisites You have set FEATURE_UI_V2 to true in your config.yaml file. Procedure In your Red Hat Quay config.yaml file, add, and set, the FEATURE_AUTO_PRUNE environment variable to True . For example: # ... FEATURE_AUTO_PRUNE: true # ... 16.3.2. Creating a registry-wide auto-pruning policy Registry-wide auto-pruning policies can be configured on new and existing organizations. This feature saves Red Hat Quay administrators time, effort, and storage by enforcing registry-wide rules. Red Hat Quay administrators must enable this feature by updating their config.yaml file through the inclusion of DEFAULT_NAMESPACE_AUTOPRUNE_POLICY configuration field, and one of number_of_tags or creation_date methods. Currently, this feature cannot be enabled by using the v2 UI or the API. Use the following procedure to create an auto-prune policy for your Red Hat Quay registry. Prerequisites You have enabled the FEATURE_AUTO_PRUNE feature. Procedure Update your config.yaml file to add the DEFAULT_NAMESPACE_AUTOPRUNE_POLICY configuration field: To set the policy method to remove the oldest tags by their creation date until the number of tags provided is left, use the number_of_tags method: # ... DEFAULT_NAMESPACE_AUTOPRUNE_POLICY: method: number_of_tags value: 2 1 # ... 1 In this scenario, two tags remain. To set the policy method to remove tags with a creation date older than the provided time span, for example, 5d , use the creation_date method: DEFAULT_NAMESPACE_AUTOPRUNE_POLICY: method: creation_date value: 5d Restart your Red Hat Quay deployment. Optional. If you need to tag and push images to test this feature: Tag four sample images that will be pushed to a Red Hat Quay registry. For example: USD podman tag docker.io/library/busybox <quay-server.example.com>/<quayadmin>/busybox:test USD podman tag docker.io/library/busybox <quay-server.example.com>/<quayadmin>/busybox:test2 USD podman tag docker.io/library/busybox <quay-server.example.com>/<quayadmin>/busybox:test3 USD podman tag docker.io/library/busybox <quay-server.example.com>/<quayadmin>/busybox:test4 Push the four sample images to the registry with auto-pruning enabled by entering the following commands: USD podman push <quay-server.example.com>/quayadmin/busybox:test USD podman push <quay-server.example.com>/<quayadmin>/busybox:test2 USD podman push <quay-server.example.com>/<quayadmin>/busybox:test3 USD podman push <quay-server.example.com>/<quayadmin>/busybox:test4 Check that there are four tags in the registry that you pushed the images to. By default, the auto-pruner worker at the registry level runs every 24 hours. After 24 hours, the two oldest image tags are removed, leaving the test3 and test4 tags if you followed these instructions. Check your Red Hat Quay organization to ensure that the two oldest tags were removed. 16.3.3. 
Creating an auto-prune policy for an organization by using the Red Hat Quay v2 UI Use the following procedure to create an auto-prune policy for an organization using the Red Hat Quay v2 UI. Prerequisites You have enabled the FEATURE_AUTO_PRUNE feature. Your organization has image tags that have been pushed to it. Procedure On the Red Hat Quay v2 UI, click Organizations in the navigation pane. Select the name of an organization that you will apply the auto-pruning feature to, for example, test_organization . Click Settings . Click Auto-Prune Policies . For example: Click the drop down menu and select the desired policy, for example, By number of tags . Select the desired number of tags to keep. By default, this is set at 20 tags. For this example, the number of tags to keep is set at 3 . Optional. With the introduction of regular expressions , you are provided the following options to fine-grain your auto-pruning policy: Match : When selecting this option, the auto-pruner prunes all tags that match the given regex pattern. Does not match : When selecting this option, the auto-pruner prunes all tags that do not match the regex pattern. If you do not select an option, the auto-pruner defaults to pruning all image tags. For this example, click the Tag pattern box and select match . In the regex box, enter a pattern to match tags against. For example, to automatically prune all test tags, enter ^test.* . Optional. You can create a second auto-prune policy by clicking Add Policy and entering the required information. Click Save . A notification that your auto-prune policy has been updated appears. With this example, the organization is configured to keep the three latest tags that are named ^test.* . Verification Navigate to the Tags page of your Organization's repository. After a few minutes, the auto-pruner worker removes tags that no longer fit within the established criteria. In this example, it removes the busybox:test tag, and keeps the busybox:test2 , busybox:test3 , and busybox:test4 tag. After tags are automatically pruned, they go into the Red Hat Quay time machine, or the amount of time after a tag is deleted that the tag is accessible before being garbage collected. The expiration time of an image tag is dependent on your organization's settings. For more information, see Red Hat Quay garbage collection . 16.3.4. Creating an auto-prune policy for a namespace by using the Red Hat Quay API You can use Red Hat Quay API endpoints to manage auto-pruning policies for an namespace. Prerequisites You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. You have created an OAuth access token. You have logged into Red Hat Quay. Procedure Enter the following POST /api/v1/organization/{orgname}/autoprunepolicy/ command create a new policy that limits the number of tags allowed in an organization: USD curl -X POST -H "Authorization: Bearer <access_token>" -H "Content-Type: application/json" -d '{"method": "number_of_tags", "value": 10}' http://<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/ Alternatively, you can can set tags to expire for a specified time after their creation date: USD curl -X POST -H "Authorization: Bearer <access_token>" -H "Content-Type: application/json" -d '{ "method": "creation_date", "value": "7d"}' http://<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/ Example output {"uuid": "73d64f05-d587-42d9-af6d-e726a4a80d6e"} Optional. 
You can add an additional policy to an organization and pass in the tagPattern and tagPatternMatches fields to prune only tags that match the given regex pattern. For example: USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ -d '{ "method": "creation_date", "value": "7d", "tagPattern": "^v*", "tagPatternMatches": <true> 1 }' \ "https://<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/" 1 Setting tagPatternMatches to true makes it so that tags that match the given regex pattern will be pruned. In this example, tags that match ^v* are pruned. Example output {"uuid": "ebf7448b-93c3-4f14-bf2f-25aa6857c7b0"} You can update your organization's auto-prune policy by using the PUT /api/v1/organization/{orgname}/autoprunepolicy/{policy_uuid} command. For example: USD curl -X PUT -H "Authorization: Bearer <bearer_token>" -H "Content-Type: application/json" -d '{ "method": "creation_date", "value": "4d", "tagPattern": "^v*", "tagPatternMatches": true }' "<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/<uuid>" This command does not return output. Continue to the step. Check your auto-prune policy by entering the following command: USD curl -X GET -H "Authorization: Bearer <access_token>" http://<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/ Example output {"policies": [{"uuid": "ebf7448b-93c3-4f14-bf2f-25aa6857c7b0", "method": "creation_date", "value": "4d", "tagPattern": "^v*", "tagPatternMatches": true}, {"uuid": "da4d0ad7-3c2d-4be8-af63-9c51f9a501bc", "method": "number_of_tags", "value": 10, "tagPattern": null, "tagPatternMatches": true}, {"uuid": "17b9fd96-1537-4462-a830-7f53b43f94c2", "method": "creation_date", "value": "7d", "tagPattern": "^v*", "tagPatternMatches": true}]} You can delete the auto-prune policy for your organization by entering the following command. Note that deleting the policy requires the UUID. USD curl -X DELETE -H "Authorization: Bearer <access_token>" http://<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/73d64f05-d587-42d9-af6d-e726a4a80d6e 16.3.5. Creating an auto-prune policy for a namespace for the current user by using the API You can use Red Hat Quay API endpoints to manage auto-pruning policies for your account. Note The use of /user/ in the following commands represents the user that is currently logged into Red Hat Quay. Prerequisites You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. You have created an OAuth access token. You have logged into Red Hat Quay. 
Procedure Enter the following POST command create a new policy that limits the number of tags for the current user: USD curl -X POST -H "Authorization: Bearer <access_token>" -H "Content-Type: application/json" -d '{"method": "number_of_tags", "value": 10}' http://<quay-server.example.com>/api/v1/user/autoprunepolicy/ Example output {"uuid": "8c03f995-ca6f-4928-b98d-d75ed8c14859"} Check your auto-prune policy by entering the following command: USD curl -X GET -H "Authorization: Bearer <access_token>" http://<quay-server.example.com>/api/v1/user/autoprunepolicy/ Alternatively, you can include the UUID: USD curl -X GET -H "Authorization: Bearer <access_token>" http://<quay-server.example.com>/api/v1/user/autoprunepolicy/8c03f995-ca6f-4928-b98d-d75ed8c14859 Example output {"policies": [{"uuid": "8c03f995-ca6f-4928-b98d-d75ed8c14859", "method": "number_of_tags", "value": 10}]} You can delete the auto-prune policy by entering the following command. Note that deleting the policy requires the UUID. USD curl -X DELETE -H "Authorization: Bearer <access_token>" http://<quay-server.example.com>/api/v1/user/autoprunepolicy/8c03f995-ca6f-4928-b98d-d75ed8c14859 Example output {"uuid": "8c03f995-ca6f-4928-b98d-d75ed8c14859"} 16.3.6. Creating an auto-prune policy for a repository using the Red Hat Quay v2 UI Use the following procedure to create an auto-prune policy for a repository using the Red Hat Quay v2 UI. Prerequisites You have enabled the FEATURE_AUTO_PRUNE feature. You have pushed image tags to your repository. Procedure On the Red Hat Quay v2 UI, click Repository in the navigation pane. Select the name of an organization that you will apply the auto-pruning feature to, for example, <organization_name>/<repository_name> . Click Settings . Click Repository Auto-Prune Policies . Click the drop down menu and select the desired policy, for example, By age of tags . Set a time, for example, 5 and an interval, for example minutes to delete tags older than the specified time frame. For this example, tags older than 5 minutes are marked for deletion. Optional. With the introduction of regular expressions , you are provided the following options to fine-grain your auto-pruning policy: Match : When selecting this option, the auto-pruner prunes all tags that match the given regex pattern. Does not match : When selecting this option, the auto-pruner prunes all tags that do not match the regex pattern. If you do not select an option, the auto-pruner defaults to pruning all image tags. For this example, click the Tag pattern box and select Does not match . In the regex box, enter a pattern to match tags against. For example, to automatically prune all tags that do not match the test tag, enter ^test.* . Optional. You can create a second auto-prune policy by clicking Add Policy and entering the required information. Click Save . A notification that your auto-prune policy has been updated appears. Verification Navigate to the Tags page of your Organization's repository. With this example, Tags that are older than 5 minutes that do not match the ^test.* regex tag are automatically pruned when the pruner runs. After tags are automatically pruned, they go into the Red Hat Quay time machine, or the amount of time after a tag is deleted that the tag is accessible before being garbage collected. The expiration time of an image tag is dependent on your organization's settings. For more information, see Red Hat Quay garbage collection . 16.3.7. 
Creating an auto-prune policy for a repository using the Red Hat Quay API You can use Red Hat Quay API endpoints to manage auto-pruning policies for a repository.
Prerequisites You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. You have created an OAuth access token. You have logged into Red Hat Quay.
Procedure Enter the following POST /api/v1/repository/{repository}/autoprunepolicy/ command to create a new policy that limits the number of tags allowed in a repository:
$ curl -X POST -H "Authorization: Bearer <access_token>" -H "Content-Type: application/json" -d '{"method": "number_of_tags","value": 2}' http://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/
Alternatively, you can set tags to expire for a specified time after their creation date:
$ curl -X POST -H "Authorization: Bearer <access_token>" -H "Content-Type: application/json" -d '{"method": "creation_date", "value": "7d"}' http://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/
Example output {"uuid": "ce2bdcc0-ced2-4a1a-ac36-78a9c1bed8c7"}
Optional. You can add an additional policy and pass in the tagPattern and tagPatternMatches fields to prune only tags that match the given regex pattern. For example:
$ curl -X POST \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "method": "<creation_date>", "value": "<7d>", "tagPattern": "<^test.>*", "tagPatternMatches": <false> 1 }' \ "https://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/"
1 Setting tagPatternMatches to false means that all tags that do not match the given regex pattern are pruned. In this example, all tags except those matching ^test. are pruned.
Example output {"uuid": "b53d8d3f-2e73-40e7-96ff-736d372cd5ef"}
You can update your policy for the repository by using the PUT /api/v1/repository/{repository}/autoprunepolicy/{policy_uuid} command and passing in the UUID. For example:
$ curl -X PUT \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ -d '{ "method": "number_of_tags", "value": "5", "tagPattern": "^test.*", "tagPatternMatches": true }' \ "https://quay-server.example.com/api/v1/repository/<namespace>/<repo_name>/autoprunepolicy/<uuid>"
This command does not return output. Continue to the next step to check your auto-prune policy.
Check your auto-prune policy by entering the following command:
$ curl -X GET -H "Authorization: Bearer <access_token>" http://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/
Alternatively, you can include the UUID:
$ curl -X GET -H "Authorization: Bearer <access_token>" http://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/ce2bdcc0-ced2-4a1a-ac36-78a9c1bed8c7
Example output {"policies": [{"uuid": "ce2bdcc0-ced2-4a1a-ac36-78a9c1bed8c7", "method": "number_of_tags", "value": 10}]}
You can delete the auto-prune policy by entering the following command. Note that deleting the policy requires the UUID.
$ curl -X DELETE -H "Authorization: Bearer <access_token>" http://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/ce2bdcc0-ced2-4a1a-ac36-78a9c1bed8c7
Example output {"uuid": "ce2bdcc0-ced2-4a1a-ac36-78a9c1bed8c7"} 16.3.8.
Creating an auto-prune policy on a repository for a user with the API You can use Red Hat Quay API endpoints to manage auto-pruning policies on a repository for user accounts that are not your own, as long as you have admin privileges on the repository.
Prerequisites You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. You have created an OAuth access token. You have logged into Red Hat Quay. You have admin privileges on the repository that you are creating the policy for.
Procedure Enter the following POST /api/v1/repository/<user_account>/<user_repository>/autoprunepolicy/ command to create a new policy that limits the number of tags for the user:
$ curl -X POST -H "Authorization: Bearer <access_token>" -H "Content-Type: application/json" -d '{"method": "number_of_tags","value": 2}' https://<quay-server.example.com>/api/v1/repository/<user_account>/<user_repository>/autoprunepolicy/
Example output {"uuid": "7726f79c-cbc7-490e-98dd-becdc6fefce7"}
Optional. You can add an additional policy to the user's repository and pass in the tagPattern and tagPatternMatches fields to prune only tags that match the given regex pattern. For example:
$ curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ -d '{ "method": "creation_date", "value": "7d", "tagPattern": "^v*", "tagPatternMatches": true }' \ "http://<quay-server.example.com>/api/v1/repository/<user_account>/<user_repository>/autoprunepolicy/"
Example output {"uuid": "b3797bcd-de72-4b71-9b1e-726dabc971be"}
You can update the policy for the user's repository by using the PUT /api/v1/repository/<user_account>/<user_repository>/autoprunepolicy/<policy_uuid> command. For example:
$ curl -X PUT -H "Authorization: Bearer <bearer_token>" -H "Content-Type: application/json" -d '{ "method": "creation_date", "value": "4d", "tagPattern": "^test.", "tagPatternMatches": true }' "https://<quay-server.example.com>/api/v1/repository/<user_account>/<user_repository>/autoprunepolicy/<policy_uuid>"
Updating a policy does not return output in the CLI.
Check your auto-prune policy by entering the following command:
$ curl -X GET -H "Authorization: Bearer <access_token>" http://<quay-server.example.com>/api/v1/repository/<user_account>/<user_repository>/autoprunepolicy/
Alternatively, you can include the UUID:
$ curl -X GET -H "Authorization: Bearer <access_token>" http://<quay-server.example.com>/api/v1/repository/<user_account>/<user_repository>/autoprunepolicy/7726f79c-cbc7-490e-98dd-becdc6fefce7
Example output {"uuid": "81ee77ec-496a-4a0a-9241-eca49437d15b", "method": "creation_date", "value": "7d", "tagPattern": "^v*", "tagPatternMatches": true}
You can delete the auto-prune policy by entering the following command. Note that deleting the policy requires the UUID.
$ curl -X DELETE -H "Authorization: Bearer <access_token>" http://<quay-server.example.com>/api/v1/repository/<user_account>/<user_repository>/autoprunepolicy/<policy_uuid>
Example output {"uuid": "7726f79c-cbc7-490e-98dd-becdc6fefce7"}
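Because the create, list, and delete endpoints shown above all operate on policy UUIDs, you may want to script a cleanup of every auto-prune policy on a repository. The following is a minimal sketch, not taken from the product documentation; it assumes the jq utility is installed and uses placeholder values for the token, Quay hostname, and repository path:
# List every auto-prune policy on the repository, then delete each one by UUID.
# <access_token>, <quay-server.example.com>, <organization_name>, and
# <repository_name> are placeholders.
TOKEN=<access_token>
QUAY=http://<quay-server.example.com>
REPO=<organization_name>/<repository_name>
for uuid in $(curl -s -H "Authorization: Bearer $TOKEN" \
    "$QUAY/api/v1/repository/$REPO/autoprunepolicy/" | jq -r '.policies[].uuid'); do
  curl -s -X DELETE -H "Authorization: Bearer $TOKEN" \
    "$QUAY/api/v1/repository/$REPO/autoprunepolicy/$uuid"
done
Each DELETE call returns the UUID of the removed policy, which matches the behavior shown in the repository examples above.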
[ "FEATURE_AUTO_PRUNE: true", "DEFAULT_NAMESPACE_AUTOPRUNE_POLICY: method: number_of_tags value: 2 1", "DEFAULT_NAMESPACE_AUTOPRUNE_POLICY: method: creation_date value: 5d", "podman tag docker.io/library/busybox <quay-server.example.com>/<quayadmin>/busybox:test", "podman tag docker.io/library/busybox <quay-server.example.com>/<quayadmin>/busybox:test2", "podman tag docker.io/library/busybox <quay-server.example.com>/<quayadmin>/busybox:test3", "podman tag docker.io/library/busybox <quay-server.example.com>/<quayadmin>/busybox:test4", "podman push <quay-server.example.com>/quayadmin/busybox:test", "podman push <quay-server.example.com>/<quayadmin>/busybox:test2", "podman push <quay-server.example.com>/<quayadmin>/busybox:test3", "podman push <quay-server.example.com>/<quayadmin>/busybox:test4", "curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{\"method\": \"number_of_tags\", \"value\": 10}' http://<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/", "curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"method\": \"creation_date\", \"value\": \"7d\"}' http://<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/", "{\"uuid\": \"73d64f05-d587-42d9-af6d-e726a4a80d6e\"}", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"method\": \"creation_date\", \"value\": \"7d\", \"tagPattern\": \"^v*\", \"tagPatternMatches\": <true> 1 }' \"https://<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/\"", "{\"uuid\": \"ebf7448b-93c3-4f14-bf2f-25aa6857c7b0\"}", "curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"method\": \"creation_date\", \"value\": \"4d\", \"tagPattern\": \"^v*\", \"tagPatternMatches\": true }' \"<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/<uuid>\"", "curl -X GET -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/", "{\"policies\": [{\"uuid\": \"ebf7448b-93c3-4f14-bf2f-25aa6857c7b0\", \"method\": \"creation_date\", \"value\": \"4d\", \"tagPattern\": \"^v*\", \"tagPatternMatches\": true}, {\"uuid\": \"da4d0ad7-3c2d-4be8-af63-9c51f9a501bc\", \"method\": \"number_of_tags\", \"value\": 10, \"tagPattern\": null, \"tagPatternMatches\": true}, {\"uuid\": \"17b9fd96-1537-4462-a830-7f53b43f94c2\", \"method\": \"creation_date\", \"value\": \"7d\", \"tagPattern\": \"^v*\", \"tagPatternMatches\": true}]}", "curl -X DELETE -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/73d64f05-d587-42d9-af6d-e726a4a80d6e", "curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{\"method\": \"number_of_tags\", \"value\": 10}' http://<quay-server.example.com>/api/v1/user/autoprunepolicy/", "{\"uuid\": \"8c03f995-ca6f-4928-b98d-d75ed8c14859\"}", "curl -X GET -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/user/autoprunepolicy/", "curl -X GET -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/user/autoprunepolicy/8c03f995-ca6f-4928-b98d-d75ed8c14859", "{\"policies\": [{\"uuid\": \"8c03f995-ca6f-4928-b98d-d75ed8c14859\", \"method\": \"number_of_tags\", \"value\": 10}]}", "curl -X DELETE -H \"Authorization: Bearer 
<access_token>\" http://<quay-server.example.com>/api/v1/user/autoprunepolicy/8c03f995-ca6f-4928-b98d-d75ed8c14859", "{\"uuid\": \"8c03f995-ca6f-4928-b98d-d75ed8c14859\"}", "curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{\"method\": \"number_of_tags\",\"value\": 2}' http://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/", "curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{\"method\": \"creation_date\", \"value\": \"7d\"}' http://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/", "{\"uuid\": \"ce2bdcc0-ced2-4a1a-ac36-78a9c1bed8c7\"}", "curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"method\": \"<creation_date>\", \"value\": \"<7d>\", \"tagPattern\": \"<^test.>*\", \"tagPatternMatches\": <false> 1 }' \"https://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/\"", "{\"uuid\": \"b53d8d3f-2e73-40e7-96ff-736d372cd5ef\"}", "curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"method\": \"number_of_tags\", \"value\": \"5\", \"tagPattern\": \"^test.*\", \"tagPatternMatches\": true }' \"https://quay-server.example.com/api/v1/repository/<namespace>/<repo_name>/autoprunepolicy/<uuid>\"", "curl -X GET -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/", "curl -X GET -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/ce2bdcc0-ced2-4a1a-ac36-78a9c1bed8c7", "{\"policies\": [{\"uuid\": \"ce2bdcc0-ced2-4a1a-ac36-78a9c1bed8c7\", \"method\": \"number_of_tags\", \"value\": 10}]}", "curl -X DELETE -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/ce2bdcc0-ced2-4a1a-ac36-78a9c1bed8c7", "{\"uuid\": \"ce2bdcc0-ced2-4a1a-ac36-78a9c1bed8c7\"}", "curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{\"method\": \"number_of_tags\",\"value\": 2}' https://<quay-server.example.com>/api/v1/repository/<user_account>/<user_repository>/autoprunepolicy/", "{\"uuid\": \"7726f79c-cbc7-490e-98dd-becdc6fefce7\"}", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"method\": \"creation_date\", \"value\": \"7d\", \"tagPattern\": \"^v*\", \"tagPatternMatches\": true }' \"http://<quay-server.example.com>/api/v1/repository/<user_account>/<user_repository>/autoprunepolicy/\"", "{\"uuid\": \"b3797bcd-de72-4b71-9b1e-726dabc971be\"}", "curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"method\": \"creation_date\", \"value\": \"4d\", \"tagPattern\": \"^test.\", \"tagPatternMatches\": true }' \"https://<quay-server.example.com>/api/v1/repository/<user_account>/<user_repository>/autoprunepolicy/<policy_uuid>\"", "curl -X GET -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/repository/<user_account>/<user_repository>/autoprunepolicy/", "curl -X GET -H \"Authorization: Bearer <access_token>\" 
http://<quay-server.example.com>/api/v1/repository/<user_account>/<user_repository>/autoprunepolicy/7726f79c-cbc7-490e-98dd-becdc6fefce7", "{\"uuid\": \"81ee77ec-496a-4a0a-9241-eca49437d15b\", \"method\": \"creation_date\", \"value\": \"7d\", \"tagPattern\": \"^v*\", \"tagPatternMatches\": true}", "curl -X DELETE -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/repository/<user_account>/<user_repository>/autoprunepolicy/<policy_uuid>", "{\"uuid\": \"7726f79c-cbc7-490e-98dd-becdc6fefce7\"}" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/manage_red_hat_quay/red-hat-quay-namespace-auto-pruning-overview
Chapter 165. StrimziPodSetStatus schema reference
Chapter 165. StrimziPodSetStatus schema reference Used in: StrimziPodSet The StrimziPodSetStatus schema defines the following properties, listed as property (property type): description.
conditions (Condition array): List of status conditions.
observedGeneration (integer): The generation of the CRD that was last reconciled by the operator.
pods (integer): Number of pods managed by this StrimziPodSet resource.
readyPods (integer): Number of pods managed by this StrimziPodSet resource that are ready.
currentPods (integer): Number of pods managed by this StrimziPodSet resource that have the current revision.
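You can read these status fields directly from a running cluster. The following command is a minimal sketch, assuming a StrimziPodSet named <cluster_name>-kafka exists in <namespace>; both names are placeholders:
$ kubectl get strimzipodset <cluster_name>-kafka -n <namespace> -o jsonpath='{.status.pods} {.status.readyPods} {.status.currentPods}{"\n"}'
Comparing readyPods and currentPods against pods is a quick way to check whether every pod managed by the StrimziPodSet is ready and running the current revision.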
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-StrimziPodSetStatus-reference
Chapter 1. Camel K release notes
Chapter 1. Camel K release notes Camel K is a lightweight integration framework built from Apache Camel K that runs natively in the cloud on OpenShift. Camel K is specifically designed for serverless and microservice architectures. You can use Camel K to instantly run integration code written in Camel Domain Specific Language (DSL) directly on OpenShift. Using Camel K with OpenShift Serverless and Knative, containers are automatically created only as needed and are autoscaled under load up and down to zero. This removes the overhead of server provisioning and maintenance and enables you to focus instead on application development. Using Camel K with OpenShift Serverless and Knative Eventing, you can manage how components in your system communicate in an event-driven architecture for serverless applications. This provides flexibility and creates efficiencies using a publish/subscribe or event-streaming model with decoupled relationships between event producers and consumers. 1.1. Camel K features The Camel K provides cloud-native integration with the following main features: Knative Serving for autoscaling and scale-to-zero Knative Eventing for event-driven architectures Performance optimizations using Quarkus runtime by default Camel integrations written in Java or YAML DSL Monitoring of integrations using Prometheus in OpenShift Quickstart tutorials Kamelet Catalog for connectors to external systems such as AWS, Jira, and Salesforce Support for Timer and Log Kamelets Support for IBM MQ connector Support for Oracle 19 database 1.2. Supported Configurations For information about Camel K supported configurations, standards, and components, see the following Customer Portal articles: Camel K Supported Configurations Camel K Component Details 1.2.1. Camel K Operator metadata The Camel K includes updated Operator metadata used to install Camel K from the OpenShift OperatorHub. This Operator metadata includes the Operator bundle format for release packaging, which is designed for use with OpenShift Container Platform 4.6 or later. Additional resources Operator bundle format in the OpenShift documentation . 1.3. Important notes Important notes for the Red Hat Integration - Camel K release: Camel K deprecation Red Hat Camel K has been deprecated and the support for Camel K 1.10.x version will continue until June 30, 2025. Camel K v2 will not be released. For more information, see Red Hat Camel K End of Life Notice The javax to jakarta Package Namespace Change Following the move of Java EE to the Eclipse Foundation and the establishment of Jakarta EE, in order to continue to evolve the EE APIs, beginning with Jakarta EE 9 the packages used for all EE APIs have changed from javax.* to jakarta.* . Code snippets in documentation have been updated to use the jakarta.* namespace, but you must take utmost care and review your own applications. Note This change does not affect javax packages that are part of Java SE. When migrating applications to EE 10, you need to: Update any import statements or other source code uses of EE API classes from the javax package to jakarta . Change any EE-specified system properties or other configuration properties whose names begin with javax. to begin with jakarta. . Use the META-INF/services/jakarta.[rest_of_name] name format to identify implementation classes in your applications that use the implement EE interfaces or abstract classes bootstrapped with the java.util.ServiceLoader mechanism. 
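As a starting point for the review described above, you can search a project for javax usages that might need to move to the jakarta namespace. The commands below are a minimal sketch, assuming a standard Maven-style source layout; remember that javax packages that are part of Java SE must not be changed, so every hit needs manual review:
# Find candidate import statements in application sources.
grep -rn --include='*.java' 'import javax\.' src/
# Find javax.-prefixed configuration properties and ServiceLoader descriptors.
grep -rn 'javax\.' src/main/resources/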
Migration tools Source code migration: How to use Red Hat Migration Toolkit for Auto-Migration of an Application to the Jakarta EE 10 Namespace Bytecode transforms: For cases where source code migration is not an option, the open source Eclipse Transformer project provides bytecode transformation tooling to transform existing Java archives from the javax namespace to jakarta . Additional resources Background: Update on Jakarta EE Rights to Java Trademarks Red Hat Customer Portal: Red Hat JBoss EAP Application Migration from Jakarta EE 8 to EE 10 Jakarta EE: Javax to Jakarta Namespace Ecosystem Progress Removing support of metering labels from Red Hat Integration - Camel K Metering labels for Camel K Operator and pods are no longer supported. Security update for Red Hat Integration - Camel K For details on how to apply this update, see How do I apply package updates to my RHEL system? Note You must apply all the previously release Errata upgrades to your system before applying this security update. Support to run Camel K on ROSA Camel K is now supported to run on Red Hat OpenShift Service on AWS (ROSA). Support for IBM MQ source connector in Camel K IBM MQ source connector kamelet is added to latest Camel K. Support for Oracle 19 Oracle 19 is now supported in Camel K. Refer Supported configurations page for more information. Using Camel K CLI commands on Windows machine When using kamel cli commands on Windows machine, the path in the resource option in the command must use linux format. For example: Red Hat Integration - Camel K Operator image size is increased Since Red Hat Integration - Camel K 1.10.9.redhat-00005, the size of the Camel K Operator image is doubled. Accepted Camel case notations in YAML DSL Since Red Hat Integration - Camel K 1.10.9.redhat-00005, the YAML DSL will accept camel case notation (i.e, setBody ) as well as snake case (i.e set-body ). Please note that there are some differences in the syntax as schema is subject to changes within Camel versions. 1.4. Supported Camel Quarkus extensions This section lists the Camel Quarkus extensions that are supported for this release of Camel K (only when used inside a Camel K application). Note These Camel Quarkus extensions are supported only when used inside a Camel K application. These Camel Quarkus extensions are not supported for use in standalone mode (without Camel K). 1.4.1. Supported Camel Quarkus connector extensions The following table shows the Camel Quarkus connector extensions that are supported for this release of Camel K (only when used inside a Camel K application). Name Package AWS 2 Kinesis camel-quarkus-aws2-kinesis AWS 2 Lambda camel-quarkus-aws2-lambda AWS 2 S3 Storage Service camel-quarkus-aws2-s3 AWS 2 Simple Notification System (SNS) camel-quarkus-aws2-sns AWS 2 Simple Queue Service (SQS) camel-quarkus-aws2-sqs Cassandra CQL camel-quarkus-cassandraql Core camel-quarkus-core Direct camel-quarkus-direct File camel-quarkus-file FTP camel-quarkus-ftp FTPS camel-quarkus-ftps HTTP camel-quarkus-http JMS camel-quarkus-jms Kafka camel-quarkus-kafka Kamelet camel-quarkus-kamelet Master camel-quarkus-master Metrics camel-quarkus-metrics MongoDB camel-quarkus-mongodb Salesforce camel-quarkus-salesforce SFTP camel-quarkus-sftp SQL camel-quarkus-sql Timer camel-quarkus-timer 1.4.2. Supported Camel Quarkus dataformat extensions The following table shows the Camel Quarkus dataformat extensions that are supported for this release of Camel K (only when used inside a Camel K application). 
Name Package Avro camel-quarkus-avro Avro Jackson camel-quarkus-jackson-avro Bindy (for CSV) camel-qaurkus-bindy Jackson camel-quarkus-jackson JSON Gson camel-quarkus-gson 1.4.3. Supported Camel Quarkus language extensions In this release, Camel K supports the following Camel Quarkus language extensions (for use in Camel expressions and predicates): Constant ExchangeProperty File Header Ref Simple Tokenize JsonPath 1.4.4. Supported Camel K traits In this release, Camel K supports the following Camel K traits. Builder trait Camel trait Container trait Dependencies trait Deployer trait Deployment trait Environment trait Error Handler trait Jvm trait Kamelets trait Owner trait Platform trait Prometheus trait Pull Secret trait Quarkus trait Route trait Service trait 1.5. Supported Kamelets The following table lists the kamelets that are provided as OpenShift resources when you install the Camel K operator. For details about these kamelets, go to: https://github.com/openshift-integration/kamelet-catalog/tree/kamelet-catalog-1.8 For information about how to use kamelets to connect applications and services, see https://access.redhat.com/documentation/en-us/red_hat_integration/2022.q3/html-single/integrating_applications_with_kamelets . Important Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . Table 1.1. Kamelets provided with the Camel K operator Kamelet File name Type (Sink, Source, Action) Ceph sink ceph-sink.kamelet.yaml Sink Ceph Source ceph-source.kamelet.yaml Source Jira Add Comment sink jira-add-comment-sink.kamelet.yaml Sink Jira Add Issue sink jira-add-issue-sink.kamelet.yaml Sink Jira Transition Issue sink jira-transition-issue-sink.kamelet.yaml Sink Jira Update Issue sink jira-update-issue-sink.kamelet.yaml Sink Avro Deserialize action avro-deserialize-action.kamelet.yaml Action (data conversion) Avro Serialize action avro-serialize-action.kamelet.yaml Action (data conversion) AWS DynamoDB sink aws-ddb-sink.kamelet.yaml Sink AWS DynamoDB Streams source aws-ddb-streams-source.kamelet.yaml Source AWS Redshift sink aws-redshift-sink.kamelet.yaml Sink AWS Kinesis Firehose Sink aws-kinesis-firehose-sink.kamelet.yaml Sink AWS 2 Kinesis sink aws-kinesis-sink.kamelet.yaml Sink AWS 2 Kinesis source aws-kinesis-source.kamelet.yaml Source AWS 2 Lambda sink aws-lambda-sink.kamelet.yaml Sink AWS 2 Simple Notification System sink aws-sns-sink.kamelet.yaml Sink AWS 2 Simple Queue Service sink aws-sqs-sink.kamelet.yaml Sink AWS 2 Simple Queue Service source aws-sqs-source.kamelet.yaml Source AWS 2 Simple Queue Service FIFO sink aws-sqs-fifo-sink.kamelet.yaml Sink AWS 2 S3 sink aws-s3-sink.kamelet.yaml Sink AWS 2 S3 source aws-s3-source.kamelet.yaml Source AWS 2 S3 Streaming Upload sink aws-s3-streaming-upload-sink.kamelet.yaml Sink Azure Storage Blob Source (Technology Preview) azure-storage-blob-source.kamelet.yaml Source Azure Storage Blob Sink (Technology Preview) azure-storage-blob-sink.kamelet.yaml Sink Azure Storage Queue Source (Technology Preview) azure-storage-queue-source.kamelet.yaml Source Azure Storage Queue Sink (Technology Preview) 
azure-storage-queue-sink.kamelet.yaml Sink Cassandra sink cassandra-sink.kamelet.yaml Sink Cassandra source cassandra-source.kamelet.yaml Source Extract Field action extract-field-action.kamelet.yaml Action FTP sink ftp-sink.kamelet.yaml Sink FTP source ftp-source.kamelet.yaml Source FTPS sink ftps-sink.kamelet.yaml Sink FTPS source ftps-source.kamelet.yaml Source Has Header Key Filter action has-header-filter-action.kamelet.yaml Action (data transformation) Hoist Field action hoist-field-action.kamelet.yaml Action HTTP sink http-sink.kamelet.yaml Sink Insert Field action insert-field-action.kamelet.yaml Action (data transformation) Insert Header action insert-header-action.kamelet.yaml Action (data transformation) Is Tombstone Filter action is-tombstone-filter-action.kamelet.yaml Action (data transformation) Jira source jira-source.kamelet.yaml Source JMS sink jms-amqp-10-sink.kamelet.yaml Sink JMS source jms-amqp-10-source.kamelet.yaml Source JMS IBM MQ sink jms-ibm-mq-sink.kamelet.yaml Sink JMS IBM MQ source jms-ibm-mq-source.kamelet.yaml Source JSON Deserialize action json-deserialize-action.kamelet.yaml Action (data conversion) JSON Serialize action json-serialize-action.kamelet.yaml Action (data conversion) Kafka sink kafka-sink.kamelet.yaml Sink Kafka source kafka-source.kamelet.yaml Source Kafka Topic Name Filter action topic-name-matches-filter-action.kamelet.yaml Action (data transformation) Log sink (for development and testing purposes) log-sink.kamelet.yaml Sink MariaDB sink mariadb-sink.kamelet.yaml Sink Mask Fields action mask-field-action.kamelet.yaml Action (data transformation) Message TimeStamp Router action message-timestamp-router-action.kamelet.yaml Action (router) MongoDB sink mongodb-sink.kamelet.yaml Sink MongoDB source mongodb-source.kamelet.yaml Source MySQL sink mysql-sink.kamelet.yaml Sink PostgreSQL sink postgresql-sink.kamelet.yaml Sink Predicate filter action predicate-filter-action.kamelet.yaml Action (router/filter) Protobuf Deserialize action protobuf-deserialize-action.kamelet.yaml Action (data conversion) Protobuf Serialize action protobuf-serialize-action.kamelet.yaml Action (data conversion) Regex Router action regex-router-action.kamelet.yaml Action (router) Replace Field action replace-field-action.kamelet.yaml Action Salesforce Create salesforce-create-sink.kamelet.yaml Sink Salesforce Delete salesforce-delete-sink.kamelet.yaml Sink Salesforce Update salesforce-update-sink.kamelet.yaml Sink SFTP sink sftp-sink.kamelet.yaml Sink SFTP source sftp-source.kamelet.yaml Source Slack source slack-source.kamelet.yaml Source SQL Server Database sink sqlserver-sink.kamelet.yaml Sink Telegram source telegram-source.kamelet.yaml Source Throttle action throttle-action.kamelet.yaml Action Timer source (for development and testing purposes) timer-source.kamelet.yaml Source TimeStamp Router action timestamp-router-action.kamelet.yaml Action (router) Value to Key action value-to-key-action.kamelet.yaml Action (data transformation) 1.6. Camel K known issues The following known issues apply to the Camel K: ENTESB-15306 - CRD conflicts between Camel K and Fuse Online If an older version of Camel K has ever been installed in the same OpenShift cluster, installing Camel K from the OperatorHub fails due to conflicts with custom resource definitions. For example, this includes older versions of Camel K previously available in Fuse Online. 
For a workaround, you can install Camel K in a different OpenShift cluster, or enter the following command before installing Camel K: ENTESB-15858 - Added ability to package and run Camel integrations locally or as container images Packaging and running Camel integrations locally or as container images is not currently included in the Camel K and has community-only support. For more details, see the Apache Camel K community . ENTESB-16477 - Unable to download jira client dependency with productized build When using Camel K operator, the integration is unable to find dependencies for jira client. The work around is to add the atlassian repo manually. ENTESB-17033 - Camel-K ElasticsearchComponent options ignored When configuring the Elasticsearch component, the Camel K ElasticsearchComponent options are ignored. The work around is to add getContext().setAutowiredEnabled(false) when using the Elasticsearch component. ENTESB-17061 - Can't run mongo-db-source kamelet route with non-admin user - Failed to start route mongodb-source-1 because of null It is not possible to run mongo-db-source kamelet route with non-admin user credentials. Some part of the component require admin credentials hence it is not possible run the route as a non-admin user. 1.7. Camel K Fixed Issues The following sections list the issues that have been fixed in Red Hat Integration - Camel K: 1.7.1. Bugs resolved in Camel K The following table lists the resolved bugs in Camel K 1.10.9.redhat-00005. Table 1.2. Camel K 1.10.9.redhat-00005 Resolved Bugs Issue Description CMLK-2420 CVE-2024-47561 org.apache.avro/avro: Schema parsing may trigger Remote Code Execution (RCE) CMLK-1882 CVE-2024-28752 cxf-core: Apache CXF SSRF Vulnerability using the Aegis databinding CMLK-1804 CVE-2024-23114 org.apache.camel-camel-cassandraql: : Apache Camel-CassandraQL: Unsafe Deserialization from CassandraAggregationRepository The following table lists the resolved bugs in Camel K 1.10.7.redhat-00013: Table 1.3. Camel K 1.10.7.redhat-00013 Resolved Bugs Issue Description CMLK-1865 CVE-2024-1597 pgjdbc: PostgreSQL JDBC Driver allows attacker to inject SQL if using PreferQueryMode=SIMPLE [rhint-camel-k-1] The following table lists the resolved bugs in Camel K 1.10.5.redhat-00016: Table 1.4. Camel K 1.10.5.redhat-00016 Resolved Bugs Issue Description CMLK-954 CVE-2023-34462 netty: SniHandler 16MB allocation leads to OOM [rhint-camel-k-1] CMLK-911 CVE-2023-34455 snappy-java: Unchecked chunk length leads to DoS [rhint-camel-k-1] CMLK-1386 CVE-2023-5072 JSON-java: parser confusion leads to OOM [rhint-camel-k-1.10] The following table lists the resolved bugs in Camel K 1.10.4.redhat-00007. Table 1.5. Camel K 1.10.4.redhat-00007 Resolved Bugs Issue Description CMLK-1313 [Major Incident] CVE-2023-44487 undertow: HTTP/2: Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack) [rhint-camel-k-1.10] CMLK-1312 [Major Incident] CVE-2023-44487 netty-codec-http2: HTTP/2: Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack) [rhint-camel-k-1.10] CMLK-243 Camel-K uses io.quarkus.quarkus-maven-plugin but we should use com.quarkus.redhat.platform:quarkus-maven-plugin CMLK-1314 Upgrade x/net to version 0.17.0 Camel K 1.10.3.redhat-00001 release, addresses underlying base images only, product is not changed. The following table lists the resolved bugs in Camel K 1.10.2.redhat-00002: Table 1.6. 
Camel K 1.10.2.redhat-00002 Resolved Bugs Issue Description CMLK-1238 [Major Incident] CVE-2023-4853 quarkus-vertx-http: quarkus: HTTP security policy bypass [rhint-camel-k-1.10]
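To compare the resolved-issue tables above with what is running in your cluster, you can check the installed Camel K Operator version. The command below is a minimal sketch, assuming the Operator was installed through OperatorHub and that you are logged in with oc:
$ oc get csv --all-namespaces | grep -i camel-k
The NAME and VERSION columns of the matching ClusterServiceVersion indicate which Camel K release, and therefore which set of fixes, is installed.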
[ "//Windows path kamel run file.groovy --dev --resource file:C:\\user\\folder\\tempfile@/tmp/file.txt //Must be converted to kamel run file.groovy --dev --resource file:C:/user/folder/tempfile@/tmp/file.txt", "oc get crds -l app=camel-k -o json | oc delete -f -", "apiVersion: camel.apache.org/v1 kind: IntegrationPlatform metadata: labels: app: camel-k name: camel-k spec: configuration: - type: repository value: <atlassian repo here>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/release_notes_for_red_hat_build_of_apache_camel_k/camel-k-relnotes_camelk
Examples
Examples Red Hat Service Interconnect 1.8 Service network tutorials with the CLI and YAML
[ "sudo dnf install skupper-cli", "export KUBECONFIG=~/.kube/config-west Enter your provider-specific login command create namespace west config set-context --current --namespace west", "export KUBECONFIG=~/.kube/config-east Enter your provider-specific login command create namespace east config set-context --current --namespace east", "create deployment frontend --image quay.io/skupper/hello-world-frontend", "create deployment backend --image quay.io/skupper/hello-world-backend --replicas 3", "skupper init skupper status", "skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'west'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"west\". It is not connected to any other sites. It has no exposed services.", "skupper init skupper status", "skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'east'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"east\". It is not connected to any other sites. It has no exposed services.", "skupper token create ~/secret.token", "skupper token create ~/secret.token Token written to ~/secret.token", "skupper link create ~/secret.token", "skupper link create ~/secret.token Site configured to link to https://10.105.193.154:8081/ed9c37f6-d78a-11ec-a8c7-04421a4c5042 (name=link1) Check the status of the link using 'skupper link status'.", "skupper expose deployment/backend --port 8080", "skupper expose deployment/backend --port 8080 deployment backend exposed as backend", "port-forward deployment/frontend 8080:8080", "sudo dnf install skupper-cli", "export KUBECONFIG=~/.kube/config-public Enter your provider-specific login command create namespace public config set-context --current --namespace public", "export KUBECONFIG=~/.kube/config-private Enter your provider-specific login command create namespace private config set-context --current --namespace private", "apply -f server", "kubectl apply -f server deployment.apps/broker created", "skupper init skupper status", "skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'public'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"public\". It is not connected to any other sites. It has no exposed services.", "skupper init skupper status", "skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'private'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"private\". It is not connected to any other sites. 
It has no exposed services.", "skupper token create ~/secret.token", "skupper token create ~/secret.token Token written to ~/secret.token", "skupper link create ~/secret.token", "skupper link create ~/secret.token Site configured to link to https://10.105.193.154:8081/ed9c37f6-d78a-11ec-a8c7-04421a4c5042 (name=link1) Check the status of the link using 'skupper link status'.", "skupper expose deployment/broker --port 5672", "skupper expose deployment/broker --port 5672 deployment broker exposed as broker", "get service/broker", "kubectl get service/broker NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE broker ClusterIP 10.100.58.95 <none> 5672/TCP 2s", "run client --attach --rm --restart Never --image quay.io/skupper/activemq-example-client --env SERVER=broker", "kubectl run client --attach --rm --restart Never --image quay.io/skupper/activemq-example-client --env SERVER=broker ____ __ _____ ___ __ ____ ____ --/ __ \\/ / / / _ | / _ \\/ //_/ / / / / -/ /_/ / /_/ / __ |/ , / ,< / // /\\ --\\___\\_\\____/_/ |_/_/|_/_/|_|\\____/_/ 2022-05-27 11:19:07,149 INFO [io.sma.rea.mes.amqp] (main) SRMSG16201: AMQP broker configured to broker:5672 for channel incoming-messages 2022-05-27 11:19:07,170 INFO [io.sma.rea.mes.amqp] (main) SRMSG16201: AMQP broker configured to broker:5672 for channel outgoing-messages 2022-05-27 11:19:07,198 INFO [io.sma.rea.mes.amqp] (main) SRMSG16212: Establishing connection with AMQP broker 2022-05-27 11:19:07,212 INFO [io.sma.rea.mes.amqp] (main) SRMSG16212: Establishing connection with AMQP broker 2022-05-27 11:19:07,215 INFO [io.quarkus] (main) client 1.0.0-SNAPSHOT on JVM (powered by Quarkus 2.9.2.Final) started in 0.397s. 2022-05-27 11:19:07,215 INFO [io.quarkus] (main) Profile prod activated. 2022-05-27 11:19:07,215 INFO [io.quarkus] (main) Installed features: [cdi, smallrye-context-propagation, smallrye-reactive-messaging, smallrye-reactive-messaging-amqp, vertx] Sent message 1 Sent message 2 Sent message 3 Sent message 4 Sent message 5 Sent message 6 Sent message 7 Sent message 8 Sent message 9 Sent message 10 2022-05-27 11:19:07,434 INFO [io.sma.rea.mes.amqp] (vert.x-eventloop-thread-0) SRMSG16213: Connection with AMQP broker established 2022-05-27 11:19:07,442 INFO [io.sma.rea.mes.amqp] (vert.x-eventloop-thread-0) SRMSG16213: Connection with AMQP broker established 2022-05-27 11:19:07,468 INFO [io.sma.rea.mes.amqp] (vert.x-eventloop-thread-0) SRMSG16203: AMQP Receiver listening address notifications Received message 1 Received message 2 Received message 3 Received message 4 Received message 5 Received message 6 Received message 7 Received message 8 Received message 9 Received message 10 Result: OK", "kamel install", "export KUBECONFIG=~/.kube/config-private1", "export KUBECONFIG=~/.kube/config-public1", "export KUBECONFIG=~/.kube/config-public2", "create namespace private1 config set-context --current --namespace private1", "create namespace public1 config set-context --current --namespace public1", "create namespace public2 config set-context --current --namespace public2", "skupper init", "skupper init", "skupper init", "skupper status", "skupper status", "skupper status", "Skupper is enabled for namespace \"<namespace>\" in interior mode. It is not connected to any other sites. It has no exposed services. 
The site console url is: http://<address>:8080 The credentials for internal console-auth mode are held in secret: 'skupper-console-users'", "skupper token create ~/public1.token --uses 2", "skupper link create ~/public1.token skupper link status --wait 30 skupper token create ~/public2.token", "skupper link create ~/public1.token skupper link create ~/public2.token skupper link status --wait 30", "create -f src/main/resources/database/postgres-svc.yaml skupper expose deployment postgres --address postgres --port 5432 -n private1", "run pg-shell -i --tty --image quay.io/skupper/simple-pg --env=\"PGUSER=postgresadmin\" --env=\"PGPASSWORD=admin123\" --env=\"PGHOST=USD(kubectl get service postgres -o=jsonpath='{.spec.clusterIP}')\" -- bash psql --dbname=postgresdb CREATE EXTENSION IF NOT EXISTS \"uuid-ossp\"; CREATE TABLE tw_feedback (id uuid DEFAULT uuid_generatev4 (),sigthning VARCHAR(255),created TIMESTAMP default CURRENTTIMESTAMP,PRIMARY KEY(id));", "src/main/resources/scripts/setUpPublic1Cluster.sh", "src/main/resources/scripts/setUpPublic2Cluster.sh", "attach pg-shell -c pg-shell -i -t psql --dbname=postgresdb SELECT * FROM twfeedback;", "id | sigthning | created --------------------------------------+-----------------+---------------------------- 95655229-747a-4787-8133-923ef0a1b2ca | Testing skupper | 2022-03-10 19:35:08.412542", "kamel logs twitter-route", "\"[1] 2022-03-10 19:35:08,397 INFO [postgresql-sink-1] (Camel (camel-1) thread #0 - twitter-search://skupper) Testing skupper\"", "sudo dnf install skupper-cli", "export KUBECONFIG=~/.kube/config-public Enter your provider-specific login command create namespace public config set-context --current --namespace public", "export KUBECONFIG=~/.kube/config-private Enter your provider-specific login command create namespace private config set-context --current --namespace private", "apply -f server", "kubectl apply -f server deployment.apps/ftp-server created", "skupper init skupper status", "skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'public'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"public\". It is not connected to any other sites. It has no exposed services.", "skupper init skupper status", "skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'private'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"private\". It is not connected to any other sites. 
It has no exposed services.", "skupper token create ~/secret.token", "skupper token create ~/secret.token Token written to ~/secret.token", "skupper link create ~/secret.token", "skupper link create ~/secret.token Site configured to link to https://10.105.193.154:8081/ed9c37f6-d78a-11ec-a8c7-04421a4c5042 (name=link1) Check the status of the link using 'skupper link status'.", "skupper expose deployment/ftp-server --port 21100 --port 21", "skupper expose deployment/ftp-server --port 21100 --port 21 deployment ftp-server exposed as ftp-server", "echo \"Hello!\" | kubectl run ftp-client --stdin --rm --image=docker.io/curlimages/curl --restart=Never -- -s -T - ftp://example:example@ftp-server/greeting run ftp-client --attach --rm --image=docker.io/curlimages/curl --restart=Never -- -s ftp://example:example@ftp-server/greeting", "echo \"Hello!\" | kubectl run ftp-client --stdin --rm --image=docker.io/curlimages/curl --restart=Never -- -s -T - ftp://example:example@ftp-server/greeting pod \"ftp-client\" deleted kubectl run ftp-client --attach --rm --image=docker.io/curlimages/curl --restart=Never -- -s ftp://example:example@ftp-server/greeting Hello! pod \"ftp-client\" deleted", "sudo dnf install skupper-cli", "export KUBECONFIG=~/.kube/config-public1", "export KUBECONFIG=~/.kube/config-public2", "export KUBECONFIG=~/.kube/config-private1", "create namespace public1 config set-context --current --namespace public1", "create namespace public2 config set-context --current --namespace public2", "create namespace private1 config set-context --current --namespace private1", "skupper init --enable-console --enable-flow-collector", "skupper init", "skupper init", "skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace '<namespace>'. Use 'skupper status' to get more information.", "skupper status", "skupper status", "skupper status", "Skupper is enabled for namespace \"<namespace>\" in interior mode. It is connected to 1 other site. It has 1 exposed service. 
The site console url is: <console-url> The credentials for internal console-auth mode are held in secret: 'skupper-console-users'", "skupper token create ~/private1-to-public1-token.yaml skupper token create ~/public2-to-public1-token.yaml", "skupper token create ~/private1-to-public2-token.yaml skupper link create ~/public2-to-public1-token.yaml skupper link status --wait 60", "skupper link create ~/private1-to-public1-token.yaml skupper link create ~/private1-to-public2-token.yaml skupper link status --wait 60", "apply -f deployment-iperf3-a.yaml", "apply -f deployment-iperf3-b.yaml", "apply -f deployment-iperf3-c.yaml", "skupper expose deployment/iperf3-server-a --port 5201", "skupper expose deployment/iperf3-server-b --port 5201", "skupper expose deployment/iperf3-server-c --port 5201", "exec USD(kubectl get pod -l application=iperf3-server-a -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-a exec USD(kubectl get pod -l application=iperf3-server-a -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-b exec USD(kubectl get pod -l application=iperf3-server-a -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-c", "exec USD(kubectl get pod -l application=iperf3-server-b -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-a exec USD(kubectl get pod -l application=iperf3-server-b -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-b exec USD(kubectl get pod -l application=iperf3-server-b -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-c", "exec USD(kubectl get pod -l application=iperf3-server-c -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-a exec USD(kubectl get pod -l application=iperf3-server-c -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-b exec USD(kubectl get pod -l application=iperf3-server-c -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-c", "sudo dnf install skupper-cli", "export KUBECONFIG=~/.kube/config-public Enter your provider-specific login command create namespace public config set-context --current --namespace public", "export KUBECONFIG=~/.kube/config-private Enter your provider-specific login command create namespace private config set-context --current --namespace private", "create -f server/strimzi.yaml apply -f server/cluster1.yaml wait --for condition=ready --timeout 900s kafka/cluster1", "kubectl create -f server/strimzi.yaml customresourcedefinition.apiextensions.k8s.io/kafkas.kafka.strimzi.io created rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-entity-operator-delegation created clusterrolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator created rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-topic-operator-delegation created customresourcedefinition.apiextensions.k8s.io/kafkausers.kafka.strimzi.io created customresourcedefinition.apiextensions.k8s.io/kafkarebalances.kafka.strimzi.io created deployment.apps/strimzi-cluster-operator created customresourcedefinition.apiextensions.k8s.io/kafkamirrormaker2s.kafka.strimzi.io created clusterrole.rbac.authorization.k8s.io/strimzi-entity-operator created clusterrole.rbac.authorization.k8s.io/strimzi-cluster-operator-global created clusterrolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-kafka-broker-delegation created rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator created clusterrole.rbac.authorization.k8s.io/strimzi-cluster-operator-namespaced created 
clusterrole.rbac.authorization.k8s.io/strimzi-topic-operator created clusterrolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-kafka-client-delegation created clusterrole.rbac.authorization.k8s.io/strimzi-kafka-client created serviceaccount/strimzi-cluster-operator created clusterrole.rbac.authorization.k8s.io/strimzi-kafka-broker created customresourcedefinition.apiextensions.k8s.io/kafkatopics.kafka.strimzi.io created customresourcedefinition.apiextensions.k8s.io/kafkabridges.kafka.strimzi.io created customresourcedefinition.apiextensions.k8s.io/kafkaconnectors.kafka.strimzi.io created customresourcedefinition.apiextensions.k8s.io/kafkaconnects2is.kafka.strimzi.io created customresourcedefinition.apiextensions.k8s.io/kafkaconnects.kafka.strimzi.io created customresourcedefinition.apiextensions.k8s.io/kafkamirrormakers.kafka.strimzi.io created configmap/strimzi-cluster-operator created kubectl apply -f server/cluster1.yaml kafka.kafka.strimzi.io/cluster1 created kafkatopic.kafka.strimzi.io/topic1 created kubectl wait --for condition=ready --timeout 900s kafka/cluster1 kafka.kafka.strimzi.io/cluster1 condition met", "spec: kafka: listeners: - name: plain port: 9092 type: internal tls: false configuration: brokers: - broker: 0 advertisedHost: cluster1-kafka-0.cluster1-kafka-brokers", "skupper init skupper status", "skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'public'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"public\". It is not connected to any other sites. It has no exposed services.", "skupper init skupper status", "skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'private'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"private\". It is not connected to any other sites. It has no exposed services.", "skupper token create ~/secret.token", "skupper token create ~/secret.token Token written to ~/secret.token", "skupper link create ~/secret.token", "skupper link create ~/secret.token Site configured to link to https://10.105.193.154:8081/ed9c37f6-d78a-11ec-a8c7-04421a4c5042 (name=link1) Check the status of the link using 'skupper link status'.", "skupper expose statefulset/cluster1-kafka --headless --port 9092", "skupper expose statefulset/cluster1-kafka --headless --port 9092 statefulset cluster1-kafka exposed as cluster1-kafka-brokers", "get service/cluster1-kafka-brokers", "kubectl get service/cluster1-kafka-brokers NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE cluster1-kafka-brokers ClusterIP None <none> 9092/TCP 2s", "run client --attach --rm --restart Never --image quay.io/skupper/kafka-example-client --env BOOTSTRAPSERVERS=cluster1-kafka-brokers:9092", "kubectl run client --attach --rm --restart Never --image quay.io/skupper/kafka-example-client --env BOOTSTRAPSERVERS=cluster1-kafka-brokers:9092 [...] 
Received message 1 Received message 2 Received message 3 Received message 4 Received message 5 Received message 6 Received message 7 Received message 8 Received message 9 Received message 10 Result: OK [...]", "sudo dnf install skupper-cli", "export KUBECONFIG=~/.kube/config-public Enter your provider-specific login command create namespace public config set-context --current --namespace public", "export KUBECONFIG=~/.kube/config-private Enter your provider-specific login command create namespace private config set-context --current --namespace private", "export SKUPPERPLATFORM=podman network create skupper systemctl --user enable --now podman.socket", "system service --time=0 unix://USDXDGRUNTIMEDIR/podman/podman.sock &", "apply -f frontend/kubernetes.yaml", "apply -f payment-processor/kubernetes.yaml", "run --name database-target --network skupper --detach --rm -p 5432:5432 quay.io/skupper/patient-portal-database", "skupper init", "skupper init --ingress none", "skupper init --ingress none", "skupper token create --uses 2 ~/secret.token", "skupper link create ~/secret.token", "skupper link create ~/secret.token", "skupper expose deployment/payment-processor --port 8080", "skupper service create database 5432 skupper service bind database host database-target --target-port 5432", "skupper service create database 5432", "expose deployment/frontend --port 8080 --type LoadBalancer get service/frontend curl http://<external-ip>:8080/api/health", "kubectl expose deployment/frontend --port 8080 --type LoadBalancer service/frontend exposed kubectl get service/frontend NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE frontend LoadBalancer 10.103.232.28 <external-ip> 8080:30407/TCP 15s curl http://<external-ip>:8080/api/health OK", "sudo dnf install skupper-cli", "export KUBECONFIG=~/.kube/config-public Enter your provider-specific login command create namespace public config set-context --current --namespace public", "export KUBECONFIG=~/.kube/config-private Enter your provider-specific login command create namespace private config set-context --current --namespace private", "create -f kafka-cluster/strimzi.yaml apply -f kafka-cluster/cluster1.yaml wait --for condition=ready --timeout 900s kafka/cluster1", "spec: kafka: listeners: - name: plain port: 9092 type: internal tls: false configuration: brokers: - broker: 0 advertisedHost: cluster1-kafka-0.cluster1-kafka-brokers", "apply -f order-processor/kubernetes.yaml apply -f market-data/kubernetes.yaml apply -f frontend/kubernetes.yaml", "skupper init skupper status", "skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'public'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"public\". It is not connected to any other sites. It has no exposed services.", "skupper init skupper status", "skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'private'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"private\". It is not connected to any other sites. 
It has no exposed services.", "skupper token create ~/secret.token", "skupper token create ~/secret.token Token written to ~/secret.token", "skupper link create ~/secret.token", "skupper link create ~/secret.token Site configured to link to https://10.105.193.154:8081/ed9c37f6-d78a-11ec-a8c7-04421a4c5042 (name=link1) Check the status of the link using 'skupper link status'.", "skupper expose statefulset/cluster1-kafka --headless --port 9092", "get service/cluster1-kafka-brokers", "expose deployment/frontend --port 8080 --type LoadBalancer get service/frontend curl http://<external-ip>:8080/api/health", "kubectl expose deployment/frontend --port 8080 --type LoadBalancer service/frontend exposed kubectl get service/frontend NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE frontend LoadBalancer 10.103.232.28 <external-ip> 8080:30407/TCP 15s curl http://<external-ip>:8080/api/health OK" ]
https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html-single/examples/client
Chapter 1. Preparing to install on IBM Power
Chapter 1. Preparing to install on IBM Power 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Choosing a method to install OpenShift Container Platform on IBM Power You can install a cluster on IBM Power infrastructure that you provision, by using one of the following methods: Installing a cluster on IBM Power : You can install OpenShift Container Platform on IBM Power infrastructure that you provision. Installing a cluster on IBM Power in a restricted network : You can install OpenShift Container Platform on IBM Power infrastructure that you provision in a restricted or disconnected network, by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_ibm_power/preparing-to-install-on-ibm-power
Argo CD applications
Argo CD applications Red Hat OpenShift GitOps 1.12 Creating and deploying applications on the OpenShift cluster by using the Argo CD dashboard, oc tool, or GitOps CLI Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.12/html/argo_cd_applications/index
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/configuring_your_red_hat_build_of_quarkus_applications_by_using_a_properties_file/making-open-source-more-inclusive
Migrating from version 3 to 4
Migrating from version 3 to 4 OpenShift Container Platform 4.12 Migrating to OpenShift Container Platform 4 Red Hat OpenShift Documentation Team
[ "oc expose svc <app1-svc> --hostname <app1.apps.source.example.com> -n <app1-namespace>", "podman login registry.redhat.io", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./", "oc run test --image registry.redhat.io/ubi8 --command sleep infinity", "oc create -f operator.yml", "namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists", "oc create -f controller.yml", "oc get pods -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] 
rsync_endpoint_type: [NodePort|ClusterIP|Route]", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "BUCKET=<your_bucket>", "REGION=<your_region>", "aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1", "aws iam create-user --user-name velero 1", "cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF", "aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json", "aws iam create-access-key --user-name velero", "{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }", "gcloud auth login", "BUCKET=<bucket> 1", "gsutil mb gs://USDBUCKET/", "PROJECT_ID=USD(gcloud config get-value project)", "gcloud iam service-accounts create velero --display-name \"Velero service account\"", "gcloud iam service-accounts list", "SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')", "ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )", "gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"", "gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server", "gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}", "gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL", "az login", "AZURE_RESOURCE_GROUP=Velero_Backups", "az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1", "AZURE_STORAGE_ACCOUNT_ID=\"veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')\"", "az storage account create --name USDAZURE_STORAGE_ACCOUNT_ID --resource-group USDAZURE_RESOURCE_GROUP --sku Standard_GRS --encryption-services blob --https-only true --kind BlobStorage --access-tier Hot", "BLOB_CONTAINER=velero", "az storage container create -n USDBLOB_CONTAINER --public-access off --account-name USDAZURE_STORAGE_ACCOUNT_ID", 
"AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv`", "AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name \"velero\" --role \"Contributor\" --query 'password' -o tsv --scopes /subscriptions/USDAZURE_SUBSCRIPTION_ID/resourceGroups/USDAZURE_RESOURCE_GROUP`", "AZURE_CLIENT_ID=`az ad app credential list --id <your_app_id>`", "cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF", "oc delete migrationcontroller <migration_controller>", "oc delete USD(oc get crds -o name | grep 'migration.openshift.io')", "oc delete USD(oc get crds -o name | grep 'velero')", "oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')", "oc delete clusterrole migration-operator", "oc delete USD(oc get clusterroles -o name | grep 'velero')", "oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')", "oc delete clusterrolebindings migration-operator", "oc delete USD(oc get clusterrolebindings -o name | grep 'velero')", "podman login registry.redhat.io", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./", "grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc", "registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator", "containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 env: - name: REGISTRY value: <registry.apps.example.com> 3", "oc create -f operator.yml", "namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists", "oc create -f controller.yml", "oc get pods -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] 
stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "oc delete migrationcontroller <migration_controller>", "oc delete USD(oc get crds -o name | grep 'migration.openshift.io')", "oc delete USD(oc get crds -o name | grep 'velero')", "oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')", "oc delete clusterrole migration-operator", "oc delete USD(oc get clusterroles -o name | grep 'velero')", "oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')", "oc delete clusterrolebindings migration-operator", "oc delete USD(oc get clusterrolebindings -o name | grep 'velero')", "podman login registry.redhat.io", "podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.8):/operator.yml ./", "oc replace --force -f operator.yml", "oc scale -n openshift-migration --replicas=0 deployment/migration-operator", "oc scale -n openshift-migration --replicas=1 deployment/migration-operator", "oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F \":\" '{ print USDNF }'", "podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.8):/controller.yml ./", "oc create -f controller.yml", "oc sa get-token migration-controller -n openshift-migration", "oc get pods -n openshift-migration", "oc get migplan <migplan> -o yaml -n openshift-migration", "spec: indirectImageMigration: true indirectVolumeMigration: true", "oc replace -f migplan.yaml -n openshift-migration", "oc get migplan <migplan> -o yaml -n openshift-migration", "oc get pv", "oc get pods --all-namespaces | egrep -v 'Running | Completed'", "oc get pods --all-namespaces --field-selector=status.phase=Running -o json | jq '.items[]|select(any( .status.containerStatuses[]; .restartCount > 3))|.metadata.name'", "oc get csr -A | grep pending -i", "oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'", "oc create token migration-controller -n openshift-migration", 
"eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ", "oc create route passthrough --service=docker-registry --port=5000 -n default", "oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry", "az group list", "{ \"id\": \"/subscriptions/...//resourceGroups/sample-rg-name\", \"location\": \"centralus\", \"name\": \"...\", \"properties\": { \"provisioningState\": \"Succeeded\" }, \"tags\": { \"kubernetes.io_cluster.sample-ld57c\": \"owned\", \"openshift_creationDate\": \"2019-10-25T23:28:57.988208+00:00\" }, \"type\": \"Microsoft.Resources/resourceGroups\" },", "podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-controller-rhel8:v1.8):/crane ./", "oc config view", "crane tunnel-api [--namespace <namespace>] --destination-context <destination-cluster> --source-context <source-cluster>", "crane tunnel-api --namespace my_tunnel --destination-context openshift-migration/c131-e-us-east-containers-cloud-ibm-com/admin --source-context default/192-168-122-171-nip-io:8443/admin", "oc get po -n <namespace>", "NAME READY STATUS RESTARTS AGE <pod_name> 2/2 Running 0 44s", "oc logs -f -n <namespace> <pod_name> -c openvpn", "oc get service -n <namespace>", "oc sa get-token -n openshift-migration migration-controller", "oc create route passthrough --service=docker-registry -n default", "oc create route passthrough --service=image-registry -n openshift-image-registry", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] 
rsync_endpoint_type: [NodePort|ClusterIP|Route]", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF", "cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF", "oc sa get-token migration-controller -n openshift-migration | base64 -w 0", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF", "oc describe MigCluster <cluster>", "cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF", "echo -n \"<key>\" | base64 -w 0 1", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF", "oc describe migstorage <migstorage>", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF", "oc describe migplan <migplan> -n openshift-migration", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF", "oc watch migmigration <migmigration> -n openshift-migration", "Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required 
Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47", "- hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces", "- hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: \"{{ lookup( 'env', 'HOSTNAME') }}\" register: pods - name: Print pod name debug: msg: \"{{ pods.resources[0].metadata.name }}\"", "- hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: \"fail\" fail: msg: \"Cause a failure\" when: do_fail", "- hosts: localhost gather_facts: false tasks: - set_fact: namespaces: \"{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}\" - debug: msg: \"{{ item }}\" with_items: \"{{ namespaces }}\" - debug: msg: \"{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}\"", "oc edit migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2", "oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1", "name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims", "spec: namespaces: - namespace_2 - namespace_1:namespace_2", "spec: namespaces: - namespace_1:namespace_1", "spec: namespaces: - namespace_1", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: selection: action: skip", "apiVersion: 
migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\"", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\" labelSelector: matchLabels: <label> 2", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false", "oc edit migrationcontroller -n openshift-migration", "mig_controller_limits_cpu: \"1\" 1 mig_controller_limits_memory: \"10Gi\" 2 mig_controller_requests_cpu: \"100m\" 3 mig_controller_requests_memory: \"350Mi\" 4 mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7", "oc patch migrationcontroller migration-controller -p '{\"spec\":{\"enable_dvm_pv_resizing\":true}}' \\ 1 --type='merge' -n openshift-migration", "oc patch migrationcontroller migration-controller -p '{\"spec\":{\"pv_resizing_threshold\":41}}' \\ 1 --type='merge' -n openshift-migration", "status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-06-17T08:57:01Z\" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: \"False\" type: PvCapacityAdjustmentRequired", "oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_enable_cache\", \"value\": true}]'", "oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_limits_memory\", \"value\": <10Gi>}]'", "oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_requests_memory\", \"value\": <350Mi>}]'", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: 
<image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace>", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true 1 analyzeK8SResources: true 2 analyzePVCapacity: true 3 listImages: false 4 listImagesLimit: 50 5 migPlanRef: name: <migplan> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: \"1.0\" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 The following parameters are relevant for a remote cluster. exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config", "apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12", "apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: 
<snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11", "oc -n openshift-migration get pods | grep log", "oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1", "oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7", "oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7 -- /usr/bin/gather_metrics_dump", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero --help", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf", "oc get migmigration <migmigration> -o yaml", "status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-01-26T20:48:40Z\" message: 'Final Restore openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: \"True\" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: \"2021-01-26T20:48:42Z\" message: The migration has completed with warnings, please look at `Warn` conditions. 
reason: Completed status: \"True\" type: SucceededWithWarnings", "oc -n {namespace} exec deployment/velero -c velero -- ./velero restore describe <restore>", "Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource", "oc -n {namespace} exec deployment/velero -c velero -- ./velero restore logs <restore>", "time=\"2021-01-26T20:48:37Z\" level=info msg=\"Attempting to restore migration-example: migration-example\" logSource=\"pkg/restore/restore.go:1107\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time=\"2021-01-26T20:48:37Z\" level=info msg=\"error restoring migration-example: the server could not find the requested resource\" logSource=\"pkg/restore/restore.go:1170\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf", "labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93", "labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93", "oc get migmigration -n openshift-migration", "NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s", "oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration", "name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. 
reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none>", "apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: \"2019-08-29T01:03:15Z\" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: \"87313\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: \"2019-08-29T01:02:36Z\" errors: 0 expiration: \"2019-09-28T01:02:35Z\" phase: Completed startTimestamp: \"2019-08-29T01:02:35Z\" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0", "apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: \"2019-08-28T00:09:49Z\" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: \"82329\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: \"\" phase: Completed validationErrors: null warnings: 15", "podman login -u USD(oc whoami) -p USD(oc whoami -t) --tls-verify=false <registry_url>:<port>", "podman login -u USD(oc whoami) -p USD(oc whoami -t) --tls-verify=false <registry_url>:<port>", "podman pull <registry_url>:<port>/openshift/<image>", "oc get bc --all-namespaces --template='range .items \"BuildConfig:\" .metadata.namespace/.metadata.name => \"\\t\"\"ImageStream(FROM):\" .spec.strategy.sourceStrategy.from.namespace/.spec.strategy.sourceStrategy.from.name \"\\t\"\"ImageStream(TO):\" .spec.output.to.namespace/.spec.output.to.name end'", "podman tag 
<registry_url>:<port>/openshift/<image> \\ 1 <registry_url>:<port>/openshift/<image> 2", "podman push <registry_url>:<port>/openshift/<image> 1", "oc get imagestream -n openshift | grep <image>", "NAME IMAGE REPOSITORY TAGS UPDATED my_image image-registry.openshift-image-registry.svc:5000/openshift/my_image latest 32 seconds ago", "oc describe migmigration <pod> -n openshift-migration", "Some or all transfer pods are not running for more than 10 mins on destination cluster", "oc get namespace <namespace> -o yaml 1", "oc edit namespace <namespace>", "apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"region=east\"", "echo -n | openssl s_client -connect <host_FQDN>:<port> \\ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2", "oc logs <Velero_Pod> -n openshift-migration", "level=error msg=\"Error checking repository for stale locks\" error=\"error getting backup storage location: BackupStorageLocation.velero.io \\\"ts-dpa-1\\\" not found\" error.file=\"/remote-source/src/github.com/vmware-tanzu/velero/pkg/restic/repository_manager.go:259\"", "level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\" error.file=\"/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165\" error.function=\"github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes\" group=v1", "spec: restic_timeout: 1h 1", "status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: \"True\" type: ResticVerifyErrors 2", "oc describe <registry-example-migration-rvwcm> -n openshift-migration", "status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration", "oc describe <migration-example-rvwcm-98t49>", "completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 resticPod: <restic-nr2v5>", "oc logs -f <restic-nr2v5>", "backup=openshift-migration/<backup_id> controller=pod-volume-backup error=\"fork/exec /usr/bin/restic: permission denied\" error.file=\"/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280\" error.function=\"github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup\" logSource=\"pkg/controller/pod_volume_backup_controller.go:280\" name=<backup_id> namespace=openshift-migration", "spec: restic_supplemental_groups: <group_id> 1", "spec: restic_supplemental_groups: - 5555 - 6666", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: rollback: true migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF", "oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1", "oc scale deployment <deployment> --replicas=<premigration_replicas>", "apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: \"1\" migration.openshift.io/preQuiesceReplicas: \"1\"", "oc get pod -n <namespace>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/migrating_from_version_3_to_4/index
15.5.3. Related Books
15.5.3. Related Books Red Hat RPM Guide by Eric Foster-Johnson; Wiley, John & Sons, Incorporated - This book is a comprehensive guide to RPM, from installing packages to building RPMs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Package_Management_with_RPM-Additional_Resources-Related_Books